Pregnant Questions: The Importance of Pragmatic Awareness in Maternal Health Question Answering

Questions posed by information-seeking users often contain implicit false or potentially harmful assumptions. In a high-risk domain such as maternal and infant health, a question-answering system must recognize these pragmatic constraints and go beyond simply answering user questions, examining them in context to respond helpfully. To achieve this, we study assumptions and implications, or pragmatic inferences, made when mothers ask questions about pregnancy and infant care by collecting a dataset of 2,727 inferences from 500 questions across three diverse sources. We study how health experts naturally address these inferences when writing answers, and illustrate that informing existing QA pipelines with pragmatic inferences produces responses that are more complete, mitigating the propagation of harmful beliefs.

Introduction

Humans have varying information needs when they ask questions (Taylor, 1962). Sometimes these needs are easily inferred from the surface form, as in factoid questions (e.g., "Who is the 44th president of the United States?"). However, in a question such as "Is there a good non-dairy baby milk I can supplement for my newborn?", addressing the underlying false assumption "Newborns can safely drink non-dairy milk" becomes part of satisfying the unexpressed information need. Complete answers to these types of questions must not only address the surface question itself, but also "question the question", critically examining its pragmatic needs.

These needs become magnified in sensitive domains, such as consumer health or the legal domain. In these settings, addressing the pragmatic needs of questions involves proactively addressing false assumptions or implications to ensure that the asker does not continue holding inaccurate beliefs that they may act on. For example, a complete answer to the question about non-dairy milk for newborns should address that while non-dairy milk is viable for older babies, newborns and infants need human breast milk or dairy-based formula because they offer complete nutrition.

Figure 1: We ask public health experts to identify assumptions and implications from questions and find that incorporating them in a QA pipeline produces a more complete answer.

Language models have been shown to exhibit sycophancy (Sharma et al., 2023), sometimes adjusting responses to align with a human user's view. However, helpful QA systems should not only challenge false or subjective assumptions in questions (Kim et al., 2022) by verifying them against a vetted corpus, but also infer the asker's intent to make sure that the answer satisfactorily addresses their deeper information needs (Taylor, 1962), just as humans do.[1]

We construct a dataset of 2,727 assumptions and implications in 500 questions (§2) collected from three diverse sources to study (1) how humans embed such assumptions and implications in questions, and (2) the extent to which they are naturally addressed in answers written by public health experts. We then ground assumptions and implications, the two primary ways humans embed beliefs in questions, in existing linguistic theory of presuppositions and implicatures, respectively (§3). We refer to presupposition and implicature collectively as pragmatic inference. While recent work has focused on the task of detecting and addressing false presuppositions in open-domain QA (Yu et al., 2022), we find that false beliefs of question askers are more likely to present as implicatures than presuppositions (§4). We experiment with inducing pragmatic behavior in existing QA pipelines with state-of-the-art retrieval and machine reading models (§5). On questions with at least one highly plausible false pragmatic inference, our expert annotators rated responses from our pragmatic QA system as more helpful and informative. Thus, QA systems of the future must proactively address assumptions and implications in questions as they are increasingly deployed in sensitive domains.

[1] "Pregnant" in our title also refers to its secondary definition, "full of meaning" (as in "a pregnant pause"), alluding to the idea that questions are laden with implicit beliefs.

Table 1: Health experts identify pragmatic inferences from questions from three sources: Reddit, Natural Questions (Kwiatkowski et al., 2019), and questions asked to our domain-specific QA system, ROSIE (Mane et al., 2023). They also determine the veracity of each inference and provide supporting evidence from a trusted web document.

ROSIE
- "Is it okay for my to color my hair after giving birth?" -> Hair dye chemicals can pass through breast milk from mother to child. (False/Unsure)
- "What is the advantage for not having an epidural during the labor?" -> Avoiding an epidural contributes to a more "natural" and unmedicated birthing experience. (False/Unsure)
- "What cough medicine is appropriate for breastfeeding mothers?" -> Some cough medicines can be secreted in breast milk. (True)

Reddit
- "Is it safe to lay on my stomach at 28 weeks of pregnancy?" -> Sleeping on the stomach while pregnant may have potential risks. (False/Unsure)
- "Is it bad to use different bottles/nipples during feedings?" -> Using different bottles or nipples for feeding may compromise the baby's latch. (False/Unsure)
- "How can I increase the time between feedings for my 3-month-old baby?" -> It may be possible to sleep through the night while still ensuring the baby is fed. (True)

Natural Questions
- "When does the fetus begin to develop memory?" -> Fetuses have the ability to form memories. (True)
- "What causes a rupture in the amniotic sac?" -> There may be ways to prevent early amniotic sac rupture. (False/Unsure)
- "When do the clinical manifestations of an ectopic pregnancy appear?" -> There may be clinical manifestations of an ectopic pregnancy that do not appear early on. (True)

Collecting Assumptions and Implications in the Wild

In contrast with factoid QA, systems deployed in sensitive domains such as consumer health must proactively mitigate harm. In these settings, correcting false assumptions is not optional: systems must provide contextual answers that balance information completeness with brevity.

Access to high-quality healthcare in the United States vastly differs across socioeconomic backgrounds (Becker and Newsom, 2003). Users with limited access to care are often likely to turn to accessible internet resources and, as of late, general-purpose chatbots (Palanica et al., 2019). This motivates us to focus on maternal and infant care, a challenging area of consumer health where patients are concerned with both their own physical health and the health of their child.
To effectively study and induce pragmatic behavior in QA systems, the evaluation questions we choose must reflect real-world experiences and situations for which there may not be a straightforward answer explicitly addressed in a single web document. For example, answers to Natural Questions (Kwiatkowski et al., 2019, NQ), a popular open-domain question-answering dataset, can be found directly in short extracted text snippets from Wikipedia (Table 1). In contrast, effectively answering the subjective questions sourced from Reddit requires commonsense reasoning and domain knowledge while identifying the asker's intent.

We carefully construct a dataset of questions from three distributionally distinct sources: a domain-specific QA system we design and deploy to pregnant and postpartum participants we recruit (Mane et al., 2023), Reddit, and NQ. Then, we introduce an annotation scheme to elicit assumptions and implications from these questions, validate their plausibility, and finally collect supporting evidence to determine their veracity. Our final dataset contains 2,727 assumptions from 500 evaluation questions (Table 2). We also include 150 development questions used to train annotators and develop our QA systems.

Gathering a Diverse Set of Maternal and Infant Health Questions

Maternal Health QA System. We source questions from a maternal and infant health-specific question-answering system that we build (Mane et al., 2023), henceforth referred to as ROSIE. Users ask questions pertaining to pregnancy or infant health and are instructed that the QA system does not have any personalized knowledge of their individual medical history or pregnancy. This system operates over a corpus of web documents we construct from trusted sources, including United States governmental and hospital organizations on maternal and infant health, and spans salient topics such as pregnancy and postpartum symptoms, developmental milestones, and infant safety. Our end-to-end QA system, ROSIE, uses a passage retriever and reranker to provide web passages as answers to study participants via a mobile application. We randomly sample 200 anonymized questions asked to ROSIE for our evaluation set and 50 questions for our development set.

Reddit. While the questions asked to ROSIE do reflect real-world experiences, they are asked to an automatic system and thus tend to include less situational detail or implicit content. We turn to Reddit to capture long-tail questions about the diverse set of unique situations a new or expecting parent goes through. Table 1 highlights some distributional differences between questions from Reddit and other data sources. Our questions come from five popular subreddits about maternal and infant health: r/BabyBumps, r/breastfeeding, r/NewParents, r/Mommit, and r/beyondthebump, from the pushshift dump.

We develop a series of heuristics as a recall-oriented first step to identify questions with false or subjective assumptions. We begin by selecting questions where an upvoted comment shows assumption-correcting behavior or where a user invokes their medical expertise, identified by a select list of discourse markers (Appendix A). Of these, we only retain posts beginning with a "wh" word, filtering a few hundred thousand posts down to 2,858 questions.
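A minimal sketch of this filter, assuming a simplified post structure; the marker lists abbreviate Appendix A, and the toy posts are illustrative stand-ins for posts loaded from the pushshift dump:

```python
# Minimal sketch of the recall-oriented Reddit filter described above.
# The marker lists abbreviate Appendix A; `posts` is an illustrative
# stand-in for the pushshift dump.

ASSUMPTION_CORRECTING = ["however,", "actually,", "in fact", "not true",
                         "common misconception", "correct me if i'm wrong"]
EXPERTISE_INVOKING = ["as a doctor", "as a nurse", "i'm a medical professional"]
WH_WORDS = ("what", "when", "where", "which", "who", "whom", "whose", "why", "how")

def keep_post(post: dict) -> bool:
    """Keep posts whose title begins with a wh-word and whose comments
    show assumption-correcting or expertise-invoking behavior."""
    title = post["title"].strip().lower()
    if not title.startswith(WH_WORDS):
        return False
    comments = " ".join(post["comments"]).lower()
    return any(m in comments for m in ASSUMPTION_CORRECTING + EXPERTISE_INVOKING)

posts = [
    {"title": "How warm can my bath be while pregnant?",
     "comments": ["Actually, common misconception: warm is fine, hot is not."]},
    {"title": "Cute onesie haul!", "comments": ["Adorable!"]},
]
candidates = [p for p in posts if keep_post(p)]  # keeps only the first post
```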
As Reddit encourages community participation, many questions are "community-seeking" as opposed to information-seeking. To identify information-seeking questions, we use GPT-3.5 (Ouyang et al., 2022) to separate medical questions from non-medical questions (Prompt A.1), then manually vet the final set of 297 questions. We randomly sample 200 questions for our evaluation dataset and 50 questions for our development set, discarding the rest.

Titles of Reddit posts are often a hook or a summary of the entire post. Using the 50 development questions, we use GPT-3.5 to minimally edit the titular question to include crucial details from the post description, providing a series of exemplars (Prompt A.2). These rewrites mainly include the age of a newborn or the stage of pregnancy from the description, but sometimes include small situational details that contextualize the question. Two authors validate all rewrites, keeping the original title wherever both authors agree that the rewrite changed the communicative goal of the asker.

Natural Questions. Lastly, we include maternal and infant health questions from NQ to study pragmatic aspects of factoid-style questions. We embed all questions in the train set of NQ using the sentence-transformers (Reimers and Gurevych, 2019) implementation of all-mpnet-base-v2 (Song et al., 2020), including unanswerable questions (Asai and Choi, 2021). We identify 2,500 answerable questions and 2,500 unanswerable questions as maternal health-related by identifying the top 100 nearest neighbors of 50 randomly sampled questions from the development sets of Reddit and ROSIE. From this set, we randomly sample 100 questions for our evaluation set and 50 for our development set. Though obtained with a nearest-neighbors approach, these questions greatly differ from those obtained from our previous sources, as they reflect the factoid-QA-oriented tasks and goals of the original dataset creators (Table 2).
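A minimal sketch of this selection step, with toy question lists standing in for the full NQ train set and our development questions:

```python
# Nearest-neighbor selection of maternal-health NQ questions, using the
# sentence-transformers all-mpnet-base-v2 model named above. The question
# lists below are illustrative stand-ins.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")

nq_questions = [
    "when does a baby's heart start beating",
    "who sang total eclipse of the heart",
    "when do the clinical manifestations of an ectopic pregnancy appear",
]
seed_questions = ["Is it safe to lay on my stomach at 28 weeks of pregnancy?"]

nq_emb = model.encode(nq_questions, convert_to_tensor=True, normalize_embeddings=True)
seed_emb = model.encode(seed_questions, convert_to_tensor=True, normalize_embeddings=True)

# For each seed question, take its 100 nearest NQ neighbors by cosine
# similarity (top_k is capped by the toy corpus size here).
hits = util.semantic_search(seed_emb, nq_emb, top_k=100)
selected = {nq_questions[hit["corpus_id"]] for per_seed in hits for hit in per_seed}
```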
Collecting Human Answers from Health Experts. We recruit a team of twenty health experts using Upwork to annotate our data, including obstetricians and gynecologists (OB/GYNs), nurses, lactation consultants, and public health experts, many of whom have experience with patients. In addition, many of these expert annotators are either currently pregnant or postpartum or have been in the past. We ask a subset of six experts to write helpful and informative long-form answers to all 500 questions in our dataset (Figure 6, bottom panel). While annotators write answers from scratch, they must provide supporting web documents from the same list of verified sources we use to build the corpus for ROSIE.

Identifying Assumptions and Implications

Inferring possible assumptions, implications, and asker beliefs from patient questions in our domain is challenging. In the past, others have extracted assumptions using shallow signals from the surface form of a question (Kim et al., 2021; Parrish et al., 2021). While some assumptions or implications in our dataset can be inferred directly from the question expression, others require deeper domain or experiential knowledge (Table 1).

Eliciting these assumptions and implications from non-linguists is challenging, as existing linguistic frameworks (§3.1) are inaccessible or cumbersome for those unfamiliar with the theoretical concepts behind them. As such, we operationalize large-scale data collection by asking five annotators from a different subset of our expert annotator pool to first write a list of subquestions that an answer to the original question would address (Figure 6, top panel). Doing so primes annotators to reason about the intent behind a question as well as the information needs of an asker. Then, we ask them to write a set of sentences reflecting possible beliefs or assumptions that the patient may hold (or, alternatively, beliefs that any complete answer to the question must address). We emphasize that the assumptions they write can be either medically or scientifically true or false. Then, we consolidate the set of subquestions and human-written assumptions and beliefs into a single set of assumptions and implications using GPT-3.5 (Prompt B.1).
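To make this consolidation step concrete, here is a minimal sketch assuming a chat-style GPT-3.5 endpoint via the OpenAI client; the system message paraphrases and abbreviates Prompt B.1, and the function name is ours:

```python
# Sketch of consolidating subquestions and expert-written assumptions for
# one question into a single list of inferences via GPT-3.5. Model name and
# message format are assumptions about the exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def consolidate(question: str, assumptions: list[str], subquestions: list[str]) -> str:
    system = ("Consolidate the SUBQUESTIONS and ASSUMPTIONS for the QUESTION "
              "into a single, exhaustive, de-duplicated list called INFERENCES, "
              "one declarative sentence per line.")
    user = (f"QUESTION: {question}\n"
            f"ASSUMPTIONS: {assumptions}\n"
            f"SUBQUESTIONS: {subquestions}")
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        temperature=0,
    )
    return resp.choices[0].message.content
```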
Annotating Inference Veracity

Lastly, we ask a new subset of eight expert annotators to annotate whether each assumption and implication in our dataset is medically or scientifically true, false, or subjective, and to provide a supporting web document from our list of verified sources along with a passage from the document (Figure 6, middle panel).

Validation. To verify that the assumptions and implications we extract are plausibly inferable from the question, we recruit an additional pair of health experts, whom we refer to as expert validators, to rate inferences. We sample 100 assumptions and implications judged as false or subjective, and 100 true inferences, and ask our expert validators to rate the plausibility of each inference on a 1-5 Likert scale based on how likely the question asker is to believe the assumption or implication. Henceforth, we refer to this sample of 200 inferences coming from 152 unique questions as INFERENCE-SAMPLE. Both annotators judge the majority of our inferences as plausible, with 80% and 95% rated with a score of at least 3 (see Figure 4 for the rating scale). Spearman's correlation between the two annotators is 0.69. See Appendix C for more detail.

Grounding Assumptions and Implications in Linguistic Theory

Assumptions and implications in our dataset map to two well-studied phenomena in linguistic pragmatics: presupposition and implicature (Grice, 1975; Stalnaker et al., 1977). We begin with a short primer on both types of pragmatic inference (§3.1) and then discuss the implications of both types in a QA setting (§3.2).

Two Types of Pragmatic Inference: Presupposition and Implicature

A sentence S is a pragmatic inference of a question Q if, depending on the context and conversational goals of discourse participants (Jeretic et al., 2020), a human would believe that the asker of Q believes or assumes S to be true. Henceforth, we refer to the assumptions and implications that we collect in our dataset as pragmatic inferences. We review the two most relevant types of pragmatic inference: presupposition and implicature.

Presupposition. Presuppositions are implicit assumptions in utterances taken for granted by discourse participants (Beaver, 1997). The question "What vitamins should I stop taking after becoming pregnant?" presupposes "I was taking vitamins before becoming pregnant." Presuppositions can often be detected solely by the presence of a lexical or syntactic trigger (Levinson et al., 1983). In the example above, the word "stop" presupposes that an activity was already in motion. We refer to these presuppositions as "trigger-based" (a toy detection sketch appears at the end of this section).

As we observe during the collection of our dataset, domain or world knowledge is often needed to capture presuppositions in real-world data that are not apparent from lexical or syntactic cues (Abusch, 2002). For example, the question "Are multiple ultrasounds dangerous for my baby?" does not directly yield non-trivial trigger-based presuppositions. However, the asker presupposes that the effects of an ultrasound are additive, and hence asks whether that additive effect is dangerous.

Implicature. Implicature is a type of pragmatic inference that is suggested by an utterance as opposed to being part of its literal meaning (Grice, 1975). Consider the question "Do most babies fit in newborn clothes?" While the speaker understands that newborn clothes fit some babies, their question implies that not all babies fit in newborn clothes. As we discover, a significant portion of inferences in our dataset are implied from questions rather than presupposed, but detecting and generating implicatures remain understudied in NLP.

Some implicatures are related to lexical items or the syntactic structure of utterances. For example, the statement "These prenatal vitamins are in gumdrop form, but are healthy" implies that gumdrops are usually not healthy. Others are a function of a speaker's intent, beliefs, and other contextual elements (Zheng et al., 2021). While they are a part of the content of an utterance, these implicatures are not at-issue (i.e., not the main point under discussion (Potts, 2004; Koev, 2018)) and are not encoded by the linguistic properties of a sentence (Allott, 2018). Consider the question "How can you tell the difference between postpartum depression and exhaustion?" Reasoning about asker belief, we may conclude that the asker is implying that the two conditions should be treated differently, as one is more serious than the other.

Presupposition and Implicature in QA

In a natural setting, as we discover, humans embed both presuppositions and implicatures nearly equally in questions (§4). However, from a linguistic perspective, they represent different levels of an asker's commitment to the propositional content of the inference (Peters, 2016). Presuppositions are already a part of an asker's world model. In contrast, implicatures are likely beliefs that may be negated in an asker's subsequent utterances. Consider the question "Is it normal for my baby to move more than usual when closer to due date?" with both the presupposition "There are factors that contribute to changes in fetal movement as the due date approaches" and the implicature "It may not be necessary to be concerned if there is a significant increase in fetal movement close to the due date." While both are false, the presupposition is stronger and is clearly in need of addressing in a potential answer. As illustrated, these distinct phenomena must be dealt with differently when answering a question.
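As an aside, the trigger-based presupposition detection from §3.1 can be illustrated with a toy sketch; the trigger lexicon here is ours, for illustration only, not the one from prior work:

```python
# Toy sketch of "trigger-based" presupposition detection: a small lexical
# trigger list flags presupposition-bearing questions. Illustrative only.
TRIGGERS = {
    "stop": "an activity was already in motion",
    "again": "the event has happened before",
    "still": "the state held previously",
    "another": "at least one already exists",
}

def trigger_presuppositions(question: str) -> list[str]:
    tokens = question.lower().replace("?", "").split()
    return [f"'{t}' presupposes that {gloss}"
            for t, gloss in TRIGGERS.items() if t in tokens]

print(trigger_presuppositions(
    "What vitamins should I stop taking after becoming pregnant?"))
# -> ["'stop' presupposes that an activity was already in motion"]
```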
Related Work. Existing work in pragmatics in QA focuses on open-domain question answering. Kim et al. (2021) present the first study of presuppositions in Google search queries, using questions from the Natural Questions (NQ) dataset (Kwiatkowski et al., 2019) that are unanswerable due to false presuppositions. However, their system only addresses trigger-based presuppositions, overlooking the type of deeper presuppositions present in our dataset derived from world or domain knowledge. Other work has looked at Google queries with questionable assumptions (Kim et al., 2022) and false presuppositions in open-domain Reddit questions (Yu et al., 2022). Computational studies of implicature have only focused on specific types, such as scalar implicature (e.g., some X → not all X) (Schuster et al., 2020; Zheng et al., 2021; Kabbara and Cheung, 2022; Jeretic et al., 2020). As a result of the context induced by our domain, implicatures in our dataset extend beyond scalar implicature.

4 How do people ask and answer questions?

Before we investigate the behavior of QA systems, we first study how humans embed pragmatic inferences in their questions (§4.1), as well as the extent to which they are naturally addressed by human public health experts (§4.2).

Pragmatic Inference Type: Understanding Speaker Commitment

When users ask questions, how strongly are they committed to the inferences that experts identify in their questions? Presupposition is a phenomenon based on mutual acknowledgment of facts: when a human makes a presupposition, not only are they presuming the content of the inference, they are also signaling the belief that their interlocutor (here, a QA system) should believe it too.

On the other hand, implicatures are a softer way for humans to express uncertainty. For example, "Which immunity injections can I skip for my baby?" and "Is it sufficient if my baby takes most immunity injections?" have the same underlying inference ("It is okay to pick and choose vaccines"), but it is taken for granted in the first (presupposition), whereas it is loosely suggested in the second (implicature). We want to distinguish the inferences in our questions, separating implicatures from presuppositions, to better characterize them so that we can prioritize addressing stronger false inferences.

Annotation Framework. A pair of authors independently annotate all inferences in INFERENCE-SAMPLE as a presupposition or implicature by first determining whether each is a proposition about the world that the asker believes to be true, without which the question would not be felicitous (presupposition), or whether it involves deriving asker belief through communicative principles (implicature). Between authors, Cohen's kappa is κ = 0.85, indicating strong agreement. Author annotators adjudicated the final inference type (see Figure 2 for the overall distribution), but individual annotator labels and adjudication rationales are preserved as a part of our dataset.
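For readers unfamiliar with the agreement statistic, a small sketch of the computation, with illustrative labels standing in for the actual annotations:

```python
# Cohen's kappa between two annotators' presupposition/implicature labels.
# The label arrays below are illustrative placeholders.
from sklearn.metrics import cohen_kappa_score

author_1 = ["presupposition", "implicature", "implicature", "presupposition"]
author_2 = ["presupposition", "implicature", "presupposition", "presupposition"]

kappa = cohen_kappa_score(author_1, author_2)
print(f"Cohen's kappa = {kappa:.2f}")  # agreement corrected for chance
```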
Findings. Presuppositions and implicatures are balanced in INFERENCE-SAMPLE (Figure 2), with a slight majority of inferences being implicatures, indicating that many of the inferences health expert annotators identify are subtle. When an inference is true, it is almost equally likely to be a presupposition or an implicature. However, when users make false or subjective (veracity marked "Unsure") inferences, they are more likely to do so via implicature (Figure 2). Past work has looked into generating and verifying presuppositions in open-domain QA, but identifying and addressing implicatures in an effort to make answers information-complete remains heavily underexplored. This finding highlights a key strength of our work: the context afforded by our specific domain tests the usefulness of pragmatic inference in QA by allowing us to extract a greater range of inferences.

In settings that lack such context (e.g., single-turn open-domain QA), we are restricted to leveraging lexical or syntactic signals from the surface form of the question (Kim et al., 2021), since reasoning about asker belief is not possible without other contextual signals. For example, in the absence of additional context, the question "Should I push grandparents for flu shot and Tdap?" may give rise to inferences involving the safety or effectiveness of these vaccines for the elderly. However, upon learning that this was asked in a web forum by a postpartum mother, we may reason that she believes her infant may be at risk of contracting the flu or other diseases if unvaccinated grandparents handle them.

Addressing Inferences in Expert Answers

When health experts are tasked with answering questions, how likely are they to naturally address inferences that users implicitly make? Studying whether answers naturally address pragmatic inferences (§4.2) gives us better insight into the types of inference health experts, and in turn models, should prioritize when answering questions.

Annotation. We ask two annotators from our expert annotator pool to determine whether each inference in INFERENCE-SAMPLE is addressed, either implicitly or explicitly, by the human-written answer to its source question.

Findings. The majority of inferences in INFERENCE-SAMPLE are naturally addressed by the human-written answer (Figure 2). Importantly, when an inference is false, it is more likely to be naturally addressed. Moreover, a significant number of true inferences are also addressed by an answer, indicating that health experts not only aim to correct false or subjective inferences but also prioritize completeness. This key finding supports one of the main arguments of this work: QA systems must address pragmatic inferences in their answers, just as humans do.

5 Inducing Pragmatic Behavior in QA

Inducing pragmatic behavior in QA systems is not straightforward. Existing systems are not trained to proactively reason about asker beliefs, since many popular QA datasets (e.g., factoid QA) do not necessitate this type of behavior.

We experiment with eliciting model answers that address the pragmatic needs of questions, such as refuting false inferences, using the pragmatic inferences in our dataset. We inject inferences at each stage of the classic QA pipeline: passage retrieval, reranking, and machine reading (§5.1), and evaluate outputs against expert-written answers with both automatic and human evaluation (§5.2).
Experimental Setup

Corpus. We use the corpus from Mane et al. (2023) of 408,000 documents from verified web sources on maternal health and infant care, and augment the corpus with the sources that our expert annotators found while writing answers and determining the veracity of inferences.

Baseline Models. As a baseline system, we use a retrieval, reranking, and reading-based QA pipeline. Contriever (Izacard et al., 2022), an unsupervised dense passage retriever, identifies the top relevant documents (n = 100) in our corpus given a question. Those documents are reranked using TART-full (Asai et al., 2022), a multi-task retrieval system with a cross-encoder architecture (Instruction E.1). TART is instruction-tuned, equipping it with the flexibility to redefine passage relevance for different tasks. We feed the top five reranked documents to three different reader models: FLAN-T5-XXL (Chung et al., 2022), MISTRAL-7B (Jiang et al., 2023), an open-source large language model (Prompt E.4), and GPT-3.5 (Prompt E.4).

Augmenting Systems with Pragmatic Inferences. In addition to retrieving the top 100 passages using the question as input, we retrieve the top 100 passages for each pragmatic inference of the question (i_1 ... i_k) as well. Then, for each pragmatic inference i, we rerank its top 100 passages using a new inference-informed instruction (Instruction E.2) and select the top passage post-reranking. We augment the top five reranked passages from the question with these k top passages, one from each pragmatic inference, and feed them to each reader (Prompt E.5). During reading, we prompt MISTRAL-7B and GPT-3.5 to address all k assumptions when generating an answer. (We do not use FLAN-T5-XXL here because it struggled with reading in the baseline setting.) To keep the number of passages fed to readers the same in the baseline pipelines as in the inference-augmented pipeline, we add k extra passages to the top five existing ones. This ensures that while the volume of information presented to machine readers is the same in both pipelines, the nature of the content differs, allowing us to measure the utility of inference augmentation during retrieval and reranking. Figure 3 visualizes our baseline and inference-augmented QA pipelines.
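A sketch of the augmentation logic just described, with toy stand-ins for the retriever and reranker (the real pipeline uses Contriever and TART-full with Instruction E.2):

```python
# Sketch of the inference-augmented retrieval and reranking stage. `retrieve`
# and `rerank` below are toy stand-ins; only the augmentation logic is the
# point of this example.
from typing import Callable, List

def augment_passages(question: str,
                     inferences: List[str],
                     retrieve: Callable[[str, int], List[str]],
                     rerank: Callable[[str, List[str]], List[str]]) -> List[str]:
    """Top-5 reranked passages for the question, plus the single best
    inference-informed passage for each of the k pragmatic inferences."""
    passages = rerank(question, retrieve(question, 100))[:5]
    for inference in inferences:
        # Rerank with an inference-informed query and keep the top hit.
        passages.append(rerank(inference, retrieve(inference, 100))[0])
    return passages  # 5 + k passages fed to the reader

# Toy corpus and scoring so the sketch runs end to end.
corpus = ["Newborns need breast milk or dairy-based formula.",
          "Non-dairy milks can suit older babies.",
          "Newborn clothes sizing varies by brand."]
retrieve = lambda query, k: corpus[:k]
overlap = lambda q, p: sum(w in p.lower() for w in q.lower().split())
rerank = lambda query, cands: sorted(cands, key=lambda p: -overlap(query, p))

print(augment_passages("Is there a good non-dairy baby milk for my newborn?",
                       ["Newborns can safely drink non-dairy milk"],
                       retrieve, rerank))
```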
Evaluation

We evaluate answers from seven pipeline variations against expert answers (Table 3).

Automatic Evaluation Metrics. Three automatic evaluation metrics measure the quality of generated answers: ROUGE (Lin, 2004) (both F1 and recall), BLEURT (Sellam et al., 2020), and QAFACTEVAL (Fabbri et al., 2022), a more recent QA evaluation metric originally designed to measure the faithfulness of summaries. The GPT-3.5-based pipelines score the strongest according to QAFACTEVAL, our main evaluation metric because, of the three metrics, it most closely captures information content. However, automatic evaluation of generated answers does not capture several higher-level semantic and pragmatic aspects of the question. Thus, we still need experts to validate the answers.

Human Judgments. We ask our expert validators to score answers from the top-performing baseline and inference-augmented pipelines (BASELINE-GPT-3.5 and INFERENCE-GPT-3.5 on QAFACTEVAL, respectively). For each of the 152 questions in INFERENCE-SAMPLE, expert validators score both model outputs simultaneously from 1-5 based on completeness (instructions in Figure 5). Answers typically received a score of 1 when they were off-topic and missing crucial information, 2 when they were topical but still missing crucial information, 3 when they contained all essential information for the question, 4 when most information needed for completeness was present, and 5 when the answer was information-complete. Judging the information completeness of an answer is a subjective task, as reflected by the Spearman rank correlation between their annotations (ρ = 0.34). While the mean score of inference-augmented answers is comparable to that of baseline answers (4.43 vs. 4.45), annotators rated the inference-augmented answer as equivalent to or better than its baseline counterpart for 75% of questions in INFERENCE-SAMPLE (see Table 5 for examples).

Table 3: Means and standard deviations of automatic (ROUGE, BLEURT, QAFACTEVAL) and human evaluation metrics per question. We report results for the top retrieved passage and the top reranked passage, and two modes with and without access to human-written assumptions. Inference-augmented models perform competitively with baselines, indicating the promise of inducing pragmatic behavior in QA models to mitigate harm.

We further focus on annotator preferences on our original motivating population of questions: those with highly plausible, false assumptions. Both annotators rate inference-augmented answers higher than the default answers on the subset of questions with at least one highly plausible (plausibility = 5), false pragmatic inference (Table 4). We hypothesize that the similar ratings received by the two systems across all questions are due to shortcomings in the instruction-following capabilities of LLMs. Forcing the reader model to address pragmatic inferences distracts it from answering the question more completely, and does not always result in more helpful answers when the pragmatic inferences are true. These results illustrate the promise of inducing pragmatic behavior in QA models and represent a lower bound on their performance, as none of the models we experiment with were trained to optimize for addressing assumptions in questions.

6 Can inference extraction be automated?

While pragmatic inferences elicited from health experts are informed by their expertise, they are slow and costly to collect. Our QA experiments use human-written inferences to establish an upper bound on answer quality with existing models. However, a fully automatic pragmatic QA pipeline must first generate pragmatic inferences relevant to a question and then generate an answer that addresses the subset of false inferences. As such, we experiment with generating pragmatic inferences with GPT-3.5 (Ouyang et al., 2022) to understand to what extent automating the process is feasible with existing prompting and in-context learning.
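As a concrete illustration of this prompting setup (detailed further in the Experimental Setup below), here is a minimal sketch assuming a chat-style GPT-3.5 endpoint; the exemplars are illustrative, not the seven actually used:

```python
# Sketch of few-shot pragmatic-inference generation with GPT-3.5. The study
# used seven in-context examples covering 37 inferences, shuffled to avoid
# example-order effects; the two exemplars below are illustrative.
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXEMPLARS = [
    ("Is it safe to lay on my stomach at 28 weeks of pregnancy?",
     "INFERENCES:\n- Sleeping on the stomach while pregnant may have potential risks."),
    ("What causes a rupture in the amniotic sac?",
     "INFERENCES:\n- There may be ways to prevent early amniotic sac rupture."),
]

def generate_inferences(question: str) -> str:
    random.shuffle(EXEMPLARS)  # guard against example-order effects
    messages = [{"role": "system",
                 "content": "List the assumptions and implications embedded in "
                            "the question as declarative INFERENCES."}]
    for q, inferences in EXEMPLARS:
        messages += [{"role": "user", "content": q},
                     {"role": "assistant", "content": inferences}]
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-3.5-turbo",
                                          messages=messages, temperature=0)
    return resp.choices[0].message.content
```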
Experimental Setup. We generate inferences with GPT-3.5 for all questions in INFERENCE-SAMPLE using seven in-context examples corresponding to 37 different pragmatic inferences, as more in-context examples yield diminishing returns (Liu et al., 2022). We select pragmatic inferences written by multiple expert annotators from diverse user questions and randomly shuffle them to prevent unwanted effects emerging from example order (Si et al., 2022), including exemplars from all three sources (ROSIE, Reddit, and NQ) to capture distributional differences in their pragmatic inferences. As humans naturally did, we let GPT-3.5 generate varied numbers of inferences per question.

Evaluation: Can GPT-3.5 generate human-like pragmatic inferences? For each inference in INFERENCE-SAMPLE, a pair of authors annotate whether each human-written assumption is semantically equivalent to at least one inference generated by GPT-3.5 (Prompt F.1), with a Cohen's kappa of κ = 0.88. Post-adjudication, 63% of inferences were not present among model generations. When stratifying by inference type, 53% of presuppositions and 71% of implicatures were not present. This illustrates that, just as they are more difficult to detect, implicatures grounded in domain knowledge are more difficult for language models to generate.

Conclusion

We show that it is possible to induce pragmatic behavior in QA systems to correct latent false assumptions in the sensitive domain of maternal and infant health. Next-generation QA systems deployed in real-world settings must learn to address the pragmatics of user questions. Though we have shown the viability of explicitly inducing pragmatic behavior in models in this work, directions for future work include training retrievers to inherently search for evidence to address pragmatic inferences, and readers to reason on top of such evidence to tactfully and effectively challenge user beliefs.

Acknowledgments

We thank the users of ROSIE participating in our clinical trial. The study reported in this paper was supported by research grants from the National Institute on Minority Health and Health Disparities (grant number R01MD016037) and by the National Library of Medicine (grant number R01LM012849). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the paper.

Limitations

Data are not multilingual. Although our participants who provide questions come from diverse socioeconomic and racial backgrounds, all of the data we collect are in English. In addition, since we require participants to be located in the United States, the questions provided by participants are only reflective of the healthcare needs of English-speaking residents of the United States.

Choice of a single domain. While our approach can be generalized to other domains, all of our data and experiments are confined to a single domain (maternal and infant health). We have not validated that our conclusions generalize beyond this particular important domain.

Pragmatics can be annotator-dependent. Finally, some degree of pragmatic inference is always dependent on the annotator, and we have not validated that our findings are consistent across different annotator backgrounds.
Ethical Considerations

NLP systems are never a replacement for doctors or clinical expertise, especially in high-stakes settings. This work has grown out of collaboration with public health experts to help disseminate medically accurate but contextual information to new or expectant mothers with limited access to healthcare. Upon detection of false or potentially problematic assumptions, patients can be referred to healthcare providers better able to provide information than current QA systems. All of our data was collected with IRB approval in consultation with public health professionals.

A Reddit Question Filtering

A.1 Discourse Markers

Assumption-Correcting Markers: "however,", "actually,", "as a matter of fact", "in fact", "not true", "despite what you", "on the contrary", "common misconception", "not exactly", "just to clarify", "you're confusing", "correct me if i'm wrong", "correct me if i am wrong", "you're wrong", "we have to remember that", "while that's true", "could be dangerous", "might not be the best thing"

Expertise-Invoking Markers: "as a doctor", "as a medical professional", "i'm a doctor", "being a doctor", "as a nurse", "i'm a nurse", "i'm a medical professional", "being a nurse"

Prompt A.1: Medical vs. Non-Medical Question Identification

Prompt: You are an expert in maternal and infant health who specializes in finding out whether a question posed by a new or expecting mother is seeking opinion/community participation, or whether it is a medical question. Given a question, you must answer whether it is a question seeking medical advice or if it is seeking personal anecdotes and shared experience. If it is seeking medical advice, answer with "medical". Otherwise, answer "non-medical". If a question is under-specified, answer with "non-medical".

Prompt A.2: Reddit Question Rewriting

Prompt: You will be shown questions about maternal and infant health asked by users. Each question contains a TITLE and a DESCRIPTION that elaborates on it, containing details that are both relevant and irrelevant to answering the question. Given a question TITLE and a DESCRIPTION, your task is to incorporate only the relevant details from the DESCRIPTION and rewrite the TITLE into a REWRITE. If there are no relevant details, return the TITLE. As a general rule, keep the rewrite as similar to the original question as possible. The rewrite should be a question in a single sentence.

Title: How to Stop Co-Sleeping
Description: ...
Rewrite: How to wean my 11-month-old out of Co-Sleeping?

B Consolidating Subquestions, Assumptions, and Implications into Pragmatic Inferences

Prompt B.1: Question Consolidation

Prompt: Given questions asked by new or expecting mothers, your task is to identify the assumptions in them. For this task, you will be given a QUESTION asked by a new or expecting mother, some ASSUMPTIONS (as a list of beliefs or assumptions) in those questions identified by health experts, and some possible SUBQUESTIONS (as a list) that public health experts have identified to have the same information goals as the original question. Given all three of these, your task is to consolidate the SUBQUESTIONS and ASSUMPTIONS into a single, exhaustive list, called INFERENCES. Turning a SUBQUESTION into an inference may involve just turning it into a declarative sentence, or identifying the assumptions made in the SUBQUESTION. Finally, add the INFERENCES to the list of ASSUMPTIONS and remove any duplicates.

C Further Details on Inference Validation

Plausibility scores are the outcome of a three-stage process: (1) a pregnant or new mother holds a belief that is latent while asking a question; (2) a maternal health expert reasons about these latent beliefs from the question text; and finally (3) a different expert estimates the likelihood of the beliefs extracted in Step 2 of this process. The plausibility distribution in Figure 4 represents the results of Step 3.
It is important to note that the humans involved in each step of the process are completely disjoint and have little to no information about each other. For 90% of inferences, the validators agree that the inferences extracted in Step 2 are indeed plausible. The 10% of inferences that validators found less plausible is more a reflection of the subjectivity of pragmatic inference in QA in general than of our usage of GPT-3.5 as a textual transformation tool. In fact, experts themselves can disagree about answers to questions or background inferences (a finding echoed in Xu et al. (2023)), so we include the plausibility study to reassure audiences that experts do find the inferences in our dataset plausible.

D Human Evaluation Details

Human Evaluation Annotation Instructions. See Figure 5 for annotator instructions. The instructions read:

"Your Task: The question below is asked by a pregnant or postpartum mother on a topic related to pregnancy and postpartum symptoms, developmental milestones, infant care, etc. We will present you with two answers to this question. Please read the two answers and provide a score for each on a discrete scale from 1 to 5 evaluating the quality of answer completeness. If both answers are similarly poor, or similarly strong, you can give them similar scores. A high-quality answer should identify the information needs of the question and make sure any assumptions are addressed without excluding any major content you feel is necessary in a response to this question. Higher-quality answers may provide more comprehensive information or include cautionary caveats."

Annotator preferences on questions with a highly plausible, false assumption. Human annotators prefer inference-augmented answers for questions with a highly plausible, false assumption (see Table 4).

Prompt E.4: Baseline Reader Prompt

Using evidence only from the verified pieces of information above, respond to the following question with a helpful and complete answer. Use information from multiple sources above if necessary to answer the question. In your answer, do not reveal that you are fetching information from these pieces of evidence. Use information only from the verified sources above, and not from any other sources.

Question: {question}
Answer:

Prompt E.5: Inference-Augmented Reader Prompt

You are an expert in maternal and infant health. You are given a few verified pieces of information below:

Context: {context}

Using information only from the verified pieces of information above, respond to the following question with a helpful and complete answer. As humans often make assumptions while asking questions, your answer must address the assumptions made by the asker listed below.

Assumptions: {assumptions}

Use information from multiple sources above if necessary to answer the question below and address the ASSUMPTIONS. In your answer, do not reveal that you are fetching information from these pieces of evidence. Use information only from the verified sources above, and not from any other sources.

Question: {question}
Answer:
Table 5: Outputs generated by a standard QA system and a QA system informed by pragmatic inferences. Answers from the pragmatic system better explicate the underlying inferences.

Question: When should I give my baby fever reducing meds after shots?

Baseline answer: You should give your baby fever reducing medication after shots if they are experiencing discomfort or extreme discomfort due to the fever. However, it is important to note that in most cases, a child does not need to be medicated for a fever unless they are extremely uncomfortable. The most important thing to do when a child has a fever is to make sure they stay hydrated by drinking plenty of fluids. If you are unsure whether your child should receive medication to address the fever or other symptoms, it is recommended to consult with your child's healthcare provider.

Inference-augmented answer: It is not recommended to give your baby fever reducing medication before or immediately after receiving shots. Studies have shown that giving medication prior to a vaccine visit may decrease the child's immune response to the vaccine. Fevers after vaccination are actually a sign that the immune system is responding to the vaccine. If your baby develops a fever after receiving shots, it is generally not necessary to give them medication unless they are extremely uncomfortable. The most important thing to do when a child has a fever is to make sure they stay hydrated by drinking plenty of fluids. If you are unsure whether your baby should receive medication to address the fever or other symptoms, it is best to consult with your child's healthcare provider.

Question: How warm can my bath be while staying safe for my 4-month-old baby?

Baseline answer: To ensure the safety of your 4-month-old baby during bath time, it is recommended to fill the basin with water that feels warm, but not hot, to the inside of your wrist or elbow. The water temperature should be comfortable for your baby and should not pose a risk of burns or scalds. It is important to check the water temperature before placing your baby in the bath to prevent any potential harm.

Inference-augmented answer: The safe temperature for a baby may be different from that for an adult. It is important to ensure that the bathwater is warm, but not hot, for a 4-month-old baby. According to the provided information, the water should feel warm to the inside of your wrist or elbow. It is recommended to fill the basin with 2 inches of water at this warm temperature. This is to prevent the baby from getting burned or scalded by hot water. It is important to note that babies are not as adaptable as adults to temperature change, and they can lose heat rapidly. Therefore, it is crucial to maintain a safe and comfortable water temperature for the baby during bath time.

Question: Can pregnancy tests be trusted?

Baseline answer: According to pregnancy kit manufacturers, most at-home pregnancy tests are 98% to 99% accurate when used exactly as instructed. This means that positive results can generally be trusted. However, it is important to note that false negative results can occur if the test is taken too soon. Therefore, it is recommended to wait until a sufficient amount of time has passed since the possible conception before taking a pregnancy test to ensure more accurate results.

Inference-augmented answer: According to the information provided by pregnancy kit manufacturers, most at-home pregnancy tests are 98% to 99% accurate when used exactly as instructed. Positive results from these tests can be trusted. However, it is important to note that false negative results can occur if the test is taken too soon. Blood tests, on the other hand, can detect pregnancy sooner and are the only tests that can show specific amounts of HCG in the body...
F Inference Generation Prompt

Prompt F.1: Inference Generation Prompt

When humans ask questions, they often have certain assumptions or implications that are embedded in the questions. These assumptions and implications may be true or false, and they may or may not be present in the surface form of the question. Given a question asked by a new or expecting mother, your task is to identify all relevant assumptions and implications in these questions and write them in a list titled INFERENCES. Each inference under INFERENCES should be an independent and declarative assertion that represents an assumption or an implication that the speaker makes while asking the question.

Figure and Table Captions

Figure 2: Tree-map (Shneiderman, 1992) visualizing the distribution of expert-annotated pragmatic inferences in INFERENCE-SAMPLE by their veracity, inference type, and whether or not they were addressed in the expert-written answer to the question from which they came. When users make false or subjective inferences, they are more likely to do so as an implicature. Moreover, when an inference is false, it is more likely to be naturally addressed in an answer by public health experts.

Figure 3: Our baseline and pragmatic inference-augmented QA pipelines. We experiment with retrieval, reranking, and reading stages and a variety of instruction-tuned and prompt-based models.

Figure 4: Ratings by expert validators of the plausibility of inferences written by health experts in our dataset. The majority of inferences are plausible.

Figure 5: Human evaluation instructions provided to two expert annotators.

Figure 6: Instructions given to each annotator for each phase of annotation. First, we show questions to annotators and ask them to write subquestions and the assumptions present (top panel). Then, after passing these outputs to a prompt-based model to extract consolidated inferences, we ask a different set of annotators to verify the veracity of the inferences along with supporting evidence (middle panel). Simultaneously, we ask a third set of annotators to write answers to questions without any inference supervision (bottom panel).

Table 2: Dataset statistics stratified by question source.

Table 4: Human preferences of answers on questions with at least one high-plausibility false assumption.
Depression is associated with heart failure in patients with type 2 diabetes mellitus

Background: Type 2 diabetes mellitus (T2DM) is associated with an increased risk of heart failure (HF). Depression, a common comorbidity of T2DM, may further increase the risk of HF. We investigated the association between depression and incident HF in patients with T2DM.

Methods and results: Depressive symptoms were assessed in the ACCORD Health-Related Quality of Life study participants at baseline, 12, 36, and 48 months using the nine-item Patient Health Questionnaire (PHQ-9). The severity of depressive symptoms was categorized as none (0-4 points), mild (5-9 points), or moderate-severe (10-24 points). Cox regression with PHQ-9 as a time-dependent covariate was used to assess the association between depression and incident HF. During the median follow-up of 8.1 years, 104 participants developed HF (incidence: 7.1/1,000 person-years). Half of the participants with moderate-severe depression were relieved of their symptoms, and a significant percentage of participants without depression or with mild depression worsened to mild or moderate-severe depression, respectively, during the follow-up period. Each unit increase in the PHQ-9 score was associated with a 5% higher risk of HF (hazard ratio [HR]: 1.05, 95% confidence interval [CI]: 1.01-1.10). Patients with depression ever (HR: 2.23, 95% CI: 1.25-3.98) or persistent depression (HR: 2.13, 95% CI: 1.05-4.44) had a higher risk of HF than those without depression ever.

Conclusion: Depressive symptoms change greatly in T2DM patients, and depressive symptoms are an independent risk factor for HF. These results reinforce the importance of continuous evaluation and management of mental health status in T2DM patients at high risk of HF.

Introduction

Type 2 diabetes mellitus (T2DM) has become an emerging epidemic and a major clinical and public health concern (1). Poor mental health is an additional concern in patients with T2DM: approximately one in every four patients with T2DM suffers from clinically significant depression (2). T2DM increases the risk of depression, and depression increases the risk of hyperglycemia and insulin resistance, which in turn worsen T2DM.

Patients with T2DM have a higher risk of developing cardiovascular disease (CVD) than those without (3). The risk of heart failure (HF) is considerably higher, and the prevalence of HF in patients with T2DM is up to four times that in the general population (4, 5). Depression is also one of the most prevalent symptoms in patients with HF. Symptoms of depression worsen the quality of life and the prognosis of patients with established HF (6, 7). However, it is unclear whether depression is a risk factor for HF or an incidental comorbidity of HF, owing to the different populations and instruments used to assess depressive symptoms (8-12). No previous study has investigated the specific effect of depression on HF incidence among patients with T2DM, who have a higher risk of both depression and HF than those without T2DM. Previous studies have also not considered dynamic changes in depressive symptoms during the follow-up period, although persistent and transient depression may have distinct effects on the incidence of HF. The Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial provided a unique opportunity to investigate the relationship between depression and HF in T2DM patients.

In this study, we aimed to investigate the prospective association of dynamic depressive symptoms with the risk of subsequent HF among patients with T2DM, taking into account traditional CVD risk factors.

Study population and data collection

The rationale, design, and primary outcomes of the ACCORD study have been described and published previously (13, 14). Briefly, the ACCORD study, which included 10,251 patients (mean age 62 years; mean glycated hemoglobin [HbA1c] 8.3%) with a median T2DM duration of 10 years at enrollment, was designed to assess whether intensified control of blood glucose, blood pressure, and lipid levels could improve CVD outcomes. After an average follow-up of 3.7 years, the intervention was discontinued because intensive glycemic control increased the risk of cardiac death; all participants then transitioned to standard glycemic control, and follow-up continued. The ACCORD Health-Related Quality of Life (HRQL) study was a substudy of the ACCORD study designed to prospectively assess the overall effect of intensive intervention on validated measures of depression and HRQL from the participant's point of view.

Depressive symptoms were assessed in all ACCORD HRQL sub-study participants using the nine-item Patient Health Questionnaire (PHQ-9), which is based on the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) criteria, at baseline and at 12, 36, and 48 months during ACCORD study clinical visits. The PHQ-9 includes nine questions, each graded on a scale of 0 to 3 based on the severity of the symptom. The severity of depressive symptoms was categorized as none (0-4 points), mild (5-9 points), or moderate-severe (10-24 points).
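A minimal sketch of this categorization, with illustrative item responses:

```python
# PHQ-9 severity categorization as defined above: nine items scored 0-3
# are summed and bucketed into none / mild / moderate-severe.

def phq9_category(item_scores: list[int]) -> str:
    """Map nine PHQ-9 item scores (each 0-3) to a severity category."""
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    if total <= 4:
        return "none"
    if total <= 9:
        return "mild"
    return "moderate-severe"  # 10-24 points

print(phq9_category([1, 1, 0, 2, 1, 0, 1, 0, 0]))  # total 6 -> "mild"
```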
Participants with a history of HF at enrollment were excluded from the analysis. A flowchart of the study is shown in Supplementary Figure 1. The ACCORD trial was approved by an NHLBI review panel and the ethics committee at each center.

Covariates

The baseline characteristics of the participants, including age, sex, race (white/non-white), duration of T2DM (years), living alone, history of CVD or HF (with/without), proteinuria (with/without), family history of CVD, tobacco and alcohol consumption, and medications, were obtained using questionnaires, interviews, and medical records at recruitment. Smoking status was categorized as never, former, or current. Alcohol consumption was self-reported and measured as times per week. Body mass index (BMI), systolic blood pressure (SBP), diastolic blood pressure (DBP), and heart rate (HR) were measured by registered nurses at the assessment center. Lipids (total cholesterol, triglycerides, low-density lipoprotein cholesterol [LDL], and high-density lipoprotein cholesterol [HDL]), HbA1c, and glomerular filtration rate (GFR) were measured by the central laboratory, as described previously (5, 13, 15, 16).

The outcome of interest in our study was the incidence of HF during the follow-up period, defined as the first hospitalization for HF or death due to congestive HF. Hospitalizations due to HF were assessed based on clinical and radiological evidence. Death due to HF without clinical or postmortem evidence of an acute ischemic event was defined as death due to HF. A central committee adjudicated the HF events according to a predefined protocol (5, 13, 14). Time to event was calculated as the number of years until the occurrence of an HF event. Participants were censored at the time of their last follow-up.

Statistical analysis

Continuous variables were compared using analysis of variance or Mann-Whitney U tests, and categorical variables were compared using chi-square tests, according to the distribution type. Cox proportional hazards regression with the PHQ-9 as a time-dependent covariate was used to assess the association between depression and HF incidence. We analyzed the associations using two models. Model 1 was adjusted for age, race, sex, glucose control strategy, CVD history, living alone, educational status, and cigarette and alcohol consumption. Model 2 was further adjusted for BMI, total cholesterol, triglycerides, LDL, HDL, SBP, DBP, HR, and GFR, in addition to the adjustment variables in model 1. We further adjusted for family history of CVD and medications, including antidepressant drugs and beta-blockers, to test whether the association between depression and HF incidence was affected by family history of CVD and medication use. The medications were also treated as time-dependent covariates and synchronized with the PHQ-9 assessments. Subgroup and interaction analyses were performed according to age (≤60 years, >60 years), sex, glycemic control strategy (intensive or standard glucose control), and CVD history. All statistical tests were two-sided, and P < 0.05 was considered statistically significant. All analyses were performed using Stata/MP software, version 17.0 (StataCorp LLC, College Station, TX, USA).
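To illustrate the time-dependent Cox model described above, here is a minimal sketch using Python's lifelines library (an illustration only; the analysis itself was performed in Stata/MP 17). Each participant contributes one row per interval between PHQ-9 assessments, with the score updated at each visit; the data below are illustrative:

```python
# Time-varying Cox regression with the PHQ-9 as a time-dependent covariate.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 3, 3],
    "start": [0, 1, 3, 0, 1, 0, 1],   # years since enrollment
    "stop":  [1, 3, 8, 1, 5, 1, 4],
    "phq9":  [3, 7, 12, 5, 4, 2, 6],  # time-varying PHQ-9 total score
    "hf":    [0, 0, 1, 0, 0, 0, 1],   # incident HF at the interval's end
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="hf",
        start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio per unit increase in PHQ-9
```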
Results

Of the 10,251 participants included in the ACCORD study, 2,053 participated in the HRQL sub-study. One hundred participants without baseline PHQ-9 data and 100 participants with a history of HF were excluded, leaving 1,853 participants in the analysis. During a median follow-up period of 8.1 years (interquartile range: 6.1-10.1 years), 104 participants developed HF (7.1 events per 1,000 person-years).

The baseline characteristics of the study participants are shown in Table 1. Participants who developed HF were more likely to be older, have a longer history of T2DM, have higher HbA1c levels, and have lower DBP than those without HF. Female participants and participants with a history of CVD were more likely to develop HF than male participants and those without a history of CVD.

Table 2 shows the prevalence and incidence of mild or moderate-severe depression based on the PHQ-9 score at baseline and during follow-up. A total of 489 participants (26.4%) had mild depression and 354 participants (19.1%) had moderate-severe depression at baseline. Figure 1 shows the dynamic changes in depressive symptoms during the follow-up period. The prevalence of moderate-severe depression decreased during follow-up. Depressive symptoms changed during the follow-up period: half of the participants with moderate-severe depression experienced relief of their symptoms, and a substantial percentage of participants without depression or with mild depression developed mild or moderate-severe depression, respectively, but participants without depression at baseline were unlikely to develop moderate-severe depression. An increasing proportion of PHQ-9 data were missing: 5.4% at baseline, 8.9% at 1 year, 14.6% at 3 years, and 37.5% at 4 years.

Table 3 shows the association between the PHQ-9, both as a continuous variable and as a categorical variable, and HF. Using model 2, each unit increase in the PHQ-9 score was associated with a 5% increase in the risk of HF.
Patients with mild depression had a higher risk of HF (hazard ratio [HR]: 2.26, 95% confidence interval [CI]: 1.38-3.69) than those without depression. Participants with moderate-severe depression also had a higher risk of HF (HR: 1.47, 95% CI: 0.72-2.99) than those without depression, but the increase was not statistically significant, likely because of the limited number of participants with moderate-severe depression. Using model 2, both patients who ever had depression (HR: 2.23, 95% CI: 1.25-3.98) and patients with persistent depression (HR: 2.13, 95% CI: 1.05-4.44) had a higher risk of HF than those who never had depression. Subgroup and interaction analyses were performed to test the robustness of the association between the PHQ-9 and HF risk. Age (≤60 years or >60 years), sex, glycemic control strategy (intensive or standard glucose control), and CVD history did not modify the associations between the PHQ-9 and the incidence of HF (Supplementary Figure 2). When family history of CVD and medications, including antidepressants and beta-blockers, were further adjusted for in addition to model 2, the results were robust and unchanged (Table 3). Discussion This post-hoc analysis of the ACCORD HRQL study showed that depressive symptoms changed dynamically during the follow-up period and that, in patients with T2DM, depression at baseline or during the follow-up period was associated with a higher risk of HF. This risk remained generally unchanged even after adjustment for demographic characteristics, CVD risk factors, and medication use, including antidepressant drugs. This finding indicates that it is important to identify depression in patients with T2DM because such patients are at higher risk of developing HF. Previous studies on depression and the risk of HF have been conducted in non-diabetic populations and have had conflicting results (8)(9)(10)(11)(12). A post-hoc analysis of the HUNT study cohort revealed that symptoms of depression at baseline were associated with an increased risk of HF in a dose-response manner (12). However, another post-hoc analysis of the Established Population for Epidemiologic Studies of the Elderly (EPESE) cohort did not find such an association (8). There are several possible reasons for these conflicting results. First, several different depression questionnaires, with different diagnostic performance, were used in these studies. For example, the Center for Epidemiological Studies Depression Scale used in the EPESE study has not been validated for widespread use (17). Second, depressive symptoms changed markedly during the follow-up period. Previous studies only assessed symptoms at baseline, without re-evaluation during follow-up periods of more than 10 years (8)(9)(10)(11)(12). Participants without depression at baseline but with depression during the follow-up period may have a higher HF risk than those without depression throughout follow-up. In contrast to previous studies, all participants in the ACCORD study had T2DM with CVD or a high risk of CVD; thus, they had a higher risk of HF than participants in previous studies (6,18). Therefore, our study may have had more power to detect an association between depressive symptoms and the incidence of HF. Previous studies also ignored the dynamic changes in depressive symptoms during the follow-up period. Some patients with moderate depression experience recurrence, whereas others do not (19). Our study found that a large proportion of patients with depression experienced relief of their symptoms.
Conversely, some patients without depression developed moderate-to-severe depression. This study found that participants without depression at baseline but with depression during the follow-up period had a risk of HF comparable to that of participants with depression at baseline. Our study has several limitations. First, as in previous studies, our predefined outcomes did not distinguish between HF with reduced ejection fraction and HF with preserved ejection fraction. Second, the sample size was relatively small. Because of the small number of patients with moderate-severe depression, we could not demonstrate a significantly higher risk in this group, and the confidence interval was wide. Furthermore, we could not assess whether antidepressant drugs reduce the risk of incident HF, again owing to the limited sample size. Third, drugs that may be effective for HF, such as SGLT2 inhibitors and GLP-1 receptor agonists, were underrepresented in the study cohort because recruitment to the ACCORD study ended in 2005. Fourth, all the participants were from North America, and these findings may not apply to other populations with different characteristics and lifestyles. Conclusion Depression is an independent risk factor for HF, and depressive symptoms change dynamically in patients with T2DM. These results reinforce the importance of continuous evaluation and management of mental health in patients with T2DM and a high risk of HF. Data availability statement The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author. Ethics statement The ACCORD trial was approved by an NHLBI Review Panel and the Ethics Committee at each center. The patients/participants provided their written informed consent to participate in this study. Ethical review and approval for an animal study was not required because no animals were involved.
2023-05-25T13:25:29.975Z
2023-05-25T00:00:00.000
{ "year": 2023, "sha1": "0a1a50f1862de2f8739d98b5b52d54538308f182", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "0a1a50f1862de2f8739d98b5b52d54538308f182", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
13689832
pes2o/s2orc
v3-fos-license
Thyroid-Associated Orbitopathy and Biomarkers: Where We Are and What We Can Hope for the Future Background Thyroid-associated orbitopathy (TAO) is the most common autoimmune disease of the orbit. It occurs more often in patients presenting with hyperthyroidism, characteristic of Graves' disease, but may be associated with hypothyroidism or euthyroidism. The diagnosis of TAO is based on clinical orbital features, radiological criteria, and the potential association with thyroid disease. To date, there is no specific marker of the orbital disease, making early diagnosis difficult, especially if the orbital involvement precedes the thyroid dysfunction. Summary The goal of this review is to present the disease and bring together the available data in the literature concerning the investigation of TAO biomarkers. Conclusions Despite the progress made in understanding TAO, some important pieces are still missing. For the future, major efforts must be directed toward the discovery of new biomarkers, the validation of suspected candidates in multicenter cohorts with standardized methodologies, and the establishment of their clinical performance in the relevant fields of application, in order to improve not only the management of TAO patients but also the therapeutic options and follow-up. Clinical Significance Around 25-50% of patients with Graves' disease develop TAO, without any predictive factor. Moreover, the ocular disorder usually appears after the thyroid disease or simultaneously with it, but may also precede it. Identifying new biomarkers of this orbital disease could enable an early diagnosis, especially if the orbital involvement precedes the thyroid dysfunction. Introduction Thyroid-associated orbitopathy (TAO), also known as thyroid eye disease or Graves' ophthalmopathy, is an autoimmune disease affecting the thyroid, orbits, and skin. Despite important progress during the last decade in understanding the pathophysiological mechanisms leading to the development of this disease in the orbits, some important questions remain unanswered. The exact nature of the relationship between TAO and the thyroid remains enigmatic: hyperthyroidism can be related to the development of this orbital disease, but exceptions exist. Conversely, TAO can occur in hypo- or euthyroid patients. Therefore, predicting the evolution of Graves' disease to TAO is difficult, which limits early treatment. At the cellular and molecular levels, the reason why only orbital fibroblasts (and not the other fibroblasts of the body), orbital adipose tissue, and the medial and inferior rectus muscles are most often affected during the disease has not yet been resolved. Furthermore, the possibility of unilateral orbital involvement and the great variety of clinical presentations are not understood. This last point also highlights, in some cases, the difficulty of properly diagnosing TAO owing to confusion with mimicking diseases such as orbital myositis, amyloidosis, some tumors or metastatic cancers, and IgG4-related diseases [1][2][3][4][5][6][7][8][9][10][11][12]. In this context, the discovery of new biomarkers that could definitively assist the physician in diagnosing TAO as early as possible, predicting prognosis, and proposing early and appropriate treatment would be clinically useful for improving patient management. After briefly recalling the clinical manifestations and the pathophysiology, we review the current state of the potential biomarkers reported in TAO and the outlook for the future.
Review The natural history of TAO, without any treatment, is described by Rundle's curve [13][14][15]. Symptoms and signs of the orbital disease worsen rapidly during an initial phase, reach a maximal severity, and then decrease to a plateau known as the sequelae. The disease is 2-6 times more frequent in young women, but severe cases occur more frequently in men over 50 years of age [16]. The manifestations of orbital involvement are irritation and redness of the eyes and eyelids, lid tumefaction, double vision, and, rarely, visual loss. A complete bilateral orbital examination should look for lid retraction, proptosis (exophthalmos), limitation of ocular motility, fat hypertrophy, deficits of visual acuity or color vision, signs of corneal exposure, and signs of orbital inflammation [17][18][19] (Figures 1 and 2). Clinically, the challenge is to recognize the active, inflammatory phase of the orbital disease. Indeed, early diagnosis and rapid introduction of anti-inflammatory treatment, mainly steroids, improve the final outcome and reduce the functional and disfiguring sequelae of the disease [14,20]. As the orbital manifestations in some cases precede the thyroid dysfunction and its systemic signs [21], it seems essential to have a biomarker dedicated to the early diagnosis of the orbital disease. The detection of thyroid-stimulating hormone receptor (TSH-R) antibodies (TSH-R-Abs) may confirm the autoimmunity and the diagnosis of TAO, but these antibodies are not present in all cases [19,22,23]. So far, the clinical activity score (CAS) has been used to determine the indication for, and the duration of, anti-inflammatory treatment [22,24]. It takes into consideration the presence or absence of pain, lid and conjunctival edema (chemosis), and lid and conjunctival redness. Nevertheless, as with all clinical scales, this one has some limitations: the CAS is based on only a few items, mixes different types of clinical information (inflammation versus worsening vision), and offers only binary answers, which reduces the accuracy of its interpretation. Furthermore, it is a subjective scale that depends on the timing of the evaluation, on the willingness and objectivity of the patients regarding their clinical situation, and on the level of expertise of the practitioner performing the evaluation. Other scales exist, including NOSPECS [25], VISA [26], and EUGOGO [27], but they too have advantages and limitations and are not used daily in our hospital. In some difficult cases, magnetic resonance imaging (MRI) can help detect the presence of an inflammatory process. Clearly, molecules that could efficiently complement clinical scores and observation would allow a more precise and rapid diagnosis and also limit the economic burden of unnecessary imaging. All patients with irritation, lid retraction, and proptosis should benefit from local lubricant treatment (eye drops and ointment). In the presence of orbital inflammation, treatments such as selenium and steroids are indicated, according to severity [22]. The goal is to stop the inflammatory process and to improve the final outcome. In cases of resistance or contraindication, low-dose external radiotherapy is suggested. Immunomodulatory treatments such as tocilizumab (an interleukin-(IL-) 6 receptor antagonist), teprotumumab (an insulin-like growth factor-1 (IGF-1) receptor antagonist), or rituximab (anti-CD20) [28] seem to give promising results in resistant cases [18,[29][30][31].
Rehabilitative surgery should be performed in patients whose TAO has been inactive for at least six months. The main steps are orbital decompression for the reduction of proptosis, squint surgery for the treatment of muscular fibrosis and diplopia, lid lengthening, and blepharoplasty for lid retraction and fat hypertrophy. Specific biomarkers of TAO could serve as predictive factors for the development of TAO among patients with Graves' disease and could also provide information on the severity of TAO. The pathophysiology of TAO is poorly understood. Some classical risk factors, including genetic predisposition, environmental factors, infection, and stress, have been reported, but their real impact on TAO initiation remains debated [32]. Nevertheless, some pieces of the puzzle are beginning to come together in the literature. B cells, T cells, and orbital fibroblasts have been shown to be the key players in the pathological events. At the origin, T cells are responsible for the initiation of the disease [19]. Indeed, T helper cells become activated when they recognize TSH-R peptides on antigen-presenting cells. Upon interacting with such T cells, B cells secrete anti-TSH-R antibodies. These antibodies stimulate both thyroid follicular cells, which produce large quantities of thyroid hormones, and orbital fibroblasts, which proliferate and induce orbital changes. Besides TSH-R, the IGF-1 receptor (IGF-1R) has also been identified as a potential target antigen [32][33][34][35][36][37], and the interaction between TSH-R and IGF-1R seems more important than the effect of either molecule alone [38]. Furthermore, patients can have either one or both types of autoantibodies, and the production of other types of autoantibodies is not excluded. Indeed, recent studies suggested that autoantibodies against carbonic anhydrase 1 and alcohol dehydrogenase 1B have a higher prevalence in the orbital fat of TAO patients than in controls [39]. Tripartite relationships between orbital fibroblasts, B cells, and T cells initiate cascades of immune and chemical reactions [40,41], resulting in pathological situations: inflammation of the connective tissues, fibrosis, and adipogenesis [32,33]. These phenomena cause fundamental and dramatic remodeling of the ocular tissues. The increased volume of the extraorbital muscles, induced by intensive hyaluronic acid (HA) production [42], and the expansive growth of adipose tissue via activation of peroxisome proliferator-activated receptor gamma (PPAR-γ) [43] consequently lead to the typical protrusion of the eye characteristic of TAO patients. In addition, the compression of orbital tissue compresses vascular structures, reducing blood flow and causing subsequent localized hypoxia [44]. In this context, proangiogenic factors appear to be stimulated in order to restore appropriate circulation through the formation of new vessels. 3.1. Biomarkers. As previously mentioned, no accurate molecular tool to date allows a rapid, early, and robust diagnosis of TAO to be established, or the outcome or the efficacy of drug therapy to be predicted. Nevertheless, the availability of such tools, objectively measurable and easily interpretable, could greatly enhance the management of TAO patients, especially those with normal thyroid function. However, despite the growing number of publications in the biomarker field over the years, relatively few studies have focused on the discovery of new biomarkers in TAO.
The term "biomarker" was officially and accurately defined 15 years ago as a single indicator "that objectively measures and evaluates normal or pathogenic biological processes" [45]. Consequently, a biomarker is not restricted to being a protein but may be any type of specific molecular signature such as a gene, mRNA, or a metabolite. Specific clinical features such as demographic and physiological parameters (age, gender, smoking status, or goiter size), imaging (thyroid volume with ultrasonography or IRM), or clinical scores (CAS; vision, inflammation, and appearance (VISA)) can also be considered as objective biomarkers. However, only the molecular biomarkers will be considered here. In order to be efficient, biomarker discovery in general but also in TAO context should carefully consider the best source of samples in relation to both the clinical question and the methods of investigations. To be applicable on a large scale, a good source of biomarkers should take into account the feasibility of sample collection and its relevance. Extremely invasive sample collection (e.g., biopsy of orbital fat or extraorbital muscles), even if it is highly specific due to the close relationship with the location of a disease, must not be taken for granted because of (i) the related discomfort and risks of secondary complications for the patient; (ii) the restricted access for clinical diagnosis, and (iii) the great difficulty in collecting such samples from healthy control subjects. Biomarkers-Hormones and Antibodies-in the Blood of TAO Patients. In TAO disease, traditional biological fluids including the blood and urine have been investigated. The majority of the studies rather reported principal actors of the TAO disease as potential biomarkers than discovered new candidates. Considering the dysfunction of the thyroid gland associated to TAO, the traditional circulating thyroid hormones (TSH, triiodothyronine also known as T3, and thyroxine, called T4) used for diagnosing thyroid dysfunction or the antibodies against TSH-R (TSH-R-Abs) [22,46] or thyroid peroxidase-(TPO-) Abs [47,48] would be naively expected to be highly studied and give an interesting insight on the clinical status of the TAO patients. Thus, whatever the generation of assays used, TSH-R levels were shown to be associated to activity and severity of TAO [46,[49][50][51]. The new-generation tests allowed to reach up to 97% sensitivity and almost 90% specificity [51]. However to date, some limitations persist for a clinical use of TSH-R in the management of TAO. The heterogeneous pattern of thyroid dysfunction in TAO patients-hyperthyroidism, euthyroidism (6 to 21% depending on the studies [52][53][54]), or hypothyroidism-and the fact that various other diseases [55] may disturb thyroid hormones greatly limit their clinical relevance in TAO diagnosis. Indeed, a potential interference of treatment with the TSH-R level has been suspected [56][57][58] and could disturb their performances in TAO prediction. In conclusion, conflicting data related to different types of generation assays and various experimental designs do not allow to definitively evaluate the clinical performance of TSH-R-Ab on TAO patients, and the conditions of its routine use remain to be clarified. In the same context, the association of TPO-Ab and TAO is still questionable as different studies reported various results [47,[59][60][61]. Biomarkers-Cytokines and Others-in the Blood of TAO Patients. 
As the pathology is driven by an acute inflammatory event, the proinflammatory cytokines/chemokines including IL-1β [62], IL-6 [63], IL-10 [63], IL-8 [64], C-C chemokine ligand 20 (CCL20) [65], and IL-17 [66] have been studied. The reported data reveal an elevation of their level in the blood of TAO patients compared to that of control patients that could highlight a potential interest of these molecules as diagnosis markers. Furthermore, their levels seem even able to determine the stage of the disease: an active phase is characterized by a higher level of IL-1β, IL-6 [62], and IL-17 [66] compared to inactive phase. The data suggested also that the blood levels of some cytokines could reflect the response to treatment: patients presenting refractory TAO have higher level of IL-4, IL-6, and IL-10 than patients in remission [63]. Furthermore, patients present modified blood level of IL-16 (increase) and IL-8 (decrease) after steroid treatment compared to the previous state [64,67]. Moreover, a possible association of serum IL-10 polymorphism with incidence of TAO has been reported [68]. Based on only two unique studies, controversial data exist on interferon-γ (IFN-γ) and its potential disturbance in the blood of TAO patients [62,69]. The cytokines involved as mediators of B cells and/or T cells have also been largely investigated due to the key roles of these cells in the initiation and the course of the TAO disease. Interleukin-2 [68], IL-16 [67], and IL-33 [69] have been shown to be highly elevated in the blood of TAO patients compared to those of the controls. Serum IL-33 levels were positively correlated with T3 and T4 however negatively correlated with TSH [69]. A polymorphism of IL-2 is suggested to be associated with the disease [68]. Due to their mitogenic and angiogenic properties, the potential value of growth factors has also been investigated. Serum hepatocyte growth factor (HGF) increases in TAO patients compared to that in control subjects and is sensitive to efficient glucocorticoid treatment. Its level decreases in response to drug administration [64]. Adhesion molecules belong to another class of molecules investigated as potential TAO markers. They play a role in cell/cell or cell/extracellular matrix interaction, activation, and migration. Intercellular adhesion molecule-1 (ICAM-1) and soluble vascular cell adhesion molecule-1 (sVCAM-1) have been found elevated in the blood of TAO patients as compared to those in control patients, but their levels seem also to be influenced by the treatment [70]. Selenium is a metabolite implicated in thyroid hormone synthesis and metabolism [71], both actions having high importance in TAO development [72]. Besides, high amounts of selenium are found in the thyroid gland. In an Australian population in 2014, TAO patients showed lower levels of selenium in serum than patients suffering from Graves' disease without eye involvement. In addition, selenium levels decrease with TAO increasing severity. The authors conclude that the lack of selenium might be an independent risk factor for TAO [72]. The potential interest of several exotic biomarkers in TAO recently emerged notably because of the use of omics strategies. Among these emerging candidates, none of them has been deeply evaluated to date, but several can be mentioned for their biological functions that could be directly related to TAO disease. This is the case of osteopontin [65,73], a multifunctional protein involved in inflammation, cell recruitment, cell adhesion, and remodeling. 
It is inversely correlated with TSH level and positively with T3 and T4 [73]. Another protein called cytotoxic T lymphocyte-associated antigen-4 (CTLA-4), a member of the immunoglobulin superfamily, which is found on T cell surface, negatively regulates these cells. So far, many studies have been focused on a polymorphism localized on CTLA-4 gene, as a consequence of its implication in autoimmune diseases [74][75][76]. Finally, HLA-B8, a MHC class I cell surface receptor, has been observed in association with TAO, but its role remains to be elucidated [77,78]. 3.4. Biomarkers in the Urine of TAO Patients. The urine and its components have been little investigated as potential source of biomarkers in the context of TAO. However, three compounds showed a potential promising interest and should be more studied in the future. The cotinine level, the main metabolite of nicotine used as marker of tobacco use, seems to correlate in smoker TAO patients with the level of blood TSH-R-Abs, the activity of the disease, and secondary ocular complication after radioiodine treatment [79,80]. Glycosaminoglycans (GAGs), the most abundant heteropolysaccharides, display urinary levels 2-3 times higher in patients with the active form compared to those in patients with the inactive form [81]. Finally, 8-hydroxy-2 ′ -deoxyguanosine (8-OHdG) has attracted attention of the scientist's community. This metabolite is used to measure DNA damage in oxidative stress, event that was related with various ocular diseases such as TAO. High levels of 8-OHdG were found in TAO patients' urine compared to those in control patients, and 8-OHdG level was related to CAS [82]. In short, 8-OHdG might be a good biomarker in the future to evaluate the presence of oxidative DNA damage and therefore the oxidative stress generated in TAO patients. Biomarkers in the Blood and Urine of TAO Patients: Conclusions. In conclusion, these 2 common fluids usually explored for biomarker discovery seem disappointing in TAO. Several explanations could be highlighted: at this stage, only few studies focus on the same molecules and, in the main cases, the candidates are investigated not for their potential role as biomarkers but rather for their central role in the pathological events. This is particularly illustrated by the absence of clinical performances (sensitivity, specificity, and positive and negative predictive values) reported in the publications. Nevertheless, with the democratization of the omics methods, we may speculate that, in a near future, new and probably unexpected biomarkers will be discovered and could offer new clinical and management strategies for TAO. Moreover, no standardized protocol is reported for the evaluation of a specific target, and different clinical questions are frequently assessed with a unique cohort design decreasing the power of the analyses. Another aspect could be that the modifications of molecular levels occurring in response to this disease may be too subtle to be efficiently measured in these systemic fluids. We assume therefore that fluids or tissues geographically close to the place of the disease (the eyes) will be more valuable. 3.6. Biomarkers in the Orbital Fat of TAO Patients. Exploring the orbital fat content in TAO patients is, to our point of view, highly relevant since the disease directly affects this tissue. In the orbital fat, the IL-1β and IL-6 levels seem to be associated with the smoker status of TAO patients [83]. 
Besides, a transcriptomics study performed on orbital fat reports a clear upregulation of IFN-γ in TAO patients [84]. Transforming growth factor-β (TGF-β) and fibroblast growth factor (FGF) are elevated in the orbital fat of TAO patients, and levels of these factors are correlated with the severity of the disease. In the family of growth factors, platelet-derived growth factor (PDGF) is probably the most promising at this stage with a central role in the TAO pathological events. Indeed, several studies have reported its overexpression in orbital tissues of TAO patients [85][86][87], independently of the activity grade of TAO. In addition, specific isoforms of PDGF improve the TSH-R expression on orbital fibroblasts, amplifying the autoimmune reaction against TSH-R [85]. Drugs blocking PDGF signalling allow opening new therapeutic options [87,88]. Finally, in vitro studies have also highlighted the adipogenic function of PDGF, able to induce the transformation of orbital fibroblasts into adipocytes [89]. This mechanism participates in the extension of orbital tissue during TAO course. Adipogenesis is also induced by IL-1β through an increase of cyclooxygenase-2 (COX-2). This enzyme, known to modulate inflammation, is anticipated to be a central element of the active phase of TAO disease. Its mRNA and protein levels have been shown to be overexpressed in orbital fibroblasts of TAO patients, [90] and hyaluronic acid (HA) seems involved in its regulation. Nevertheless, the interest of COX-2 is not definitively assessed as other studies revealed no modification of its expression [91]. On the other hand, at the transcript levels, TGF-β receptor, IGF-1, and insulin-like growth factor binding protein-6 (IBP-6) appear to be downregulated [84]. 3.7. Biomarkers in the Tears of TAO Patients. From our point of view, the most promising fluid for TAO in the future will be probably tears. Surprisingly, until now, tears and their clinical relevance have been poorly studied. With the noninvasive, easy, and rapid collection of samples, tear-based approaches open up new routes for diagnostic methods and for understanding of both ocular and systemic diseases. Tears play a key role in the correct function and health of the eye. Tears are necessary for the lubrication of the eye surface that ensures the appropriate optical properties and for the nutrition and protection of the surrounding tissues. Tears are secreted by the lachrymal glands and contain electrolytes, nucleotides, lipids, metabolites, and proteins. But these components can also be released from the surrounding damaged tissues or by passive transport from the blood. The production and composition of tears are therefore a dynamic system that depends on environmental factors, stimulus, infection, or disease. Consequently, the ability to measure any subtle modification targeting one or several biomarkers in tear contents opens promising opportunities for screening not only ocular but also systemic diseases. The behaviour of the proinflammatory proteins in the blood could be extrapolated to tears. A profile similar to that observed in the blood can be highlighted in the tears with a net increase of IL-1β, IL-6, and IL-17 in active compared to inactive TAO patients [62]. Another important actor of inflammation, tumor necrosis factor (TNF)-α, has been measured only in tears. Its concentration is higher in inactive and active TAO patients than that in control ones [62]. 
Moreover, two polymorphisms (−1031T/C and −863C/A) of TNF-α gene have been found in samples from a Japanese population with a dramatic increase in patients with Graves' disease suffering from TAO in comparison to those without TAO. In addition, these polymorphisms seem associated to the severity of TAO [92]. Interleukin-7 has also been reported in tears and orbital fat and is suspected to change according to the different phases of the disease [93]. Finally, using proteomic experiments, potential new candidates have been revealed such as proline-rich-protein members (PROL1/PRP4) involved in the modulation of the microflora of the eye and presenting protective function [94,95] or S100 calcium-binding proteins (S100A8/S100A9) modulating inflammation and cell adhesion [94,95]. Conclusions The story of TAO biomarkers is just starting: efficient biomarkers used in routine for TAO have still to be discovered. Ideally, they will offer a new opportunity for improving early diagnosis, follow-up, and treatment monitoring. Further, it could help to a better understanding of pathophysiology and permit new personalized therapeutic strategies. Nevertheless, to be a success story, biomarker discovery should carefully consider the best source of samples in relation to the clinical question and the characteristics of the TAO disease. In order to be extended on a larger scale and finally to the whole population, we strongly believe that a good source of biomarkers should take into account sample collection feasibility. Orbital fat or muscles, even if highly specific, will not be easy to obtain and their collection is an invasive method. They can be collected during a surgery of orbital decompression, which is possible only when the inflammation is calming down. It means also that such samples could not be extrapolated for basic diagnosis nor used as preventive tool. In these situations, common biofluids such as the blood seem to be more appropriate for biomarker investigation. However, considering the past, whatever the disease, there has been little success in translating these findings into clinical applications. More unusual samples including tears have recently emerged as new global source of biomarkers and could be promising and innovative clinical tests in TAO disease in the near future. Because tear sampling is a noninvasive and rapid method, tear-based approaches open promising avenues for diagnostic method and will allow opportunities for deepening understanding of this challenging orbital disease. In addition, as a complex mixture, tears offer the possibility of discovering not only proteins but also RNA, lipid, and metabolite biomarkers that could interestingly complement the traditional clinical tools available for ophthalmologists.
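As a footnote to this review's recurring point that TAO biomarker publications rarely report clinical performance, the sketch below (illustrative only; the 2×2 counts are invented, not drawn from any cited study) shows how sensitivity, specificity, and the positive and negative predictive values would be computed from a validation cohort:

```python
# Minimal sketch of the clinical-performance metrics the review finds
# missing from most TAO biomarker publications. The 2x2 counts are
# hypothetical and not taken from any cited study.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

print(diagnostic_metrics(tp=45, fp=5, fn=5, tn=45))
```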
2018-05-11T02:53:18.514Z
2018-03-15T00:00:00.000
{ "year": 2018, "sha1": "51895e24d08e232cb3bd5d2def4292a0e9f0ca76", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/dm/2018/7010196.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "51895e24d08e232cb3bd5d2def4292a0e9f0ca76", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247298619
pes2o/s2orc
v3-fos-license
Successful Management Foreign Body Aspiration Associated with Severe Respiratory Distress and Subcutaneous Emphysema: Case Report and Literature Review The presence of a foreign body in the airway is a potentially life-threatening clinical condition that requires urgent medical attention. We present a case of a 12-year-old boy who presented in the emergency room with a history of an episode of choking after aspiration of a foreign body, followed by severe respiratory distress and subcutaneous emphysema. Chest radiography revealed hyperinflation data, pneumothorax, and subcutaneous emphysema data. The flexible bronchoscope examination showed the presence of an inorganic foreign body impacted on the carina with tracheal lesions and laryngeal edema. It was necessary to perform a tracheostomy for its definitive extraction. The gold standard in the treatment of foreign body aspiration is bronchoscopy; although, in children, the technique adopted continues to be controversial, flexible bronchoscopy can be effective and very useful. Introduction The presence of respiratory distress in children is one of the main reasons for consultation in the emergency room (ER), responsible for up to 10% of all consultations [1]. A significant proportion of respiratory distress cases in children are caused by the presence of a foreign body in the airway (FBa). Although most cases occur in the population under 5 years of age (60-80%), around 15% of cases occur in the population aged 5-15 years, and approximately 6% in those over 15 years of age [2]. According to reports from the US Centers for Disease Control and Prevention, for the year 2000, the incidence rate of FBa was 29.9/100,000 children, responsible for 160 deaths [3]. However, the existing data, in relation to mortality due to the aspiration of a foreign body (FB) in pediatric age, are very few and varied, basically depending on the age group, the socioeconomic environment, and the clinical scenario studied. In this sense, Montana et al. [4] report an extra-hospital mortality of 36.4% related, above all, to those fatal cases where the foreign body became lodged in the larynx or trachea [5]. Meanwhile, in-hospital mortality is between 0.26 and 13.6%, and is mainly associated with late hypoxia complications and bronchoscopy complications [4], with a downward trend over time most likely related to advanced modern bronchoscopic techniques. Bronchoscopy is the diagnostic and therapeutic method indicated when FB inhalation is suspected, especially in children. It is considered the gold standard in the identification and localization of FBa [6]. The evidence shows that morbidity caused by bronchoscopy is lower than that caused by an undiagnosed FB, with a low risk of complications if performed in the first 24 h [6]. The diagnosis is clinical, and a high suspicion should always be maintained in patients presenting with a cough, shortness of breath, cyanosis, unilaterally diminished breath sounds, and air trapping on chest X-ray [7]. We present a case of an adolescent with an FB aspiration and review the literature regarding clinical manifestations, surgical and radiological findings, as well as the management of patients with this condition. Case Report A 12-year-old boy was admitted to the ER of a secondary care hospital with severe respiratory distress. After leaving school, he ingested a sweet and sour chamoy-flavored liquid candy in a container with an atomizer cap. He suddenly began to cough violently with an impending choking sensation. 
Immediately, he presented difficulty in breathing and a foreign body sensation. He was taken to the emergency department (ED) by ambulance because of his respiratory distress. Following his admission to the ER, the presence of an FBa was suspected. The choking episode, with the subsequent respiratory distress, had been witnessed by his classmates and reported to the emergency services. Upon his arrival, he was anxious, diaphoretic, and soporous, with an evident increase in the work of breathing, and had difficulty articulating words. An intravenous line had been placed, in addition to supplemental oxygen through a reservoir mask. On physical examination, he had a blood pressure of 130/107 mmHg, a respiratory rate of 28 bpm, a heart rate of 118 bpm, a temperature of 36 °C, and an oxygen saturation of 99% with supplemental oxygen through a mask with a reservoir. He presented a poor general condition, with pale skin and mucous membranes. There was swelling in his neck and right mammary region, with crepitus on palpation, and evident thoracic asymmetry. His respiratory movements were diminished. On auscultation, the vesicular murmur was audible, although greatly diminished in intensity during inspiration, and it was not audible on exhalation. Once in the ER, a complete blood count and blood chemistry, coagulation, and arterial blood gas tests were performed, as well as a plain chest X-ray in anteroposterior projection in a sitting position. Given the clinical status of the patient and the severity of the condition, it was not possible to perform other imaging studies, such as a lateral-projection chest X-ray or computed tomography of the chest. The arterial blood gases showed decompensated respiratory acidosis with hypercapnia. The chest X-ray (Figure 1) showed no evidence of an FB; however, it presented frank hyperinflation, a small pneumothorax located in the upper part of the left hemithorax, and subcutaneous emphysema in the neck and chest. The rest of the laboratory studies did not present relevant data for the case. Given the absence of a pediatric pulmonology service and the impossibility of performing a bronchoscopy in our hospital, he was referred to the pediatric pulmonology service of a tertiary care hospital (our reference hospital) because of the suspicion of an FBa. Flexible bronchoscopy under general anesthesia was considered the appropriate diagnostic-therapeutic method (since a rigid bronchoscope was not available) and was carried out two hours after his arrival (and five hours after the onset of the clinical picture). The bronchoscopic examination showed abundant bloody secretions in the oral cavity (most likely of tracheal origin) as well as in the supraglottic space, with an omega-shaped epiglottis and cartilage without alterations. The glottic space showed vocal cords without alterations; the subglottic space showed edematous stenosis with an approximately 30% reduction of the lumen; and the trachea showed membranous and cartilaginous parts with hyperemic mucosa and blood spotting (Figure 2). An inorganic FB was seen at the main carina; however, it was not possible to extract it, as it became trapped in the subglottic space, requiring an emergency tracheotomy, which was performed without complications, with FB extraction through a Björk flap. The patient was admitted to the pediatric intensive care unit (PICU), where he stayed for three days. He was discharged six days later.
Discussion An FBa can occur at any age; some studies estimate an incidence of 0.66 per 100,000 inhabitants [2]. Although it can occur in adulthood, it is more common in the pediatric age group, especially in children between 1 and 3 years of age, and up to 16% of cases occur between 5 and 15 years of age [2,3]. The risk factors for an FBa mainly involve the specific anatomical conditions of the pediatric age group and age-specific behavior. Accidental aspiration during crying, laughing, or playing also occurs, although less frequently [6]. For descriptive purposes, foreign bodies (FBs) have been divided into two large groups according to their nature: organic and inorganic. Multiple studies show a higher frequency of FBs of organic composition, which are responsible for more severe inflammatory reactions [8]. The location of the FB usually depends mainly on the size of the FB and the age of the victim. Findings have been reported in virtually the entire respiratory tract [3,6,9]. Most reported case series coincide in identifying the right main bronchus as the most frequent site and the left main bronchus as the second [7][8][9][10]. As in the case we present, some case series mention the carina as a localization site, with variable frequencies ranging from 5.4% to 29%, emphasizing that impaction of an FB at this site is often fatal [9][10][11]. Four types of obstruction have also been described (Table 1); according to the clinical manifestations shown by our patient, the type of obstruction presented was type II, or check valve [12]. This caused air trapping in the alveolar spaces with a gradual increase in interalveolar pressure, causing their rupture [13]. The escaped air dissected the pulmonary muscular sheath, causing interstitial emphysema and later subcutaneous emphysema of the thorax, which extended to the neck through the cervical fascia and may have caused pneumomediastinum [14].
Table 1. Types of bronchial obstruction that occur in the presence of a foreign body in the airway.
Type I, or bypass valve: partial obstruction of the lumen in both phases of respiration, with decreased aeration.
Type II, or check valve: allows airflow during inspiration but not during expiration.
Type III, or stop valve: airflow is not allowed during either inspiration or expiration, mainly in the event of a total obstruction or the evolution of a type II obstruction.
Type IV, or ball valve: the FB is displaced during expiration but is impacted again during inspiration.
In general, physical examination often reveals tachypnea, stridor, decreased breath sounds, wheezing and/or crepitus, and, in some cases, fever. Our patient entered the ER with tachypnea and a significant decrease in respiratory sounds [7,10]. In addition to the signs already described, some studies mention the possibility of decreased air entry on auscultation, abnormal breath sounds, asymmetry on chest inspection, nasal flaring, abnormal respiratory wheezing, nose pain, rhonchi, use of the accessory muscles of respiration, purulent discharge, bradypnea, hypoxia, and hypercapnia [2,8]. High rates of complications have been reported, mainly associated with late diagnosis.
Some of the reported complications include unilateral hyperinflation, mediastinal shift, laryngeal edema, tracheal laceration, empyema, pulmonary edema, atelectasis, persistent fever, bronchiolitis, pneumonia, presence of subcutaneous emphysema, pneumomediastinum, and pneumothorax. [4,8,11]. Subcutaneous emphysema was one of the clinical signs present in our patient and is considered a rare presentation in cases of FBs in the airway [13,15]. Foltran et al. [14] report only five cases in two different publications, estimating a frequency of 1.3% [13]; our patient presented grade IV subcutaneous emphysema, which included the entire thoracic wall and neck. The diagnosis is usually clinical, although sometimes it can be a challenge. It was found that the presence of focal hyperinflation on the chest X-ray, the asphyxia crisis witnessed, and a leukocyte count greater than 10,000 /mL show a cumulative proportion of up to 100% when all three are present [6,7]. On suspicion of the aspiration of an FB, chest radiography is suggested, with anteroposterior and lateral vertical projections; a lateral soft tissue neck radiograph is also suggested [1,6]. The most frequent findings are the visualization of the FB, although it is considered that of the FBs aspirated only 8.2-24% are radiopaque, lobar or segmental radiolucency, areas of atelectasis, and inflammatory consolidation of the pulmonary parenchyma unilateral or bilateral hyperinflation. The presence of pneumothorax, subcutaneous emphysema, and pneumomediastinum are less frequent [1,12,14]. Although it is also common to routinely obtain a simple lateral decubitus chest X-ray, some studies confer it a limited role in diagnosis, with only a sensitivity of 27% and a specificity of 67%. On the other hand, although computed tomography is superior to a chest X-ray (especially in the case of radiopaque foreign bodies), with a sensitivity of 100% and a specificity of 66.7%, it has some limitations, such as: radiation exposure and the restriction of movement required for high-quality scans, which is often not feasible in people with respiratory distress [2]. In the presence of a negative chest X-ray, and high suspicion of FB aspiration, it is necessary to perform bronchoscopy [6,15]. The rigid bronchoscope has been very useful in the diagnosis and management of diseases of the airway; the first reports made in the last century already spoke of a success rate of up to 98.3% in the extraction of FBs from the airway [16]. Rigid bronchoscopy is considered the first diagnostic-therapeutic option for the obstruction of the airway by an FB [16]. Some of the benefits provided by the rigid bronchoscope are maintenance of the airway and ventilation while anesthesia is administered, a large working channel, and the availability of large forceps and other tools to remove FBs. Some disadvantages found are the need for a high level of training and the low availability in health care centers. Many experts consider rigid bronchoscopy as the gold standard in the diagnosis and management of the aspiration of an FB in children [8,16]. However, a review by Salih et al. [2] states that flexible fiber-optic bronchoscopy is considered as the gold standard procedure for the diagnosis and treatment of FBs as it provides direct visualization of airways where the FB is lodged. 
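To make the diagnostic figures quoted above concrete, the following worked example (our own arithmetic, not taken from the cited studies) converts the reported sensitivity/specificity of the lateral chest X-ray (27%/67%) and of CT (100%/66.7%) into likelihood ratios, which shows numerically why a negative film cannot rule out an FB and why bronchoscopy remains mandatory under high suspicion:

```python
# Worked example (our own arithmetic, not from the cited studies): convert
# the quoted sensitivity/specificity of lateral chest X-ray (27%/67%) and
# of CT (100%/66.7%) into positive and negative likelihood ratios.
def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)      # how much a positive result raises suspicion
    lr_neg = (1 - sens) / spec      # how much a negative result lowers it
    return lr_pos, lr_neg

print(likelihood_ratios(0.27, 0.67))   # X-ray: LR+ ~0.8, LR- ~1.1 -> nearly uninformative
print(likelihood_ratios(1.00, 0.667))  # CT: LR- = 0 -> a negative CT effectively rules out an FB
```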
Flexible bronchoscopy is also safe and cost-effective and is preferred by many pediatricians, as it avoids the need for general anesthesia in comparison with rigid bronchoscopy. At present, there is evidence showing the usefulness of the flexible bronchoscope in the extraction of FBs from the airway in both adults and children. Multiple case series have been reported (Table 2) showing high success rates [14][15][16][17][18][19]. Table 2. Case series of the extraction of FBs using a flexible bronchoscope. Complications associated with the removal of FBs are usually minimal, especially if the procedure is performed by expert personnel [14]. The mortality rate associated with the procedure varies from 0.13% to 2.0% [2,14]. Complications associated with bronchoscopy have been classified as minor and major [2]. Among the minor complications, trauma of the lips, teeth, tongue, epiglottis, and larynx; minor hemorrhage; hypoxia; bradycardia; bronchospasm and mild laryngeal edema; fever; and subcutaneous emphysema have been observed [2,14,20]. The major complications are usually those associated with mortality. These include laryngospasm and severe bronchospasm, severe laryngeal edema requiring tracheotomy or reintubation, hypoxic brain damage, infections, atelectasis, pneumomediastinum, tracheal or bronchial laceration, perforation of the airway, failed bronchoscopy requiring a tracheotomy or thoracotomy, hemorrhage, pneumothorax, cardiac arrhythmias, and cardiac arrest [2,6,12]. Sometimes, as in the case of our patient, an additional tracheotomy may be performed in order to reduce risks, to facilitate the removal of the FB, or to protect the airway [21]. In the specific case of our patient, it was necessary to perform a tracheostomy owing to the impaction of the FB in the subglottic space and after multiple unsuccessful extraction attempts. Some experts recommend a tracheostomy to secure the airway in the presence of large FBs impacted in the subglottic space, and maintaining the tracheostomy cannula for approximately 5 days after extraction [21]. Conclusions The aspiration of an FB is a serious clinical condition which occurs predominantly in the pediatric age group and, in many cases, has fatal outcomes. Timely diagnosis is the key to successful management. A history of an episode of choking associated with the presence of acute respiratory distress and subcutaneous emphysema should make the emergency physician suspect airway obstruction by an FB. Bronchoscopy is the gold standard in the management of airway obstruction by an FB. In this sense, flexible bronchoscopy is an effective and highly useful tool for removing foreign bodies in pediatric patients, with low complication rates in expert hands. Informed Consent Statement: Written informed consent to publish the details of the case and photographs was obtained from the patient and his parents. Data Availability Statement: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
2022-03-09T16:30:44.297Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "4a2f373077748b19122468905de964f3db3c1b37", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1648-9144/58/3/396/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b00f2d0e97b0ab8cdf7e973cda4b2676966382ab", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
88687072
pes2o/s2orc
v3-fos-license
Cork Warts on Leaves of Gnetum L. (Gnetaceae) and its Phylloplane Fungi A B S T R A C T The cork warts on leaves of plants appear to be a response to mechanical injuries or pathogen penetration. Many Gnetum species regularly form cork warts on their leaf surfaces and stems. We studied the anatomy, morphology, and development of cork warts and also evaluated the probable influence of phylloplane fungi on their origin. Leaves of two species of the Gnetaceae family, G. gnemon and G. montanum, were investigated for anatomical and morphological studies of cork warts and for mycological research, and herbarium specimens of 13 Gnetum species were examined. We isolated 15 species of phylloplane ascomycetes that appear to negatively affect the continuity of the leaf epidermis and may also be able to exist as parasites. The most frequent species were Cladosporium cladosporioides, which was dominant with a frequency index of 100%, and representatives of the genera Fusarium and Phoma, whose frequency indices were 36% and 20%, respectively. Five Penicillium species were also determined to be frequent. Cork warts were found in 13 species of Gnetum growing in the natural environment. During cork wart development in leaves of G. gnemon and G. montanum, cells in local areas of the epidermis and subepidermal layers proliferate periclinally. As a result, a layer of tall, compactly packed cells emerges. Tannins inside the cell compartments and suberinization of the cell walls were demonstrated in cork warts, emphasizing a defensive function of these structures. Cork warts appear to act like "patches" on the leaf surfaces of evergreen Gnetum plants. INTRODUCTION The presence of cork warts in the leaf blades of angiosperms was first noticed at the end of the 19th century (Bachmann, 1880; Keller, 1890; Matteucci, 1897).
The term "Cork wart" was introduced by Solereder (1908) as structure that resembles lenticels and consists of a hemispherical group of suberized cells. Now a days there is no common opinion about causes of origin, development and anatomical structure of cork warts (Metcalfe and Chalk, 1950;Morretes and Venturelli, 1985). It is thought that cork warts derive due to proliferation of the stomatal subsidiary cells (Borzi, 1886;Farooqui, 1982), epidermal cells, basal cells of fell off trichomes (Keller, 1890;Farooqui, 1982) or cells of mesophyll (Borzi, 1886;Keller, 1890;Joffily and Vieira, 2010;Evans and Bromberg, 2010). The division pattern resulting in cork wart formation is attended by suberinization of the walls and partial death of the cells (Farooqui, 1982). The majority of the authors, however, have not been using the term "Periderm" concerning to cork warts. This could be explained by the absence of its typical elements that are phellogen, phellem and phelloderm (Haberlandt, 1928;Stace, 1966;Joffily and Vieira, 2010;Guimaraes et al., 2011). Some authors connect the origin of cork warts with pathogen invasion into the leaves (Ross, 1896;Dickison, Int. J. Bot., 11 (1): 10-20, 20152000. Its cells appear to isolate the pathogen from vital tissue protecting it. According to another point of view the cork wart emergence could be a response to mechanical injuries (Stace, 1966;Farooqui, 1982). In some cases cork warts originate in plants that grow in specific localities. As an example, the warts were described in the trees of mangrove from the following Rhizophoraceae, Sonneratiaceae, Plumbaginaceae and Acanthaceae families (Stace, 1966;Evans et al., 2009;Evans and Bromberg, 2010). In some species like Kandelia candel and K. obovata cork warts arise from epidermal cells (Sheue et al., 2003). Other species such as Rhizophora mangle and Rh. racemosa have the ones originated inside the leaf aerenchyma. Its gradual growth leads to destruction of an epidermis (Evans and Bromberg, 2010). Cork warts are thought to play an important role in internal tissues aeration of the mangrove plants due to Knudsen flow (Evans et al., 2009;Evans and Bromberg, 2010). Some authors had been named the cork warts as lenticels (Haberlandt, 1928;Matteucci, 1897). Among the seed plants that regularly form cork warts are the majority of species of genus Gnetum. These structures have been discovered in leaf blades, petioles and stems of G. gnemon, G. montanum and G. ula (Nautiyal et al., 1976). In the course of leaf structure investigation we have found the cork warts in some other species of Gnetum that grow in natural environment and in introduction (Pautov and Pagoda, 2015). It gives an excellent opportunity to explore the warts under greenhouse conditions. This article is devoted to research of anatomical structure, origin and development of the cork warts of Gnetum species and evaluation of probable influence to its origin by phylloplane fungi. Phylloplane is the surface of the leaf blades of higher plants. It is a specific habitat for a diversity of microorganisms: bacteria, filamentous fungi, yeasts and algae (Prabakaran et al., 2011;Saha et al., 2013;Borgohain et al., 2014). The main sources of its nutrients are excretions of plants and substances percipating from the atmosphere. Some of the phylloplane organisms are parasites of the plants (Inacio et al., 2002). Micromycetes of phylloplane play the most significant role in this community (Langvad, 1980). 
Alternaria alternata, Cladosporium cladosporioides, Gliocladium viride, Mucor racemosus and Penicillium chrysogenum are the species with the widest distribution on plant leaf surfaces; they inhabit a wide spectrum of plant species around the world (Saha et al., 2013).

MATERIALS AND METHODS
Leaf sampling and processing: Leaves of two species of the family Gnetaceae (the tree Gnetum gnemon L. and the liana G. montanum Markgr.) were investigated. Plant material was collected in greenhouse No. 20 of the botanical garden of the Komarov Botanical Institute of the Russian Academy of Sciences (RAS), Saint Petersburg, in September 2014. Collected leaves were fixed in 70° ethanol.

Research in anatomy and morphology of leaves. Softening of herbarium specimens: Small fragments of leaves were cut out from herbarium specimens and placed into weighing bottles with distilled water for 2-6 h. For softening of dried leaf fragments we used a mixture of glycerin, distilled water and ethanol (70°, 96°) in the proportion 1:1:1. The weighing bottles with plant material in the mixture were kept in a thermostat at 60-70°C for 24 h. The material was then washed with distilled water and used for the next manipulations.

Maceration method: Fragments of epidermis were obtained by the maceration method. Leaf specimens were placed into a mixture of full-strength nitric acid (HNO3) and potassium chlorate (KClO3) for 1-1.5 h, then washed with distilled water. Next, they were placed in a mixture of potassium hydroxide (KOH) and distilled water for 30-60 min, then washed with distilled water. The upper and lower epidermis were separated with a preparation needle under a Leica EZ4 stereomicroscope. Safranin was used for staining the cell walls of the epidermis.

Preparation of microscopic sections: For making transverse microscopic sections of leaf blades and petioles, their fragments were embedded in paraffin (Barykina et al., 2000). The fragments were infiltrated with a paraffin-β-limonene mixture (24 h in a thermostat at 55°C) and then with pure paraffin (7 days in a thermostat at 55°C). The microscopic sections were obtained with a SAKURA Accu-Cut SRM 200 microtome. After cutting, the reverse manipulations were performed on the sections (washing with bioclear, rehydration in ethanol of decreasing concentrations, washing in distilled water). The sections were stained with the combined staining agents alcian blue and safranin. All specimens were embedded in glycerin-gelatine medium on microscope slides and preserved under cover glasses.

Specific staining: Suberin in cork warts was stained with potassium hydroxide (KOH), which colored them bright yellow (Barykina et al., 2000). Fragments of blade and petiole epidermis with cork warts, and their transverse sections, were heated in a 30% solution of potassium hydroxide. Tannins were detected by staining the cork warts with Kartis safranin, which colored them dark red (Prozina, 1960). The specimens were placed into weighing bottles with safranin solution and kept in a thermostat at 60°C for 30-90 min. They were then cooled at room temperature and washed with acetic acid (CH3COOH) for 3 min. The fragments were next stained in alcian blue for 5 min, washed first with acetic acid for 3 min, then with distilled water, and embedded in glycerin-gelatine medium.

Photography: Material under cover glasses was photographed with Leica DM500 and Leica DM1000 microscopes and a Leica EC3 camera.
Photographs were processed in Leica Application Suite software (LAS EZ), Leica Microsystems framework.

Scanning electron microscopy: The leaf epidermis and the structure of cork warts were investigated by scanning electron microscopy (SEM). Fragments of leaf blade and petiole were dehydrated in a series of ethanol of increasing concentrations (20°, 50°, 70°, 80°, 90°, 96°, 100°). Next, they were transferred through mixtures of acetone and ethanol (100°), acetone and isoamyl acetate, and pure isoamyl acetate. Dehydrated specimens were critical-point dried in liquid carbon dioxide (CO2). Dried objects were mounted on stubs and sputter-coated with gold. Prepared specimens were examined with a JSM-6390LA scanning electron microscope.

Phylloplane search. Fungi sampling, plating and identification: Leaves of G. gnemon were taken from the middle part of the crown; those of G. montanum were collected from the high, middle and low parts of the liana stem. The sampling consisted of 10 leaves for G. gnemon and 9 leaves for G. montanum. Both abaxial and adaxial leaf surfaces were examined. Mycological samples were collected in three different ways. Firstly, fragments of colonies and fungal structures (conidia and mycelia) were transferred from the leaf blade to Petri dishes with agar with a sterile preparation needle (point isolation). This method was used for plating phylloplane fungi from the abaxial leaf surface only. Samples were collected along the main vein, along secondary veins, and from fragments of the blade surface with indications of fungal presence (developed dark-colored mycelia and fruiting hyphae). Secondly, the surfaces of leaf blades were swabbed with sterile swabs on fragments with a dark bloom indicating fungal presence; the leaf blades were previously screened under a stereomicroscope. Thirdly, fungal fragments were transferred onto the nutrient medium by the method of impression replicas. Two leaf blades of G. gnemon and three of G. montanum were treated this way. Impression replicas were taken from whole abaxial blade surfaces of G. gnemon and from the regions of the main vein and secondary veins of G. montanum leaves. Plating was carried out in Petri dishes on Czapek Dox agar. Identification was performed after germination and formation of colonies (Fig. 1a and b). The fungal species were identified on the basis of cultural characteristics and the morphology of fruiting bodies and spores, using standard texts, keys and identification manuals (Bilay and Koval, 1988; Ellis, 1971, 1976; De Hoog and Guarro, 1995; Pidoplichko, 1977, 1978; Satton et al., 2001).

Anatomy of cork warts: We examined 13 species of the genus Gnetum and found cork warts on the surfaces of their leaves: G. africanum, G. funiculare, G. gnemon, G. indicum, G. latifolium, G. laxifrutescens, G. leyboldii, G. loerzingii, G. montanum, G. paniculatum, G. philippinense, G. scandens and G. ula (Table 1). All of them had been collected from natural habitats. Identical structures were noticed in G. gnemon and G. montanum introduced in the botanical garden of the Komarov Botanical Institute RAS (St. Petersburg). Thus, cork warts have been found in representatives of the genus Gnetum both under natural conditions and in plants growing in the botanical garden. This has given us an opportunity to study the structure and development of cork warts in introduced plants as model objects. We have also assessed possible causes of their formation on Gnetum leaves.
The cork warts of G. montanum occur mostly on the lower surfaces of the leaves. These structures are distributed along the large veins (predominantly the main vein and secondary veins) (Fig. 2a) and in the petiole as well. More rarely, cork warts develop between the large veins on both surfaces of the leaf blade and in the epidermis under the minor veins. In paradermal section the cork warts have a rounded or ellipsoid shape (Fig. 2b and c). The basal area ranges from 0.012 to 0.248 mm². In leaves of G. gnemon, cork warts are mainly associated with the epidermis of the petiole. They were also found on both surfaces of the leaf blade. The cork warts are distributed solitarily or in groups on petioles and blades. Their shape is the same as in G. montanum. The basal area ranges from 0.18 to 0.29 mm²; the warts of the petiole are bigger. Cork warts develop in both young and mature leaves. Initially, periclinal divisions of epidermal cells and cells of the inner tissue occur in local areas of the leaf blade (Fig. 3a and 4a). Regular rows and layers of cells form in the course of these divisions (Fig. 3b, c and 4b). As a result, the cork wart is pushed up above the epidermal surface by mechanical pressure. The cells of the external layers stop dividing first (Fig. 3c). The end of division is accompanied by accumulation of tannins in their compartments (Fig. 5a and b). Suberinization of the cell walls also occurs (Fig. 5c). These cells then die. The cells of the inner layers continue to proliferate for some time (Fig. 4b-d). As a result of this division, the cell mass enlarges and presses on the external layers, further raising the cork wart above the blade surface (Fig. 4c, d). In the process, the cuticle covering the growing cork wart tears off (Fig. 2b and c). A gap forms at the base of the protruding part of the cork wart near the blade surface. The external layers of dead cells are deformed and decay (Fig. 4b and c). Gaps also form at the top of the cork wart, opening the inner cavity with its compartments. The cork warts of the petiole and the blade surface develop in a similar way. They appear as a result of epidermal and subepidermal cell divisions (Fig. 6b and c). The cells resulting from this process accumulate tannins, and their cell walls become suberinized. In some cases the epidermal cells not only divide during cork wart development but also grow intensively, stretching perpendicular to the blade surface (Fig. 6d). Warts with this cell property protrude strongly above the leaf surface. With time, some cells become disrupted.

Phylloplane fungi: Thirteen species of micromycetes were isolated and determined from the phylloplane of G. gnemon: six species on the adaxial surface of the leaves and ten on the abaxial one (Table 2). Ten species of fungi were obtained from the blade surface of G. montanum: four species on the upper epidermis and eight on the lower one (Table 2). All determined species belong to the ascomycetes. Figure 7 and Table 2 show the indices of fungal occurrence. In addition, Mycelia sterilia were present in the Petri dishes of the examined samples; they grow as light-colored or dark-colored fungal isolates without fruiting hyphae or conidia. Cladosporium cladosporioides was the dominant phylloplane ascomycete (Fig. 1), with a frequency index of 100% (Fig. 7). It was present on both surfaces of the leaf blades in all samples, where it was usually dominant or formed a monoculture. The genus Penicillium showed the highest level of biodiversity compared with the other genera (Fig. 7).
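The frequency index used here and below is simply the percentage of samples in which a given species occurs. A minimal Python sketch of this calculation is given below; the presence/absence records are hypothetical placeholders, not the actual data of Table 2.

    # Frequency index: percentage of leaf samples in which a species occurs.
    # The sample records below are hypothetical placeholders.
    samples = [
        {"Cladosporium cladosporioides", "Phoma sp."},
        {"Cladosporium cladosporioides", "Fusarium sp."},
        {"Cladosporium cladosporioides", "Penicillium brevicompactum"},
        {"Cladosporium cladosporioides"},
    ]

    species = sorted(set().union(*samples))
    for sp in species:
        occurrences = sum(sp in sample for sample in samples)
        freq_index = 100.0 * occurrences / len(samples)
        print(f"{sp}: frequency index = {freq_index:.0f}%")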
The genus comprised five species, the most frequent being Penicillium brevicompactum (48%) and P. decumbens (24%). In a few samples Phoma sp. was dominant; its frequency index is 36%. Fusarium sp. prevailed in all samples collected by the method of impression replicas (20%). The biodiversity of fungi on the abaxial and adaxial leaf surfaces is similar (Table 2). [Table 2, fragment: dark-colored Mycelia sterilia: −, +, −, −; totals: 6, 10, 4, 8. The signs "+" and "−" mark, respectively, the presence and absence of the listed phylloplane fungi in samples plated in Petri dishes.] A comparison of the sampling methods used for collecting fungi from leaf surfaces shows that impression replicas are the less successful method for revealing ascomycetes on the leaf surface. Direct point isolation of cork wart fragments and of mycelia, conidia or hyphae with a sterile preparation needle onto agar in Petri dishes was the more productive method. The dominance of representatives of the genus Cladosporium is clearly seen under the cuticular layer of cork warts (Fig. 2e and f), where it forms vegetative and reproductive structures. Mycelia and spores are detected in great numbers inside the compartments formed by the collapsed walls of dead cells (Fig. 2d). Single hyphae and separate spores of micromycetes grow on the surface near the cork warts (Fig. 2b), which is typical for the phylloplane of tropical vascular plants. However, the density of mycelia increases towards the cork warts, reaching its maximum inside them. Overall, the SEM data show that conditions inside cork warts are highly favorable for the growth and reproduction of fungi. Evidently, in these structures they are protected from external influences and find some nutrient sources.

DISCUSSION
Many species of fungi have the ability to colonize the leaf surface of angiosperms. Some phylloplane fungi can penetrate its epidermis (Kuthubutheen, 1984). Some fungal species colonize both surfaces of the leaf blade. Others grow only on the abaxial side of the leaf, which could be explained by the thin cuticular layer there and the easier absorption of nutrients from the epidermis and mesophyll (Lee and Hyde, 2002). We successfully isolated 15 species of phylloplane ascomycetes from Gnetum gnemon and G. montanum, belonging to the following genera: Aspergillus, Cladosporium, Epicoccum, Fusarium, Paecilomyces, Penicillium and Phoma. The majority of the determined species are associated with the abaxial surface of the leaves. Several species are of specific interest. First of all, there is Cladosporium cladosporioides as the phylloplane dominant of Gnetum. It is known as an active destructor of various natural substrates and exists as a pathogen that causes the plant disease cladosporiosis (Kuthubutheen, 1984; Briceno and Latorre, 2008; Saha et al., 2013). It affects plants from different families that have significant agricultural value (tomatoes, vines, wheat, etc.) as well as ornamental plants (Kuthubutheen, 1984; Lee and Hyde, 2002; Briceno and Latorre, 2008). Other representatives, belonging to the genera Fusarium and Phoma, are also frequent; they can be as parasitic for plants as C. cladosporioides (Briceno and Latorre, 2008). Besides, there is a significant diversity of Penicillium ascomycetes on the leaf surfaces of G. gnemon and G. montanum. These species can change the pH of the surface by excreting organic acids, especially oxalic acid (Magro et al., 1984; Prusky et al., 2004).
The assimilation of acids by fungi decreases the resistance of the epidermis to pathogen penetration (Hadas et al., 2007) and is considered one of the pathogenicity factors (Dickison, 2000). Thus, the phylloplane of G. gnemon and G. montanum harbors micromycetes that appear to compromise the continuity of the leaf epidermis and are also capable of parasitism. Representatives of the genus Gnetum are evergreen plants growing in tropical rainforests (Cadiz and Florido, 2001; Tomlinson and Fisher, 2005). Epiphyllic organisms, particularly filamentous fungi, are active colonizers of the plant phylloplane under tropical conditions (Richards, 1952). The glabrate leaf surface partially prevents colonization. Another important characteristic of tropical plant leaves is the thick cuticle, which to a certain degree defends the cytoplasm of epidermal cells and the inner tissues of the blade and petiole against pathogenic phylloplane organisms. In addition, the long-lived leaves of evergreen plants are defended in other ways. We suppose that the formation of cork warts is one of these ways. We have found these structures in 13 species of Gnetum (Table 1) (Pautov and Pagoda, 2015). Their development appears to be triggered by damage to the epidermis, probably caused by several organisms living on the leaf surfaces. As noted above, such organisms are present in the phylloplane of young and mature leaves of Gnetum species. Thus, we agree with the presumptions of the researchers who suppose that cork warts develop as a response to minor damage to leaf surfaces (Dickison, 2000; Joffily and Vieira, 2010; Guimaraes et al., 2011). It is important to note that cork warts have been described for numerous angiosperms, whereas gnetums belong to one of the gymnosperm branches; we did not find any information on cork warts in other gymnosperms. The development of cork warts begins when cells in local regions of the epidermis and subepidermal layers of the blade or petiole begin to proliferate periclinally. As a result, a layer of tall, compactly packed cells emerges. It isolates minor damage and defends the vital tissues of the leaf against pathogens. The accumulation of tannins in the cells and the suberinization of the cell walls argue for a defensive function of cork warts. We emphasize that close-packed cell layers, suberinized cell walls and tannin accumulation are typical markers of wound cork (Dickison, 2000). Cork warts act like "patches" on the surfaces of the long-lived leaves of gnetums (Fig. 2a-c). Our results indicate that it is appropriate to combine methods of plant anatomy and morphology on the one hand and mycological tools on the other for understanding complex questions concerning the interactions of phylloplane fungi and the host plant.
Potentials and Challenges of Agricultural Education in reducing Postharvest losses (PHLs) and Food Insecurity in Ogun State, Nigeria

Oyediran, Wasiu Oyeleke (1); Omoare, Ayodeji Motunrayo (2)
1 Department of Agricultural Extension and Rural Development, Federal University of Agriculture Abeokuta, Nigeria
2 Department of Agricultural Science Education, Federal College of Education, Abeokuta, Ogun State, Nigeria
oyediran_wasiu@yahoo.com

Abstract: Postharvest losses (PHLs) and food insecurity are major threats to agricultural growth and development in Nigeria. The challenges are enormous, especially in rural areas where food insecurity, poverty and educational deprivation often create a vicious circle. Therefore, this study was carried out to assess the potentials and challenges of Agricultural Education in reducing PHLs and food insecurity in Ogun State, Nigeria. One hundred and twenty-five (125) respondents were selected as the sample using a simple random sampling technique. Data obtained were analyzed with descriptive statistics and chi-square. Results showed that the majority of the respondents acquired knowledge of crop production and management (83.20%), cassava processing (48.00%), poultry (57.60%), and fish production skills (41.60%). In the same vein, Agricultural Education was identified as an important driving force to reduce PHLs (x̄ = 4.22; SD = 1.18) and to facilitate quality farm products and their availability all year round (x̄ = 4.16; SD = 0.92). Results of chi-square showed a significant relationship between skill acquisition (χ² = 13.26, df = 1) and the perception of the respondents of Agricultural Education at the p < 0.05 level of significance. However, effective Agricultural Education teaching and learning was constrained by inadequate resource personnel (t = -2.492), epileptic power supply (t = 2.233), poor funding of agricultural education (t = 2.525), inadequate agricultural instructional materials (t = 2.286), poor support for agricultural research and findings (t = 6.643), and inadequate functional processing facilities (t = -4.543) at the p < 0.05 level of significance. This study concluded that Agricultural Education contributed to skill acquisition in agricultural production and to food security.

INTRODUCTION
The world population is increasing faster than the growth in the food supply, and the resources used for creating food are all becoming increasingly scarce. Reducing postharvest food losses must be an essential component of any strategy to make more food available without increasing the burden on the natural environment (World Bank, 2010). Nigeria is agrarian, and agriculture remains the hub of rural Nigeria, providing employment for over 90 percent of rural dwellers, who constitute about 70 percent of the total population. Nigeria's strengths include abundant land, labour, and natural resources (Ayodele et al., 2013). Postharvest losses (PHLs) and food insecurity are major threats to agricultural growth and development in Nigeria. The challenges are enormous, especially in rural areas where food insecurity, poverty and educational deprivation often create a vicious circle. The issue in Nigeria is inefficient postharvest agricultural systems that lead to a loss of food that people would otherwise eat, sell or barter to improve their livelihoods.
As a product moves along the postharvest chain, food wastage may occur from a number of causes, such as improper handling or bio-deterioration by micro-organisms, insects, rodents or birds. Developed countries have extensive and effective cold-chain systems to prolong product shelf-life. Additionally, more sophisticated management and new technologies continue to improve the efficiency with which food is brought into stores, displayed and sold. Computerized stock control has dramatically decreased the volume of stock held within the food chain, driving down costs (Houghton and Portougal, 1997). Traditional agricultural practices prevail in Nigeria at the subsistence level, while rural infrastructure is grossly inadequate, and both contribute to high PHLs in the country. Climbing out of these prolonged problems cannot be achieved by addressing one sector alone. It is therefore essential to explore feasible measures by which these inter-related issues can be tackled together, focusing on interventions that have the greatest effect on food security and PHL reduction. Basic education initiatives in rural areas that use agricultural or environmental experience as a means of making teaching and learning more relevant, and the potential impact of this kind of approach on food security and sustainable rural development, are very germane. The food system is a heavy component of the human environmental footprint on the planet. Advancement and sustainable development is a vital issue in a globalized world. Agriculture is a reliable source of food, income, raw materials and employment across the world, and in sub-Saharan Africa in particular. Among the most important and efficient ways to improve food security, nutrition and income for millions of small-scale farmers in Nigeria is to make sure that every bundle of vegetables, basket of tomatoes or kilogram of grain that is produced is stored properly and delivered in good condition from farm to table. It is very important to ensure that goods produced are well packaged and marketed and that they reach consumers in good condition. It is high time the problems of food and nutrition insecurity and PHLs were made a thing of the past. In line with these objectives, Agricultural Education is focused on the acquisition of individual skills and capability for occupation; it is therefore both theoretical and practical in its design and is packaged to provide knowledge and develop the skills of future youths for sustainable development. This is capable of eliminating poor agricultural practices, PHLs, food insecurity and low income in the rural areas of Nigeria. Agricultural Education has been an integral part of national development strategies because of its impact on human resources development, productivity and economic growth. It is a vital tool for economic development, enterprise productivity and profitability, national productivity and wealth creation, and individual prosperity. Moseri (2000) commented that the conditions for effective agricultural development include a high average educational attainment, adequate capital, quantity and quality of land and technology, development of quality production skills, formation of agricultural associations, and supporting services, facilities and programmes.
Although Agricultural Education has enormous potential for checking the high rate of PHLs and food insecurity in Nigeria, the sector has not been given much attention by the government. Similarly, there are scanty empirical studies on the inter-relationship of Agricultural Education, PHLs and food insecurity. The situation of high PHLs and food insecurity calls for education of the citizenry on proper food management. Adopting this approach, and the opportunities it presents, can lead to greater systemic efficiency, food safety and quality. It is against this background that this study assessed the perception of students in selected tertiary institutions of the potentials and challenges of Agricultural Education in reducing PHLs and food insecurity in Ogun State, Nigeria. The specific objectives are to:
i. identify the skills acquired by the students through Agricultural Education in the study area;
ii. assess the perception of the students of Agricultural Education as a means of reducing PHLs and food insecurity in the study area;
iii. identify constraints to effective Agricultural Education in reducing PHLs and food insecurity in the study area.

Hypotheses
H01: There is no significant association between the skills acquired by the respondents and the perception of the respondents of Agricultural Education as a means of reducing PHLs and food insecurity in the study area.
H02: Challenges to Agricultural Education have no significant influence on its contributions to skill acquisition in reducing PHLs and food insecurity in the study area.

Description of the study area
The study was carried out in Ogun State, Nigeria. Odeda and Ijebu-Ode Local Government Areas were purposively selected. The two Local Government Areas are among the twenty LGAs in Ogun State that have large numbers of rural dwellers who engage in farming activities as a means of livelihood. Each of the two areas also hosts tertiary institutions.

Sampling techniques and sample size
The Departments of Agricultural Education were selected from Federal College of Education, Osiele (FCE), Tai Solarin College of Education (TASCE), Omu, and Tai Solarin University of Education (TASUED), Ijagun, while the Community Based Farming System (COBFAS) unit was selected from the Federal University of Agriculture, Abeokuta (FUNAAB). There are 79 final-year students in the Agricultural Education Department of FCE, 287 students in the selected section of COBFAS, 53 students in the Agricultural Education Department of TASCE and 80 students in the Agricultural Education Department of TASUED. A proportional sampling technique was used to select 25% of the students (i.e. 20 from FCE, 72 from FUNAAB, 13 from TASCE and 20 from TASUED), making up 125 respondents as the sample for this study. Data for this study were collected with the aid of a questionnaire and analyzed using SPSS software. The questionnaire was subjected to face validation by consulting experts in the fields of Agricultural Extension and Rural Development; items found ambiguous or lacking in clarity were eliminated. A test-retest was carried out at an interval of two weeks with fifteen (15) agricultural students of Emmanuel Alayande College of Education, Lanlate Campus, Oyo State, to ascertain the reliability of the instrument. Total scores were computed for each week and analyzed with the Pearson Product Moment Correlation (PPMC) to obtain the correlation coefficient (r) between the two sets of scores. A reliability coefficient of 0.83 was obtained; hence, the instrument was deemed reliable.
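For illustration, a minimal Python sketch of the test-retest reliability computation just described is given below; the two score lists are hypothetical placeholders, not the actual data.

    from statistics import mean

    def pearson_r(x, y):
        # Pearson Product Moment Correlation between two sets of scores
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / (var_x * var_y) ** 0.5

    # total questionnaire scores of 15 students, two weeks apart (hypothetical)
    week1 = [62, 55, 70, 48, 66, 59, 73, 51, 64, 58, 69, 53, 61, 67, 56]
    week2 = [60, 57, 72, 50, 63, 61, 70, 49, 66, 55, 71, 52, 63, 65, 58]

    print(f"reliability coefficient r = {pearson_r(week1, week2):.2f}")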
Measurement of variables
Skills acquisition through Agricultural Education was measured on a 3-point scale of greatly acquired, somewhat acquired and not acquired. The perception of the respondents of Agricultural Education as a means of reducing PHLs and food insecurity was measured using a Likert-type scale of Strongly Agree (5), Agree (4), Undecided (3), Disagree (2), and Strongly Disagree (1). The statements are worded in both positive and negative forms to avoid bias, and the scores are reversed for the negative statements. The mean and standard deviation were estimated. Challenges to Agricultural Education were ranked by the respondents.

Method of data analysis
Data collected for this study were subjected to descriptive statistics such as percentages, means and frequency distributions, while linear regression and chi-square analysis were used to test the hypotheses. The linear regression in this study is expressed in the standard simple form implied by the listed variables, E = β0 + β1c + ei, where E = Agricultural Education (aggregate scores), c = challenges (aggregate scores), β0 and β1 are the regression parameters, and ei = error term.

Skills acquisition through Agricultural Education
According to the National Policy on Education of Nigeria (FGN, 2004), the philosophy of education is based upon a strong, united and self-reliant nation. Over eighty percent (83.20%) of the respondents greatly acquired skills in crop production and management, while 48.00% of the respondents acquired skills in cassava processing in the institutions. The implication is that the skills acquired will help youths to accelerate food production and to minimize postharvest losses through proper handling. This is in conformity with Ofoh (2009), who stated that agro-processing is an important operation for reducing spoilage, waste and other losses in the quantity and quality of farm produce between the time of harvesting and the time of marketing/consumption. Also, 57.60% of the respondents acquired skills in poultry production and 41.60% in catfish and tilapia production. This will go a long way towards producing high-quality poultry and fish products in the study area, as these skills will facilitate rapid job creation, self-empowerment, increased production and household food security. Similarly, more than half (51.20%) of the respondents had greatly acquired skills in agricultural extension practice. This is the hallmark of agricultural activities that involve continuous training and the dissemination of innovations to farmers with the aim of transforming agriculture and improving farmers' standard of living. This means that the students can use the skills acquired to establish their own small or medium-scale agricultural enterprises and become employers of labour. These knowledge and skills are very important potentials upon which Agricultural Education is built in Nigeria. However, students were unable to acquire much skill in palm-oil processing (22.20%). This situation is likely connected with the inadequacy of palm-oil processing machines in the institutions.

Perception of the respondents of Agricultural Education in reducing PHLs and food insecurity
Agricultural Education is utilitarian and stimulating, bringing theoretical ideals to practical reality. Education and social marketing strategies that strengthen local food systems and promote the cultivation and consumption of local micronutrient-rich foods are very essential for overcoming the twin problems of postharvest losses and food insecurity in Nigeria.
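Before turning to Table 2, a minimal Python sketch of the Likert scoring procedure described under "Measurement of variables" is given below; the responses are hypothetical, and negatively worded items are reverse-scored as stated above.

    from statistics import mean, stdev

    def item_stats(responses, negative=False):
        # 5-point Likert scale: SA=5, A=4, U=3, D=2, SD=1;
        # negative statements are reverse-scored (5 <-> 1, 4 <-> 2)
        values = [6 - r if negative else r for r in responses]
        return mean(values), stdev(values)

    positive_item = [5, 4, 5, 5, 4, 3, 5, 5, 4, 5]   # hypothetical responses
    negative_item = [1, 2, 1, 2, 3, 1, 2, 1, 1, 2]

    for label, resp, neg in (("positive item", positive_item, False),
                             ("negative item", negative_item, True)):
        m, sd = item_stats(resp, negative=neg)
        print(f"{label}: mean = {m:.2f}, SD = {sd:.2f}")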
In Table 2, the results showed that most (84.0%) of the respondents strongly agreed that Agricultural Education will help youths acquire knowledge and skills in farming (x̄ = 4.84; SD = 0.37), while 12.8% of the respondents strongly disagreed that the knowledge acquired may not be transferred to farmers in the rural areas (x̄ = 3.97; SD = 0.99). The reason is that some graduates of agricultural disciplines may not be willing to live in the rural areas because of the appalling state of infrastructural decay and neglect; hence they cannot have close contact with the farmers to disseminate innovations. Many (48.0%) of the respondents strongly agreed that Agricultural Education will facilitate increased food production (x̄ = 4.16; SD = 1.17) and promote youth development, job creation and empowerment (x̄ = 4.74; SD = 0.51), whereas 19.2% of the respondents strongly disagreed with the statement that it cannot provide jobs for rural people (x̄ = 3.40; SD = 1.50). The argument against the huge potential of Agricultural Education is not unconnected with the high rate of youth unemployment in Nigeria. Evidently, Nigeria is lagging behind in preparing her workforce for the challenges of the rapidly changing global economy (Adefiaye, 2004). Rising unemployment, a lack of skilled workers, high dropout rates, and the changing demographics of the workforce constitute impediments to economic growth and development in Nigeria. Eneji (2000) opined that Nigeria needs a major breakthrough in order to come out of these abject poverty situations, of which youth and graduate unemployment are major attributes. Moreover, 45.6% of the respondents strongly agreed that efficient farm management and record keeping is possible (x̄ = 3.84; SD = 1.11), and 48.8% of the respondents strongly agreed that rural-urban migration will be reduced (x̄ = 3.84; SD = 1.35). This is very possible because record keeping will minimize losses and make agriculture more profitable and attractive to youths, thereby encouraging them to farm and stay in the rural areas instead of searching for white-collar jobs in the cities. The implication is that more food will be produced for rural households and the public. In a similar vein, 55.2% of the respondents strongly agreed that innovation dissemination and adoption can only be successful through agricultural extension (x̄ = 4.16; SD = 1.17). This implies that agricultural productivity will be high and young farmers will be motivated to stay on the farm. Agriculture will be repositioned from traditional to modernized and commercial farming; fresh, nutritious and safe food will be produced abundantly, while the surplus will be processed, packaged and stored for further use through agricultural extension support. Also, 59.2% and 52.8% of the respondents agreed that PHLs will be minimized through proper handling (x̄ = 4.22; SD = 1.18) and that quality farm products will be achieved (x̄ = 4.25; SD = 0.89), respectively. Since modern methods of agricultural practice are part of the teaching in tertiary institutions, the skills acquired in postharvest technology will help to curb huge losses and the poor pricing of agricultural produce. They will also help to ensure quality products and to extend the shelf-life of agricultural commodities. In sharp contrast, 40.8% of the respondents disagreed that Agricultural Education is for teaching's sake and cannot guarantee increased food production (x̄ = 3.20; SD = 1.49). This reaction is due to the fact that most agricultural students are exposed to practical aspects.
Agricultural Education is a type of vocational training involving the equipping of learners with the knowledge and skills involved in productive agriculture. It involves the training of both the head and the hands of the learners (Ekpenyong, 2005). The respondents also indicated that fresh, quality agricultural products will be made available in the markets all year round (x̄ = 4.16; SD = 0.92), farmers' income will increase (x̄ = 4.38; SD = 0.66), and household nutrition and food security will be enhanced.

Challenges to Agricultural Education
There are many challenges to Agricultural Education in Nigeria. Currently, Agricultural Education is taught as one of the arts subjects and given an orientation as education for citizenship (Egbule, 2002). The results in Table 3 showed that inadequate resource personnel (71.20%) was ranked 1st, while epileptic power supply (66.40%) ranked 2nd among the major challenges confronting Agricultural Education in tertiary institutions in the study area. The available lecturers are over-stretched with too heavy a workload, which seriously affects the efficiency of Agricultural Education delivery in the institutions. Epileptic power supply remains a national problem; it impedes the rate of skill acquisition in agriculture because most of the machines used during practical classes are power-driven. Poor funding of agricultural education (64.8%) was ranked 3rd, and it creates a serious vacuum in the agricultural development of the nation. Inadequate agricultural instructional materials (60.8%) and poor support for agricultural research and findings (57.0%) were ranked 4th and 5th among the major constraints to Agricultural Education in the study area. This result corroborates that of Yussuf and Soyemi (2012): the problem of low-quality training among vocational students is alarming because of inadequate instructional materials. Also, too little time allocated to practical sessions (55.2%) constituted the 6th major impediment to the transfer of knowledge and skills to the students; hence, the emphasis is mostly on theory and certification rather than on skill acquisition and proficiency training. Similarly, the respondents identified inadequate functional processing facilities (53.6%), poor maintenance of infrastructure (51.20%) and a poor learning environment (44.0%) as major challenges inhibiting Agricultural Education in the study area. The implication is that the students will not be able to acquire skills under poor study conditions, which will have a bearing on the skill acquisition that is the primary objective of Agricultural Education in our tertiary institutions. These problems will have a multiplier effect on food production, handling and quality, thereby contributing to poor productivity and high PHLs.

Association between skill acquisition and PHLs and food insecurity
H01: There is no significant association between the skills acquired by the respondents and the perception of the respondents of Agricultural Education as a means of reducing PHLs and food insecurity in the study area. The results of the chi-square analysis in Table 4 showed a significant relationship between skill acquisition and the perception of the respondents of Agricultural Education at the p < 0.05 level of significance. Skills acquired in quality cassava processing (χ² = 13.26, df = 1), poultry production (χ² = 17.42, df = 1) and agricultural extension practice (χ² = 8.98, df = 1) were significant for Agricultural Education in reducing PHLs and food insecurity at the p < 0.05 level.
This relationship can be inferred from the fact that students are often exposed to many of these courses during their stay in the institutions. Meanwhile, skills acquisition in crop production and protection (χ² = 1.73, df = 1), catfish and tilapia production (χ² = 4.44, df = 1) and palm-oil processing (χ² = 2.74, df = 1) were not significant at the p < 0.05 level of significance. This may be a result of shortcomings in these fields, which call for urgent intervention in order to increase agricultural productivity and minimize losses. Thus, the alternative hypothesis (H1), that "there is a significant association between the skills acquired by the respondents and the perception of the respondents of Agricultural Education as a means of reducing PHLs and food insecurity", is accepted.

Relationship between challenges and Agricultural Education for reducing PHLs and food insecurity
H02: Challenges to Agricultural Education have no significant influence on its contributions to skill acquisition in reducing PHLs and food insecurity in the study area. The results indicated that the challenges had a significant bearing on Agricultural Education in Nigeria. Challenges such as inadequate resource personnel (t = -2.492), epileptic power supply (t = 2.233), poor funding of agricultural education (t = 2.525), inadequate agricultural instructional materials (t = 2.286), poor support for agricultural research and findings (t = 6.643), inadequate functional processing facilities (t = -4.543), and a poor learning environment (t = -3.551) were significant for Agricultural Education in reducing PHLs and food insecurity at the p < 0.05 level of significance. The more severe the problems confronting Agricultural Education, the lower the rate at which knowledge and skills will be transmitted and acquired through Agricultural Education. Consequently, agricultural productivity will be retarded and the problems of PHLs and food insecurity will be heightened. The alternative hypothesis (H1), that "challenges to Agricultural Education have a significant influence on its contributions to skill acquisition in reducing PHLs and food insecurity", is hereby accepted.

Conclusion
The study concludes that skills acquisition in crop production and management, cassava processing, poultry production, and catfish and tilapia production could make great contributions to reducing PHLs and food insecurity. The respondents strongly agreed that Agricultural Education would reduce PHLs and facilitate quality farm products and their availability all year round. Also, a significant association existed between skills acquisition and the perception of the respondents of Agricultural Education as a means of reducing PHLs and food insecurity in the study area. However, the objectives of Agricultural Education were affected by a myriad of problems, such as inadequate resource personnel, epileptic power supply, poor funding of Agricultural Education and inadequate instructional materials, among others.
Extended planetary chaotic zones

Abstract. We consider the chaotic motion of low-mass bodies in two-body high-order mean-motion resonances with planets in model planetary systems, and analytically estimate the Lyapunov and diffusion timescales of the motion in multiplets of interacting subresonances corresponding to the mean-motion resonances. We show that the densely distributed (though not overlapping) high-order mean-motion resonances, when certain conditions on the planetary system parameters are satisfied, may produce extended planetary chaotic zones ("zones of weak chaotization"), much broader than the well-known planetary connected chaotic zone, the Wisdom gap. This extended planetary chaotic zone covers the orbital range between the 2/1 and 1/1 resonances with the planet. On the other hand, the orbital space inner (closer to the host star) with respect to the 2/1 resonance location is essentially long-term stable. This difference arises because the adiabaticity parameter of subresonance multiplets specifically depends on the particle's orbit size. The revealed effect may control the structure of planetesimal disks in planetary systems: the orbital zone between the 2/1 and 1/1 resonances with a planet should normally be free from low-mass material (only material occasionally captured in the first-order 3/2 or 4/3 resonances may survive), whereas any low-mass population inner to the 2/1 resonance location should normally be long-lived (if not perturbed by secular resonances, which we do not consider in this study).

1 Introduction
The orbital architectures of planetary systems are established as results of complex cosmogonical and dynamical processes, which include planetary formation, close encounters, scattering, and migration in gas-dust disks. On the other hand, selection effects favor discoveries of long-term stable systems. That is why applications of stability criteria are necessary for explaining the observed multitude of architectures of exoplanetary systems. In this article, we reveal the existence of a weakly unstable multi-resonant zone of dominant perturbative planetary influence, which we call the extended planetary chaotic zone (EPCZ). It covers the orbital range between the 2/1 and 1/1 mean-motion resonances with the planet. Though weak, it is by far more unstable (in a number of senses) than the orbital zone with smaller orbital periods, inner to the 2/1 resonance. We argue that observed structural patterns of planetesimal disks, such as the 2/1 resonance cut-off, may arise due to this effect. The article is organized as follows. In Section 2, we briefly review relevant theoretical issues on chaotic resonance multiplets (including the standard map theory) in Hamiltonian dynamics, concentrating on how the Lyapunov exponents and diffusion rates in resonance multiplets can be estimated analytically. In Section 3, we apply the theory to characterize the chaotic mean-motion resonances in planetary systems. In Sections 4, 5, and 6, we characterize the resonances massively, building the Farey trees of resonances in model planetary systems. In Section 7, we compute covering factors of dynamical chaos, as a function of test particles' orbit size, in the same model systems. Section 8 is devoted to general discussions and conclusions.
2 Lyapunov and diffusion timescales in resonance multiplets

We adopt a model of perturbed nonlinear resonance given by the paradigmatic Hamiltonian

H = Gp²/2 − F cos φ + a cos(φ − τ) + b cos(φ + τ)   (1)

(Shevchenko, 1999, 2000), where the first two terms in equation (1) represent the Hamiltonian H₀ of the unperturbed pendulum, and the periodic perturbations are represented by the last two terms; φ is the pendulum angle (the resonance phase angle), p is the momentum, τ is the phase angle of perturbation, τ = Ωt + τ₀, Ω is the perturbation frequency, τ₀ is the initial phase of the perturbation, and F, G, a, b are constants. Many resonant systems in physics and astronomy can be canonically transformed to the perturbed pendulum model, which is thus considered in some sense a "universal" or "first fundamental" model of nonlinear resonance; see Chirikov (1979) and Shevchenko (2020) for details. In a generalized form, the perturbed pendulum model of nonlinear resonance is given by Hamiltonian (1). In the next section, we will see that this model perfectly corresponds to a Hamiltonian description of the high-order mean-motion resonances considered henceforth in this work. In equation (1), the phase φ represents a linear resonant combination of the angles of any original system; in the next section, examples of such a representation are given. The momentum p is proportional to the time derivative of φ. The momentum p and phase φ form a pair of conjugate canonical variables of the system defined by Hamiltonian (1). The system is nonautonomous, as the Hamiltonian explicitly depends on time; thus, the system has one and a half degrees of freedom. The small-amplitude libration frequency of the system on the resonance modelled by the pendulum is ω₀ = (FG)^(1/2). The "adiabaticity parameter," characterizing the relative frequency of perturbation, is defined as

λ ≡ Ω/ω₀   (2)

(Shevchenko, 2020). The unperturbed resonance full width is defined as the maximum distance (in momentum p) between its separatrices; it is equal to 4(F/G)^(1/2). Therefore, the adiabaticity parameter λ characterizes the distance (in momentum p) between the guiding and perturbing resonances in units of one quarter of the full resonance width. The rate of divergence of close trajectories (in the phase space and on the logarithmic scale of distances) is characterized by the maximum Lyapunov exponent. If the maximum Lyapunov exponent is greater than zero, then the motion is chaotic (Chirikov, 1979; Lichtenberg & Lieberman, 1992). The inverse of this quantity, T_L ≡ L⁻¹, is the Lyapunov time. It represents the characteristic time of predictable dynamics. On the other hand, knowledge of diffusion timescales allows one to judge the possibility of effective transport in action-like variables in the phase space of motion. The Hamiltonian (1) includes three trigonometric terms, corresponding to three interacting resonances forming a resonance triplet. Let the number of resonances in a resonance multiplet be greater than three. In the case of non-adiabatic chaos (λ ≳ 1/2), one may still apply, as an approximation, the theory developed in Shevchenko (2014) for the triplet case, because the influence of the "far-away" resonances is exponentially small in λ. However, if chaos is adiabatic (λ ≲ 1/2), the triplet approximation is no longer valid for a multiplet comprising more than three resonances. Therefore, let us consider a limit case, namely, the case of infinitely many interacting equal-sized and equally spaced resonances.
The standard map

y_{i+1} = y_i + K sin x_i,  x_{i+1} = x_i + y_{i+1} (mod 2π)   (3)

describes the motion in just this case, as is clear from its Hamiltonian

H(x, y, t) = y²/2 + k Σ_{m=−M}^{M} cos(x − mt)   (4)

(Chirikov, 1979), where M = ∞ and k = K/(2π)². The variables x_i, y_i of the map (3) correspond to the variables x(t_i), y(t_i) of the continuous system (4) taken stroboscopically at time intervals of 2π (see, e.g., Chirikov 1979). Alongside λ, the standard map stochasticity parameter K can as well be regarded as a measure of resonance overlap, because the adiabaticity parameter for the standard map is λ ≡ Ω/ω₀ = 2πK^(−1/2) (Shevchenko, 2020). A formula for the maximum Lyapunov exponent of the standard map at K much greater than the critical value K_G = 0.9716..., i.e., at K ≫ 1, was derived by Chirikov (1979):

L ≈ ln(K/2).   (5)

Already at K = 6 the difference between the theoretical and numerical-experimental values of L becomes less than ≈ 2% (Chirikov, 1979). The mapping period T_map of the map (3) can be expressed in the original time units of the system (4). Therefore, for the maximum Lyapunov exponent of system (4), at K ≫ 1 (or, equivalently, at λ ≪ 1), one has

L ≈ T_map⁻¹ ln(K/2),   (6)

where time is expressed in original time units. For the whole interval of definition of K, we use the approximate numerical and theoretical L(K) functions obtained in Shevchenko (2004a,b) for the main chaotic domain of the standard map phase space at any K (equations (7) and (8), given explicitly in the cited papers); here T_map is the perturbation period. The λ dependences, both numerical and theoretical, of the maximum Lyapunov exponent (normalized by ω₀) in multiplets of equal-sized and equally spaced resonances are shown in Fig. 1. Note that the normalization by ω₀ makes the given maximum Lyapunov exponent dimensionless. In Fig. 1, the curve for the septet occupies an intermediate position between the curve for the triplet and the curve for the "infinitet," i.e., for the standard map. Notwithstanding the large perturbation amplitude (the resonances are equal in size; in particular, for the triplet given by equation (1) one has ε ≡ a/F = b/F = 1), the numerical data for the triplet agree well with the separatrix map theory presented in Shevchenko (2014); in Fig. 1, this theory provides the lower solid curve. The diffusion coefficient D is defined as the mean-square spread in a selected variable per unit time (Chirikov, 1979; Meiss, 1992); in the standard map model, the selected variable is y. If y is not taken moduli 2π, then its variation is unbounded if K > K_G. At K ≫ 1, the Lyapunov time is of order 1/ln(K/2) map iterations (equation (5)), and even adjacent iterated values of the phase variables can be regarded as practically independent. Then, the normal diffusion in y at K ≫ 1 has the rate

D = ⟨(Δy)²⟩/t = K²/2   (9)

(Chirikov, 1979). According to Chirikov (1979), in the whole range 1 < K < ∞, the diffusion time characterizing the mean time of transition between neighbouring integer resonances, T_d ∝ 1/D, is given by equation (10) of that work, which at K ≫ 1 corresponds to the quasilinear diffusion law (9). Generally, the standard map theory for the diffusion rate is expected to be adequate if the number of resonances in a considered resonance multiplet is large.

3 Chaotic mean-motion resonances

In the vicinities of high-order mean-motion two-body and three-body resonances, the equations of motion are approximately reducible to those of a pendulum with periodic perturbations, given by the Hamiltonian (1); see Shevchenko (2020). This reduction provides an opportunity to analytically estimate the Lyapunov and diffusion timescales of the motion, as described in the previous Section.
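Before developing the planetary application, we note, as a numerical sanity check of the standard-map estimates used below, the following minimal Python sketch: it iterates the map (3) together with its tangent map and measures the maximum Lyapunov exponent per map iteration, to be compared with L ≈ ln(K/2) of equation (5). (In the original time units of system (4), the result should be divided by T_map = 2π.)

    import math

    def lyapunov_standard_map(K, n_iter=200_000, x=0.5, y=0.5):
        # iterate y' = y + K sin x, x' = x + y' together with the tangent map;
        # returns the maximum Lyapunov exponent per map iteration
        dx, dy = 1.0, 0.0
        total = 0.0
        for _ in range(n_iter):
            dy_new = dy + K * math.cos(x) * dx   # tangent map (uses current x)
            dx_new = dx + dy_new
            y = (y + K * math.sin(x)) % (2 * math.pi)
            x = (x + y) % (2 * math.pi)
            norm = math.hypot(dx_new, dy_new)
            total += math.log(norm)
            dx, dy = dx_new / norm, dy_new / norm
        return total / n_iter

    for K in (6.0, 10.0, 20.0):
        print(f"K = {K}: L_num = {lyapunov_standard_map(K):.3f}, "
              f"ln(K/2) = {math.log(K / 2):.3f}")

Already at K = 6 the numerical estimate agrees with ln(K/2) to within a few percent, consistently with the accuracy quoted after equation (5).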
Let us consider the restricted elliptical planar three-body problem, with a passively gravitating test particle orbiting around a primary mass m₁ and perturbed by a secondary mass m₂ < m₁. In the vicinity of a mean-motion resonance (k + q)/k (where k ≥ 1 and q ≥ 1 are integers) with the gravitating binary, the Hamiltonian of the particle's barycentric motion can be approximately represented in the perturbed-pendulum form, equation (11), derived in Holman & Murray (1996) and Murray & Holman (1997); here µ = m₂/(m₁ + m₂), ϖ is the longitude of the tertiary's (particle's) pericentre, a and e are the tertiary's semimajor axis and eccentricity, t is time, and the frequency Ω is defined below in equation (15). The angle ψ ≡ kl − (k + q)l_b, where l and l_b are the mean longitudes of the tertiary and of the primary binary. The units are chosen in such a way that the total mass (m₁ + m₂) of the primary binary, the gravitational constant, and the primary binary's semimajor axis a_b are all equal to one. The binary's mean longitude l_b = n_b t, and its mean motion n_b = 1, i.e., the time unit equals 1/(2π) of the binary's orbital period. The binary's period P_b is thus 2π, its mean motion n_b = 1, and its semimajor axis a_b = 1. The momentum Λ and phase ψ form a pair of conjugate canonical variables of the system defined by the Hamiltonian (11); they can be put in correspondence with the momentum p and phase φ of the "first fundamental" model Hamiltonian (1). We see that the model (1) can be put in correspondence with the Hamiltonian description (11) of the high-order mean-motion resonances considered henceforth. The system (11) concerns the case of an outer (with respect to the particle) perturber: the tertiary (the test particle) orbits inside the secondary's (the perturber's) orbit around the primary. Note that the d'Alembert rule concerning the zero sum of integer coefficients in the resonant angles is of course satisfied, but indirectly, because the secondary's constant longitude of pericentre is set equal to zero (on the d'Alembert rules see Morbidelli 2002). The integer non-negative numbers k and q define the resonance: the ratio (k + q)/k equals the ratio of the mean motions of the tertiary (the particle) and the secondary (the perturber) in exact resonance. As described by equation (11), if the perturber's orbit is eccentric (e_b > 0), then the resonance (k + q)/k splits into a cluster of q + 1 subresonances, p = 0, 1, . . . , q, whose resonant arguments are given by the formula

φ_p = ψ + pϖ,  p = 0, 1, . . . , q.   (12)

The coefficients ϕ_{k+q,k+p,k} of the resonant terms were derived in Holman & Murray (1996). Besides, model (11) is restricted to the resonances with q ≥ 2. The signs of the coefficients ϕ_{k+q,k+p,k} alternate when p is incremented; therefore, the coefficients with indices p and p + 2 always have the same sign. This means that, at any choice of the guiding resonance, its closest neighbours in the multiplet have coefficients with equal signs. In the model (11), the coefficients ϕ_{k+q,k+p,k} are treated as constants. According to Holman & Murray (1996) and Murray & Holman (1997), explicit analytical expressions are available for the frequencies ω₀ of small-amplitude librations on the subresonances and for the perturbation frequency Ω (equation (15)), the latter involving the Laplace coefficient b^(1)_{3/2}(α). Following Murray & Holman (1997), for the effective stochasticity parameter K in the subresonance multiplet we take the value K_eff defined therein. Further on, for evaluating β and A, we use the non-approximated expressions, i.e., the first ones in equations (18) and (19).
The stochasticity parameter K of the standard map theory (Chirikov, 1979) has the same dynamical meaning as the given K_eff; however, in the standard map case, the resonance multiplet is infinite. For the guiding subresonance we take the strongest one, that in the middle of the multiplet. Since p = 0, 1, 2, 3, ..., q, the "middle" value of p is p_mid = (q + 1)/2 − 1; and, if q = 1, then we take p_mid = 1. Let us calculate the width of the chaotic multiplet (one with overlapping subresonances, K > K_G). First of all, we define technical quantities: the half-widths (in the momentum Λ) ∆Λ₀ of the first subresonance (p = 0) and ∆Λ_q of the last one (p = q), and the analogous quantity for the middle one, evaluated at ǫ = ǫ₀ (equations (20)-(23)). The distance, in the canonical momentum variable, between the subresonances is δΛ = 2µA/β; see Murray & Holman (1997, equation (28)). For the first and last subresonances in the multiplet, the half-widths ∆a₀ and ∆a_q in semimajor axis are given by equations (24) and (25); summing, one obtains the total width ∆a_ch of the subresonance multiplet, equation (26). If ∆a_ch < 2∆Λ₀, we take ∆a_ch = 2∆a₀, and if ∆a_ch < 2∆Λ_q, we take ∆a_ch = 2∆a_q. Note that the total width of the chaotic multiplet is calculated here taking into account the half-widths of the boundary subresonances, as bounded by their unperturbed separatrices. The widths of the perturbed (split) separatrices can also be calculated (see Shevchenko 2008, 2020), but we ignore them in view of the dominating widths of the considered multiplets themselves. For application in the next sections, let us also write down an expression for the half-width ∆a_cr of the Wisdom gap (the planetary connected chaotic zone). In units of the perturber's semimajor axis a_b, it is given by

∆a_cr ≈ 1.3 µ^(2/7)   (27)

(Duncan, Quinn, & Tremaine, 1989; Murray & Dermott, 1999); concerning the accuracy of the numerical coefficient, see the discussion in Shevchenko (2020). Consider now the diffusion rates. As follows from equation (9), the diffusion coefficient in the action-like variable I = e²/2 is given by equation (28), where I₀ = e₀²/2 and I_max = e_max²/2. To compute the removal time, we set e_max = 0.4, as this eccentricity value is normally sufficient for reaching typical secular resonances in the inner Solar system (see, e.g., Morbidelli 2002). We do not take the particle's initial eccentricity equal to a particular constant value (as was adopted in Holman & Murray 1996 and Murray & Holman 1997), but take it equal to the forced eccentricity. In the perturber's vicinity, the forced eccentricity e_f,HP is given by Hénon & Petit (1986, equation (33)), and, far from the perturber, e_f,H is given by Heppenheimer (1978, equation (4)); the latter is a time-averaged quantity, hence the coefficient 2/π. At a given value of a, if e_f,HP > e_f,H, then we take e₀ = e_f,HP; otherwise we take e₀ = e_f,H. In accord with equations (28) and (29), the diffusion time is therefore given by equation (32). To use the standard map theory, we set K = K_eff. In the standard map theory, the Lyapunov exponent L is given by formula (7). Therefore, the Lyapunov time for Hamiltonian (11), expressed in the perturber's orbital periods, follows as equation (33). The diffusion time T_d is given by formula (32); therefore, in the perturber's orbital periods, the removal time T_r follows as equation (34).

4 The Farey tree of mean-motion resonances

The Farey tree technique is used in number theory to organize the rational numbers (Hardy & Wright, 1979). Here we use it to organize mean-motion resonances in a clear and straightforward way. The Farey tree is built as follows.
The Farey tree of mean-motion resonances

The Farey tree technique is used in number theory to organize the rational numbers (Hardy & Wright, 1979). Here we use it to organize mean-motion resonances in a clear and straightforward way. The Farey tree is built as follows. Consider two rational numbers m′/n′ and m′′/n′′ that are "neighbouring", i.e., m′n′′ − m′′n′ = 1. Let them form the first level of the tree. The second level of the tree is then formed by their "mediant", given by the formula m′′′/n′′′ = (m′ + m′′)/(n′ + n′′). Each next level is formed by taking mediants of the numbers obtained at all preceding levels. Thus, the third level comprises the two mediants ((m′ + m′′′)/(n′ + n′′′) and (m′′′ + m′′)/(n′′′ + n′′)) of the three numbers at the two lower levels, the fourth level comprises the four mediants of the five numbers at the three lower levels, and so on. If, at the first level, one takes m′/n′ = 0 and m′′/n′′ = 1, then the Farey tree, generated up to infinity, comprises all rational numbers in the closed interval [0, 1]. For details, see Meiss (1992, pp. 814-815); a graphical scheme of the Farey tree construction is given in fig. 26 of Meiss (1992). Concerning the motion inner to the perturber in our planetary problem, the ratio of orbital frequencies (mean motions) of the particle and the perturber is greater than one; therefore, representing mean-motion resonances by rational numbers, we define the resonances as the reciprocals of the rational numbers of the Farey tree generated in the [0, 1] segment. Recall that the order of a mean-motion resonance is given by the difference between the numerator and the denominator in its rational-number representation. It is important that, at each consequent level of the Farey tree, the order of any generated mean-motion resonance may only rise or stay constant; indeed, for the rational-number mediant m′′′/n′′′ = (m′ + m′′)/(n′ + n′′), the order of the corresponding resonance is q′′′ = n′ + n′′ − m′ − m′′, i.e., it is equal to q′ + q′′, the sum of the orders of the two lower-level generating resonances. Since the orders are non-negative, the generated resonance order cannot decrease. It is also important to note that the Farey tree covers and organizes the full set of rational numbers (Hardy & Wright, 1979; Meiss, 1992); accordingly, it covers and organizes the entire set of mean-motion resonances. For two generating integer resonances p/1 and (p + 1)/1, the mediant is (2p + 1)/2; therefore, the half-integer resonances are the mediants for the integer ones, and so on. The number of all resonances up to level k is N_res = 2^(k−1) + 1. For the first and second generating rational numbers at the first level of the Farey tree, we take, respectively, 0/1 and 1/1. They correspond to the mean-motion resonances 1/0 and 1/1 of the particle with the perturber (in their turn, these two resonances correspond to the test particle's semimajor axis a = 0 and a = 1, in units of the perturber's semimajor axis). Then, following the algorithm outlined above, we obtain the resonances 2/1 (at the second level of the tree); 3/1 and 3/2 (at the third level); 4/1, 5/2, 5/3, 4/3 (at the fourth level); and so on.
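The construction just described is fully algorithmic, so it is easy to sketch in code. The snippet below builds the Farey fractions level by level via mediants of neighbouring fractions, converts them to mean-motion resonances by taking reciprocals, and checks the count N_res = 2^(k−1) + 1 quoted above. Function and variable names are ours.

```python
# Sketch: Farey-tree generation of mean-motion resonances, as described
# in the text. Fractions m/n live in [0, 1]; the resonances are their
# reciprocals n/m (0/1 maps to "1/0", i.e., a = 0).
from fractions import Fraction

def farey_tree(levels):
    """Sorted Farey fractions after the given number of levels,
    starting from the neighbours 0/1 and 1/1 at level 1."""
    seq = [Fraction(0, 1), Fraction(1, 1)]
    for _ in range(levels - 1):
        # Insert the mediant between every pair of adjacent fractions.
        out = []
        for a, b in zip(seq, seq[1:]):
            out += [a, Fraction(a.numerator + b.numerator,
                                a.denominator + b.denominator)]
        seq = out + [seq[-1]]
    return seq

fracs = farey_tree(4)
assert len(fracs) == 2 ** (4 - 1) + 1          # N_res = 2**(k-1) + 1
print([f"{f.denominator}/{f.numerator}" for f in fracs])
# ['1/0', '4/1', '3/1', '5/2', '2/1', '5/3', '3/2', '4/3', '1/1']
# The resonance order q = n - m is additive under the mediant operation,
# so the order can only grow (or stay constant) with tree level.
```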
The "Sun - Jupiter - minor body" model system

Let us consider our Solar system with Jupiter regarded as the sole perturber, i.e., we ignore all other planets. Therefore, in the formulas of Section 3, we set the mass parameter µ = 1/1047, the secondary's eccentricity e_b = 0.048, and its orbital period P_b = 11.86 yr. Using the algorithm described in Section 4, we generate the mean-motion resonances in the inner Solar system up to level 10 of the Farey tree, and compute the Lyapunov times, removal times, and widths of the chaotic resonance multiplets, using the formulas given in Section 3. In Fig. 2, we illustrate the mean-motion resonances in the inner Solar system. In the left panel of this Figure, the stochasticity parameter K (blue dots) of the resonances is shown as a function of the tertiary's semimajor axis a. The vertical green line marks the location of the 2/1 resonance with Jupiter. The horizontal magenta and blue dotted lines correspond to K = K_G and K = 4, respectively. One may see that, in the orbital range between the 2/1 and 1/1 resonances, the values of K are orders of magnitude greater than those in the range between the 0/1 and 2/1 resonances. In the right panel, the same set of resonances is displayed, but for the product qε (blue dots). The horizontal red line marks the qε = 1 limit. We see that, for most of the high-order resonances, the product qε > 1; this means that the adopted theory can be used there solely as an extrapolation. At smaller values of µ and e_b, one may use the theory without any extrapolation, as we demonstrate in the next Section. In Fig. 3, left panel, the Lyapunov time T_L (olive dots) is shown as a function of the tertiary's semimajor axis a. The vertical green line marks the location of the 2/1 resonance with Jupiter. The vertical red line marks the inner border of the Wisdom gap (the planetary connected chaotic zone) of Jupiter, and the double vertical black, light magenta, blue, and magenta lines mark the borders of the Wisdom gaps of Mercury, Venus, Earth, and Mars, respectively. The Wisdom gap borders' locations are given by equation (27). In the right panel, the removal times T_r (olive dots) are shown for the resonances with K > K_G. The two panels demonstrate that, in the orbital range between the 2/1 and 1/1 resonances, the T_L and T_r values are orders of magnitude less than those in the range between the 0/1 and 2/1 resonances. In Fig. 4, the widths Δa_ch (in the tertiary's semimajor axis) of the subresonance multiplets of mean-motion resonances are displayed (blue dots). For the resonances with K < K_G, the widths are set to zero. The vertical green line marks the location of the 2/1 resonance with Jupiter. The vertical red line marks the inner border of the Wisdom gap of Jupiter. We see that the total width of the chaotic resonances to the left of the 2/1 resonance is simply zero, in contrast to the situation to the right of the 2/1 resonance, where many chaotic resonances of significant measure are present.

The "star - super-Earth - minor body" model system

Now let us consider a system with a much smaller value of the mass parameter µ: a "Solar-like star - super-Earth" system. The mass of the model super-Earth is set to three Earth masses, i.e., µ = 10^-5, and the super-Earth's orbital eccentricity is e_b = 0.005. As in the previous Section, we generate the mean-motion resonances in the inner model system up to level 10 of the Farey tree, and calculate the Lyapunov and removal times and the widths of the chaotic subresonance multiplets of the mean-motion resonances. In Fig. 5, we illustrate the mean-motion resonances in our model exoplanet system. In the left panel of this Figure, the stochasticity parameter K of the chaotic subresonance multiplets of the mean-motion resonances is shown as a function of the tertiary's semimajor axis a (blue dots). The semimajor axis of the super-Earth's orbit is set to one. The vertical green line marks the location of the 2/1 resonance with the super-Earth. The horizontal magenta and blue dotted lines correspond to K = K_G and K = 4, respectively.
As in Section 5 above, one may see that, in the range between the 2/1 and 1/1 resonances, the values of K are typically orders of magnitude greater than in the range between the 0/1 and 2/1 resonances. We also display (in the right panel) the same resonances, but for the product qε (blue dots). The horizontal red line marks the qε = 1 limit. We see that, for all resonances, the product qε < 1; this means that the adopted theory is valid everywhere. In Fig. 6, left panel, the Lyapunov time T_L (olive dots) is shown as a function of the tertiary's semimajor axis a. The vertical green line marks the location of the 2/1 resonance with the super-Earth. The vertical red line marks the inner border of the super-Earth's Wisdom gap. In the right panel, the removal times T_r (olive dots) are shown for the resonances with K > K_G. As in Section 5 above, the panels of Fig. 6 make it clear that, in the range between the 2/1 and 1/1 resonances, the T_L and T_r values are orders of magnitude smaller than those in the range between the 0/1 and 2/1 resonances. In Fig. 7, the widths Δa_ch (blue dots) of the subresonance multiplets of mean-motion resonances are displayed. For the subresonance multiplets with K < K_G, the widths are set to zero. The vertical green line marks the location of the 2/1 resonance with the super-Earth. The vertical red line marks the inner border of the super-Earth's Wisdom gap. As in Section 5, we see that the total width of chaotic resonances to the left of the 2/1 line is zero, whereas to the right of the 2/1 line there are many broad chaotic resonances.

Covering factors of dynamical chaos

Let us define the covering factor of chaos as the sum of the widths of the mean-motion resonances with K > K_G in a particular range of the initial orbital radii of the test minor body. This notion may seem similar to the "optical depths" used in Quillen (2011) and Hadden & Lithwick (2018) to characterize resonance ensembles, but there is a qualitative difference: the covering factor, as introduced here, concerns chaotic resonances (those with overlapping subresonances), whereas the "optical depths" take into account the widths of all resonances. In the model Solar system defined in Section 5, the sole perturber is Jupiter and, therefore, the mass parameter µ ≈ 10^-3. The covering factor of chaos in the inner Solar system (apart from the Wisdom gap of Jupiter), for the resonance ensemble up to Farey tree level 10, is shown as a function of Jupiter's eccentricity e_planet = e_b in Fig. 8. The horizontal blue dotted line represents the covering factor (the radial half-width) of the Wisdom gap of Jupiter. We see that the covering factor of the chaotic resonances inner (in orbital radius) to the Wisdom gap is always much less than the covering factor (the relative radial size) of the Wisdom gap itself. In the model exoplanet system, as defined in Section 6, the mass parameter µ = 10^-5. The covering factor of the mean-motion resonances with K > K_G in the inner system, up to Farey tree level 10, is shown as a function of the planet's eccentricity e_planet in Fig. 9. The horizontal blue dotted line represents the covering factor (the radial half-width) of the super-Earth's Wisdom gap. We see that, as the perturber's eccentricity increases, the covering factor of the chaotic resonances inner (in orbital radius) to the perturber's Wisdom gap rather rapidly starts to dominate over the covering factor of the Wisdom gap.
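The covering-factor bookkeeping itself is simple to sketch. In the snippet below the resonance list is a toy stand-in (in the paper the centres, widths, and K values come from the Section 3 formulas applied to the Farey-tree resonances up to level 10); we take K_G to be Greene's critical value of the standard map, ≈ 0.97, which is the standard choice, and we assume widths are quoted in units of the perturber's orbital radius, as in Figs. 8 and 9, with non-overlapping multiplets so that a plain sum does not double-count.

```python
# Sketch: covering factor of chaos = summed widths of the chaotic
# (K > K_G) mean-motion resonance multiplets in a chosen radial range.
K_G = 0.9716  # Greene's critical stochasticity parameter of the standard map

def covering_factor(resonances, a_min, a_max, k_crit=K_G):
    """resonances: iterable of (a_center, width, K); widths in units of a_b."""
    return sum(width for a_c, width, K in resonances
               if K > k_crit and a_min <= a_c <= a_max)

# Hypothetical resonances (centre, width, K) between the 2/1 and 1/1 locations:
toy = [(0.66, 0.003, 2.1), (0.70, 0.004, 5.7), (0.76, 0.006, 0.4)]
print(covering_factor(toy, 2 ** (-2 / 3), 1.0))  # -> 0.007 (third one is regular)
```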
Discussion and conclusions

As shown above in Sections 5 and 6, the densely distributed (though not overlapping) high-order resonances may, when certain conditions on the planetary system parameters are satisfied, produce an extended planetary chaotic zone (EPCZ): a weak-instability zone, disconnectedly but densely extending, in orbital radius, the planetary chaotic zone down to the 2/1 resonance with the planet. The extended planetary chaotic zone therefore covers the orbital range between the 2/1 and 1/1 resonances with the planet. On the other hand, the orbital space inner to the 2/1 resonance can be essentially long-term stable. What is the cause of this difference? Let us demonstrate that it can be qualitatively explained by the specific behaviour of the adiabaticity parameter of the subresonance multiplets as a function of the particle orbit's semimajor axis. The adiabaticity parameter is given by equation (2): λ = Ω/ω_0. Using equations (14) and (15) for the frequencies ω_0 and Ω, one arrives at equation (35), in which α = a/a_b, as defined above, and the coefficient C = C(µ, q, e, e_b) does not depend on α. One may see that, at q > 2, λ tends to zero if α → 1, and tends to C if α → 0. Equation (35) is valid if the particle's proper eccentricity is large enough, namely, if it is much greater than the forced eccentricity: e_proper >> e_forced. As mentioned above, the forced eccentricity in the immediate vicinity of the perturber is given, according to Hénon & Petit (1986), by equation (30), and, far from the perturber, it is given, according to Heppenheimer (1978), by equation (31). If e_proper << e_forced, one should substitute e = e_forced in the expression for ω_0, equation (14). Then, for the adiabaticity parameter at locations close to the perturber, one arrives at equation (36), with the coefficient C′ = C′(µ, q, e_b); at locations far from the perturber, one arrives at equation (37), with C′′ = C′′(µ, q, e_b). Equations (36) and (37) demonstrate that, in the whole range 0 < α < 1, at q > 1, the adiabaticity parameter λ increases if α is decreased. In other words, λ becomes larger for orbits more distant from the perturber and closer to the host star. Thus, for high-order mean-motion resonances, either at e_proper >> e_forced or at e_proper << e_forced, the adiabaticity parameter λ increases if α is decreased from 1 down to 0. Recall that the chaotic layers shrink exponentially in width if λ is increased (Chirikov, 1979; Shevchenko, 2008). On the other hand, as is clear from Figs 2-3 and 5-6, the Farey tree of mean-motion resonances forms two distinct major "nests", distinctly separated from each other by the 2/1 resonance location at α = α_1/2 = 2^(-2/3) = 0.630... The increase of λ with decreasing α radically (exponentially, as α → 0) suppresses chaos in the nest on the left of α_1/2 (in the panels of Figs 2-7), in contrast to the nest on the right of α_1/2, because the nests are far from each other. In this way, the interplay of the rise of λ with decreasing α and the broad separation of the resonance nests explains the rather sharp appearance of the EPCZ. The EPCZ phenomenon can be more or less active in determining the architectures of planetary systems: the orbital zone between the 2/1 and 1/1 resonances with a planet can be expected to be normally free from low-mass material and, perhaps, also from planets less massive than the perturber. Only the material occasionally captured in the first-order 3/2 or 4/3 resonance may survive, as in the Kepler-223 system.
On the other hand, no restrictions apply to populating the zone inner (in orbital radius) to the 2/1 resonance. In this respect, the sharp difference in global stability between the 0/1-2/1 and 2/1-1/1 orbital zones seems to agree with the available data on the known architectures of planetary systems. This first of all concerns the observed structure of planetesimal disks, such as the 2/1-resonance cut-off in observed planetary systems, including our Solar one. The main asteroid belt in the Solar system is cut off at its radial exterior by the 2/1 mean-motion resonance with Jupiter; therefore, if any material was ever substantially present in the 2/1-1/1 orbital zone (corresponding to the radial space between the 2/1 and 1/1 mean-motion resonances with Jupiter), it has been exhausted, whatever the reason for this removal may have been. Only a small amount of material captured in the first-order 3/2 and 4/3 resonances could have survived. Note that the formation of individual matter-free gaps at the 2/1 resonance is directly observed in numerical experiments, already on relatively short timescales (Demidova & Shevchenko, 2016). Among observed exoplanet systems, a prominent example of the 2/1-resonance inner cut-off of a circumstellar disk is exhibited by the HR 8799 system. This system is remarkable, being a "young" structural analogue of the Solar system. Indeed, its architecture is similar to that of ours: the orbits of its four observed giant planets are surrounded by a warm dust belt analogous to the asteroid belt in our system, and from outside they are surrounded by a cold belt analogous to the Kuiper belt (Faramaz et al., 2021). The inner part of the system (bounded in radius from above by the "asteroid belt") contains a zone of potential habitability. According to Faramaz et al. (2021), "simply put, the system of HR 8799 is a younger, broader, and more massive version of the Solar System". Its "asteroid belt", in its turn, is cut off from above by the 2/1 resonance with the innermost giant planet of this system, similar to the situation in the Solar system. What could be the mechanism responsible for a rapid-enough removal of material from the weakly unstable zone? In our Solar system, the Yarkovsky effect and the impact destruction (giving birth to asteroid families) of bodies in the main asteroid belt continuously supply material into the numerous chaotic resonant bands present inside the belt. This process monotonically, though slowly, exhausts the belt: in the chaotic bands, the eccentricity is slowly pumped up until the particles enter secular resonances, and the latter drive the material away, mostly ending in falls onto the Sun (see, e.g., Morbidelli 2002). Note that the Yarkovsky drift in semimajor axis, da/dt, can be estimated using equations (4)-(5) in Bottke et al. (2006); a rough order-of-magnitude sketch is given below. As illustrated in Fig. 2 of Bottke et al. (2006), it may provide, depending on a number of physical parameters, the rapid-enough permanent radial transport of asteroidal material. The same removal process is certainly active, to a greater or lesser degree, in the planetesimal disks of any exoplanet system. Therefore, it may more or less rapidly (depending on the system parameters) exhaust the EPCZ. For this to occur, the "clearing" chaotic resonant bands (those providing the rapid-enough eccentricity pumping) should have a sufficient covering factor (as defined in Section 7) in orbital radius. In this article, we have considered the EPCZ formed interior to the planet's orbit.
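For orientation only, here is the kind of back-of-the-envelope estimate involved. This is not equations (4)-(5) of Bottke et al. (2006); it only encodes the commonly quoted rough inverse-diameter scaling of the Yarkovsky drift for km-sized main-belt bodies, and the normalization (of order 10^-4 au/Myr at D = 1 km) is an order-of-magnitude assumption.

```python
# Rough sketch: Yarkovsky semimajor-axis drift and the time to cross a
# chaotic band. The 1/D scaling and the ~2e-4 au/Myr normalization at
# D = 1 km are assumptions, not the Bottke et al. (2006) formulas.
def yarkovsky_drift(diameter_km, norm_au_per_myr=2e-4):
    """Assumed |da/dt| in au/Myr for a main-belt body of given diameter."""
    return norm_au_per_myr / diameter_km

def band_crossing_time_myr(band_width_au, diameter_km):
    """Time to drift across a chaotic resonant band of the given width."""
    return band_width_au / yarkovsky_drift(diameter_km)

print(band_crossing_time_myr(0.01, 1.0))  # ~50 Myr for a 1-km body
```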
Any global stability properties of the outer resonance zones require a separate analysis, as those zones extend broadly, out to infinity; this will be accomplished elsewhere. In particular, such an analysis could shed light on the possible resonant/chaotic structure of external circumstellar planetesimal disks similar to the Solar system's Kuiper belt, whose resonant structure is mostly controlled by Neptune. As follows from comparing the numerical results presented in Sections 5 and 6, the dynamical importance of the EPCZ in the presence of smaller-µ perturbers tends to be much greater than in the presence of larger-µ perturbers: indeed, the results indicate that, for Earths and super-Earths orbiting Solar-like stars, the removal of material from their EPCZs is expected to be much more pronounced than for the giant planets of similar host stars.

Figure 1: The λ dependences of the maximum Lyapunov exponent (normalized by ω_0) in resonance multiplets. Dots: numerical-experimental data of Shevchenko (2014). Green (upper) solid curve: the standard map theory (given by equations (7)) for the infinite multiplet. Blue (lower) curve: the separatrix map theory for the equally spaced, equally sized triplet, as described in Shevchenko (2014). Adapted from Shevchenko (2014, Fig. 7).

Figure 9: The chaos covering factor in the inner model "Sun-like star - super-Earth - minor body" system (the sum of the widths of the mean-motion resonances with K > K_G, up to Farey tree level 10), as a function of the super-Earth's eccentricity e_planet (black solid curve). The widths are in units of the super-Earth's orbital radius. Blue dotted horizontal line: the covering factor (the radial half-width) of the super-Earth's Wisdom gap.
2022-07-25T15:03:49.716Z
2022-07-22T00:00:00.000
{ "year": 2022, "sha1": "16e2228d0c957f16d61c626c3e5c99488b2aee6b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3a1a0af5813cc0d0df976bb1a68059c79f9de26a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
7738318
pes2o/s2orc
v3-fos-license
The Relative Age Effect on Soccer Players in Formative Stages with Different Sport Expertise Levels

Abstract The Relative Age Effect (RAE) in sport has been targeted by many research studies. The objective of this study was to analyze, in amateur clubs, the RAE of soccer players according to the sport expertise level of the team (e.g., A, B, C and subsequent) that they belonged to within the same game category. 1,098 soccer players in formative stages took part in the study, with ages varying between 6 and 18 years old (U8 to U19 categories). All of them were members of 4 Spanish federated clubs. The birth dates were classified into 4 quartiles (Q1 = Jan-Mar; Q2 = Apr-Jun; Q3 = Jul-Sept; Q4 = Oct-Dec) according to the team they belonged to. The results obtained in the chi-squared test and the d value (effect size) revealed the existence of the RAE in the teams with the highest expertise levels, "A" (χ² = 15.342, p = .002, d = 0.4473) and "B" (χ² = 10.905, p = .012, d = 0.3657). However, in the lower-level teams, "C and subsequent", this effect was not observed. The present findings show that players born during the first months of the year tend to be selected to play in the teams with the higher sport expertise level of each category, due to their physical maturity. Consequently, this causes differences in terms of the experience they accumulate and the motivation that this creates in these players.

Introduction

To guarantee equality in competition and provide young people with the same opportunities for success, athletes in sport are grouped into different game categories based on their chronological age (Musch and Grondin, 2001). However, using this criterion does not seem adequate, given that there is a 12 months' difference between the youngest and the oldest athletes in a category. In this sense, and because considerable anthropometric and physiological changes take place during adolescence, an athlete born in January (relatively older) will have up to one year's advantage over another athlete born in December (relatively younger) (Arrieta et al., 2015). The consequence of this, translated into the tendency to choose the oldest players in each category, is known as the Relative Age Effect (RAE) (Gil et al., 2014). The RAE has been studied in different sports such as hockey (Nolan and Howell, 2010), rugby (Till et al., 2009), volleyball (Okazaki et al., 2011) and, especially, soccer (Gil et al., 2007; Sallaoui et al., 2014). In soccer, numerous studies focused on the RAE have determined that players born during the first months of the year are identified with success in this sport (Augste and Lames, 2011; Helsen et al., 2005; Van den Honert, 2012). Hence, prestigious clubs, or those playing in higher leagues, select these players. However, in soccer, this selection process is perceived not only among clubs, but also among teams with different levels within the same category (e.g., team A, team B, team C, etc.) (González-Víllora et al., 2015). In this sense, a player selected for team A will play in different tournaments and competitions than a player in team C, and will also, in turn, play against the most competitive teams of the category (Helsen et al., 2005). The latter positively determines the player's development. In this regard, Díaz del Campo et al. (2010) and Romann and Fuchslocher (2011) point out that these players, who were born during the first months of the year, have more experience and have trained for more hours.
Furthermore, it is normal for the best teams to train with the best coaches, play with high-level athletes and participate in the most prestigious competitions (Figueiredo et al., 2009), all of which provides them with advantages in performance and in future selections (Augste and Lames, 2011). Additionally, Helsen et al. (2005) point out that this selection increases both the players' intrinsic (perceived competence) and extrinsic (appreciation by coaches and parents) motivation, as it encourages them to improve their skills. This becomes a vicious circle where children born at the beginning of the year seem to have a great advantage, in terms of sports performance, over those born at the end of the year. Considering the reasons for this selection, Augste and Lames (2011) reached the conclusion that the search for short-term performance and success prevails in formative categories. These authors indicated that coaches, instead of trying to identify the more talented players who could provide them with better results in the future, selected those born in the first quartile. In addition, nowadays, both club coaches and directors, as well as the players' parents, are oriented towards competitive, rather than formative, sport from a very early age. Thus, we may notice how children born at the end of the year spend less time on the playing field during matches (Díaz del Campo et al., 2010; Romann and Fuchslocher, 2011). Studying this question in even greater depth, Kirkendall (2014) pointed out that sport initiation should be an opportunity for young players to improve their skills, increase their tactical awareness, improve fitness, and enjoy playing with others of the same level. Although numerous studies have analyzed the RAE in different sports such as soccer (González-Víllora et al., 2015), basketball (Saavedra et al., 2015) or volleyball (Okazaki et al., 2011), and have tried to establish its relationships with maturation and anthropometry (Lovell et al., 2015) or physical performance characteristics (Haddad et al., 2015), only a few studies have analyzed the RAE in teams with different sport expertise levels within one age category (Gutiérrez et al., 2010). Thus, we considered it important to continue this research direction by examining the reserve teams of soccer clubs, as this would provide valuable information for club directors and coaches. Therefore, the objective of this study was to analyze, in amateur clubs, the RAE of soccer players according to the sport expertise level of the team they belonged to within the same age category.

Participants

A total of 1,098 soccer players in formative stages, with ages varying between 6 and 19 (U8, U10, U12, U14, U16 and U19 categories), participated in the study. All players belonged to 4 Spanish federated clubs whose main teams were in the 3rd division or a lower category (amateur clubs). The selected clubs were located in medium-sized towns (50,000-100,000 inhabitants) and their players were recruited from the local youth of the town itself. Thus, there was very little player mobility among the different clubs.

Variables

The independent variable considered in the study was the team's sport expertise level, determined by whether the players belonged to team A, B or C in each age category. The criterion used by amateur soccer clubs in Spain was followed, i.e., the players were distributed according to their expertise level and experience in the particular category.
The best players were assigned to teams A and B, and their training was performance-oriented, while players not selected for these teams in their category were assigned to teams C, and their training was rather recreational and education-oriented. Hence, the following classification of the sport expertise level was used in this study: 1. High expertise level teams, teams A: in each age category, these consisted of players selected with performance objectives and with one year of experience in the category. 2. Intermediate expertise level teams, teams B: in each age category, these comprised players also selected with performance objectives, although without any experience in the category. 3. Low expertise level teams, teams C: in each age category, these included players who had not been selected as the best of their category, and whose participation in the club was recreation- and education-oriented.

Procedures

Firstly, the distribution of the birth dates of players in the 2015/2016 season was analyzed. The data on birth dates were provided by the directors of the clubs, and the players were divided into groups according to their month of birth. Following the guidelines of the Fédération Internationale de Football Association (FIFA), since 1997 the 1st of January has been the start of the selection year, and this is therefore the cut-off date for the soccer competition year. Thus, January is the first month of the selection year, and December the last. Therefore, players located in quartile 1 (Q1) are those born between January and March; in quartile 2 (Q2), those born from April to June; in quartile 3 (Q3), from July to September; and in quartile 4 (Q4), from October to December.

Statistical analysis

The asymmetry and kurtosis measures and the Kolmogorov-Smirnov test with Lilliefors correction were employed to establish that the sample did not present a normal distribution, justifying the use of non-parametric statistics. Then, a chi-squared test was performed to compare the relative age quartiles according to the team. Since the chi-squared test does not reveal the magnitude or direction of the existing relationship, Cohen's effect size was calculated in order to examine the differences between the teams; .20 was considered a small effect size, .50 medium and .80 large (Kraemer and Kupfer, 2006). All the analyses were performed with the SPSS 19.0 program, and statistical significance was set at p < 0.05.
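As a concrete illustration of this analysis, the sketch below runs a chi-squared goodness-of-fit test of quartile counts against a uniform expected distribution (the usual choice when no population birth-rate correction is applied) and computes the standard chi-squared effect size w = sqrt(chi2/N); we assume this w corresponds to the d the study reports. The counts are hypothetical, not the study data.

```python
# Sketch: chi-squared test of birth quartiles against a uniform expected
# distribution, plus the effect size w = sqrt(chi2 / N). Toy counts only.
import numpy as np
from scipy.stats import chisquare

counts = np.array([27, 22, 15, 13])   # hypothetical players born in Q1..Q4
chi2, p = chisquare(counts)           # expected frequencies uniform by default
w = np.sqrt(chi2 / counts.sum())
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, effect size = {w:.3f}")
```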
Results

Table 1 and Figure 1 show the distribution of birth dates according to the sport expertise level of the team (A, B and C). As can be seen, the distribution was unequal depending on the quartile in which the players were born. However, the distribution of the birth dates was significantly different only in teams A and B (high-level and intermediate-level, respectively). Between 30 and 35% of the players of these teams were born in Q1, and between 25 and 30% in Q2. Such differences were not observed in the third group (C), where the percentages were similar among the four quartiles. To study the RAE in greater depth, the effect size d was calculated, verifying that, in teams A and B, the effect had intermediate magnitude, although it was slightly greater in teams A than in teams B. (Table 1. Note: Q1-Q4 = birth quartiles 1-4; χ² = chi-square value; p = significance.)

(Figure 1. Distribution of birth dates according to the quartile in the three teams with different sport expertise levels.)

Discussion

The purpose of the study was to analyze the RAE of soccer players from amateur clubs according to the sport expertise level of the team they belonged to within the same category. The obtained results show that, in the lower categories of the four Spanish federated clubs, only the high-level and intermediate-level teams A and B presented significant differences between the quartiles, the percentage of players born in Q1 being higher. Therefore, it can be pointed out that the RAE occurs in these teams. These findings confirm that the RAE still constitutes a problem in team sports (Delorme and Raspaud, 2008; Till et al., 2010). This phenomenon is due to the fact that, nowadays, the search for immediate performance prevails over the identification of talent, whose objective is to achieve longer-term results (Augste and Lames, 2011). Thus, in formative categories, players born at the beginning of the year (relatively older) have more advantages at the time of selection than those born at the end of the year (relatively younger), because physical, physiological and psychological development is more advanced in the former (Musch and Grondin, 2001). In this regard, it appears that maturity is one of the factors that most determine the selection processes of soccer players and their consequent formation. Studying in greater depth the differences found between particular teams according to the sport expertise level (A, B and C), with regard to the higher- and intermediate-level teams (A and B), the obtained results seem to indicate that these clubs select players born during the first months of the year, who are characterized by greater physiological and anthropometric development (Delorme and Raspaud, 2008). However, in teams C, such a phenomenon was not observed. Considering this perspective, only a small number of studies have analyzed this problem. More specifically, Gutiérrez et al. (2010) did not find any significant differences between teams A, B and C, observing that the percentage of players in each quartile was the same in all of them, being much greater in Q1 than in the other quartiles; thus, the RAE occurred in all teams. However, in our study we did find differences between the teams. A possible explanation of the differences found in our research is that the young soccer players participating in the study belonged to the lower categories of clubs that competed in 3rd division leagues or lower, unlike the study by Gutiérrez et al. (2010), in which all teams belonged to the Professional Football League. In this sense, as those clubs were in higher leagues and located in bigger cities, they had more possibilities of finding and selecting players born during the first months of the year. In contrast, the clubs included in this study belonged to lower-level leagues and were located in smaller towns; thus, the selection process was limited to the local youth (Idafe, 2008). On the other hand, another possible explanation of the differences between our research and that of Gutiérrez et al. (2010) may be that the reserve teams, which belonged to lower leagues and had less prestige, included both teams directed at seeking performance (A and B) and teams directed at promoting sport (C), in which the RAE was not observed.
The consequences of the RAE on the formation of young soccer players must be pointed out: in agreement with the theory of deliberate practice, players born during the first months of the year who are selected by the best teams of each category benefit from a larger number of training hours (Díaz del Campo et al., 2010; Romann and Fuchslocher, 2011). In this sense, the experience they accumulate is greater than that of the other players, providing them with advantages in achieving peak performance (Ward and Williams, 2003; Ward et al., 2004), since, as indicated by the theory of deliberate practice, there is a significant positive relationship between practice and performance (Ericsson et al., 2006). In this regard, anthropometric characteristics are not the only decisive factors in the selection of players: as players progress through the age categories, the fact that particular players have been selected for the best teams means that the effect of experience begins to become relevant. This, in turn, increases the players' motivation (González-Víllora et al., 2015), which may be translated into greater effort and confidence. On the other hand, players born at the end of the year do not have the advantage of being trained by the best coaches and of taking part in the highest-level competitions (Figueiredo et al., 2009), a fact that increases the probability of them dropping out of sport (Figueiredo et al., 2009; Rebelo et al., 2012) and goes against the principles of development and formation of athletes in formative stages (Kirkendall, 2014).

Conclusions

We conclude that, when studying the RAE, we must consider the influence of maturity as a decisive factor in the selection of players, but also the consequences of the RAE on the formation of athletes, as nowadays the older players (belonging to Q1 and Q2) benefit from an accumulation of experience in terms of sporting practice. Finally, we consider that the problem of the RAE may be counteracted by the responsible technical staff of the sports clubs by introducing other criteria to select players according to formative objectives, in such a way that priority is given to the search for sporting talent rather than to achieving short-term results. Along this line, it is necessary to guarantee all young players the possibility of being part of the most advanced teams in each category, depending on their talent and not on their chronological age. To this end, competitions should be organized according to the child's sport expertise level, with higher-level teams playing against their equals. Thus, the number of players who drop out of school sport practice could be reduced, as competing against other players of the same level would avoid exaggerated results and the children would not become discouraged. To conclude, we must emphasize that the present investigation was a preliminary study, since the sample was small and comprised only soccer players of amateur clubs from the same region. For this reason, it would be justified to carry out comparative studies between amateur and professional clubs. Future studies analyzing the influence of the RAE on the team's expertise level in each category are necessary to expand knowledge in this area.
2018-04-03T03:39:14.034Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "bdeb754a08be0ebf759e4efb72f86bdfc49f7a56", "oa_license": "CCBYNCND", "oa_url": "https://content.sciendo.com/downloadpdf/journals/hukin/60/1/article-p167.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bdeb754a08be0ebf759e4efb72f86bdfc49f7a56", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
245856906
pes2o/s2orc
v3-fos-license
Implementation of clinical audit to improve adherence to guideline-recommended therapy in acute coronary syndrome

Background Despite global consensus on the management of acute coronary syndrome (ACS), implementation of strategies to improve adherence to guideline-directed medical therapy (GDMT) remains sub-optimal, especially in developing countries. Thus, we aimed to assess the effect of a clinical pharmacist-led clinical audit to improve the compliance of discharge prescriptions in patients admitted with ACS. It is a prospective clinical audit of ACS patients which was carried out over 12 months. The discharge prescriptions were audited by clinical pharmacists for the appropriateness of the usage of statins, dual antiplatelet therapy (DAPT), beta-blockers, and angiotensin-converting enzyme inhibitors (ACE-I)/angiotensin receptor blockers (ARB). A feedback report was presented every month to the cardiologists involved in the patient care, and the trend in the adherence to GDMT was analyzed over 12 months. Results The discharge prescriptions of 1072 ACS patients were audited for the justifiable and non-justifiable omissions of mandated drugs. The first-month audit revealed unreasonable omissions of DAPT, statins, ACE-I/ARB, and beta-blockers in 1%, 0%, 14%, and 11% of patients respectively, which reduced to nil by the end of the 11th month of the audit-feedback program. This improvement remained unchanged until the end of the 12th month. Conclusions The study revealed that periodic clinical audit significantly improves adherence to GDMT in patients admitted with ACS.

Background

ACS is a spectrum of clinical conditions that occur due to myocardial ischemia or infarction, commonly caused by an abrupt reduction in coronary blood flow. It comprises two clinical presentations, namely ST-elevation ACS and non-ST-elevation ACS [1]. The management of ACS has rapidly evolved worldwide over the past two decades, with a better focus on protocol-based pharmacotherapy [2]. Modern treatments like percutaneous coronary intervention (PCI) are proven to yield high recovery rates in patients with ACS [3]. Despite this, survivors are still at high risk of recurrent cardiovascular events. It is estimated that the short-term mortality rate at 30 days after an acute ACS event is between 2 and 3%, whereas the rehospitalization rate within 30 days is as high as 12 to 25% [4, 5]. A significant risk persists, and the key to reducing the morbidity and mortality risk in ACS is a secondary prevention plan [6]. Patients should inevitably receive appropriate medical management of coronary risk factors, irrespective of the state of revascularization. Studies show that pharmacological strategies have improved the long-term outcome of patients presenting with ACS [7, 8]. The guidelines of the American College of Cardiology/American Heart Association, as well as those of the European Society of Cardiology, advocate the collective use of antiplatelets, angiotensin-converting enzyme inhibitors/angiotensin receptor blockers, beta-blockers, and lipid-lowering agents (primarily statins) for the long-term treatment of patients after ACS [1, 9]. Many registries across the globe indicate that approximately one-half of patients do not receive recommended treatments after an ACS event [10]. Despite global consensus on the management of ACS, gaps in the implementation of adherence to guideline-directed therapy exist in developing countries [2].
Results from the Indian data on ACS point out that patients are less likely to receive evidence-based treatment, and emphasize the sub-optimal medical discharge management of ACS patients in India [11]. The purpose of the current study was to conduct a monthly clinical audit of discharge prescriptions in patients admitted with ACS. The discharge prescriptions were examined for the inclusion of recommended drugs at the recommended dosage, and the unreasonable omission of mandated drugs was highlighted to the cardiologists by clinical pharmacists. The clinical audit-feedback was carried out over 1 year, and the impact of monthly clinical audits in improving the prescription of evidence-based pharmacotherapy in patients with ACS was analyzed.

Study design

A prospective, unicentric, observational clinical audit of discharge prescriptions of ACS patients was conducted at PSG Hospitals, Coimbatore, Tamil Nadu, after clearance from the institutional human ethics committee.

Monthly clinical audit

Adherence to GDMT is an area of prime importance in clinical medicine. Ideally, the discharge prescriptions of patients admitted with ACS should include dual antiplatelet therapy, a statin, a beta-blocker, and an ACE-I/ARB, unless contraindicated for a patient, in addition to the drugs for other comorbidities. The aforementioned drugs, if omitted for valid reasons, are noted in the discharge summary of the patients to avoid confusion during patient follow-up. The inclusion of the recommended drugs in ACS was audited retrospectively based on the patients' discharge summaries. This clinical audit of mandated drugs was conducted as part of a quality improvement program in the management of ACS patients.

Study population

The study population included all patients admitted with ACS under the Department of Cardiology, PSG Hospitals, between June 2019 and June 2020.

Exclusion criteria

• ACS patients discharged from hospital against medical advice
• Death

Method

The clinical audit and feedback were conducted by cardiology clinical pharmacists. The report presentation included characteristics of patients, such as gender, age, comorbidities, and type of ACS, and details about the inclusion of guideline-directed drugs in the management of ACS. The omissions of mandated drugs were discussed based on the clinical summaries of the patients, and the unreasonable omissions of drugs were highlighted. This audit presentation was carried out at the end of every month to the cardiologists involved in the management of ACS patients, over 12 months.

Statistical analysis

The numbers of patients with unreasonable omission of DAPT, ACE-I/ARB, statins, and beta-blockers were calculated as percentages. Curve estimation analysis was performed to determine the pattern of unreasonable omissions over 1 year. A p value was considered significant if it was less than 0.05. Data were analyzed using IBM SPSS Statistics 24 and MS Excel (Microsoft Corp, Redmond, WA).
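To show what the curve-estimation step amounts to, the sketch below fits a simple linear trend to the monthly beta-blocker omission percentages transcribed from the Results reported below and tests whether the slope differs from zero. SPSS's curve estimation fits a family of such models (linear, logarithmic, exponential, and others); the linear fit is the minimal case, and using scipy for it is our substitution for the SPSS procedure.

```python
# Sketch: linear trend fit to the monthly percentage of unreasonable
# beta-blocker omissions (series taken from the Results section below).
import numpy as np
from scipy.stats import linregress

months = np.arange(1, 13)
beta_blocker_pct = np.array([11, 10, 7, 7, 7, 6, 6, 4, 2, 2, 0, 0])

fit = linregress(months, beta_blocker_pct)
print(f"slope = {fit.slope:.2f} %/month, p = {fit.pvalue:.5f}, "
      f"r^2 = {fit.rvalue**2:.2f}")   # a clearly negative, significant trend
```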
Results

A total of 1072 patients were admitted with ACS in the Department of Cardiology between June 2019 and June 2020. The audit of discharge prescriptions was conducted in 1045 ACS patients who were eligible for the study. The study comprised 75.02% male patients and 24.98% female patients.

Dual antiplatelets

The first month of the audit revealed that DAPT was not included in the discharge prescriptions of 3.4% of the ACS patients; it was discussed and pointed out that 1% of patients had an unreasonable omission of DAPT. The criteria to omit the drugs were agreed upon by the cardiologists, and the audit was conducted at the end of every month. The clinical audit conducted in the second month revealed a similar picture of irrational omissions of DAPT (1%). In the subsequent months, the unreasonable omissions of DAPT dropped to nil, which remained unchanged throughout the study period (Table 2, Fig. 1). This was statistically significant in the curve estimation analysis (Fig. 2a). Reasons for medication omission are described in Table 3.

Statins

The clinical audit of discharge prescriptions for ACS patients conducted in the first month revealed that all the prescriptions included statins. The audit presentations in the following months revealed a 1.4% omission in the eighth month, which was justified, as the statin was withdrawn temporarily until a short-term follow-up due to statin-associated muscle symptoms. The findings from successive audits showed nil omissions of statins (Table 2, Fig. 1).

ACE-I/ARB

ACE-I/ARB was excluded in 15.6% of discharge prescriptions in the first month of observation, of which 14% of omissions were unjustified. A remarkable reduction in the omission of ACE-I/ARB (4%) was observed in the second month. A marginal rise in the unreasonable omission of ACE-I/ARB was then observed compared to the second month (8% in the third month and 6% in both the fourth and fifth months).

Beta-blockers

The first month of the audit exhibited a high number of beta-blocker omissions, accounting for 19.1%, of which only 8.1% were justified. The irrational omission of beta-blockers was 10% in the second month of the clinical audit. A sustained decreasing trend of unreasonable omissions of beta-blockers was observed in the subsequent months (7% in the third, fourth and fifth months). The unjustified omissions of beta-blockers in the discharge prescriptions in the sixth and seventh months of the audit showed a borderline dip (6% in both months, compared to 7% in the preceding months). A gradual decline in the unreasonable omissions of beta-blockers was observed in the eighth and ninth months of the audit, with a drop from 6% in the preceding months to 4% and 2%, respectively. At the end of the tenth month of the audit, the unwarranted exclusion of beta-blockers stood at 2%. It was noted that, with consistent efforts, the eleventh-month audit showed that the unreasonable omissions of all mandated drugs had dropped to zero. This trend in the key pharmacotherapy of ACS was confirmed again in the 12th month (Tables 2, 3, Figs. 1, 2b).

Discussion

An optimal secondary prevention plan is indispensable in reducing cardiovascular morbidity and mortality after an ACS event [12, 13]. Significant risks persist even after a PCI, and continuous efforts are required to reduce these risks, which can be done by optimizing pharmacological treatment at discharge and follow-up [14]. Compliance with the prescription of guideline-recommended therapy constitutes an essential quality benchmark in the management of ACS. Underutilization of GDMT is still prevalent worldwide, even in developed countries [15, 16]. To the best of our knowledge, this study is the first of its kind in India highlighting the impact of an ongoing clinical audit of discharge prescriptions in a multidisciplinary forum on improving adherence to GDMT, resulting in optimal secondary prevention for ACS patients.
In this study, it was observed that, in the first month of the audit of discharge prescriptions, only 87.1% of admitted patients were discharged with all four mandatory drugs collectively. A study conducted in six Arab countries similarly revealed that only 49% of ACS patients received evidence-based discharge prescriptions [17]. The underutilization of evidence-based medications is observed not only in developing countries but also in many developed countries of the world. Retrospective cross-sectional studies conducted in Australia and Malaysia also reported the underutilization of evidence-based pharmacotherapy in eligible ACS patients [18, 19]. Multiple ACS registries displayed similar findings [20, 21]. The under-prescribing of essential drugs reported in previous studies is often equated with non-adherence to GDMT. It is important to note that the exclusion of one or more guideline-directed pharmacotherapies in ACS does not necessarily imply non-optimal therapy: evidence-based therapy is most often omitted due to justifiable patient-specific contraindications. This study, therefore, aims only to limit the unjustifiable omission of mandated drugs. Clinical pharmacists are vital in the multidisciplinary management of patients with ACS [22]. A study conducted in Saudi Arabia by Amina M. Jabri et al. showed that pharmacist-led review, feedback, and discussion with treating cardiologists improved the prescription of drugs for secondary prevention in ACS from 35 to 80% [23]. Hassan et al. also reported an increase in the utilization of drugs for secondary prophylaxis with pharmacists' involvement in clinical rounds [24]. Clinical audit has been proven to be an essential quality improvement technique [25]. Thus, we used the expertise of clinical pharmacists to conduct a clinical audit of discharge prescriptions of ACS patients, presented monthly in an open forum to the prescribers involved in the care of ACS patients. Dual antiplatelet therapy is a cornerstone of ACS management [26]. The clinical audit conducted in the first month showed a 1% unjustifiable omission of DAPT. The presence of life-threatening bleeding, coagulopathy, thrombocytopenia, aspirin allergy, or imminent surgery such as coronary artery bypass grafting warrants an omission of DAPT. Discussion of the absolute and relative contraindications of antiplatelet drugs helped reduce the unjustifiable omissions in discharge prescriptions from 1% to 0 over 12 months (p < 0.001) (Fig. 2a). Contrary to the findings of underusage of high-intensity statins in ACS in many developed countries, the retrospective audit conducted in the first month of the study period revealed that high-intensity statins were not excluded unreasonably from the discharge prescriptions of ACS patients [27]. Only efforts to maintain the existing adherence pattern had to be carried out in terms of statins, which proved successful. It is widely known that ACE-I/ARB, when initiated after an acute MI, reduces mortality, recurrent cardiovascular events, and new-onset heart failure [28]. The findings from a large US-based national registry showed that 1 in 5 eligible patients admitted with ACS failed to receive ACC/AHA class I-recommended ACE-I/ARB therapy at discharge [29]. Additionally, a study conducted in Qatar to determine the utilization of evidence-based medication in ACS also noted sub-optimal usage of ACE-I/ARB in comparison with other drugs [30].
In this study, it is noteworthy that 14% of eligible patients with diabetes, hypertension, chronic kidney disease, heart failure, or LV dysfunction with EF < 40% failed to receive ACE-I/ARB in the first month of observation. Current evidence suggests that physicians are ambivalent about the prescription of ACE-I/ARB, probably due to concerns of worsening renal failure or hyperkalemia [31]. Consistent reinforcement of the benefits and risks of these drugs, together with the projection of discharge prescription rates, helped the uptake of these drugs in ACS patients. At the end of 11 months, no eligible patient was discharged without an ACE-I/ARB after an ACS event (Fig. 2c). Beta-blockers have a class I indication in patients with ACS, if not contraindicated [32]. Over 11 months, the prescription rate of beta-blockers in ACS patients increased to 100%. This finding was in line with a study conducted by Hassan et al., which showed increased use of beta-blockers in cardiology units with the help of pharmacist involvement [24]. Our finding was in contrast to a study conducted by Thang Nguyen et al., which demonstrated that interventions targeted at healthcare professionals did not significantly improve the prescribing patterns in ACS except for statins [33] (Fig. 2c). Compliance with guideline recommendations in ACS discharge management improved significantly with an ongoing audit-feedback presentation by a clinical pharmacist to the prescribing physicians.

Conclusions

Our study provides an insight into prescription adherence to GDMT in ACS patients. It highlights ongoing education of caregivers and reinforcement as the best practice for improving adherence to guideline recommendations. This study exhibited a striking reduction in the unjustifiable omission of dual antiplatelets, statins, ACE-I/ARB, and beta-blockers through a clinical pharmacist-led monthly audit presentation to the prescribing cardiologists. Through this study, we recommend the maintenance of a GDMT checklist by the clinical pharmacist before patient discharge, along with a clinical audit of discharge prescriptions, as best practice to improve the quality of care in patients with ACS.
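A GDMT checklist of the kind recommended above can be sketched in a few lines. The four drug classes and the notion of justified omissions follow the audit logic described in this paper; the function and its inputs are otherwise our own illustrative simplification, not the audit's actual instrument.

```python
# Illustrative sketch of a discharge-prescription GDMT checklist: flag any
# of the four drug classes that is absent without a documented justification.
GDMT_CLASSES = {"DAPT", "statin", "beta-blocker", "ACE-I/ARB"}

def audit_prescription(prescribed, justified_omissions):
    """Return the set of unreasonable omissions for one discharge summary.

    prescribed: GDMT classes present on the prescription.
    justified_omissions: classes omitted for documented, valid reasons
    (e.g., active bleeding for DAPT, statin-associated muscle symptoms).
    """
    return GDMT_CLASSES - set(prescribed) - set(justified_omissions)

# Example: beta-blocker missing with no documented reason -> flagged.
print(audit_prescription({"DAPT", "statin", "ACE-I/ARB"}, set()))
# {'beta-blocker'}
```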
2022-01-12T14:40:36.070Z
2022-01-12T00:00:00.000
{ "year": 2022, "sha1": "63f53373cb7d0c10e9da0e1761234755767ad951", "oa_license": "CCBY", "oa_url": "https://tehj.springeropen.com/track/pdf/10.1186/s43044-021-00237-7", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "63f53373cb7d0c10e9da0e1761234755767ad951", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214623364
pes2o/s2orc
v3-fos-license
A relativistic outflow from the energetic, fast-rising blue optical transient CSS161010 in a dwarf galaxy

We present X-ray and radio observations of the Fast Blue Optical Transient (FBOT) CRTS-CSS161010 J045834-081803 (CSS161010 hereafter) at t = 69-531 days. CSS161010 shows luminous X-ray ($L_x\sim5\times 10^{39}\,\rm{erg\,s^{-1}}$) and radio ($L_{\nu}\sim10^{29}\,\rm{erg\,s^{-1}Hz^{-1}}$) emission. The radio emission peaked at ~100 days post transient explosion and rapidly decayed. We interpret these observations in the context of synchrotron emission from an expanding blastwave. CSS161010 launched a relativistic outflow with velocity $\Gamma\beta c\ge0.55c$ at ~100 days. This is faster than the non-relativistic AT2018cow ($\Gamma\beta c\sim0.1c$) and closer to ZTF18abvkwla ($\Gamma\beta c\ge0.3c$ at 63 days). The inferred initial kinetic energy of CSS161010 ($E_k\gtrsim10^{51}$ erg) is comparable to that of long Gamma-Ray Bursts (GRBs), but the ejecta mass that is coupled to the relativistic outflow is significantly larger ($\sim0.01-0.1\,M_{\odot}$). This is consistent with the lack of observed $\gamma$-rays. The luminous X-rays were produced by a different emission component to the synchrotron radio emission. CSS161010 is located at ~150 Mpc in a dwarf galaxy with stellar mass $M_{*}\sim10^{7}\,\rm{M_{\odot}}$ and specific star formation rate 0.3 $Gyr^{-1}$. This mass is among the lowest inferred for host galaxies of explosive transients from massive stars. Our observations of CSS161010 are consistent with an engine-driven aspherical explosion from a rare evolutionary path of a H-rich stellar progenitor, but we cannot rule out a stellar tidal disruption event on a centrally located intermediate-mass black hole. Regardless of the physical mechanism, CSS161010 establishes the existence of a new class of rare (rate < 0.25% of the core-collapse supernova rate at z~0.2) H-rich transients that can launch relativistic outflows.

INTRODUCTION

Fast Blue Optical Transients (FBOTs), or alternatively Fast Evolving Luminous Transients (FELTs), are a class of transients defined by an extremely rapid rise to maximum light (typically < 12 days), luminous optical emission (≳ 10^43 erg s^-1), and blue colors. Due to their fast rise times, they are difficult to detect and have only been identified as a class since the recent advent of high-cadence optical surveys. Only a few tens of systems have been found at optical wavelengths (e.g., Matheson et al. 2000; Poznanski et al. 2010; Ofek et al. 2010; Drout et al. 2013, 2014; Shivvers et al. 2016; Tanaka et al. 2016; Arcavi et al. 2016; Whitesides et al. 2017; Rest et al. 2018; Pursiainen et al. 2018; Tampo et al. 2020). Not all FBOT rise times and luminosities can be reconciled with standard SN models (e.g., Drout et al. 2014), and the diverse properties of the class have led to a range of proposed models. These include explosions of stripped massive stars (e.g., Drout et al. 2013; Moriya et al. 2017), shock breakout emission from an extended low-mass stellar envelope or dense circumstellar medium (CSM, e.g., Ofek et al. 2010; Drout et al. 2014), cooling envelope emission from extended stripped progenitor stars (e.g., Tanaka et al. 2016), helium shell detonations on white dwarfs (Shen et al. 2010; Perets et al. 2010), or scenarios invoking a central engine such as a magnetar or black hole (e.g., Cenko et al. 2012a; Hotokezaka et al. 2017).
However, prior to this work, only two FBOTs (AT 2018cow and ZTF18abvkwla) had been detected at radio and/or X-ray wavelengths. The variable X-ray emission (Rivera Sandoval et al. 2018), transient hard X-ray component, steep X-ray decay and multi-wavelength evolution (Margutti et al. 2019) of AT 2018cow directly indicate a driving central engine (e.g., Prentice et al. 2018; Perley et al. 2019; Kuin et al. 2019; Margutti et al. 2019; Ho et al. 2019). Another direct manifestation of a central engine is the presence of relativistic ejecta; this was recently inferred for ZTF18abvkwla (Ho et al. 2020). CRTS CSS161010 J045834-081803 (hereafter referred to as CSS161010) was discovered by the Catalina Real-time Transient Survey (Drake et al. 2009) on 2016 October 10. The transient was also detected by the All-Sky Automated Survey for Supernovae (ASAS-SN, Shappee et al. 2014) and showed a fast ∼4 day rise to maximum light at V-band (Dong et al., in prep.). Follow-up optical spectroscopic observations one week later showed a blue and featureless continuum (Reynolds et al. 2016). These characteristics identify CSS161010 as an FBOT (see Drout et al. 2014). Further spectroscopic observations by Dong et al. (in prep.) showed broad spectral features (including hydrogen) and placed CSS161010 at a distance of 150 Mpc (z = 0.034 ± 0.001). Optical spectroscopy of the transient host galaxy that we present here leads to z = 0.0336 ± 0.0011, consistent with the estimate above. In this paper we present radio and X-ray observations of CSS161010 and optical spectroscopic observations of its host galaxy. This paper is organized as follows. In §2 we present the observations of CSS161010 and its host galaxy and in §3 we infer the blast-wave properties based on the radio and X-ray observations. In §4 and §5 we respectively model the host properties and discuss models for CSS161010. Conclusions are drawn in §6. The optical observations and spectral evolution will be presented in Dong et al. (in prep.). Time is reported relative to the estimated explosion date MJD 57667 (2016 October 6; Dong et al. in prep.). 1σ uncertainties are reported unless stated otherwise (where σ² is the variance of the underlying statistical distribution). VLA observations of CSS161010 We observed CSS161010 with the NSF's Karl G. Jansky Very Large Array (VLA) through project VLA/16B-425 (PI: Coppejans) over five epochs from December 2016 to March 2018, δt = 69−530 days after explosion (Table 3 and Figure 1). To monitor the spectral evolution of the source, we observed at mean frequencies of 1.497 (L-band), 3 (S-band), 6.048 (C-band), 10.0 (X-band) and 22.135 GHz (K-band). The bandwidth was divided into 64 (K-band), 32 (X-band), 16 (C-band and S-band) and 8 (L-band) spectral windows, each subdivided into 64 2-MHz channels. The observations were taken in standard phase-referencing mode, with 3C147 as a bandpass and flux-density calibrator and QSO J0501-0159 and QSO J0423-0120 as complex gain calibrators. We calibrated the data using the VLA pipeline in the Common Astronomy Software Applications package (CASA, McMullin et al. 2007) v4.7.2, with additional flagging. For imaging we used Briggs weighting with a robust parameter of 1, and only performed phase-only self-calibration where necessary. We measured the flux density in the image-plane using PyBDSM (Python Blob Detection and Source Measurement; Mohan & Rafferty 2015) with an elliptical Gaussian fixed to the dimensions of the CLEAN beam.
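As a rough illustration of the flux-density measurement step described above, the sketch below shows how a CLEANed image could be run through bdsf, the maintained successor to PyBDSM. This is not the code used in this work: the file name and detection thresholds are placeholder assumptions.

```python
# A minimal sketch of PyBDSM-style source measurement with the bdsf
# package. The image name and thresholds are illustrative placeholders,
# not values from this paper.
import bdsf

img = bdsf.process_image(
    "css161010_Cband_image.fits",  # hypothetical CLEANed VLA image
    thresh_isl=3.0,                # island threshold (sigma)
    thresh_pix=5.0,                # peak threshold (sigma)
    fix_to_beam=True,              # fit Gaussians fixed to the CLEAN
)                                  # beam shape, as done in the text

# Write out the fitted source list (positions and flux densities).
img.write_catalog(format="csv", catalog_type="srl", clobber=True)
```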
To more densely sample the cm-band spectral energy distribution, we subdivided the available bandwidth into 128 MHz sections where possible and imaged each individually. We verified the pipeline reduction by undertaking manual flagging, calibration, imaging, and self-calibration of the first three epochs of VLA observations in CASA. The derived flux densities were consistent with the values measured from the VLA pipeline calibration. We report the flux densities from the VLA pipeline-calibrated data together with a more detailed description of each observation in Table 3. The position that we derive for CSS161010 from these radio observations is RA=04:58:34.396±0.004, dec=-08:18:03.95±0.03.

Figure 1. The 8-10 GHz light curve of CSS161010 (red stars) in the context of those of other classes of explosive transients including GRBs (blue squares), sub-energetic GRBs (light-blue squares), relativistic SNe (dark grey circles), normal H-stripped core-collapse SNe (light-grey circles), TDEs (light-green diamonds) and TDEs with relativistic jets (dark-green diamonds). Empty grey circles mark the non-detection of the very rapidly declining SN-Ic 2005ek and the rapidly rising iPTF16asu, which later showed a Ic-BL spectrum (Drout et al. 2013; Whitesides et al. 2017). CSS161010 had a radio luminosity similar to that of the sub-energetic GRB 031203 and higher than that of relativistic SNe, normal SNe and some sub-energetic GRBs. CSS161010 declined significantly more rapidly than any of these source classes, including the GRBs. The other two FBOTs with detected radio emission are also shown, with orange stars (Coppejans et al., in prep.).

GMRT observations of CSS161010 We observed CSS161010 for 10 hours with the Giant Metrewave Radio Telescope (GMRT) under the project code DDTB287 (PI: Coppejans). These observations were carried out on 2017 September 14.93, 21.96 and 19.88 UT (δt = 344−351 days after explosion) at frequencies 1390, 610 and 325 MHz, respectively (Table 3). The 33 MHz observing bandwidth was split into 256 channels at all three frequencies. We used the Astronomical Image Processing Software (AIPS) to reduce and analyze the data. Specifically, for flagging and calibration we used the FLAGging and CALibration (FLAGCAL) software pipeline developed for GMRT data (Prasad & Chengalur 2012). Additional manual flagging and calibration was also performed. We performed multi-facet imaging to deal with the field, which is significantly curved over the GMRT field-of-view. The number of facets was calculated using the SETFC task. Continuum images were made using the IMAGR task. For each observation we performed a few rounds of phase-only self-calibration and one round of amplitude and phase self-calibration. The errors on the flux density were calculated by adding those given by the task JMFIT and a 15% systematic error in quadrature. The source positions in our GMRT and VLA images are consistent. To compare the flux density scaling of the VLA and GMRT data, we took an observation at ∼1.49 GHz with each telescope (these observations were separated by two weeks and the central frequencies differed by 0.107 GHz) and the flux densities were consistent. Additionally, we confirmed that the flux density of a known point source in our GMRT 1.4 GHz image was consistent with that quoted in the National Radio Astronomy Observatory (NRAO) VLA Sky Survey (NVSS; Condon et al. 1998) source catalogue.
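For concreteness, the GMRT flux-density uncertainty described above (the JMFIT fit error plus a 15% systematic, added in quadrature) amounts to the following simple combination; the example numbers are hypothetical, not measurements from this work.

```python
import numpy as np

def gmrt_flux_uncertainty(flux_mjy, fit_err_mjy, sys_frac=0.15):
    """Combine a JMFIT statistical error with a fractional systematic
    error in quadrature, as described in the text."""
    return np.hypot(fit_err_mjy, sys_frac * flux_mjy)

# Hypothetical 3.0 mJy measurement with a 0.2 mJy fit error:
print(gmrt_flux_uncertainty(3.0, 0.2))  # ~0.49 mJy, systematics-dominated
```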
Chandra observations of CSS161010 We initiated deep X-ray observations of CSS161010 with the Chandra X-ray Observatory (CXO) on January 13, 2017 under a DDT program (PI: Margutti; Program 17508566; IDs 19984, 19985, 19986). Our CXO observations covered the time range δt ∼ 99−291 days after explosion (Fig. 2). The ACIS-S data were reduced with the CIAO software package (v4.9) and the corresponding calibration files, applying standard ACIS data filtering. A weak X-ray source is detected at the location of the optical transient in our first two epochs of observation at t ∼ 99 days and ∼130 days, while no evidence for X-ray emission is detected at ∼291 days. In our first observation (ID 19984, exposure time of 29.7 ks) we detect three photons in a 1″ region around the transient, corresponding to a 3.9σ (Gaussian equivalent) confidence-limit detection in the 0.5-8 keV energy range, at a count-rate of (1.01 ± 0.58) × 10^-4 c s^-1 (the uncertainty here reflects the variance of the underlying Poissonian process). For an assumed power-law spectrum with photon index Γ = 2 and no intrinsic absorption, the corresponding unabsorbed 0.3-10 keV flux is F_x = (1.33 ± 0.76) × 10^-15 erg s^-1 cm^-2 and the luminosity is L_x = (3.4 ± 1.9) × 10^39 erg s^-1. The Galactic neutral hydrogen column density in the direction of the transient is N_H,MW = 4.7 × 10^20 cm^-2 (Kalberla et al. 2005). Constraints on the Prompt γ-ray Emission We searched for associated prompt γ-ray emission from CSS161010 around the time of explosion with the Inter-Planetary Network (IPN; Mars Odyssey, Konus-Wind, INTEGRAL SPI-ACS, Swift-BAT, and Fermi-GBM). Based on the optical photometry of the rise (Dong et al. in prep.), we used a conservative explosion date of JD = 2457669.7 ± 2 for this search. We estimate an upper limit (90% conf.) on the 20-1500 keV fluence of ∼8 × 10^-7 erg cm^-2 for a burst lasting less than 2.944 s and having a typical Konus-Wind short GRB spectrum (an exponentially cut off power law with α = −0.5 and E_p = 500 keV). For a typical long GRB spectrum (the Band function with α = −1, β = −2.5, and E_p = 300 keV), the corresponding limiting peak flux is ∼2 × 10^-7 erg cm^-2 s^-1 (20-1500 keV, 2.944 s scale). The peak flux corresponds to a peak luminosity L_pk < 5 × 10^47 erg s^-1. For comparison, the weakest long GRBs detected have L_pk ≈ 10^47 erg s^-1 (e.g. Nava et al. 2012). Host Galaxy Observations CSS161010 has a faint host galaxy that is visible in deep optical images of the field. The location of CSS161010 is consistent with the inferred center of the host galaxy (RA=04:58:34.398 and dec=-08:18:04.337, with a separation of 0.″39). We acquired a spectrum of this anonymous host galaxy on 2018 October 10 (δt = 790 days since explosion), well after the optical transient had completely faded away. We used the Keck Low Resolution Imaging Spectrometer (LRIS) equipped with the 1.″0 slit, the 400/3400 grism for the blue side (6.5 Å resolution) and the 400/8500 grating for the red side (6.9 Å resolution), covering the wavelength range between 3400 and 10200 Å, for a total integration time of 3300 s. The 2-D image was corrected for overscan, bias and flatfields, and the spectrum was then extracted using standard procedures within IRAF¹. The spectrum was wavelength and flux calibrated using comparison lamps and a standard star observed during the same night and with the same setup. A Galactic extinction E(B − V) = 0.084 mag in the direction of the transient was applied (Schlafly & Finkbeiner 2011).

¹ http://iraf.noao.edu/
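The luminosities quoted in the two preceding subsections follow from the usual L = 4πd²F conversion; a minimal sketch, assuming d ≈ 150 Mpc (the distance adopted in this paper), is given below.

```python
import math

MPC_TO_CM = 3.086e24  # cm per Mpc

def luminosity(flux_cgs, distance_mpc=150.0):
    """Isotropic-equivalent luminosity L = 4*pi*d^2*F, in erg/s."""
    d_cm = distance_mpc * MPC_TO_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# First CXO epoch: F_x = 1.33e-15 erg/s/cm^2 -> ~3.6e39 erg/s, matching
# the quoted (3.4 +/- 1.9)e39 within the rounding of the distance.
print(f"{luminosity(1.33e-15):.2e}")

# IPN limiting peak flux ~2e-7 erg/s/cm^2 -> ~5e47 erg/s, recovering
# the quoted L_pk < 5e47 erg/s.
print(f"{luminosity(2e-7):.2e}")
```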
On 2019 February 25, we imaged the field of the host galaxy of CSS161010 in the VRI optical bands with Keck+DEIMOS, using an integration time of 720 s for each filter. We used SExtractor (Bertin & Arnouts 1996) to extract the isophotal magnitudes of the host galaxy of CSS161010. We calibrated this photometry using the fluxes of the field stars retrieved from the Pan-STARRS1 catalogue (Chambers et al. 2016). We converted the gri magnitudes of the Pan-STARRS1 field stars to Johnson/Cousins VRI magnitudes following Chonis & Gaskell (2008). The final Vega magnitudes of the host of CSS161010 are V = 21.68 ± 0.09 mag, R = 21.44 ± 0.07 mag, I = 20.91 ± 0.08 mag. We then used the same technique to extract gri magnitudes of the host from the Pan-STARRS1 data archive images of g = 21.9 ± 0.1 mag, r = 21.1 ± 0.1 mag and i = 20.6 ± 0.1 mag. We obtained near-infrared (NIR) imaging of the field of CSS161010 with MMT and the Magellan Infrared Spectrograph (MMIRS; McLeod et al. 2012) in imaging mode on 2018 November 15. We acquired JHK images with 60 s exposures for a total integration time of 900 s for J, and 1200 s for H and K. We processed the images using the MMIRS data reduction pipeline (Chilingarian et al. 2015). A separate NIR source is clearly detected at RA=04:58:34.337 dec=-08:18:04.19, 0.″91 from the radio and optical location of CSS161010 (Fig. 3). This source dominates the NIR emission at the location of CSS161010. The inferred Vega magnitudes of this contaminating source calibrated against the Two Micron All Sky Survey (2MASS) catalog (Skrutskie et al. 2006) are J = 19.24 ± 0.30 mag, H = 18.09 ± 0.08 mag, K = 17.76 ± 0.11 mag. We note that this source is also detected in the WISE (Wide-field Infrared Survey Explorer) W1 and W2 bands. To measure WISE W1 (3.4 µm) and W2 (4.6 µm) fluxes, we performed PSF photometry on the Meisner et al. (2018) unWISE coadds. These stacks have a ∼4× greater depth than AllWISE, allowing for higher S/N flux measurements. We infer Vega magnitudes of W1 = 16.94 ± 0.07 and W2 = 16.74 ± 0.17. The uncertainties were estimated via PSF fitting of Monte Carlo image realizations with an appropriate per-pixel noise model. According to Jarrett et al. (2017), W1 − W2 = 0.2 ± 0.2 mag rules out active galactic nuclei, T-dwarfs and ultra-luminous infrared galaxies. This contaminating source is therefore most likely a foreground star.

Figure 3. The Keck/LRIS spectrum has been re-scaled to the Pan-STARRS and Keck/DEIMOS photometry (blue filled circles) as part of the fitting procedure. The black line shows the best-fit FAST model, which has a total stellar mass of ∼10^7 M_⊙ and current star-formation rate ∼0.004 M_⊙ yr^-1. (Labeled emission lines: Hα, [O III], [O II].) Right Panels: Optical (V- and I-band from Keck-DEIMOS) and NIR (JHK-bands from MMT+MMIRS) images of the surroundings of CSS161010. The red cross marks the position of the centroid of the dwarf host galaxy visible in V-band and the green ellipse marks the 5σ contour of the radio transient at 6 GHz, which is consistent with the optical position of the transient. The apparent shift of the centroid of the emission in the redder bands is due to contamination by a red source (possibly a red dwarf star) almost coincident with the position of the host galaxy of CSS161010. The radio emission is not associated with the contaminating red source.
Radio Spectral Evolution and Modelling The observed radio spectral evolution is consistent with a synchrotron self-absorbed (SSA) spectrum where the self-absorption frequency ν_sa evolves to lower frequencies as the ejecta expands and becomes optically thin (Fig. 4). The optically thick and thin spectral indices derived from our best-sampled epoch (99 days post explosion) are α = 2.00 ± 0.08 and α = −1.31 ± 0.03, respectively (where F_ν ∝ ν^α). The optically thin flux density scales as F_ν ∝ ν^{−(p−1)/2}, where p is the index of the distribution of relativistic electrons responsible for the synchrotron emission, N_e ∝ γ_e^{−p}, and γ_e is the Lorentz factor of the electrons (we find p = 3.6^{+0.4}_{−0.1}). Table 1 and Figure 4 show the peak frequency ν_p (which is equivalent to the self-absorption frequency ν_sa), the peak flux density (F_p) and the parameters derived for the SSA spectrum by fitting each epoch with a broken power-law. We find ν_p ∝ t^{−1.26±0.07} and F_p ∝ t^{−1.79±0.09}, corresponding to a steep decay in the radio luminosity (L_{8 GHz} ∝ t^{−5.1±0.3} at ≥ 99 days post explosion). The evolution of the SSA peak is consistent with an expanding blast-wave, but is different from the evolution of an SSA-dominated, non-strongly-decelerating, non-relativistic SN in a wind-like medium, where ν_p ∝ t^{−1} and F_p ∼ constant (Chevalier 1998; Soderberg et al. 2005, 2006a). The inferred F_p(t) is also steeper than seen in relativistic SNe (see §3.3). We compare these properties to the two other radio-detected FBOTs in §3.4. The physical properties of an expanding blastwave can be calculated from an SSA spectrum if F_p, ν_p, the source distance, and the fractions of energy in the relativistic electrons (ε_e) and magnetic fields (ε_B) in the internal shock are known (Scott & Readhead 1977; Slysh 1990; Readhead 1994; Chevalier 1998; Chevalier & Fransson 2006). We follow the SSA modelling framework for SNe (Chevalier 1998; Chevalier & Fransson 2006) to obtain robust estimates of the blastwave radius R and velocity, environment density n, internal energy U_int and magnetic field B. We employ the subscript 'eq' to identify quantities derived under the assumption of equipartition (i.e., ε_e = ε_B = 1/3).

Figure 4 caption (partial): The measurements below 2 GHz at 162 days post explosion were strongly affected by radio frequency interference and we flagged out much of this band. Subsequently, we treat the lowest frequency point (shown in light gray) with caution. The X-ray emission does not fall on the same SSA spectrum, as the spectral index steepens at frequencies above the cooling break. The dotted (green) line shows the extrapolation of the SSA spectrum without taking the cooling break into account. Note that the X-ray observation in the bottom right panel was taken at 425 days post explosion.

We emphasize that our estimates of B and R (and subsequently the shock velocity) are only weakly dependent on the microphysical parameters. The normalizations of U_int and n do depend on the shock microphysics, but the inferred variation of these parameters with time does not. We do not assume any time-dependent evolution for the blastwave, but rather fit each epoch individually to derive the blastwave properties given in Table 1. The relations quoted below were obtained by fitting a power-law to these properties over the epochs at 69, 99 and 357 days post explosion. Our major conclusions are not affected if we include our least constrained epoch (162 days post explosion) in the fits.
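A minimal sketch of the broken power-law spectral model used to extract F_p and ν_p at each epoch is given below; the smoothing parameter and the suggested use of scipy's curve_fit are illustrative choices, not the exact fitting code of this work.

```python
def ssa_broken_power_law(nu, f_p, nu_p, a_thick=2.0, a_thin=-1.31, s=2.0):
    """Smoothly broken power law for an SSA spectrum: F ~ nu^a_thick
    below nu_p (optically thick) and F ~ nu^a_thin above it (optically
    thin). The default indices are the values measured at 99 days; s
    sets the break sharpness, and for finite s the actual peak sits
    slightly below f_p."""
    x = nu / nu_p
    return f_p * (x ** (-s * a_thick) + x ** (-s * a_thin)) ** (-1.0 / s)

# Each epoch's (f_p, nu_p) could then be obtained with, e.g.,
# scipy.optimize.curve_fit(ssa_broken_power_law, nu_ghz, flux_mjy).
```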
A Mildly Relativistic, Decelerating Blast-wave in a Dense Environment Over the 308 days spanned by our observations the forward shock radius in CSS161010 expanded according to R = 3 × 10^15 (f ε_e/ε_B)^{−1/19} (t_obs/days)^{0.81±0.08} cm, where R is calculated from Equation 21 in Chevalier & Fransson (2006), f is the fraction of the spherical volume producing radio emission, and t_obs is the time since explosion. In the absence of strong relativistic beaming (which applies to Lorentz factors Γ ≫ 1), the radio emission effectively provides a measure of the blastwave lateral expansion (instead of the radius along our line of sight), or (Γβ)c = R/t_obs, from which we derive an apparent transverse velocity up to 99 days (our best-constrained epoch) of (Γβc)_eq = (0.55 ± 0.02)c. The blastwave was decelerating during our observations, as at 357 days post explosion we measured (Γβc)_eq = (0.36 ± 0.04)c. Because of the equipartition assumption and the deceleration of the blastwave, we conclude an initial Γβc > 0.6c. This result implies a decelerating relativistic blastwave, with similarities to the radio-loud FBOT event ZTF18abvkwla (§3.4, Ho et al. 2020). We thus conclude that CSS161010 is an FBOT with a mildly relativistic, decelerating outflow, and is the first relativistic transient with hydrogen in its ejecta (optical spectroscopic observations presented in Dong et al., in prep.). Following the standard Chevalier & Fransson (2006) framework for synchrotron emission from SNe, we also derive the environment density n and the corresponding effective mass-loss rate Ṁ at each epoch (Table 1).

Notes to Table 1: (a) As the observations at 162 days were strongly affected by radio frequency interference at low frequencies and we had to flag most of the data (Figure 4), the optically thick emission was not constrained and we do not include the results for this epoch here or in our modelling. For reference, the derived parameters at 162 days are F_p = 3.4 ± 0.1, ν_p = 5.8 ± 0.1, R_eq = 12.7 ± 0.5, B_eq = 0.241 ± 0.009, (Γβc)_eq = (0.30 ± 0.01)c, U_int,eq = 2.9 ± 0.1, n_eq = 59 ± 6 and Ṁ_eq = 3.2 ± 0.2. (b) Frequency (column 2) and flux density (column 3) at the intersection of the optically thin and thick synchrotron power-laws, from which we calculate the blast-wave parameters following Chevalier (1998). (c) Average apparent velocity (Γβc)_eq = R_eq/t. (d) For wind velocity v_w = 1000 km s^-1.

If CSS161010 originated from a massive stellar explosion (see §5 for discussion) and the radio emission was powered by the interaction of the entire outer stellar envelope with density profile ρ_SN ∝ r^{−q} with the medium of density ρ_CSM ∝ r^{−s}, we would expect the transient to be still in the "interaction" regime during the time of our radio observations (e.g. Chevalier 1982). During this phase the shock radius expands as R ∝ t^m with m = (q−3)/(q−s) (Chevalier 1982), which implies q ∼ 7 with s = 2. It is unclear if the entire outer envelope is contributing to the radio emission, or if, instead, the radio-emitting ejecta constitutes a separate ejecta component (as in long GRBs, which have a relativistic jet and a spherical non-relativistic ejecta component associated with the SN). It is thus possible that CSS161010 was already in the energy-conserving limit at t ∼ 100 days. We discuss below our inferences in this limit. In the non-relativistic energy-conserving regime the Sedov-Taylor solution applies (ST; von Neumann 1941; Sedov 1946; Taylor 1950) and the shock position scales as R ∝ t^{2/(5−s)}, from which we would derive s ∼ 2.5.
In the ultra-relativistic (Γ ≫ 1) energy-conserving limit the Blandford-McKee (BM) solution (Blandford & McKee 1976) applies, with Γ ∝ R^{(s−3)/2} and dt_obs ∼ 2dt/Γ², from which R ∝ t_obs^{1/(4−s)}, leading to s ∼ 2.7. The non-relativistic and ultra-relativistic limits, both of which are self-similar, suggest a steep density profile. However, the mildly relativistic nature of the outflow of CSS161010 implies that the blastwave expansion is fundamentally not self-similar, as the speed of light contributes an additional velocity scale that characterizes the expansion of the blastwave (i.e., a velocity scale in addition to the non-relativistic, energy-conserving velocity scaling V² ∝ R^{s−3}). We therefore do not expect the shock position to behave as a simple power-law with time, but to instead show some degree of secular evolution as the blast transitions to the non-relativistic regime in which the dependence on the speed of light is lost. For mildly relativistic shocks we expect the standard ST scaling to hold up to terms that are proportional to V²/c²; Coughlin (2019) showed that the coefficient of proportionality multiplying this correction, σ, is a parameter that depends on the post-shock adiabatic index of the gas (effectively equal to 4/3) and the ambient density profile (see their Table 1). In particular, following Coughlin (2019, their Equation 51), in the mildly relativistic regime the shock velocity varies with position as given by our Equation 1, where V_i is the velocity that the shock would have if we ignored relativistic corrections and the shock position is normalized to the time at which the shock sweeps up a comparable amount of rest mass to the initial mass. Inverting and integrating Equation 1 and accounting for dt_obs = (1 − β cos θ)dt (for a patch of the shell at an angle θ with respect to the observer line of sight), it is possible to determine R(t_obs). An additional complication in the mildly relativistic regime is that the observed emitting surface is viewed at delayed times for different θ; specifically, photons arriving from the poles were radiated earlier than those emitted at the equator (in order to be observed simultaneously), when the ejecta was more relativistic and the radiation was more highly beamed out of our line of sight. Taking the two limiting cases, dt_obs = (1 − V/c)dt and dt_obs = dt, which apply to the early and late-time evolution, respectively, we find that the environment around CSS161010 was likely steeper than those created by a constant mass-loss rate (s = 2), and falls in between the limits provided by the ultra- and non-relativistic regimes. There is some precedent for this non-steady mass-loss. Recent observations of a number of SNe show eruptions in the centuries prior to explosion (e.g., Smith 2014; Margutti et al. 2014a, 2017b; Milisavljevic et al. 2015), and AT 2018cow shows a similarly steep density profile (Margutti et al. 2019) to CSS161010. We note that within our framework, a steeper density profile implies that the magnetic field also scales more steeply than the traditional wind scaling of B ∝ R^{−1}. Inferences on the Initial Blastwave Properties We determined the shock internal energy U_int at each epoch following Chevalier (1998), their Equations 21 and 22. At 99 days, the equipartition conditions give a robust lower limit of U_int ≥ 6 × 10^49 erg (Table 1), which implies a kinetic energy of E_k ≥ 6 × 10^49 erg coupled to material with velocity Γβc ≥ 0.55c. We compare the shock properties of CSS161010 to those of SNe, FBOTs and TDEs in Fig. 5.
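The numbers in this and the preceding subsection can be reproduced with a few lines of arithmetic; the sketch below checks the apparent velocity (with an illustrative radius back-computed from the quoted 0.55c, not a tabulated value) and the density-profile indices implied by the measured expansion index m ≈ 0.81.

```python
C_CGS = 2.998e10  # speed of light, cm/s
DAY_S = 86400.0

def apparent_gamma_beta(radius_cm, t_obs_days):
    """Average apparent transverse velocity (Gamma*beta) = R/(c*t_obs)."""
    return radius_cm / (C_CGS * t_obs_days * DAY_S)

def interaction_q(m, s=2.0):
    """Ejecta index q from R ~ t^m with m = (q-3)/(q-s) (Chevalier 1982)."""
    return (3.0 - m * s) / (1.0 - m)

def sedov_taylor_s(m):
    """CSM index s in the non-relativistic limit, R ~ t^(2/(5-s))."""
    return 5.0 - 2.0 / m

def blandford_mckee_s(m):
    """CSM index s in the ultra-relativistic limit, R ~ t_obs^(1/(4-s))."""
    return 4.0 - 1.0 / m

# R ~ 1.4e17 cm at 99 days (illustrative) -> Gamma*beta ~ 0.55
print(round(apparent_gamma_beta(1.4e17, 99.0), 2))

m = 0.81  # measured expansion index
print(round(interaction_q(m), 1), round(sedov_taylor_s(m), 2),
      round(blandford_mckee_s(m), 2))  # ~7.3, ~2.53, ~2.77
```

These values match the q ∼ 7, s ∼ 2.5 and s ∼ 2.7 quoted in the text to within rounding.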
The E_k of the fast material in CSS161010 is larger than in normal core-collapse SNe, relativistic SNe, and sub-energetic GRBs, but comparable to GRBs and relativistic TDEs. The shock powering the non-thermal emission in CSS161010 is also significantly faster than in normal SNe, especially considering that it is decelerating and we are measuring it at a much later phase (≈ 99 days post explosion) than the SNe shown in Figure 5 at ≈ 1 day post explosion. To estimate the initial explosion parameters, we need to extrapolate backwards by assuming a set of blastwave dynamics. Since the early evolution of the blastwave at t < 70 days is not constrained by our observations, we proceed with robust order-of-magnitude inferences. As the blast-wave expands and interacts with the surrounding medium, its E_k is converted into U_int, which implies that the shock's initial E_k is E_k,0 > U_int, or E_k,0 > 10^50−10^51 erg for fiducial values ε_e = 0.1 and ε_B = 0.01. The fact that the shock is decelerating means that the swept-up CSM mass is comparable to or exceeds the mass of the fast material in the blast-wave. We can thus estimate the fast ejecta mass and kinetic energy. During our observations the shock wave swept up M_sw ∼ 10^-2 M_⊙ (∼10^-3 M_⊙ in equipartition) as it expanded from 1 × 10^17 cm to 3 × 10^17 cm. The density profile at smaller radii is not constrained, but for profiles ranging from flat to r^{−2.3} we derive a total swept-up mass of M_sw ∼ 0.01−0.1 M_⊙ (M_sw ∼ 10^-3−10^-2 M_⊙ in equipartition). As the blastwave is decelerating, the mass of the fastest [(Γβc)_eq ∼ 0.55c] ejecta responsible for the non-thermal emission is thus M_ej ∼ 0.01−0.1 M_⊙, and it has a kinetic energy of ∼10^51−10^52 erg. Comparison to multi-wavelength FBOTs CSS161010 and AT 2018cow are the only FBOTs for which we have long-term X-ray and radio detections. ZTF18abvkwla is also detected at radio wavelengths (Ho et al. 2020). Remarkably, the radio luminosity of the three FBOTs is large compared to SNe and some sub-energetic GRBs, and is even comparable to the radio emission in long GRBs (ZTF18abvkwla). Even with a sample of three radio-loud FBOTs, we already see a wide range of behaviors, which likely reflects a wide dynamic range of the properties of the fastest outflows of FBOTs. ZTF18abvkwla and CSS161010 share the presence of mildly relativistic, presumably jetted outflows (§5.2.1). ZTF18abvkwla had an expansion velocity (Fig. 5) of Γβc ≥ 0.3c at t ∼ 100 days. They establish a class of transients that are able to launch relativistic ejecta with similarities to GRBs, yet differ from GRBs in their thermal optical emission (and presence of H, for CSS161010; Dong et al., in prep). The relativistic velocity of CSS161010 and ZTF18abvkwla, and the large energy of the blast-wave in CSS161010, differ distinctly from the non-relativistic and slow blast-wave in AT 2018cow, which showed v ∼ 0.1c (Fig. 5; Margutti et al. 2019; Ho et al. 2019). Indeed, high spatial resolution radio observations of AT 2018cow indicated that AT 2018cow did not harbor a long-lived relativistic GRB-like jet (Bietenholz et al. 2020). The post-peak decline in radio luminosity of the radio-detected FBOTs is extraordinarily steep compared to all other classes of transients (Fig. 1), even the energetic and highly collimated GRBs.
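The swept-up mass estimate above can be reproduced by integrating an assumed density profile; in the sketch below the normalization n ≈ 700 cm^-3 at r ≈ 10^17 cm is taken from the summary in §6, and a pure-hydrogen composition is assumed for simplicity, so the outputs are order-of-magnitude checks only.

```python
import math

M_PROTON_G = 1.673e-24
M_SUN_G = 1.989e33

def swept_up_mass_msun(n0, r0, r_in, r_out, s=0.0):
    """M_sw = integral of 4*pi*r^2 * n(r) * m_p dr for n(r) = n0*(r/r0)^-s
    (closed form, valid for s != 3), assuming a pure-hydrogen medium."""
    integral = (r_out ** (3.0 - s) - r_in ** (3.0 - s)) / (3.0 - s)
    return 4.0 * math.pi * n0 * M_PROTON_G * r0**s * integral / M_SUN_G

# Flat vs r^-2.3 profiles between 1e17 and 3e17 cm:
print(swept_up_mass_msun(700, 1e17, 1e17, 3e17, s=0.0))  # ~0.06 Msun
print(swept_up_mass_msun(700, 1e17, 1e17, 3e17, s=2.3))  # ~0.01 Msun
```

Both values fall inside the quoted M_sw ∼ 0.01−0.1 M_⊙ range.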
Figure 5. Kinetic energy of the fast-moving material in the outflow with velocities > Γβ for CSS161010 and other classes of transients, as determined from radio observations. With the exception of the FBOTs, these properties are measured at approximately 1 day post explosion. We plot the internal energy in the shock (U_int) for the FBOTs at the time of the observations. For ZTF18abvkwla we calculated this assuming that the 10 GHz measurement at 81 days from Ho et al. (2020) is the peak of the SSA spectrum, as they find a spectral index of −0.16 ± 0.05. For CSS161010 we also plot the kinetic energy at 99 days post explosion/disruption (our best constrained epoch, see Table 1). The latter is a robust lower limit for the initial kinetic energy. CSS161010 is mildly relativistic and has a velocity at least comparable to that of the relativistic SN 2009bb.

CSS161010 and AT 2018cow had comparable decline rates of L_{8 GHz} ∝ t^{−5.1±0.3} and L_{8 GHz} ∝ t^{−4.19±0.4} (Coppejans et al., in prep.), respectively. The decline of ZTF18abvkwla (Ho et al. 2020) was shallower, with L_{8 GHz} ∝ t^{−2.7±0.4}. A comparison between the radio properties of these three FBOTs also shows other spectral and evolutionary differences. Compared to AT 2018cow, which had F_p ∝ t^{−1.7±0.1} and ν_p ∝ t^{−2.2±0.1} (Margutti et al. 2019; Ho et al. 2019), CSS161010 exhibited a similar F_p(t) evolution but a slower ν_p(t) decay. The information on the radio spectral properties of the FBOT ZTF18abvkwla is limited, but we note that at ∼63 days Ho et al. (2020) infer ν_p ∼ 10 GHz with a significantly larger radio luminosity, L_ν ∼ 10^30 erg s^-1 Hz^-1, than CSS161010 (Fig. 1). We now turn to the X-ray emission in CSS161010 and AT 2018cow. Although we only have late-time X-ray observations of CSS161010, the luminosity appears to be consistent with that of AT 2018cow at ∼100 days post explosion (see Figure 2). As was the case in AT 2018cow, the source of the X-ray emission cannot be synchrotron emission from the same population of electrons that produces the radio emission. In the two epochs at 99 and 357 days where we have simultaneous X-ray and radio observations, the extrapolated radio flux densities are consistent with the X-ray measurements only if we do not account for the presence of the synchrotron cooling break at ν = ν_c. For the B_eq of Table 1, we expect ν_c to lie between the radio and X-ray bands at 99 < t < 357 days, leading to a flux density steepening F_ν ∝ ν^{−p/2} ∝ ν^{−1.8} at ν > ν_c (Rybicki & Lightman 1979). It follows that the extrapolated SSA spectrum under-predicts the X-ray flux and that another mechanism is thus required to explain the X-ray emission in CSS161010. In AT 2018cow there was also an excess of X-ray emission, which was attributed to a central engine (Toonen 2019). We speculate that the X-ray emission in CSS161010 might also be attributable to the central engine. Interestingly, both FBOTs also have hydrogen-rich outflows (Dong et al. in prep.) and dense environments, and at optical/UV wavelengths are among the most luminous and fastest evolving members of the FBOT family (Dong et al., in prep.). PROPERTIES OF THE DWARF HOST GALAXY We use the Fitting and Assessment of Synthetic Templates code (FAST; Kriek et al. 2009) to fit the host galaxy emission and constrain the properties of the underlying stellar population. We first combine and renormalize the Keck-LRIS spectrum by using the broadband Pan-STARRS gri and DEIMOS VRI photometry corrected for Galactic extinction.
We do not include the NIR data at λ ≥ 10000 Å (i.e., JHK and the WISE W1 and W2 bands) in our fits, as these wavelengths are dominated by emission from the contaminating object (§2.5). We assumed a Chabrier (2003) stellar initial mass function (IMF) and considered a variety of star formation histories and stellar population libraries. The best-fitting synthetic spectrum, which we show in Fig. 3, uses the stellar models of Bruzual & Charlot (2003) with a metallicity of Z = 0.004 and no internal extinction (A_V = 0 mag), and favors an exponentially declining star formation law yielding a current star formation rate of SFR ∼ 4 × 10^-3 M_⊙ yr^-1. The total stellar mass of the host galaxy is M_* ∼ 10^7 M_⊙, which implies a current specific star formation rate sSFR ∼ 0.3 Gyr^-1. Other choices of stellar population models, star formation histories and metallicity produce similar results. For example, using the stellar models of Bruzual & Charlot (2003) and Conroy & Gunn (2010), with either an exponential or delayed exponential star formation history, and considering metallicity values in the range Z = 0.0008−0.02, we find A_V = 0−0.4 mag, a current stellar age of (0.6−4) Gyr, a stellar mass of M_* = (1−3) × 10^7 M_⊙, SFR = (0.3−2) × 10^-2 M_⊙ yr^-1 and sSFR = (0.2−1) Gyr^-1. The star formation rates that we derive using the [OII] and Hα spectral lines are consistent with the value derived from our models. Figure 6 shows the properties of CSS161010's host compared to those of the hosts of other relevant classes of explosive transients. Interestingly, CSS161010 has the smallest host mass of the known FBOTs, with the three radio-loud FBOTs known (red stars and symbols) populating the low-mass end of the host galaxy distribution. Hydrogen-stripped superluminous supernovae (SLSNe I) and long GRBs also show a general preference for low mass and low metallicity hosts (see §5.2.1 for further discussion). It is important to note that the star formation rate per unit mass of the host of CSS161010 is comparable to that of other transient classes involving massive stars. We conclude this section by commenting that there is no observational evidence of activity from the dwarf host galaxy nucleus. There were no observed outbursts or flaring events (AGN-like activity) at the location of CSS161010 prior to explosion. Specifically, we applied the Tractor image modeling code (Lang et al. 2016) across 6 g-band Dark Energy Camera epochs (DECam, from 2018-10-06 to 2018-10-13) and 137 r-band and 3 g-band Palomar Transient Factory images (PTF, from 2009-10-03 to 2014) to find the best fit model for a host galaxy profile and a point source close to the position of CSS161010. We find no evidence for the presence of a variable point source in either DECam (Dey et al. 2019) or PTF images prior to the explosion of CSS161010 (2016 October 6). THE INTRINSIC NATURE OF CSS161010 The key properties of CSS161010 can be summarized as follows: it had a rise-time of a few days in the optical and showed a large peak optical luminosity of ∼10^44 erg s^-1 (Dong et al. in prep.). Broad Hα features also indicate that there was hydrogen in the outflow (Dong et al. in prep.). The surrounding CSM has a large density corresponding to an effective mass-loss rate of Ṁ ∼ 2 × 10^-4 M_⊙ yr^-1 (for v_w = 1000 km s^-1) at r ∼ 10^17 cm. The dwarf host galaxy has a stellar mass of ∼10^7 M_⊙ that is significantly lower than other FBOTs, but it has a comparable sSFR (Figure 6).
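Two of the numbers just quoted are easy to verify; the sketch below recomputes the effective mass-loss rate implied by the CSM density (pure hydrogen assumed for simplicity) and the specific star formation rate of the host. The small offset in the sSFR reflects only the rounding of the stellar mass.

```python
import math

M_PROTON_G = 1.673e-24
M_SUN_G = 1.989e33
YR_S = 3.156e7

def wind_mass_loss_msun_yr(n_cm3, r_cm, v_wind_kms):
    """Mdot = 4*pi*r^2*rho*v_w for a steady wind, with rho = n*m_p
    (pure hydrogen assumed)."""
    rho = n_cm3 * M_PROTON_G
    return 4.0 * math.pi * r_cm**2 * rho * (v_wind_kms * 1e5) * YR_S / M_SUN_G

def ssfr_per_gyr(sfr_msun_yr, mstar_msun):
    """Specific star-formation rate in Gyr^-1."""
    return sfr_msun_yr / mstar_msun * 1e9

# n ~ 700 cm^-3 at r ~ 1e17 cm with v_w = 1000 km/s -> ~2e-4 Msun/yr
print(f"{wind_mass_loss_msun_yr(700, 1e17, 1000):.1e}")

# SFR ~ 4e-3 Msun/yr and M* ~ 1e7 Msun -> ~0.4 Gyr^-1 (quoted ~0.3,
# which follows for M* slightly above 1e7 Msun)
print(ssfr_per_gyr(4e-3, 1e7))
```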
From our radio modelling, we know that the outflow was relativistic, with initial Γβc > 0.6c. The fast outflow has an ejecta mass of ∼0.01−0.1 M_⊙ and a kinetic energy of E_k ≳ 10^51 erg. The X-ray emission is not produced by the same electrons producing the radio emission. Volumetric Rates of the most luminous FBOTs in the local Universe We present three independent rate estimates for FBOTs such as CSS161010, AT 2018cow and ZTF18abvkwla, which populate the most luminous end of the optical luminosity distribution of FBOTs, with optical bolometric peak luminosity L_opt ≳ 10^44 erg s^-1. At the end of this section we compare our estimates to the inferences by Ho et al. (2020) and Tampo et al. (2020), which were published while this work was in an advanced stage of preparation. Drout et al. (2014) determined an intrinsic rate for FBOTs with absolute magnitude −16.5 ≥ M ≥ −20 of 4800−8000 events Gpc^-3 yr^-1 based on the detection efficiency of the PanSTARRS1 Medium Deep Survey (PS1-MDS) for fast transients as a function of redshift. However, this estimate assumes a Gaussian luminosity function with a mean and variance consistent with the entire PS1-MDS population of FBOTs, after correcting for detection volumes. In order to assess the intrinsic rate of luminous rapid transients such as CSS161010, we repeat the rate calculation of Drout et al. (2014), but adopt a new luminosity function based only on the four PS1-MDS events brighter than −19 mag in the g-band (PS1-11qr, PS1-12bbq, PS1-12bv, and PS1-13duy). This yields intrinsic rates for FBOTs with peak magnitudes greater than −19 mag of 700−1400 Gpc^-3 yr^-1, which is ∼0.6−1.2% of the core-collapse SN rate at z ∼ 0.2 from Botticella et al. (2008) or ∼1−2% of the local (< 60 Mpc) core-collapse SN rate from Li et al. (2011a). We further estimated the luminous FBOT rate from the Palomar Transient Factory (PTF; Law et al. 2009; Rau et al. 2009). The PTF was an automated optical sky survey that operated from 2009-2012 across ∼8000 deg², with cadences from one to five days, and primarily in the Mould R-band. We adopted the PTF detection efficiencies of Frohmaier et al. (2017) and simulated a population of FBOTs with light curves identical to AT 2018cow (as we have color information for AT 2018cow near optical peak) and a Gaussian luminosity function M_R = −20 ± 0.3 mag. Our methodology closely follows that described in Frohmaier et al. (2018), but with a simulation volume set to z ≤ 0.1 to maintain high completeness. We also performed a search for AT 2018cow-like events in the PTF data and found zero candidates. Given both the results of our simulations and no comparable events in the data, we measure a 3σ upper limit on the luminous FBOT rate of < 300 Gpc^-3 yr^-1, which is ≲0.25% of the core-collapse SN rate at z ∼ 0.2 (Botticella et al. 2008) or ≲0.4% of the local core-collapse SN rate (Li et al. 2011a). This volumetric rate is consistent with what we derive for luminous FBOTs in massive galaxies based on the Distance Less Than 40 Mpc survey (DLT40; Tartaglia et al. 2018) following Yang et al. (2017). We refer to the PTF rate estimate in the rest of this work.
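As a sanity check on the percentages above, the rate limits can be expressed as fractions of the core-collapse SN rate; the sketch below assumes a CC SN rate of ∼1.25 × 10^5 Gpc^-3 yr^-1 at z ∼ 0.2 (a normalization implied by the quoted percentages, not a value stated explicitly in this paper).

```python
def fraction_of_ccsn_rate(rate_gpc3_yr, ccsn_rate_gpc3_yr=1.25e5):
    """Express a volumetric rate as a fraction of an assumed
    core-collapse SN rate (~1.25e5 Gpc^-3 yr^-1 at z ~ 0.2)."""
    return rate_gpc3_yr / ccsn_rate_gpc3_yr

print(f"{fraction_of_ccsn_rate(300):.2%}")   # ~0.24% -> the quoted ~0.25%
print(f"{fraction_of_ccsn_rate(700):.2%}")   # ~0.56%  \ matching the quoted
print(f"{fraction_of_ccsn_rate(1400):.2%}")  # ~1.12%  / 0.6-1.2% range
```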
We compare our rate estimates of luminous FBOTs in the local Universe (z ≤ 0.1) with those derived by Ho et al. (2020) from the archival search of 18 months of the ZTF-1DC survey. The transient selection criteria of Ho et al. (2020) are comparable to our set-up of the simulations on the PTF data set. Specifically, Ho et al. (2020) selected transients with peak absolute g-band magnitude M_g,pk < −20 and rapid rise time < 5 days, finding a limiting volumetric rate < 400 Gpc^-3 yr^-1 at distances < 560 Mpc, consistent with our inferences. Our study and Ho et al. (2020) thus independently identify luminous FBOTs as an intrinsically rare type of transient, with a volumetric rate < (0.4−0.6)% of the core-collapse SN rate in the local Universe. We conclude that luminous FBOTs are sampling a very rare channel of stellar explosion or other rare phenomenon (§5.2). Interestingly, the luminous FBOT rate is potentially comparable to that of sub-energetic long GRBs (230^{+490}_{−190} Gpc^-3 yr^-1, 90% c.l., before beaming correction; Soderberg et al. 2006b) and local SLSNe (199^{+137}_{−86} Gpc^-3 yr^-1 at z = 0.16; Quimby et al. 2013). We end by noting that our rate estimates are not directly comparable to those inferred by Tampo et al. (2020) from the HSC-SSP transient survey. These authors considered rapidly evolving transients in a wider range of luminosities (−17 ≥ M_i ≥ −20) at cosmological distances corresponding to 0.3 ≤ z ≤ 1.5 and inferred a rate ∼4000 Gpc^-3 yr^-1. A similar argument applies to the FBOT rates by Pursiainen et al. (2018). Table 2 presents a summary of the current estimates of the volumetric rate for both the entire population of FBOTs and for the most luminous FBOTs. Physical Models Multiple physical models have been suggested to explain the optical behaviour of FBOTs (see §1). Here, we consider mechanisms/transients that could power the radio and X-ray emission of the FBOT CSS161010. As the ejecta is hydrogen-rich (Dong et al., in prep.), we do not consider neutron star mergers and accretion-induced collapse models. We also disfavour models involving the disruption or explosion of white dwarfs (WDs). CSS161010 is not flaring activity associated with an Active Galactic Nucleus (AGN). The fraction of dwarf galaxies with masses of the order of 10^7 M_⊙ that host an AGN is not well-constrained (e.g., Mezcua et al. 2018), but as there is at least one AGN host with a stellar mass comparable to CSS161010 ((1−3) × 10^7 M_⊙; Mezcua et al. 2018), an AGN cannot be excluded based on the small host galaxy mass alone. The evolving synchrotron radio spectrum is not consistent with the typical flat spectrum seen in AGNs. There is also no evidence for prior optical or radio variability in PTF data (§4) or the NRAO/VLA Sky Survey (NVSS; Condon et al. 1998). Moreover, the emission-line diagnostics (Kewley & Dopita 2002; Kauffmann et al. 2003) from our Keck spectrum (Fig. 3) exclude the presence of an AGN. Stellar Explosion In §3.2 we inferred that CSS161010 has E_k > 6 × 10^49 erg coupled to fast-moving material with Γβc ≥ 0.55c. This finding implies that the slow-moving material at v ∼ 10,000 km s^-1 would have E_k > 10^53 erg under the standard scenario of a spherical hydrodynamical collapse of a star, where E_k ∝ (Γβ)^{−α} with α ≈ 5.2 for a polytropic index of 3 (Tan et al. 2001). This value largely exceeds the E_k ∼ 10^51 erg limit typical of neutrino-powered stellar explosions, pointing to a clear deviation from a spherical collapse. We conclude that if CSS161010 is a stellar explosion, then its fastest outflow component (i.e. the one powering the radio emission that we detected at late times) must have been initially aspherical and potentially jetted, similar to that of GRBs.
Indeed, Fig. 5 shows that only GRBs (and jetted TDEs) have comparable energy coupled to their relativistic outflows, suggesting that regardless of the exact nature of CSS161010, a compact object (such as a magnetar or accreting black hole) is necessary to explain the energetics of its outflow. In the context of SNe, CSS161010 thus qualifies as an engine-driven explosion. This finding has important implications. Shock interaction with, or breakout from, a dense confined shell of material surrounding the progenitor has been proposed to explain the blue optical colors and fast optical evolution of a number of FBOTs (e.g., Drout et al. 2014; Whitesides et al. 2017). Although these mechanisms could explain the optical colors and fast rise times of FBOTs, they cannot naturally produce the relativistic outflows observed in CSS161010 (and ZTF18abvkwla; Ho et al. 2020). We thus conclude that a pure shock interaction/breakout scenario of a normal SN shock through a dense medium cannot account for all the properties of luminous FBOTs across the electromagnetic spectrum, and that at least some luminous FBOTs are also powered by a central engine, as was inferred for AT 2018cow (Margutti et al. 2019; Ho et al. 2019; Perley et al. 2019). The analysis of ZTF18abvkwla by Ho et al. (2020) supports a similar conclusion. Known classes of engine-driven stellar explosions include relativistic SNe, (long) GRBs, and SLSNe. The dwarf nature of the host galaxies of luminous FBOTs that are engine-driven (red stars in Fig. 6) is reminiscent of that of some SLSNe and GRBs, which show a preference for low-mass galaxies (e.g., Lunnan et al. 2014; Chen et al. 2017; Schulze et al. 2018), as independently pointed out by Ho et al. (2020). A second clear similarity between luminous FBOTs, relativistic SNe and GRBs is the presence of relativistic outflows (Fig. 5) and the associated luminous radio emission (Fig. 1), which is clearly not present with similar luminosities in SLSNe (Coppejans et al. 2018; Eftekhari et al. 2019; Law et al. 2019). Yet, luminous FBOTs differ from any known class of stellar explosions with relativistic ejecta in two key aspects: (i) the temporal evolution and spectroscopic properties of their thermal UV/optical emission; (ii) CSS161010 showed evidence for a large mass coupled to its fastest (relativistic) outflow. We expand on these major differences below. Luminous FBOTs with multi-wavelength detections reach optical bolometric peak luminosities ≳10^44 erg s^-1 (Prentice et al. 2018; Perley et al. 2019; Margutti et al. 2019; Dong et al. in prep.), comparable only to SLSNe. The extremely fast temporal evolution (over time-scales of ∼days) and hot, mostly featureless initial spectra with T ∼ 40,000 K (Kuin et al. 2019; Perley et al. 2019; Margutti et al. 2019; Ho et al. 2020) distinguish luminous FBOTs from any other engine-driven transients. While it is unclear if the ejecta of ZTF18abvkwla contained hydrogen (Ho et al. 2020), AT 2018cow and CSS161010 showed evidence for hydrogen-rich ejecta (Margutti et al. 2019; Prentice et al. 2018; Perley et al. 2019; Dong et al. in prep.). In fact, CSS161010 is the first case where a relativistic hydrogen-rich outflow is observed, which implies the existence of a new class of engine-driven explosions that originate from progenitors that still retain a significant fraction of their hydrogen envelope at the time of explosion. In principle there is no clear reason why only hydrogen-stripped progenitors should launch jets.
Jets in hydrogen-rich progenitors could simply lack the necessary energy to pierce through the stellar envelope (e.g., MacFadyen & Woosley 1999; MacFadyen et al. 2001; Lazzati et al. 2012; Bromberg et al. 2011; Nakar & Sari 2012; Margutti et al. 2014b, and references therein). Next we comment on the amount of mass coupled to the fastest ejecta. While the shock velocity of CSS161010 is comparable to that of the relativistic SNe and the initial E_k of the outflow is similar to GRBs, the fastest ejecta mass of CSS161010 is significantly larger than that of GRB jet outflows, which typically carry ∼10^-6−10^-5 M_⊙. It thus comes as no surprise that neither on- nor off-axis GRB-like jet models (e.g., Granot & Sari 2002; van Eerten et al. 2012) fit the radio temporal or spectral evolution of CSS161010. Indeed, the ejecta mass carried by GRB jets needs to be small enough to reach sufficiently large velocities to prevent the absorption of γ-rays for pair production (see Dermer et al. 2000; Huang et al. 2002; Nakar & Piran 2003). Explosions with a sufficiently large ejecta mass to be important in the dynamics and absorb the high-energy emission are referred to as 'baryon-loaded explosions' or 'dirty fireballs'. Although predicted (e.g., Huang et al. 2002), such sources have remained fairly elusive. SN 2009bb is argued to be both relativistic and baryon-loaded, with M_ej ≥ 10^-2.5 M_⊙ (Chakraborti & Ray 2011), and the transient PTF11agg is another potential relativistic baryon-loaded candidate (Cenko et al. 2013). CSS161010 is relativistic, did not have a detected gamma-ray counterpart (§2.4), had a large E_k that is comparable to GRBs, and had an ejecta mass that is intermediate between GRBs and SNe. It is thus a relativistic baryon-loaded explosion or dirty fireball. Interestingly, luminous GRB-like γ-ray emission was also ruled out for the other relativistic FBOT, ZTF18abvkwla (Ho et al. 2020). Our major conclusion is that while luminous multi-wavelength FBOTs share similarities with other classes of engine-driven explosions, their properties clearly set them apart as a completely new class of engine-driven transients comprising at most a very small fraction of stellar deaths (§5.1). Special circumstances are thus needed to create the most luminous FBOTs. Black Hole One of the proposed models for the FBOT AT 2018cow was a tidal disruption event (TDE) of a star by an intermediate mass black hole (IMBH; Perley et al. 2019; Kuin et al. 2019). Margutti et al. (2019) disfavour this model as it is difficult to explain the origin of the high-density surrounding medium (inferred from radio observations) with a TDE on an off-center IMBH. CSS161010 is spatially consistent with the nucleus of its host, so this argument is not directly applicable here. The dwarf host galaxy of CSS161010 is at least ∼10 times less massive than any other confirmed TDE host (Fig. 6). The host stellar mass M_* ≈ 10^7 M_⊙ implies that the central BH would likely be an IMBH. The BH masses and occupation fractions in dwarf galaxies are not well constrained. However, using the relations between the BH mass and host galaxy stellar mass in Marleau et al. (2013) and Reines & Volonteri (2015), which were derived largely based on higher-mass galaxies, we obtain a rough estimate for the BH mass of ∼10^3 M_⊙. For this BH mass, the X-ray luminosity at ∼100 days is ∼0.01 L_Edd (where L_Edd is the Eddington luminosity) and the optical bolometric luminosity is ∼10^3 L_Edd. The optical luminosity would have to be highly super-Eddington in this scenario.
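These Eddington ratios follow from L_Edd ≈ 1.26 × 10^38 (M/M_⊙) erg s^-1; a quick order-of-magnitude check, using the rough ∼10^3 M_⊙ BH mass estimate above, is sketched below.

```python
L_EDD_COEFF = 1.26e38  # erg/s per solar mass, for solar-composition gas

def eddington_ratio(luminosity_cgs, m_bh_msun):
    """Ratio of a luminosity to the Eddington luminosity of a BH."""
    return luminosity_cgs / (L_EDD_COEFF * m_bh_msun)

m_bh = 1e3  # rough IMBH estimate from the host-mass scaling relations
print(eddington_ratio(3.4e39, m_bh))  # X-rays at ~100 d: ~0.03 L_Edd
print(eddington_ratio(1e44, m_bh))    # optical peak: ~8e2, i.e. ~1e3 L_Edd
```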
However, the optical luminosity estimate is highly dependent on the assumed temperature, the uncertainty on the BH mass is very large, and CSS161010 was aspherical and clearly showed an outflow. Consequently, we cannot conclusively rule out that CSS161010 is a TDE based on the luminosity. It is similarly not possible to rule out a TDE scenario based on the optical rise and decay time-scales. It is true that the optical rise and decay rate of CSS161010 was significantly faster than TDEs on super-massive black holes, SMBHs (e.g. Hinkle et al. 2020). In fact, the ∼4 day optical rise of CSS161010 (Dong et al., in prep.) was shorter than the ∼11 day rise of the fastest TDE discovered to date, iPTF16fnl (which had a BH mass of ≤10^6.6 M_⊙; Blagorodnova et al. 2017), and formally consistent with the classical TDE scaling t_rise ∼ 1.3 (M_BH/10^3 M_⊙)^{1/2} days for a Sun-like star disruption. However, the circularization of the debris is unlikely to be efficient, the circularization timescales of the debris are highly uncertain for IMBHs (e.g., Chen & Shen 2018; Darbha et al. 2019), and we cannot directly compare the TDE timescales of SMBHs and IMBHs. The radio and X-ray luminosities of CSS161010 are comparable to those of some jetted TDEs (Figures 1 and 2), although CSS161010 shows a faster radio decline. The kinetic energy is also comparable to the jetted TDEs (Figure 5). In TDEs that lack gamma-ray detections, the radio synchrotron emission is proposed to be from the shock between the CSM and an outflow driven by a super-Eddington accretion phase (e.g., Rees 1988; Strubbe & Quataert 2009; Zauderer et al. 2011; Alexander et al. 2016), or the external shock from the unbound stellar material (Krolik et al. 2016), or internal shocks in a freely expanding relativistic jet (Pasham & van Velzen 2018). The outflows are modelled using equipartition analysis as we have done for CSS161010 in Section 3, so our results are equally applicable to TDE models and we cannot rule out a TDE based on the radio properties. Based on the aforementioned arguments, and the fact that the dwarf host galaxy spectrum does not have clear post-starburst features, we disfavour the scenario that CSS161010 is a TDE on an IMBH, but cannot conclusively exclude it. If this scenario is true, though, then there are several implications. First, as CSS161010 is hydrogen-rich (Dong et al. in prep.), the disrupted star would likely not be a WD. Second, CSS161010 would be the TDE with the smallest BH mass to date. This would imply that TDEs on IMBHs can produce transients that launch relativistic outflows and show short rise times of a few days. If this is the case, then multi-wavelength observations of FBOTs could identify IMBHs and also help to determine the BH mass function and occupation fraction at low galaxy masses. Third, the volumetric rate estimates for SMBH TDEs are ∼200 Gpc^-3 yr^-1 (Alexander et al. submitted). If the population of luminous FBOTs is the population of TDEs on IMBHs, then our volumetric rate estimate for luminous FBOTs (≲300 Gpc^-3 yr^-1) would imply that the rate of TDEs on IMBHs is at most comparable to the TDE rate of SMBHs. SUMMARY AND CONCLUSIONS We present X-ray and radio observations of the luminous FBOT CSS161010 and its dwarf host galaxy. The optical properties of the transient are described in Dong et al. (in prep.). At the distance of ∼150 Mpc, CSS161010 is the second closest FBOT (after AT 2018cow).
To date, CSS161010 is one of only two FBOTs detected at radio and X-ray wavelengths (with AT 2018cow; Rivera Sandoval et al. 2018; Margutti et al. 2019; Ho et al. 2019) and one of three detected at radio wavelengths (with AT 2018cow and ZTF18abvkwla; Ho et al. 2020). We highlight below our major observational findings:

• CSS161010 reached a radio luminosity L_ν ∼ 10^29 erg s^-1 Hz^-1 (at ν = 6 GHz), comparable to sub-energetic GRBs (i.e. significantly larger than normal SNe), with a steep post-peak temporal decline similar to that observed in AT 2018cow.

• The radio properties of CSS161010 imply the presence of a decelerating relativistic outflow with Γβc > 0.6c at t = 99 days, carrying a large ejecta mass ≳0.01 M_⊙ and kinetic energy E_k > 10^50 erg, and propagating into a dense environment with n ≈ 700 cm^-3 at r ≈ 10^17 cm (an effective mass-loss rate of Ṁ ≈ 2 × 10^-4 M_⊙ yr^-1 for a wind velocity of 1000 km s^-1).

• The X-ray luminosity of ∼3 × 10^39 erg s^-1 (at 99 days) is too bright to be synchrotron emission from the same population of electrons powering the radio emission. In AT 2018cow this X-ray excess was attributed to a central engine, and we speculate that this is also the case in CSS161010.

• CSS161010 resides in a small dwarf galaxy with stellar mass M_* ∼ 10^7 M_⊙ (the smallest host galaxy of an FBOT to date). However, its specific star formation rate of sSFR = (0.2−1) Gyr^-1 is comparable to other transient host galaxies (e.g. of GRBs and SLSNe). Intriguingly, all the FBOTs with multi-wavelength detections so far have dwarf host galaxies (Prentice et al. 2018; Perley et al. 2019; Ho et al. 2020).

• CSS161010, AT 2018cow and ZTF18abvkwla belong to a rare population of luminous FBOTs (M_R < −20 mag at peak). For this population, using PTF data, we estimate a volumetric rate < 300 Gpc^-3 yr^-1, which is < 0.25% of the core-collapse SN rate at z ∼ 0.2. This result is consistent with the estimates by Ho et al. (2020). We thus reach the same conclusion as Ho et al. (2020) that luminous FBOTs stem from a rare progenitor pathway.

In the context of stellar explosions, the properties of CSS161010 imply a clear deviation from spherical symmetry (as in the case of GRB jets), and hence the presence of a "central engine" (black hole or neutron star) driving a potentially collimated relativistic outflow. Unlike GRBs, CSS161010 (i) has a significantly larger mass coupled to the relativistic outflow, which is consistent with the lack of detected γ-rays, and (ii) has hydrogen-rich ejecta (Dong et al., in prep.). For CSS161010 we cannot rule out the scenario of a stellar tidal disruption on an IMBH. However, we note that this scenario would imply highly super-Eddington accretion, with an optical luminosity of ∼10^3 L_Edd, for our (uncertain) BH mass estimate of ∼10^3 M_⊙. Irrespective of its exact nature, CSS161010 establishes a new class of hydrogen-rich, relativistic transients. We end with a final consideration. The three known FBOTs that are detected at radio wavelengths are among the most luminous and fastest-rising FBOTs in the optical regime (Perley et al. 2019; Margutti et al. 2019; Ho et al. 2019). Intriguingly, all the multi-wavelength FBOTs also have evidence for a compact object powering their emission (e.g., Prentice et al. 2018; Perley et al. 2019; Kuin et al. 2019; Margutti et al. 2019; Ho et al. 2019).
We consequently conclude, independently of (but consistently with) Ho et al. (2020), that at least some luminous FBOTs must be engine-driven and cannot be accounted for by existing FBOT models that do not invoke compact objects to power their emission across the electromagnetic spectrum. Furthermore, even within this sample of three luminous FBOTs with multi-wavelength observations, we see a wide diversity of properties of their fastest ejecta. While CSS161010 and ZTF18abvkwla harbored relativistic outflows, AT 2018cow is instead non-relativistic. Radio and X-ray observations are critical to understanding the physics of this intrinsically rare and diverse class of transients. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The scientific results reported in this article are based in part on observations made by the Chandra X-ray Observatory. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application package CIAO. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
Endoplasmic Reticulum Stress Induction of Insulin-like Growth Factor-binding Protein-1 Involves ATF4

Endoplasmic reticulum (ER) stress is sensed by cells in different physiopathological conditions in which there is an accumulation of unfolded proteins in the ER. A coordinated adaptive program called the unfolded protein response is triggered and includes translation inhibition, transcriptional activation of a set of genes encoding mostly intracellular proteins, and ultimately apoptosis. Here we show that insulin-like growth factor (IGF)-binding protein-1 (IGFBP-1), a secreted protein that modulates IGF bioavailability and has other IGF-independent effects, is potently induced during ER stress in human hepatocytes. Various ER stress-inducing agents were able to increase IGFBP-1 mRNA levels, as well as cellular and secreted IGFBP-1 protein, up to 20-fold. A distal regulatory region of the human IGFBP-1 gene (−6682/−6384) containing an activating transcription factor 4 (ATF4) composite site was required for promoter activation upon ER stress. Mutation of the ATF4 composite site led to the loss of IGFBP-1 regulation. Electrophoretic mobility shift assay revealed an ER stress-inducible complex that was displaced by an ATF4 antibody. Knockdown of ATF4 expression using two specific small interfering RNAs impaired up-regulation of IGFBP-1 mRNA, which highlights the relevance of ATF4 in endogenous IGFBP-1 gene induction. In addition to intracellular proteins involved in secretory and metabolic pathways, we conclude that ER stress induces the synthesis of secreted proteins. Increased secretion of IGFBP-1 during hepatic ER stress may thus constitute a signal to modulate cell growth and metabolism and induce a systemic adaptive response.

The endoplasmic reticulum (ER) is the site of synthesis, folding, and modification of secretory and cell-surface proteins as well as of the resident proteins of the secretory pathway. Perturbations that alter ER homeostasis lead to the accumulation of unfolded proteins that are detrimental to cell survival. To alleviate this stressful condition, the cell has evolved a coordinated adaptive program called the unfolded protein response (UPR) (1–3). The most immediate response associated with ER stress is a transient attenuation of protein synthesis to decrease the load on the ER (1, 4, 5). The second part of the response consists of specifically up-regulating ER protein folding and ER-associated degradation (ERAD) efficiencies and of inducing a set of genes involved in the general adaptation to stress. Target genes include molecular chaperones such as BiP, GRP94, and calreticulin (6), ERAD components such as EDEM (7), and genes involved in amino acid metabolism and resistance to oxidative stress (8). In addition to promoting recovery from ER stress, the UPR initiates proapoptotic pathways that can lead to programmed cell death if the stress is sustained (9). The diversity of these responses is mediated by three ER transmembrane transducer proteins that sense the accumulation of unfolded protein in the ER lumen and activate signaling pathways. These transducers include the precursor form of activating transcription factor 6 (ATF6) and two kinases: the double-stranded RNA-activated protein kinase-like ER kinase (PERK) and the kinase/endoribonuclease IRE1 (10).
Upon ER stress, PERK phosphorylates the α subunit of eukaryotic initiation factor-2 (eIF2α), which reduces the level of eIF2-GTP available for translation initiation, leading to a general attenuation of translation (5). Paradoxically, eIF2α phosphorylation also increases translation of selective mRNAs such as the mRNA coding for the bZIP activating transcription factor 4 (ATF4), which regulates the expression of some UPR target genes (8). Thus, by phosphorylating eIF2α, the PERK pathway regulates both translation and transcription during ER stress. The second transmembrane ER kinase, IRE1, is an endoribonuclease that splices XBP1 (X-box binding protein-1) mRNA in response to ER stress; this generates the potent bZIP transcription factor XBP1, which regulates the transcription of a set of UPR target genes (11). Activation of the third transmembrane ER transducer, the precursor form of ATF6, in response to ER stress leads to its transport to the Golgi apparatus, where it is cleaved, releasing a cytoplasmic fragment, the bZIP transcription factor ATF6, that activates transcription of UPR target genes (11). The three signaling pathways of the UPR thus allow the regulation of gene expression by three transcription factors, ATF4, XBP1, and ATF6, which bind to different cis-acting elements. The ER stress-response element (ERSE, CCAATN₉CCACG) binds both XBP1 and ATF6 in the presence of the general transcription factor nuclear factor-Y (NF-Y) (12, 13). UPRE sequences (consensus TGACGTG(G/A)) specifically bind XBP1 without assistance of NF-Y (7, 12). ATF4 composite sites (consensus (R/C)TT(R/T)CRTCA, R = G or A) (14) include a set of sequences called the amino acid-response element (AARE) in the CHOP promoter (15, 16), the nutrient stress response element-1 (NSRE1) in the asparagine synthetase gene (ASNS) (17), the C/EBP-ATF composite site in the Herp promoter (18), and the ATF site in the GADD34 gene (19). In the CHOP promoter, the ATF4 composite site binds ATF4 in combination with ATF2 (16); in the ASNS promoter, the NSRE1 site binds different transcription factors in a sequential order (20): during the initial phase following the stress (amino acid deprivation), NSRE1 mainly binds ATF4, and ASNS gene transcription is increased; subsequently, binding of C/EBPβ and full-length ATF3 to NSRE1 increases and ASNS transcription declines. The three signaling pathways differ in their activation pattern.
Although the IRE1/XBP1 and ATF6 pathways are specific to ER stress, the PERK pathway shares eIF2α phosphorylation and the induced "integrated stress response" with unrelated stresses (amino acid deprivation, viral infection, and heme deficiency) that activate specific kinases (8, 21). UPR target genes encode mostly intracellular proteins that carry out biological functions helping the cell cope with the accumulation of unfolded proteins or leading to cell death (8). It is unclear whether the UPR also includes the activation of signals involved in cell-cell communication. The possibility that the expression of secreted proteins could be regulated by ER stress is an important issue, because in this case the stressed cell would communicate the information to other cells and trigger a response at the tissue or systemic level. Insulin-like growth factor-binding proteins (IGFBPs) are a family of secreted proteins that bind insulin-like growth factors (IGFs) with high affinity. The concentration of free IGFs is believed to represent the primary determinant of the tissue response to IGFs; free IGFs bind to the IGF type I receptor and modulate developmental growth and metabolism. By regulating IGF transport and half-life, IGFBPs modulate IGF bioactivity. IGFBPs also act on cell migration, growth, and death by interacting with cell-surface, extracellular, or intracellular partners (22). IGFBP-1 displays tissue-specific expression; it is mostly secreted by hepatocytes, decidualized uterine endometrium, ovarian granulosa cells, and kidney (23). IGFBP-1 acts both as an "endocrine" and an "autocrine/paracrine" factor. Although not all the functions of IGFBP-1 are understood, IGFBP-1 is known to influence glucose homeostasis and to play a role in female reproductive functions (23, 24). Indeed, various transgenic mice overexpressing the IGFBP-1 gene consistently display impaired glucose tolerance and abnormalities in insulin action, in addition to alterations of reproduction and intrauterine and postnatal growth restrictions (25–29). Gene knock-out studies in mice suggested another function of IGFBP-1 in the liver: IGFBP-1 may act as a pro-mitogenic and protective protein of the injured liver, probably through an IGF-independent mechanism (30, 31). IGFBP-1 differs from the other IGFBPs in its rapid regulation by metabolic status; the IGFBP-1 serum level decreases following food intake and increases between meals, and insulin down-regulation of the IGFBP-1 gene is believed to be responsible for this daily fluctuation (32). In contrast, increased concentrations of glucocorticoids and proinflammatory cytokines are believed to strongly up-regulate IGFBP-1 synthesis in catabolic conditions (33–35). In addition to these regulations, amino acid depletion and hypoxia were shown to up-regulate IGFBP-1 gene expression (36, 37). Furthermore, we have recently shown that the environmental contaminant dioxin induces IGFBP-1 gene expression (38). While exploring the mechanisms of dioxin action, in particular the contribution of induced ER-localized cytochromes P450, we observed a potent induction of IGFBP-1 upon ER stress. We show here that IGFBP-1 is highly induced during ER stress in human hepatocytes; both mRNA and secreted protein are induced up to 20-fold by different chemicals that induce the UPR. This induction requires the transcription factor ATF4, which binds to a distal regulatory region of the human IGFBP-1 gene promoter.
MATERIALS AND METHODS

Chemicals—All chemical products, including tunicamycin, brefeldin A, and thapsigargin, were obtained from Sigma. Oligonucleotides were obtained from Qiagen (Les Ulis, France) and siRNAs from Eurogentec (Angers, France).

Northern Blots—Total RNAs were isolated using the RNeasy kit from Qiagen. Northern blots were performed using 10 μg of total RNA per lane. The probes used to detect IGFBP-1, BiP, and GRP94 mRNAs were described previously (38). The Herp probe was isolated by reverse transcription of HepG2 RNAs and specific amplification by PCR using the following oligonucleotides: Herp forward 5′-CTTCCAAAGCAGGAAAAACG-3′; Herp reverse 5′-GGCTCCAGGATTAACAACCA-3′. These probes were labeled using the Megaprime DNA labeling system (Amersham Biosciences), and hybridizations were performed using Rapid-hyb buffer (Amersham Biosciences). Membranes were washed for 45 min at 65 °C with 2× standard saline citrate and 0.1% SDS and for 35 min with 0.5× standard saline citrate and 0.1% SDS. Quantifications were performed with a PhosphorImager and the ImageQuant software (Amersham Biosciences). Real-time quantitative RT-PCR was performed with 40 ng of cDNA, 300 nM each primer, and SYBR-Green PCR Master Mix (AbGene) in a final volume of 10 μl. Quantitative RT-PCR measurements were performed on an ABI Prism 7900 sequence detector system (Applied Biosystems). PCR cycles proceeded as follows: Taq activation (15 min), denaturation (15 s, 95 °C), annealing (30 s, 60 °C), and extension (30 s, 72 °C). The relative mRNA levels were estimated by the standard method using ribosomal protein L13a as the reference gene.

Cellular and Medium Protein Extracts—Ten million HepG2 cells were treated with 2 μg/ml tunicamycin, 0.25 μM thapsigargin, 0.25 μg/ml brefeldin A, or the appropriate vehicle (ethanol or dimethyl sulfoxide (Me₂SO)) in 6 ml of serum-free medium over 24 h. Culture supernatants containing secreted IGFBP-1 were saved, and protein extracts from cells were prepared. Cells were washed twice with Hanks' balanced salt solution and centrifuged at 1500 rpm for 5 min. The pellet was resuspended in 0.25 M sucrose, 10 mM Tris-HCl, pH 7.4, and 1 mM EDTA containing an antiprotease inhibitor mixture tablet (Roche Applied Science) and lysed by sonication for 30 s with a Vibracell (Fisher). Centrifugation for 20 min at 15,000 rpm was then performed, and the pellets were resuspended in 200 μl of 100 mM NaPO₄, 10 mM MgCl₂, and 20% glycerol, pH 7.4. Protein concentrations of cell lysates and medium extracts were measured using the BCA protein assay reagent (Pierce) with bovine serum albumin as a standard.

Cloning of IGFBP-1 and BiP Gene Fragments and Plasmid Construction—The first 1205-bp fragment of the human IGFBP-1 promoter had already been cloned and sequenced (40). Comparison of this sequence with the GenBank data base (BLAST) allowed us to identify BAC number RP11-132L11 (GenBank accession number AC091524), which contains 7400 bp upstream of the transcription initiation site of the human IGFBP-1 gene. Using this sequence, we designed oligonucleotides that allowed the amplification of various fragments of the promoter. Nucleotide numbering represents the distance 5′ (negative) or 3′ (positive) to the mRNA cap site (nucleotide 1). PCR fragments were generated using the HotStar Taq DNA polymerase (Qiagen) with HepG2 genomic DNA as a template and subcloned into the pGL3 Basic vector (Promega, Charbonnieres, France).
Fragments +493/+1135, −3966/−2523, −4429/−3778, −6819/−4030, −7129/−6288, and −6682/−6384 were subcloned into the p-TATA-FL vector (which consists of pGL3 Basic in which the sequence AGGGTATATAATG was inserted between the XhoI and BglII sites). The inserts of the resulting p−2644/+102-FL, p+493/+1135-TATA-FL, p−3966/−2523-TATA-FL, p−4429/−3778-TATA-FL, p−6819/−4030-TATA-FL, p−7129/−6288-TATA-FL, and p−6682/−6384-TATA-FL were sequenced. The sequences correspond to those present in the GenBank data base (accession numbers AC091524 and AY434089). A fragment of the human BiP promoter containing the proximal ERSE motifs was also generated by PCR and subcloned into the pGL3 Basic vector (p−339/+41-BiP-FL).

Transient Transfection Experiments—The expression vector for human ATF4 (pMycATF4) and its control vector (pCMV5myc) were generous gifts from Dr. A. S. Lee (41). The expression vector for the human α1-antitrypsin mutant Hong Kong, pA1ATΔTC, was a kind gift from Dr. N. Hosokawa (42). HepG2 cells (4 × 10⁵ cells/well of a 6-well plate) were transfected in triplicate by the calcium phosphate coprecipitation technique (2 μg of plasmid-FL/well), except that the glycerol shock was omitted. Thirty hours later, cells were treated with Me₂SO or 2 μg/ml tunicamycin for 16 h, and cells were lysed in 200 μl of 1× passive lysis buffer (Promega). Firefly luciferase was assayed with the Promega kit. When cotransfection experiments were performed, either 500 ng of pMycATF4 or pCMV5myc or 2 μg of the pA1ATΔTC vector was included in the precipitate, and firefly luciferase was assayed 40 h after transfection.

Electrophoretic Mobility Shift Assay—Eight million HepG2 cells were treated with 2 μg/ml tunicamycin or Me₂SO for 5 h. Nuclear extracts were prepared as described previously (38). Synthetic double-stranded DNA probes (4 pg) were labeled with [α-³²P]dCTP (Amersham Biosciences) and the large Klenow fragment of DNA polymerase I (Ozyme, Saint Quentin en Yvelines, France). HepG2 nuclear extracts (10 μg) were preincubated on ice for 15 min in the presence of 3 μg of poly(dI-dC) (Amersham Biosciences) in a reaction mixture containing 25 mM Hepes, pH 7.9, 60 mM KCl, 2.5 mM MgCl₂, 0.1 mM EDTA, 0.75 mM dithiothreitol, 1 mM phenylmethylsulfonyl fluoride, and 5% glycerol, in the presence or absence of a 75-fold molar excess of the unlabeled competitors. Fifty femtomoles (100,000 cpm) of the labeled probe were then added and incubated for a further 15 min. To test the effect of specific antibodies, 1 μg of ATF4 antibody (also named CREB-2; tebu-bio) or IgG control was added to the incubation mixture on ice 2 h prior to the addition of the labeled probe. DNA-protein complexes were separated for 2 h at 4 °C on a pre-run (30 min) 6% (w/v) polyacrylamide gel containing 2.5% glycerol, with 1× TGE (25 mM Tris base, 190 mM glycine, 1 mM EDTA, pH 8.5) as the running buffer.

One day before transfection with siRNA, HepG2 cells were plated on 6-well plates (500,000 cells/well). Then 2 μg of siRNA were introduced into the cells using the calcium phosphate method as described above. Thirty hours later, cells were treated for 16 h with Me₂SO or 2 μg/ml tunicamycin, and mRNA levels were measured by real-time quantitative RT-PCR.
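The relative mRNA levels from the quantitative RT-PCR runs described above (normalized to ribosomal protein L13a as the reference gene) are conventionally computed with the comparative-Ct (2^−ΔΔCt) method. The sketch below is a minimal illustration of that calculation, not the authors' actual analysis script; all Ct values are hypothetical placeholders.

```python
# Minimal sketch of the comparative-Ct (2^-ddCt) method for relative
# mRNA quantification against a reference gene (here RPL13a).
# All Ct values below are hypothetical placeholders.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Return the fold induction of the target gene in treated vs. control cells."""
    # Normalize the target to the reference gene within each condition.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    # ddCt: shift in normalized Ct caused by the treatment.
    dd_ct = d_ct_treated - d_ct_control
    # Assuming ~100% PCR efficiency, each cycle corresponds to a factor of 2.
    return 2.0 ** (-dd_ct)

if __name__ == "__main__":
    # Example: a target Ct that drops ~4.6 cycles relative to RPL13a after
    # treatment corresponds to a ~25-fold induction.
    print(fold_change(ct_target_treated=20.4, ct_ref_treated=18.0,
                      ct_target_control=25.0, ct_ref_control=18.0))
```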
RESULTS

ER Stress Induces IGFBP-1 mRNA in Primary Cultures of Human Hepatocytes—The mRNAs of primary cultures of human hepatocytes treated or not for 24 h with 2 μg/ml tunicamycin or Me₂SO were analyzed by Northern blot using probes for IGFBP-1, for markers of ER stress (the chaperones BiP and GRP94 and the ERAD component Herp), and for 18S ribosomal RNA. Fig. 1A shows that tunicamycin potently increased the mRNA levels of IGFBP-1 as well as those of the three ER stress markers. A 14-fold induction of IGFBP-1 was observed in Northern blot experiments, whereas classical ER stress-sensitive mRNAs were induced 4.5–8-fold. Data from the Northern blot analysis were confirmed by real-time quantitative RT-PCR. Tunicamycin was shown to increase IGFBP-1 mRNA 25-fold; the other ER stress-inducible mRNAs were induced 7–20-fold (Fig. 1B). We then asked whether other components of the IGF/IGFBP system were regulated by tunicamycin. The mRNA levels of the other IGFBPs expressed in human hepatocytes, IGFBP-2 and IGFBP-4, were not modified, although acid-labile subunit expression (a protein that interacts with IGFBP-3 and IGFBP-5 to form a stable ternary complex with IGFs in serum) was induced about 2-fold (Fig. 1B). IGF-I mRNAs were decreased by ∼70%, whereas IGF-II and IGF receptor type I (IGF-IR) mRNAs were not regulated. Thus, the potent induction of IGFBP-1 mRNA by tunicamycin in hepatocytes is specific within the IGF/IGFBP system and seems to be associated with a decrease of IGF-I mRNAs.

Diverse ER Stress Inducers Increase IGFBP-1 mRNA and Protein Levels in HepG2 Cells—The effect of three different ER stress inducers on IGFBP-1 mRNA levels was tested in the human hepatocarcinoma HepG2 cell line. After a 24-h treatment with tunicamycin (2 μg/ml), thapsigargin (0.25 μM), or brefeldin A (0.25 μg/ml), or the appropriate vehicle (Me₂SO for tunicamycin and thapsigargin, ethanol for brefeldin A), mRNAs were prepared, and Northern blot hybridizations were performed. As shown in Fig. 2A, all treatments elicited a several-fold induction of IGFBP-1. Three ER stress markers, BiP, Herp, and GRP94, were used to evaluate the efficiency of the different treatments in producing ER stress. Semi-quantification of three different Northern blot experiments confirmed the induction of IGFBP-1 by all treatments with similar efficiencies (about 20-fold; data not shown). All treatments elicited an induction of ER stress marker expression; the magnitude of the inductions varied from 3- to 19-fold. In conclusion, these data show that the IGFBP-1 mRNA level is very potently induced by ER stress in both primary cultures of human hepatocytes and HepG2 hepatoma cells. Time-course studies revealed that the induction of IGFBP-1 by tunicamycin (2 μg/ml) could be detected 10 h after addition of the drug and was maximal at 16 h. At this time point, dose-response studies showed that increased IGFBP-1 expression was clear at 1 μg/ml tunicamycin and maximal at a concentration of 2 μg/ml (data not shown).

The regulation of IGFBP-1 protein synthesis was assayed by Western blot. As shown in Fig. 2B, after a 24-h treatment with the three different ER stress inducers, IGFBP-1 protein levels were highly up-regulated in the cells. Moreover, the levels of IGFBP-1 protein in the culture medium were similarly elevated by tunicamycin and thapsigargin treatment. Thus, IGFBP-1 is translated and secreted during ER stress.
When cells were treated with brefeldin A, a chemical that disrupts the structure of the Golgi apparatus and blocks protein secretion, no IGFBP-1 protein was detected in the medium. The expression of the ER-resident chaperone BiP was also evaluated. As expected, the three ER stress inducers produced an induction of BiP expression, but the increase in IGFBP-1 protein was more potent than that of the BiP protein. Because BiP is localized in the ER and is not secreted, BiP expression was not observed in the medium. The absence of BiP in the culture medium also shows that the medium was not contaminated by cellular proteins. In conclusion, both cellular and secreted IGFBP-1 are highly increased during ER stress.

A Distal Fragment of the Human IGFBP-1 Promoter Confers ER Stress Responsiveness—Various cis-elements are known to confer ER stress sensitivity to UPR target genes: ERSE, UPRE, and ATF4 composite sites. We searched for these regulatory elements within 10 kb upstream of the transcription initiation site and within the first intron of the IGFBP-1 gene. As shown in Fig. 3A, two UPRE sites strictly identical to the consensus UPRE sequence are located at positions −4143/−4136 (UPRE1) and −6629/−6622 (UPRE2); one ATF4 composite site, very similar to the ATF4 binding consensus sequence, is located at position −6480/−6469 (see sequences in Fig. 4A). Because other, poorly conserved sequences could mediate IGFBP-1 transactivation, we cloned genomic fragments covering 7129 bp of the IGFBP-1 promoter and 642 bp of intron 1 (which contains the hypoxia-responsive elements) (37). Each fragment of the human IGFBP-1 gene was subcloned upstream of a firefly luciferase reporter gene in either the pGL3-basic vector or the pGL3-basic vector containing a TATA box (pTATA-FL; see "Materials and Methods"). HepG2 cells were transiently transfected with the recombinant plasmids, treated with Me₂SO or tunicamycin for 16 h, and luciferase activities were assayed. A plasmid containing the first 339 bp of the BiP promoter was used as a control for the tunicamycin effect. As shown in Fig. 3B, the BiP promoter was activated about 3-fold by tunicamycin. Fragments of the IGFBP-1 promoter extending to −4429 bp did not mediate activation of the reporter gene by tunicamycin. The +493/+1135 sequence of intron 1 also did not display any significant regulation. Three fragments were able to mediate induction of the reporter gene after tunicamycin treatment: fragments −6819/−4030, −7129/−6288, and −6682/−6384, which produced increases in promoter activity of 3-, 10-, and 18-fold, respectively (Fig. 3B). Interestingly, these three fragments comprise the second UPRE and the putative ATF4 composite site. We focused our subsequent studies on the −6682/−6384 fragment, which mediated the highest induction by tunicamycin. We first determined the functional contribution of the ATF4 composite site and the UPRE site by targeted mutation of these responsive sequences (Fig. 4B). ER stress was induced either by tunicamycin treatment or by cotransfection with a plasmid encoding the folding-incompetent α1-antitrypsin Null Hong Kong (NHK) variant. As shown in Fig. 4C, mutation of the putative UPRE site (p−6682/−6384-UPRE2mut-TATA-FL) did not significantly affect IGFBP-1 transactivation by tunicamycin treatment or α1-antitrypsin NHK expression. In contrast, mutation of the putative ATF4 composite site (p−6682/−6384-mutATF4-TATA-FL) led to the loss of the regulation of the IGFBP-1 promoter by tunicamycin treatment or α1-antitrypsin NHK expression.
These data show a critical role of the ATF4 composite site in the ER stress induction of the IGFBP-1 promoter.

Role of the Transcription Factor ATF4 in IGFBP-1 Promoter Transactivation—The ATF4 composite sites present in the CHOP, ASNS, Herp, and GADD34 promoters share the property of binding the stress-sensitive factor ATF4. The ATF4 composite site found in the IGFBP-1 promoter is identical to the consensus sequence except for the first position, in which a T is present instead of a G, A, or C (see Fig. 4A). To establish whether this site can be activated by ATF4, the p−6682/−6384-FL vector or the mutated vectors were cotransfected with the ATF4 expression vector (pMycATF4) or the mock vector (pCMV5Myc) (Fig. 5). Expression of ATF4 induced p−6682/−6384-FL promoter activity about 10-fold. Although mutation of the UPRE2 motif did not alter ATF4 induction of the IGFBP-1 promoter, mutation of the ATF4 composite site completely abolished this transactivation. These data show that ATF4 transactivates the IGFBP-1 gene promoter through the ATF4 composite site.

Binding of the Transcription Factor ATF4 to the ATF4 Composite Site of IGFBP-1—To examine whether ATF4 directly binds to the IGFBP-1 promoter upon ER stress, electrophoretic mobility shift assays were carried out using a probe encompassing the putative ATF4 composite site, named the ATF4-IGFBP-1 oligonucleotide. The ATF4-IGFBP-1 oligonucleotide was labeled and incubated with nuclear extracts prepared from HepG2 cells treated or not with 2 μg/ml tunicamycin for 5 h. In untreated cells, diffuse shifted bands could be detected, probably corresponding to low-affinity complexes (Fig. 6, lane 1). Upon tunicamycin treatment, one major DNA-protein complex was observed (Fig. 6, lane 2). This complex was competed out by a 75-fold excess of the homologous unlabeled probe (Fig. 6, lane 3) or a 75-fold excess of the AARE-CHOP oligonucleotide (which contains the ATF4 composite site of the CHOP promoter) (lane 4). In contrast, the complex was not displaced by an oligonucleotide mutated in the ATF4 composite sequence (Fig. 6, lane 5) or by an oligonucleotide that binds the ubiquitous transcription factor SP1 (lane 6). Furthermore, this complex disappeared when the extracts were incubated with an anti-ATF4 antibody, and a faint supershifted band appeared (Fig. 6, lane 8). These data were confirmed in two additional experiments. We conclude that ATF4 is the major factor able to bind the IGFBP-1 ATF4 composite site in stressed cells. We next compared the binding properties of the CHOP and IGFBP-1 ATF4 composite sites. The AARE-CHOP oligonucleotide is known to bind different proteins, including ATF2 and ATF4 (15, 16). Three major DNA-protein complexes were obtained using the AARE-CHOP oligonucleotide as a probe and tunicamycin nuclear extracts. Among these complexes, only one was induced by tunicamycin (Fig. 6, lane 10; the fastest-migrating complex). The inducible complex migrates similarly to the complex observed with the ATF4-IGFBP-1 probe and is the only one to disappear in the presence of the anti-ATF4 antibody (Fig. 6, lane 13). Therefore, in contrast to AARE-CHOP, which binds other proteins even under basal conditions, the IGFBP-1 ATF4 composite site binds mainly ATF4 upon ER stress.
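The cis-element search described above (for ERSE, UPRE, and ATF4 composite sites) can be reproduced on any promoter sequence with a simple degenerate-motif scan. The sketch below is an illustration only: the input sequence is a hypothetical fragment, not the actual IGFBP-1 promoter, and the patterns are transcribed from the consensus definitions given earlier in the text.

```python
import re

# Consensus sites quoted in the text, written as regular expressions.
# R = G or A; the ERSE spacer N9 is nine arbitrary bases.
MOTIFS = {
    "ERSE": r"CCAAT[ACGT]{9}CCACG",
    "UPRE": r"TGACGTG[GA]",
    "ATF4_composite": r"[GAC]TT[GAT]C[GA]TCA",  # (R/C)TT(R/T)CRTCA
}

def scan_promoter(seq: str) -> list[tuple[str, int, str]]:
    """Return (motif name, 0-based position, matched sequence) for every hit."""
    seq = seq.upper()
    hits = []
    for name, pattern in MOTIFS.items():
        for m in re.finditer(pattern, seq):
            hits.append((name, m.start(), m.group()))
    return sorted(hits, key=lambda h: h[1])

if __name__ == "__main__":
    # Hypothetical promoter fragment containing one UPRE and one ATF4 site.
    fragment = "AAGCTGACGTGGTTACCGGATTGCGTCATCCA"
    for name, pos, match in scan_promoter(fragment):
        print(f"{name} at {pos}: {match}")
```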
Effect of Specific Inhibition of ATF4 Synthesis on ER Stress Induction of IGFBP-1 Using siRNAs—We next assessed the role of ATF4 in the up-regulation of IGFBP-1 mRNA levels upon ER stress using two different small interfering double-stranded RNAs directed specifically against ATF4 mRNA (siRNA A and B) as well as a non-silencing mutated A siRNA (siRNA mutA). siRNAs A, B, or mutA were introduced into HepG2 cells using the calcium phosphate method. Twenty-four hours later, cells were treated or not with 2 μg/ml tunicamycin for 16 h, and the mRNA levels of three genes were measured: ATF4 (the siRNA target), IGFBP-1, and EDEM, a gene that is known to be regulated by the transcription factor XBP1 upon ER stress (7). As shown in Fig. 7A, both siRNAs A and B knocked down ATF4 expression under basal and tunicamycin conditions, whereas siRNA mutA had no effect. Because we had shown that ATF4 is involved in the induction of IGFBP-1 by ER stress, we tested the effect of the various siRNAs on the induction of this gene by tunicamycin. As shown in Fig. 7B, both siRNA A and siRNA B led to a 2–3-fold decrease in the induction of IGFBP-1 by tunicamycin, whereas siRNA mutA had no effect. Furthermore, the anti-ATF4 siRNAs did not significantly interfere with the induction of EDEM mRNA levels, which highlights the specificity of the siRNA effects (Fig. 7C).

FIGURE 6. ATF4 binding to the IGFBP-1 ATF4 composite site. Electrophoretic mobility shift assays were performed using a ³²P-labeled oligonucleotide containing the ATF4 composite site of the IGFBP-1 gene (ATF4-IGFBP-1) or of the CHOP gene (AARE-CHOP) and 10 μg of HepG2 nuclear extracts (N.E.) prepared after Me₂SO (DMSO) or tunicamycin (2 μg/ml, 5 h) treatment of the cells. Competition experiments were performed with a 75-fold excess of unlabeled ATF4-IGFBP-1, AARE-CHOP, mutATF4-IGFBP-1, or SP1 oligonucleotide (which specifically binds the ubiquitous transcription factor SP1). The binding reaction was also carried out in the presence of either 1 μg of antibody directed against the transcription factor ATF4 (anti-ATF4) or 1 μg of control IgG (non-immune). Three independent experiments were performed with the same results. The two arrows to the right of the gels indicate a supershifted band.

DISCUSSION

In this study, we have shown that cellular and secreted IGFBP-1 are highly induced upon ER stress in human liver-derived cells. We were particularly interested in this observation because IGFBP-1 is a secreted protein involved in signaling. Indeed, most of the UPR target genes encode intracellular proteins, which allow the cell to cope with stressful conditions. Induced proteins are involved in folding, secretion, ERAD, and quality control and increase the capacity of the secretory pathway (43–45). Increased levels of proteins involved in amino acid metabolism, transport, and redox control help the cell adapt to the metabolic consequences of high ER activity (8). But not all UPR targets promote cell survival; indeed, the pro-apoptotic CHOP gene is also induced. Only a few UPR target genes encoding secreted or membrane proteins have been characterized. Vascular endothelial growth factor, a pro-angiogenic factor, was shown to be induced by ER stress in retinal epithelial cells; vascular endothelial growth factor mRNA was induced up to 10-fold, but the synthesis and secretion of the protein were only modestly induced (up to 1.8-fold) (46).
Expression of membrane transporters for cystine and glycine was also shown to be induced at the mRNA level, but nothing is known about the protein levels (8). Interestingly, a recent study of gene expression under diverse stresses (heat shock, ER stress, oxidative stress, and crowding) in cultured human cells showed that most of the genes induced by multiple stresses are involved in cell-cell communication, thus suggesting a coordinated global response (47). Because IGFBP-1 acts as an endocrine and an autocrine/paracrine factor, its induction and secretion upon ER stress could contribute to such a response and inform the organism of the stressed state of the liver. IGFBP-1 was known to be induced during glucose deprivation of human hepatocytes (48). Our study provides mechanistic data for those initial observations. Indeed, proper protein folding requires extensive energy and is intimately coupled with asparagine-linked glycosylation. Because glucose deprivation reduces the amount of energy available and alters N-linked glycosylation, it can lead to protein accumulation in the ER and induce ER stress. Thus, ER stress may be one pathway by which glucose limitation induces IGFBP-1 expression. Most ER stress-responsive genes are ubiquitously expressed, in line with the essential function of the UPR in the adaptation to the toxicity of unfolded proteins. IGFBP-1 differs from other UPR target proteins in that its expression is tissue-specific; the major sources are the liver and the endometrium (23). The regulation of a tissue-specific factor suggests that the UPR displays tissue-specific responses and that the consequences of ER stress for the organism may differ according to the nature of the stressed tissue. Protein secretion is an important function of the liver (because of the large amount of serum proteins synthesized); it is thus possible that specific regulatory mechanisms are required in this organ. Other tissue-specific characteristics of the UPR have already been described, such as the abundant expression of PERK in secretory and endocrine organs (pancreatic β-cells and osteoblasts) and the exclusive expression of the ER transmembrane transducer IRE1β in the gut epithelial tissue (49). However, those tissue-specific aspects of the UPR have not been extensively explored so far. We showed that ATF4 is critical for the induction of IGFBP-1 during ER stress. Indeed, both site-directed mutagenesis of the ATF4 DNA-binding site and knockdown of ATF4 expression potently decreased the induction of IGFBP-1 upon ER stress. Moreover, no ER stress-specific element (such as ERSE or UPRE) appears to be involved in the IGFBP-1 regulation; within the cloned 7.12 kb of the human IGFBP-1 gene promoter, no ERSE was found, and the two identified UPREs were not functional under our conditions. These data show that IGFBP-1 induction upon ER stress involves ATF4; other factors may also contribute to this regulation, as has been shown for other genes. Thus, similarly to the GADD34 gene (19), IGFBP-1 induction upon ER stress depends on the UPR pathway shared by several stresses. This contrasts with the regulation of the Herp and CHOP genes, which is mediated by both the shared and the ER stress-specific pathways of the UPR (18), and with the regulation of genes involved in ER-specific functions (EDEM, ERdj4, RAMP4, etc.), which is mediated by the ER stress-specific pathway (44, 50).
FIGURE 7. siRNAs against ATF4 (siRNA A and siRNA B) or the non-silencing siRNA mutA were introduced into HepG2 cells using the calcium phosphate precipitation technique. Calcium phosphate precipitates containing no siRNA (no) were also applied to the cells. The day after transfection, cells were treated with Me₂SO (DMSO) or tunicamycin (Tn) for 16 h, and mRNA levels were measured using real-time quantitative RT-PCR with RPL13a as the reference gene. Me₂SO-treated cells that received no siRNA were considered controls, and fold induction was calculated relative to these control cells. A, effect of ATF4 knockdown on basal and induced levels of ATF4 mRNA. B, effects of siRNA A, B, and mutA on IGFBP-1 induction upon ER stress. C, effects of siRNA A, B, and mutA on EDEM induction upon ER stress. EDEM mRNA levels were measured as a negative control because EDEM ER stress induction is believed not to depend on ATF4. Data shown are the means ± S.E. of three independent experiments.

Moreover, these data highlight the contribution of a distal region to the regulation of IGFBP-1 by ER stress, which is distinct from other ER stress regulatory elements that are usually located in the proximal promoter region. In the case of human C/EBPα, the ER stress-responsive element is located downstream of the protein-coding sequence (51). IGFBP-1 expression is increased upon amino acid starvation, a stress that also activates the ATF4 pathway. A recent study performed by Averous et al. (52) showed that IGFBP-1 induction upon amino acid depletion involves both mRNA stabilization and transcriptional activation and does not involve the ATF4 pathway. Using the CMV-IGFBP1-tag plasmid kindly provided by Dr. P. Fafournoux, we found that ER stress does not stabilize IGFBP-1 mRNA (data not shown). Thus, ER stress and amino acid depletion up-regulate IGFBP-1 expression through different mechanisms. Moreover, the transcriptional regulation of IGFBP-1 under these two stress conditions relies on different transcription factors, because ATF4 appears not to be involved in the amino acid depletion effect; another factor may be more essential under that stress condition. Genes regulated by the pathway shared by diverse stresses encode a large variety of functions; they are involved in amino acid import as well as in glutathione biosynthesis and resistance to oxidative stress (8). Various observations suggest that the integrated stress response is aimed at providing resistance to stressful conditions and at promoting cell survival (53, 54). Liver IGFBP-1 and circulating IGFBP-1 are up-regulated in a number of catabolic conditions, including malnutrition, liver disease, and critical illness (55–57), but little is known about the physiological implications of stress-related induction of IGFBP-1. Because ATF4 target genes are mainly survival genes, this suggests that IGFBP-1 could have such a function under stressful conditions. Acute elevation of circulating IGFBP-1 levels in rats has been shown to decrease protein synthesis in specific muscle tissues, thus saving energy for more essential functions (58). Studies performed in zebrafish suggest a contribution of IGFBP-1 to growth retardation and survival under stressful conditions (59). Indeed, hypoxia leads to IGFBP-1 induction as well as to embryonic growth retardation and developmental delay, and growth impairment is significantly reduced in IGFBP-1 knock-out animals. These data suggest that stress-triggered induction of IGFBP-1 may divert important energy resources from growth toward survival metabolic processes; in zebrafish, this is mediated by inhibition of IGF effects (59).
However, one might expect that a sustained increase in IGFBP-1 could have detrimental effects. IGFBP-1 has been suggested to play a role in glucose homeostasis by modulating the bioavailability of IGFs, which exert insulin-like metabolic functions (60). Several studies using mutant mice have established a connection between ER stress and glucose homeostasis. PERK−/− knock-out mice display hyperglycemia within several weeks after birth, mainly because of pancreatic β-cell death (61). We do not expect liver-encoded IGFBP-1 to contribute significantly to such a phenotype. In contrast, knock-in eIF2α mutant mice display severe hypoglycemia 6–9 h after birth; they are defective in gluconeogenesis and glycogen storage (62). Indeed, the up-regulation of phosphoenolpyruvate carboxykinase (a rate-limiting enzyme in gluconeogenesis), which normally occurs in the liver shortly after birth, is prevented in eIF2α mutant neonates. If IGFBP-1 regulation is perturbed in eIF2α mutant mice, as expected, this may contribute to the disruption of glucose homeostasis. In humans, some liver diseases are believed to be accompanied by ER stress. One example is a form of α1-antitrypsin deficiency caused by the PiZ variant. The PiZ α1-antitrypsin variant forms large aggregates that are retained in the ER and can induce cirrhosis and liver failure (63, 64). Elevated IGFBP-1 levels, if confirmed, may participate in the metabolic and signaling perturbations associated with ER stress-related liver diseases.
RSS-Based Q-Learning for Indoor UAV Navigation

In this paper, we focus on the potential use of unmanned aerial vehicles (UAVs) for search and rescue (SAR) missions in GPS-denied indoor environments. We consider the problem of navigating a UAV to a wireless signal source, e.g., a smartphone or watch owned by a victim. We assume that the source periodically transmits RF signals to nearby wireless access points. Received signal strength (RSS) at the UAV, which is a function of the UAV and source positions, is fed to a Q-learning algorithm, and the UAV is navigated to the vicinity of the source. Unlike the traditional location-based Q-learning approach that uses the GPS coordinates of the agent, our method uses the RSS to define the states and rewards of the algorithm. It does not require any a priori information about the environment. These properties, in turn, make it possible to use UAVs in indoor SAR operations. Two indoor scenarios with different dimensions are created using a ray tracing software. Then, the corresponding heat maps that show the RSS at each possible UAV location are extracted for more realistic analysis. Performance of the RSS-based Q-learning algorithm is compared with the baseline (location-based) Q-learning algorithm in terms of convergence speed, average number of steps per episode, and the total length of the final trajectory. Our results show that the RSS-based Q-learning provides competitive performance with the location-based Q-learning.

I. INTRODUCTION

Thanks to extensive research and massive cost reductions in manufacturing, interest in the use of unmanned aerial vehicles (UAVs) is expected to increase significantly in the upcoming years. Besides their widespread recreational and military use, UAVs have already started to appear in civilian applications including, but not limited to, precision agriculture, infrastructure health monitoring, package delivery, restoring service after natural disasters, patrolling missions, and search and rescue (SAR) operations [1], [2]. Deployment of UAVs can make a big difference in SAR missions by providing information and data about the environment or an injured or lost person, improving network access, and delivering first aid equipment, among others. UAVs can be utilized by emergency services or rescue teams in the aftermath of a disaster (e.g., a hurricane or earthquake) and can help the first responders make better decisions and save time. However, due to the unavailability of a suitable data link or precise maneuver requirements that are sometimes beyond human capabilities, human control over the UAVs may not always be possible [3]. Thus, it is critical to develop effective technologies and algorithms that enable UAVs to perform complicated tasks autonomously. One issue with the autonomous use of UAVs in SAR missions is that, most of the time, prior knowledge of the environment is limited, if not completely unavailable. Moreover, the environment may change with time, or the models defining the target and its location may not be accurate or descriptive enough. Therefore, a UAV is required to interact with the environment and to learn and make decisions by itself. Reinforcement learning (RL), which is a class of machine learning (ML) algorithms, may help to overcome these issues. In RL, an agent learns in an interactive environment by using feedback from its actions and experiences.
Usually, the environment is modeled as a Markov decision process (MDP) to leverage the dynamic programming techniques used by RL algorithms. Studies that do not make use of ML either use exact models of the environment or assume that accurate information about the environment is predictable [4]. On the contrary, a branch of RL known as Q-learning requires little or no prior/explicit knowledge of the environment. Q-learning is an off-policy RL algorithm that aims to find the best action to take given the current state. It learns from actions that are not known to the current policy by taking random actions, and it seeks to learn a policy that maximizes the total reward. RL algorithms have already been widely studied in UAV-related research, as in many other fields of robotics. In [5], a model-based RL algorithm, TEXPLORE, is used for the autonomous navigation of UAVs. The value function is updated from a model of the environment, while also taking battery life into consideration. It is shown that their method learns faster than the traditional table-based Q-learning due to its parallel architecture. Pham et al. [6] use Q-learning to navigate UAVs by defining states based on the UAV location. It is assumed that the UAV can observe its state at any position. In [7], the GPS signal and sensory information about the local environment are used in deep RL for UAV navigation tasks in outdoor environments. In [8], deep Q-learning is used for the autonomous landing of UAVs on a moving platform. In [2], RF signals from devices are used to estimate users' locations using a random-forest-based ML technique. There have been recent promising attempts at navigating UAVs in GPS-denied indoor environments using image-processing-based techniques. In [9], images from a single camera are input to a convolutional neural network (ConvNet) to learn a control strategy to find a specific target. In [10], monocular images are used in a deep neural network to navigate a UAV while avoiding crashes. Negative flying data created from real collisions are used during training along with the positive data, and all training is done offline. In [11], RGB images are fed to a deep ConvNet-based learning method to enable UAVs to have collision-free indoor flights, again with offline training. Motivated by the above discussion, in this paper we propose a new method for autonomous indoor navigation of UAVs using Q-learning. Smart devices (e.g., a smartphone) can be used to locate a victim in a SAR scenario through the propagated RF signals [12]. Presently, smart devices continuously transmit RF signals to discover nearby APs. Furthermore, a smart device can be forced to transmit wireless signals in case of emergency [13], [14]. Based on this fact, unlike location-based Q-learning, our approach uses RSS values instead of UAV location information when deciding future actions to navigate the UAV toward the target. It does not require any prior knowledge of the environment. There is also no need for an exact mathematical representation of the target or a map of the environment to locate the target. A high-level view of the system architecture is shown in Fig. 1. The receiver mounted on the UAV continuously senses the environment and picks up the RF signals from a remote wireless transmitter referred to as the source. A unique state label is assigned to the RSS value at the current position.
Rewards in the Q-learning algorithm are also defined as a function of successive RSS values sensed at the current and previous positions, and the Q-table is updated accordingly. Finally, the UAV takes one of eight possible actions in different directions separated by 45°. The proposed RSS-based Q-learning is tested in two different indoor environments. The environments and corresponding heat maps showing the RSS values for each possible UAV location are generated in a ray tracing software for a more realistic evaluation. The proposed method is compared with the baseline (i.e., location-based) Q-learning algorithm for different UAV speeds in terms of convergence speed, the number of steps taken to reach the victim in the final route, and the average number of steps per episode.

The remainder of this paper is organized as follows. Section II briefly describes the Q-learning algorithm. The simulation setup is introduced in Section III. The RSS-based Q-learning algorithm for indoor navigation of UAVs is elaborated in Section IV. Experimental results are presented in Section V. Finally, Section VI concludes this paper.

II. BACKGROUND ON Q-LEARNING

As mentioned in Section I, RL is a branch of ML that addresses problems where no explicit training data are available. Q-learning, proposed by Watkins [17], can be used to learn optimal policies in finite MDPs [18]. This traditional table-based Q-learning maximizes the expected value of the total reward over any and all successive steps by taking an action in the current state and following an optimal policy afterwards. It learns by interacting with the environment and approximates a value function of each state-action pair through a number of iterations. The goal is to select the action with the maximum Q-value using the following update rule at each iteration:

Q(s, a) ← (1 − α) Q(s, a) + α [ r(s′) + γ max_{a′} Q(s′, a′) ],   (1)

where s′ is the state reached from state s after taking action a, α ∈ (0, 1] is the learning rate, r(s′) is the reward attained at the new state s′, and γ is the discount factor, which determines the importance of future rewards. The Q-learning loop is illustrated in Fig. 2. Note that a high γ prioritizes distant future rewards, whereas a lower one forces the agent to consider only immediate rewards. After updating the Q-table, the best policy can be obtained by acting greedily in every state, i.e., by choosing the action with the highest Q-value.

III. SIMULATION ENVIRONMENT SETUP

In this section, we describe the simulation environment for testing the proposed method. We use the Wireless InSite ray tracing tool, which provides a deterministic way of characterizing the RSS in indoor scenarios. First, we generate two arbitrary floor plans with different complexities, of size 26 × 96 m² and 76 × 58 m² (hereinafter referred to as Scenario 1 and Scenario 2, respectively), using the floorplan feature of the software. The two floor plans are shown in Fig. 3. The height is considered to be 3 m for all the walls. We set the user equipment (UE) at an elevation of 1.5 m from the ground. After generating the floor plans, we run the ray tracing simulations to obtain the RSS at each RX grid point with the source at a specified position. The UE is assumed to transmit RF signals with 25 dBm transmit power at 2.4 GHz. The RX grid points are set 1 m apart from each other using the XY grid option in the software. Half-wave dipole antennas with vertical orientation are used at each RX grid point. The maximum antenna gains are taken as 0 dB for both the RXs and the UE TX. The height of the grid points is set to 2 m to avoid crashing into obstacles such as tables, cubicles, and chairs.
The height of the doors is considered to be 2.70 m. Other settings considered in the simulations are as follows: the diffuse scattering mode is disabled, and a maximum of six reflections and one diffraction are allowed. We also create the same floor plans in MATLAB. Then, we transfer the resulting RSS maps from the ray tracing tool to MATLAB for use in the navigation simulations. For simplicity, we assume that the UAV flies at a constant altitude. Thus, all the allowable actions, which are separated by 45°, lie in the xy plane. We consider three different UAV speeds, namely 1 m/s, 2 m/s, and 4 m/s. Most commercial drones available in the market come with a maximum speed limit of 40 mph, or 18 m/s [19], so our assumptions about the UAV speeds are reasonable. Note that, for simulation purposes, the floor areas are partitioned into grids, and hence the UAV is forced to move from the center of one grid cell to that of another. That is, if the UAV speed is set to v m/s and the UAV makes a diagonal movement, e.g., moves from grid index (1,1) to (2,2), its speed will be v√2 m/s. For simplicity, while presenting the results in Section V, we will refer to the UAV speed as v m/s independent of the movement direction. We also assume that the UAV senses the RSS intermittently at 1-second intervals. In other words, the UAV detects the RSS only when it reaches a new location. Such a sensing method helps the UAV save battery power.

IV. UAV NAVIGATION USING RSS-BASED Q-LEARNING

In this section, we introduce the RSS-based Q-learning method for the navigation of a UAV to a wireless source. In the location-based Q-learning algorithm, states and rewards are defined based on the location of the agent, i.e., its GPS coordinates. This method is not suitable for use indoors, where the GPS signal is not available. It also requires the exact coordinates (or an accurate mathematical representation of the position) of the target, which is likewise unavailable in most SAR scenarios. In our proposed approach, on the other hand, states are defined based on the RSS values at each particular grid cell or UAV location. RSS values are also used in the definition of rewards, allowing the navigation of the UAV toward the target by providing a reasonable representation of the target location.

A. State and Reward Definitions

The UAV starts from an initial position and detects the RSS at that position. A state label is assigned to this particular RSS value. Based on the fact that no two grid cells (separated by 1 m in this case) will have the same RSS value, each location is represented uniquely by a state. Then, the UAV takes an action depending on the strategy of the algorithm in use and moves to a new location. The reward is defined as the difference between the RSS values associated with the latest and the previous positions, i.e., RSS_t − RSS_{t−1}, so that higher rewards are obtained when there is an increase in the RSS. Next, a state label is assigned to the new location based on the new RSS value, and the Q-table is updated using the update equation in (1). It is worth noting that there may be small deviations from the previous RSS values on subsequent visits to the same grid cell. These deviations may be due to imprecise steps taken by the UAV or small changes in the environment or the source position. Since the states are defined based on the RSS values, this situation may lead to representing a single grid cell by multiple states, which, in turn, delays the convergence of the algorithm.
As a solution to this problem, states can be defined as neighborhoods of the detected RSS values. If the RSS value of a new location does not lie in an already defined interval, then a new state is defined; otherwise, the same state (as one of the previous states) is attained. That is, if a state is labeled s_i for the RSS value detected at time t, then the same state will be attained whenever a new RSS value is detected within the range (RSS_t − Th, RSS_t + Th). The threshold Th should be defined in such a way that the state remains unchanged provided that the UAV hovers inside the boundaries of a grid cell. Alternatively, states can be defined based on a set of RSS intervals determined before running the algorithm. A sufficiently wide range of RSS values can be divided into a number of discrete segments, and the states are assigned based on which segment the RSS at a particular location falls into. This technique may result in a small number of states, but it creates another interesting problem: two or more different locations in the indoor environment can map to the same state due to having close RSS values. Hence, a good action at one location can be a bad action at another, leading the UAV to crash. Consequently, instability may be observed in the Q-table update process. For simplicity, we assume a static environment and use the special case of the above-mentioned solution with Th = 0, i.e., each RSS value detected at a location is given a single state label.

Each episode ends when the UAV is close enough to the target. We assume an episode ends when the distance between the UAV and the victim is less than 2 m. Using the free-space path loss model [20], we calculate the corresponding RSS threshold to be −21 dBm. Note that, if the distance between the UAV and the victim is less than 2 m and there is a wall between them, the RSS value at that position will be far less than −21 dBm due to the presence of the wall. Collisions are a major problem for autonomous UAV navigation and can be avoided using range sensor or video camera-based systems, as suggested in [9], [10]. We do not address this problem in this study. However, to simulate the possible solutions, each time before the agent takes a new action, we check whether that action leads to a crash. If so, the action is dropped from the list of possible actions, and another action is picked. The overall Q-learning process is summarized in Algorithm 1.

Algorithm 1 RSS-based Q-learning for indoor UAV navigation
1: start from an initial location and obtain the associated state by sensing the RSS
2: repeat (for each step):
3: if ε ≥ ε_min then
4:   ε ← ε × exp(−η); end if
5: if α ≥ α_min then
6:   α ← α × exp(−η); end if
7: choose a using the ε-greedy policy
8: take action a, observe s′
9: check s′ for possible obstacle(s)
10: while there is an obstacle at s′ do
11:   discard a and select another action randomly; end while
12: calculate the reward for taking action a by subtracting the RSS associated with state s from that associated with state s′
13: update the Q-value using (1)
14: s ← s′
15: until s is terminal

B. ε-Greedy Method

To overcome the exploration-exploitation dilemma in Q-learning, we deploy the ε-greedy method. The main idea of the ε-greedy method is to draw a random number from [0, 1] and compare it with ε. If the number is lower than ε, the agent takes a random action; otherwise, it takes the greedy action, i.e., the one with the highest Q-value. It is shown in [18] that Q-learning converges as long as all state-action pairs continue to be visited; hence, we start with ε = 1 and decrease it exponentially with a decay factor η as the iteration number grows. To increase the importance of future rewards, we set the discount factor γ to 0.98. The learning model parameters used in this study are specified in Table II. In each iteration, the agent (the UAV in our case) starts from an initial location and traverses the indoor scenario. If the UAV detects the UE, it gets a reward of 1000. Once the UAV finds the target, the current episode finishes, and a new one starts. Since the UAV becomes more experienced as it moves through the indoor environments, we also decay α exponentially with η.
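For concreteness, the following is a minimal, self-contained Python sketch of the procedure in Algorithm 1. It is an illustration under simplifying assumptions, not the authors' simulation code: the RSS map is a hypothetical 2-D array standing in for the ray-tracing heat map (a real ray-traced map would have a unique RSS per cell), obstacles are marked by NaN entries, the η value is an assumed placeholder, and the other hyperparameters mirror those quoted in the text (γ = 0.98, terminal reward 1000, exponential ε and α decay). The −21 dBm termination threshold is recomputed in-line from the free-space path loss model for a 25 dBm transmitter at 2.4 GHz and a 2 m separation.

```python
import math
import random
from collections import defaultdict

# --- Episode-termination threshold from free-space path loss (FSPL) ---
# FSPL(dB) = 20 log10(d) + 20 log10(f) + 20 log10(4*pi/c); at d = 2 m and
# f = 2.4 GHz this is ~46 dB, so RSS ~ 25 dBm - 46 dB ~ -21 dBm.
C = 3e8
def fspl_db(d_m: float, f_hz: float) -> float:
    return 20 * math.log10(d_m) + 20 * math.log10(f_hz) + 20 * math.log10(4 * math.pi / C)

TX_POWER_DBM = 25.0
RSS_TERMINAL = TX_POWER_DBM - fspl_db(2.0, 2.4e9)   # ~ -21 dBm

# Eight actions separated by 45 degrees (unit grid moves).
ACTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def run_episode(rss_map, start, q, eps, alpha, gamma=0.98):
    """One episode of RSS-based Q-learning on a 2-D RSS heat map.
    States are the sensed RSS values themselves (the Th = 0 case);
    NaN cells are obstacles."""
    rows, cols = len(rss_map), len(rss_map[0])
    pos = start
    s = rss_map[pos[0]][pos[1]]                      # state = sensed RSS
    steps = 0
    while s < RSS_TERMINAL:
        # epsilon-greedy action selection over the 8 headings
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[(s, i)])
        dr, dc = ACTIONS[a]
        nr, nc = pos[0] + dr, pos[1] + dc
        # obstacle/boundary check: discard the action and pick another at random
        while not (0 <= nr < rows and 0 <= nc < cols) or math.isnan(rss_map[nr][nc]):
            a = random.randrange(len(ACTIONS))
            dr, dc = ACTIONS[a]
            nr, nc = pos[0] + dr, pos[1] + dc
        s_next = rss_map[nr][nc]
        # reward: RSS difference, plus the large terminal bonus on finding the UE
        r = (s_next - s) + (1000.0 if s_next >= RSS_TERMINAL else 0.0)
        best_next = max(q[(s_next, i)] for i in range(len(ACTIONS)))
        q[(s, a)] = (1 - alpha) * q[(s, a)] + alpha * (r + gamma * best_next)
        pos, s = (nr, nc), s_next
        steps += 1
    return steps

if __name__ == "__main__":
    # Toy 5x5 "heat map" (dBm), increasing toward the victim at (4, 4).
    nan = float("nan")
    rss_map = [[-60 + 5 * (i + j) if (i, j) != (2, 2) else nan for j in range(5)]
               for i in range(5)]
    rss_map[4][4] = -20.0                            # within 2 m of the victim
    q = defaultdict(float)
    eps, alpha, eta = 1.0, 0.9, 0.01                 # eta: assumed decay factor
    for episode in range(200):
        steps = run_episode(rss_map, (0, 0), q, eps, alpha)
        eps = max(0.05, eps * math.exp(-eta))        # exponential epsilon decay
        alpha = max(0.1, alpha * math.exp(-eta))     # exponential alpha decay
    print("steps in final episode:", steps)
```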
To increase the importance of future rewards, we set the discount factor γ to 0.98. The learning model parameters used in this study are specified in Table II. In each episode, the agent, the UAV in our case, starts from an initial location and traverses the indoor scenario. If the UAV detects the UE, it gets a reward of 1000. Once the UAV finds the target, the current episode finishes and a new one starts. Since the UAV becomes more experienced as it moves through the indoor environment, we also decay α exponentially with η. Proposition 1. The RSS-based Q-learning algorithm is an MDP. Proof. According to [18] and [21], an MDP has five components: 1) a finite set of states, 2) a finite set of actions, 3) a transition probability, 4) an immediate reward function, and 5) a decision epoch set that can be either finite or infinite. In our proposed algorithm, if the indoor scenario has a finite area, the total number of unique states is also finite. The total number of allowable actions is eight, and the UAV chooses an action by the ε-greedy method. The reward function is defined as the difference between the RSS values of the current and previous states, and finally, the UAV takes decisions until it finds the victim, which leads to a finite decision epoch set. Thus, we can conclude that the proposed indoor navigation framework is an MDP. Corollary 1. The Q-learning algorithm in the proposed RSS-based indoor navigation system will converge to an optimal action-value function with probability one. Proof. In our proposed method, the states and actions are finite, and we consider γ to be less than one. The reward function is finite, and α ∈ [0, 1]. All the Q-values are updated and stored in tables, and the Q-tables of both the RSS-based and location-based algorithms receive, in the limit, an infinite number of updates. Thus, we fulfill all the conditions mentioned in [17] for convergence. C. Limitations There are a few limitations in our simulation setup which we plan to address in future research. We assume a stationary indoor environment where the victim does not move, which might not always be the case; in emergencies, victims might change their locations abruptly and randomly for safety purposes. In addition, frequent sharp turns while traversing cost the UAV more battery power. We intentionally overlook this non-trivial issue for the sake of simplicity. We will consider battery constraints and a dynamic environment for the navigation of UAVs in our future research. V. EXPERIMENTS AND RESULTS We first investigate the trajectories followed by the UAV in both scenarios using the RSS-based algorithm. We consider a location-based Q-learning algorithm as the baseline, where we assume that the UAV can track its indoor location and the location of the target is known beforehand. In addition, the reward in the location-based algorithm is defined as 1/D_t, where D_t is the Euclidean distance between the UAV and the victim after taking an action at time t. In this way, the UAV tries to minimize its distance from the victim through the iterations. Note that, for UAV speeds greater than … The resulting UAV trajectories for the RSS-based algorithm are shown in Fig. 4(a) and Fig. 4(b). The UAV speed is considered to be 1 m/s. The UAV starts from the initial location (93 m, 2 m) in Scenario 1, and from the location (74 m, 55 m) in Scenario 2. The victim is considered to be situated at (5 m, 14 m) in Scenario 1, and at the location (4 m, 4 m) in Scenario 2.
In both scenarios, we observe that the trajectories tend to avoid the regions with low RSS values. Since the reward is defined as the difference between the RSS values at successive states, the UAV shows an inclination towards higher RSS values in the next steps rather than towards finding the victim along the shortest path. We see the same trend in other simulations with different starting positions. Following the paths with higher RSS values eventually leads the UAV towards the victim: although the UAV does not know the victim's location, it can successfully reach the destination. For the location-based baseline, the simulation settings are the same as those of the RSS-based Q-learning experiments. We observe that the UAV tries to find the shortest path towards the victim in both scenarios, as expected. Note that, in Fig. 4(d), the UAV tends to enter some of the compartments. This is due to the fact that the points inside the compartments are nearer to the victim than any other point in the hallway area. For higher speeds, the UAV avoids those points, since the overall distance covered by the UAV would otherwise increase. To better understand the learning processes, we investigate the relative frequency of state visits through the episodes. Heat maps (averaged over 100 runs) in Fig. 5 show the results for three different episode intervals. Comparing Fig. 5(a) and Fig. 5(b), we observe that location-based Q-learning visits locations near the starting point more frequently in the first 200 episodes than its RSS-based counterpart. This is because the RSS-based method tries to find the locations that provide higher signal strength, and the RSS values at different locations are unique. As a consequence, the RSS-based method learns better policies faster. On the other hand, location-based Q-learning focuses on finding the shortest route, and two or more locations might have the same distance from the target. Hence, the location-based method needs more exploration. From Fig. 5(c) and Fig. 5(d), which show the frequency of state visits in the first 500 episodes, we can also conclude that the RSS-based method finds the optimal policy earlier than the location-based method. Lastly, as is clear from Figs. 5(e) and 5(f), both methods learn optimal policies during the first 1000 episodes. Fig. 6 shows the average number of steps taken per episode by the UAV to reach its goal for different speeds in Scenario 1. The number of steps required in each episode is averaged over 100 realizations. As expected, the number of steps decreases with the episode index. The UAV learns the representation of the indoor environment better as it becomes more experienced and hence requires fewer steps to reach the goal. The UAV can move to fewer states as its speed increases, and thus the Q-learning algorithms tend to converge quicker at higher UAV speeds. Moreover, we observe that the RSS-based navigation converges within about the same number of episodes as the location-based method. Similar to the observations in Fig. 5, since the RSS-based technique focuses only on getting higher RSS values as rewards, it quickly learns to skip the states that provide lower RSS values. Meanwhile, location-based Q-learning treats every possible state equally and hence ends up with a higher average number of steps during the early episodes. Next, we explore the convergence time of the algorithms for different UAV speeds. We record the trajectory followed by the UAV to reach its goal for each episode.
If the UAV follows the same path for three consecutive episodes, we conclude that the Q-table has converged. The time elapsed until the convergence of the Q-tables is averaged over 100 executions. The results are shown in Fig. 7. Similar to the above results, since the number of allowable actions decreases with the UAV speed, the convergence time decreases for both algorithms. We observe that the RSS-based algorithm shows competitive performance in terms of convergence time with the location-based algorithm, especially at higher UAV speeds. Finally, we provide the total length of the final trajectories in Table IV. Since Scenario 2 consists of longer hallways and includes compartments, the UAV needs to take more steps to reach the goal compared with Scenario 1. Overall, our proposed technique provides results very close to those of the location-based algorithm in terms of the number of steps in the final trajectory. However, having the same number of steps does not always imply the same computational time or path length. This is due to the fact that diagonal movements take more time than movements along the left-right and up-down directions. Since the location-based algorithm results in straighter trajectories, as shown in Fig. 4, its total final path length and flight time will be smaller than those of the RSS-based algorithm. VI. CONCLUSION In this paper, we studied the problem of detecting or rescuing a victim in a GPS-denied indoor environment using the RSS of the RF signals sent by the victim's smart devices. We envisioned a rescue system that deploys a UAV, which navigates through the indoor environment using Q-learning techniques. We presented simulation results for two indoor scenarios with different complexities. We also compared our proposed technique with location-based Q-learning and found that RSS-based Q-learning provides competitive performance without requiring the UAV and target location information. Our results show that RSS-based Q-learning exhibits fewer fluctuations during training than the location-based method. The convergence time decreases with increasing UAV speed for both methods, and the RSS-based technique learns the environment earlier than its location-based counterpart.
2019-05-31T04:08:21.000Z
2019-05-31T00:00:00.000
{ "year": 2019, "sha1": "39803c59c444dacde61d7a70fe6291118f44f2ab", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1905.13406", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "39803c59c444dacde61d7a70fe6291118f44f2ab", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
54937660
pes2o/s2orc
v3-fos-license
Evaluation of nine microsatellite loci and misidentification paternity frequency in a population of Gyr breed bovines
Paternity misidentification is harmful because it reduces the annual genetic gain of the population and endangers an efficient genetic improvement program. The objectives of the present study were to evaluate nine microsatellites for paternity testing and to investigate the frequency of paternity misidentification in families of a Gyr breed bovine population. In the present experiment, blood samples from forty Gyr breed families (bull/cow/calf), registered as purebred with the Zebu Breeders Brazilian Association (ABCZ), were used. Most of the microsatellites used in this work were recommended by the International Society for Animal Genetics (ISAG). Genomic DNA extraction was performed from whole blood samples. The microsatellites TGLA122, TGLA126, BM1824, BMS2533, SPS115, ETH3, ETH10, ETH225 and POTCHA were amplified by PCR. The amplification products were separated by electrophoresis in a denaturing polyacrylamide gel. From the obtained data, allele frequencies, Gene Diversity, Polymorphism Informative Content and Probability of Exclusion were calculated for each microsatellite marker. Genotype frequencies, Heterozygosity, the Combined Probability of Exclusion and the Probability of Paternity were also calculated for the considered families. The Combined Probability of Exclusion for all microsatellites was around 0.9789. The paternity testing results showed misidentification in eleven of the 40 studied families, that is, 27.5% of the sample. The Probability of Paternity ranged from 0.8691 to 0.9999, with a mean of 0.9512. INTRODUCTION The Gyr breed is essential to the formation of Girolando cattle (5/8 Holstein + 3/8 Gyr). Both Gyr and Girolando are phenotypically superior, much better adapted to the Brazilian climate and economic conditions, and have crucial qualities for good milk production in the tropics. Girolando cattle benefit from hybrid vigor, combining the rusticity and adaptation to the tropics peculiar to the Gyr breed with the good milk production trait of the Holstein breed. Correct kinship records between members of a population are a prerequisite for an efficient genetic improvement program 22,8,11. The estimation of population genetic parameters and of individual genetic merit through an animal model depends on genealogy, since these models use performance data from related animals 17. Even a small misidentification percentage severely compromises the estimation of genetic parameters 15. Despite this, several farms employ management practices that compromise genealogical information. Geldermann et al. 8 estimated a decrease of between 8.7% and 16.9% in yearly cattle genetic gain for a 15% misidentification frequency. Ron et al. 17 suggested a 5% increase in yearly genetic gain when paternity testing is performed on the bulls tested yearly in Israel. Bovine paternity studies using blood types, proteins and molecular markers showed a high frequency of incorrect paternity in Israel (5%), Germany (4-23%), Denmark (8-30%) and Ireland (20%), justifying the use of such tests in genetic improvement programs 8,3,17. Rosa 18 performed paternity testing with molecular markers in Nelore bovine families. The results showed paternity misidentification in 15% of the studied families, which justifies the use of these tests in genetic improvement programs.
Parental relationships between individuals may be proved using several categories of genetic markers, which form the basis of paternity testing. The biological information used in paternity testing consists of the genetic inheritance that an offspring receives from its mother and from a prospective father. Once the genetic contribution inherited from the mother is screened, it is necessary to investigate whether the rest of the information was transmitted by the prospective father. If the latter possesses the hereditary characteristics transmitted to the offspring, he cannot be excluded from paternity, and the result is expressed as a probability of paternity. If, on the other hand, the prospective father does not have these traits, he is excluded from the possibility of paternity. Initially, polymorphisms of morphological markers and of blood types, as well as biochemical polymorphisms generated by the Major Histocompatibility Complex (MHC), were used for this purpose 18. However, these marker categories do not give conclusive results, and due to the great number of genetic systems needed for an adequate final result, the use of such paternity testing is limited by its cost. Recent advances in molecular biology, such as the development of the Polymerase Chain Reaction (PCR) 16 and the constant discovery of new molecular markers, are significantly helping to overcome these limitations. The molecular markers that stand out as the most appropriate for paternity tests are Restriction Fragment Length Polymorphisms (RFLPs), multilocus minisatellites, which yield an individual band pattern known as DNA fingerprinting, and, mainly, specific microsatellite loci. Microsatellites are generally highly polymorphic (a great number of alleles per locus), even in inbred populations, and have the additional advantage of being codominant. They are highly frequent, well distributed along the genome and easily amplified by PCR. These traits make it possible to determine, with greater reliability, the paternal and maternal origin of each microsatellite allele observed in the progeny. Microsatellites are being used for paternity identification in several domestic animal species, such as bovines 11,21,23, swine 12, canines 7 and caprines 1. The objectives of the present study were to investigate the frequency of paternity misidentification in a population of Gyr breed animals using DNA microsatellites and to evaluate the potential use of these microsatellites in paternity tests and individual identification in Gyr breed bovines. MATERIAL AND METHOD The experiment was conducted at BIOGEM (Laboratory of Biotechnology and Molecular Genetics) of the Department of Genetics, Institute of Biosciences, São Paulo State University (UNESP), campus of Botucatu. For the experimental analysis, blood samples from forty Gyr breed families (bull/cow/calf), registered as purebred with the Zebu Breeders Brazilian Association (ABCZ), were used. The families were sampled in a way that guaranteed proportionality in the use of bulls in the examined herd, i.e., bulls that were used more had more families represented in the sample. Seven of the nine microsatellites used in this work are recommended for bovine paternity tests by the International Society for Animal Genetics (ISAG) 13, based on criteria established by the Food and Agriculture Organization (FAO).
Whole blood samples (5 ml) were collected using vacutainer tubes containing 7.5 mg EDTA. The blood was homogenized in the EDTA and kept on ice; after collection, it was stored in a refrigerator at 4°C until DNA extraction. For genomic DNA extraction, a Genomic Prep™ Blood DNA Isolation Kit (AMERSHAM PHARMACIA) was used. DNA was extracted from 300 µl of whole blood. After quantification and dilution of the DNA samples, the target DNA regions were amplified by PCR. Each reaction had a final volume of 25 µl, and the amplification mixture contained: 50 ng genomic DNA from whole blood leukocytes; 0.16 µM of each primer; 10 mM Tris-HCl pH 8.0; 50 mM KCl; 2.0 mM MgCl2; 0.2 mM of each dNTP and 1 U Taq DNA polymerase. Amplification reactions were performed in an M.J. Research PTC 100 thermocycler with the following 5 steps: (1) initial denaturation of the double strand at 94°C for 3 minutes, (2) denaturation at 94°C for 1 minute, (3) primer annealing at 54-60°C, depending on primer composition, for 30 seconds, (4) extension at 72°C for 1 minute and (5) final extension at 72°C for 3 minutes. Steps 2, 3 and 4 constitute a cycle that was repeated 32 times. After the last cycle and the final extension, the temperature was lowered and held at 4°C (cooling) to preserve the products. Table 1 presents the primer pairs used for amplification, the annealing temperature (AT) and the reference for each microsatellite studied. The gel for the vertical electrophoretic separation of the amplified DNA fragments was prepared in sequencing glass plates (36.4 cm × 19.6 cm), given the need for a migration distance of at least 30 cm for proper separation of the different alleles, allowing their lengths to be estimated and identified in base pairs (bp). A 6% denaturing polyacrylamide gel was used to permit good separation of fragments of different sizes. A 20 µl sample of denaturing loading buffer and amplified DNA in a 1:1 proportion was loaded onto the gel after denaturation at 95°C for 2 minutes. A constant power of 40 W was applied for the period necessary for fragment migration (2 to 4 hours, depending on the average allele size). A molecular weight standard with fragments at 10-base-pair intervals (10 bp DNA Ladder, GIBCO BRL) was added to two lanes of each gel. By comparing the migration distances of the sample bands with those of the standard, it was possible to determine the DNA fragment sizes of each individual for each microsatellite. After electrophoresis, DNA fragments (bands) were detected by silver nitrate staining. The amplified fragments were visualized under white light and photographed on Polaroid 667 film (Fig. 1). From these data, allele frequencies, Gene Diversity (GD) 24, Polymorphism Informative Content (PIC) 5 and Probability of Exclusion (PE) 6 were calculated for each microsatellite marker. The Combined Probability of Exclusion (CPE) 17 was calculated for the group of microsatellites used. Genotype frequencies and heterozygosity (Het) 24 were also calculated. The Probability of Paternity (PP) 10 was estimated for all families in which there was no paternity exclusion. Hardy-Weinberg equilibrium was tested for each marker locus.
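As a rough illustration of these population-genetic statistics, the sketch below computes allele frequencies, gene diversity (expected heterozygosity) and PIC from a list of genotypes, using the standard formulas GD = 1 − Σp_i² and PIC = GD − Σ_{i<j} 2p_i²p_j² (Botstein et al.). The genotype data are hypothetical, and the exact estimators used by the authors (refs. 5, 6, 24) may differ in detail.

```python
from collections import Counter

# Hypothetical genotypes for one microsatellite locus: allele sizes in bp.
genotypes = [(141, 149), (141, 141), (149, 153), (141, 153), (153, 153)]

alleles = [a for g in genotypes for a in g]
n = len(alleles)
freqs = {a: c / n for a, c in Counter(alleles).items()}
p = list(freqs.values())

# Gene diversity (expected heterozygosity): GD = 1 - sum(p_i^2)
gd = 1.0 - sum(pi ** 2 for pi in p)

# Polymorphism Information Content:
# PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2
pic = gd - sum(2 * p[i] ** 2 * p[j] ** 2
               for i in range(len(p)) for j in range(i + 1, len(p)))

# Observed heterozygosity: share of heterozygous individuals.
het = sum(1 for a, b in genotypes if a != b) / len(genotypes)

print(f"allele freqs={freqs}, GD={gd:.3f}, PIC={pic:.3f}, Het={het:.3f}")
```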
RESULTS AND DISCUSSION The effectiveness of paternity testing does not depend on the number of microsatellites used but on the level of informativeness these markers provide. The informativeness of a microsatellite is determined by its values of Polymorphism Informative Content (PIC), Heterozygosity (Het), Gene Diversity (GD) and Probability of Exclusion (PE), and these values depend on the number of alleles and on the frequency distribution of these alleles in the population. The results regarding the potential use of the microsatellite markers in paternity tests and in the control of individual identification in the studied population are shown in Table 2. The PIC, GD, Het and PE results for the BMS2533 and TGLA122 microsatellites indicate the high informativeness of these markers in the studied samples, given the high variability found. Thus, these microsatellites proved adequate for paternity testing and for individual characterization in the sampled Gyr breed population. The same is not true for the ETH10, SPS115, TGLA126, ETH3, POTCHA, BM1824 and ETH225 markers. Microsatellites like ETH3, ETH225 and BM1824, recommended by ISAG for paternity tests in bovines, showed low PE. On the other hand, the BMS2533 microsatellite, not commonly used for these purposes, showed a high Probability of Exclusion. This fact makes clear the need to characterize the different populations or lineages within a breed in which one wants to perform paternity testing, since the number of alleles and the allele frequencies can differ between populations of the same breed. Table 3 shows the increase in the Combined Probability of Exclusion (CPE) as a function of the number of microsatellites used. The Combined Probability of Exclusion obtained with the use of 9 microsatellites (0.98) was smaller than the optimal value (0.99). The Combined Probability of Exclusion for 7 markers with PE near 0.5 would be 0.992. A CPE of the same magnitude (0.991) would be reached with only 4 markers with PE near 0.7. The frequency of paternity misidentification found in this study was 27.5% (11 in 40). Baron et al. 2 used microsatellite markers and found 36% paternity misidentification in Gyr breed families of bulls submitted to progeny tests. These values are above those obtained by other authors studying other breeds in other countries, as mentioned by Ron et al. 17. The paternity misidentification frequency found in this study may reflect the Brazilian reality and shows that more efficient control over genealogical records is necessary in genetic improvement programs for the Gyr breed. Table 4 shows the excluded families in the paternity tests and the microsatellite markers responsible for the exclusion. Table 4. Excluded families in the paternity tests and the microsatellite markers responsible for the exclusion. Botucatu-SP, 1999. T = bull, V = cow, F = calf.
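The combined exclusion probability follows the standard relation CPE = 1 − Π(1 − PE_i) over the individual markers. A small sketch with illustrative per-marker PE values (the paper's Table 2 values are not reproduced here) shows the calculation and why a few highly informative markers can match many weak ones:

```python
from functools import reduce

def combined_pe(pes):
    """CPE = 1 - product(1 - PE_i) over the individual marker PEs."""
    return 1.0 - reduce(lambda acc, pe: acc * (1.0 - pe), pes, 1.0)

# Illustrative values mirroring the text's comparison:
print(round(combined_pe([0.5] * 7), 3))  # 0.992 (cf. 0.992 for 7 markers, PE near 0.5)
print(round(combined_pe([0.7] * 4), 3))  # 0.992 (cf. 0.991 for 4 markers, PE "near" 0.7)
```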
The Probability of Paternity was estimated for all families in which paternity exclusion did not occur (Table 5). In families where exclusion occurred, the Probability of Paternity is null. The paternity test results showed a Probability of Paternity varying between 0.8691 and 0.9999, with an average of 0.9512. Only 8 families reached the recommended probability of 0.99. Bull A (TA), the one most used in the herd for breeding, had its allele frequencies increased across generations. As a consequence, the number of bull genotypes compatible with the genotypes of the cows and progeny possibly increased, decreasing the Probability of Paternity of this bull. In bull D (TD), the opposite occurred. Thus, in confined herds with high levels of inbreeding, it is necessary to use a greater number of microsatellite markers to reach the optimal Probability of Paternity. Besides more polymorphic microsatellites for the Gyr breed, variants of the PCR technique, such as multiplex PCR, are necessary to decrease the costs of genotyping the animals and to enable the commercial use of paternity tests. Paternity testing could be applied in many practical situations, such as selection programs that use multiple sires in the field, performance evaluation programs for young bulls, families of bulls submitted to progeny tests and families of animals registered in the breeders' associations, so that the truthfulness of the information given by the producers can be verified. CONCLUSIONS The PIC, GD, Het and PE results obtained for BMS2533 (not commonly used in paternity tests) and for BM1824 and ETH225 (recommended for paternity tests by ISAG, based mainly on studies performed in European cattle) show that microsatellites appropriate for paternity tests in European breeds may not be the most adequate ones for zebu breeds, and vice versa.
Figure 1. Polyacrylamide denaturing gel electrophoresis of bovine DNA microsatellites revealed with silver staining; band patterns observed for BMS2533 microsatellite alleles. Lanes 1, 2, 3, 5, ..., 17, 19 and 20: DNA of the studied animals. Lanes 4 and 18: 10 bp DNA ladder. The numbers on the left side of the figure indicate DNA fragment sizes in base pairs.
Table 3. Combined Probability of Exclusion (CPE) in terms of the number of microsatellites (MS) used, with PE combinations in decreasing order of values. Botucatu-SP, 1999.
Table 5. Probability of Paternity (PP) in families in which paternity exclusion did not occur. Botucatu-SP, 1999. TA = bull A, TB = bull B, TC = bull C, V = cow, F = calf.
2018-12-12T05:35:42.797Z
2002-01-01T00:00:00.000
{ "year": 2002, "sha1": "bad9349be475994ac8bd096fbc648b0a482f88a2", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1590/s1413-95962002000300004", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "bad9349be475994ac8bd096fbc648b0a482f88a2", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology" ] }
220349233
pes2o/s2orc
v3-fos-license
Concerns of a Post‐Chemotherapy/Radiotherapy Patient of Nasopharyngeal Carcinoma Presenting with Sustained COVID‐19 Infection
Introduction
Nasopharyngeal carcinoma (NPC) has a low incidence in India except in the north-eastern region of the country. Radiotherapy (RT) is the primary treatment modality for NPC because of the anatomical location and radiosensitivity of the cancer. Early-stage disease is often successfully treated with RT alone, with a 5-year overall survival of 87%-96% in Stages I and II. [1] Cancer patients, being in an immunocompromised state, are regarded as a highly vulnerable group in the current COVID-19 pandemic. It is recommended that cancer patients receiving antitumor treatments should have vigorous screening for COVID-19 infection and should avoid treatments causing immunosuppression, or have their dosages modified, in case of COVID-19 co-infection. [2]
Case Report
A 47-year-old male visited an otorhinolaryngologist with a chief complaint of bilateral hearing loss since July 2019, which was gradual in onset. In November 2019, after conservative management brought no relief of symptoms, the treating otorhinolaryngologist referred the case to the medical oncology department of a private hospital in New Delhi. On examination by a medical oncologist, the patient reported hearing loss and pain. He complained of dull, aching, bilateral pain over the preauricular region. The pain was intermittent, started as mild, increased on rare occasions, and was relieved by tablet tramadol 50 mg. At the clinic, the patient was investigated, and a small growth was found on the posterior nasopharyngeal wall, which was confirmed to be NPC on biopsy. The PET scan report showed that the disease was confined to its anatomical site of origin and had not yet metastasized. He was given six cycles of chemotherapy (CT) over a span of 9 weeks. He responded to the treatment and was planned for further therapy. The patient was then given concurrent chemoradiation. He received his last radiotherapy on April 10 and chemotherapy on April 8. One week later, he developed a low-grade fever, which was considered a complication of chemotherapy. However, when the fever continued, coronavirus infection was suspected, so he was sent to a COVID-19 testing center, where he tested negative. The next day, the patient developed urinary retention and was found on CT scan to have a growth in the urinary bladder, with the bladder overflowing. Urinary catheterization was tried but failed, so the patient was planned for suprapubic catheterization. Before the procedure, he was again tested for COVID-19, and the report came back positive.
The plan for suprapubic catheterization was changed, and urinary catheterization was tried again; this time it was successful with a smaller catheter (10 F). The patient was then sent to RML Hospital, New Delhi, on April 28, and from there he was referred to the COVID care facility at NCI-AIIMS Jhajjar. The patient is asymptomatic despite the COVID-19 infection and is being managed conservatively. His vitals are stable; his oxygen saturation is well maintained above 98% on room air. He has been tested thrice for COVID-19 and has remained reverse transcription polymerase chain reaction (RT-PCR) positive for a month. As the patient contracted the COVID-19 infection just after his therapies for NPC, immunosuppression may be the reason. This suggests that cancer patients with ongoing treatments are at higher risk.
Communication barrier
He is not able to communicate properly with health-care workers and with his family members on the phone, as he has hearing difficulty. Due to this lack of communication, he feels lonely and unattended. His father is the main caregiver, and his family has been quarantined, so he is unable to get a few items that he needs from home. As the father is unable to come to meet his son, he also feels guilty for not being able to fulfill his needs.
Lack of trust in the report
His first report was negative; the subsequent three RT-PCR test reports came back positive. He regularly asks about his reports and doubts the results. He is worried and wonders why others test negative after two tests but he does not. He does not trust the report. His father also calls repeatedly to ask why the reports keep coming back positive.
Anxiety and fear
He fears that his condition will deteriorate as he keeps receiving positive reports. He is afraid that his cancer treatment plans will be delayed and that this will affect his outcome.
Cancer treatment and COVID-19
Due to the overwhelming number of people seeking medical care and the burden that COVID-19 is placing on health-care providers, it has become more difficult to access regular cancer treatments during this time. Further cancer treatment and follow-up are being interrupted by the COVID-19 infection. The delay in treatment is making the patient more anxious and irritable. His father believes that the cancer will be cured if his son receives further treatment, but due to COVID-19 the treatment is being affected.
Role of caregiver
The role of a caregiver during this time is to provide support and stability. Recommend that the patient or family members avoid watching the news if it causes them anxiety or concern. Reassure the patient that you will always be available by phone or video call, and continue to remain in contact. As a caregiver, the father should be a pillar of security and comfort. Preparing a safe environment and providing the right resources will reassure the patient.
Conclusion
Although cancer therapies cannot be stopped, changes in dosage and duration should be considered. In COVID-19-positive patients, radiotherapy should be deferred until the patient becomes negative and asymptomatic. [3] Immunity-boosting diets and habits should be incorporated and encouraged. Patients and caregivers should be informed about the prognosis of the condition and how it will affect the course of the original disease. These patients should be counseled regularly so that they do not lose hope.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms.
In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2020-07-02T10:05:32.604Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "41b631218a643aa269e48897cd0300e6e3fd33b5", "oa_license": "CCBYNCSA", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7534983", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f083039a2fac9e57e17abdc0a4bb858928bebfc8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
169588281
pes2o/s2orc
v3-fos-license
“Lending as motivation for innovative activity of a modern enterprise”
Abstract
The most important motivating factor for enhancing innovation activity is lending as a stimulus for the development of a modern enterprise. The motivation of a Ukrainian enterprise, based on the need for lending to innovative activity to ensure the efficiency of its economic activity, was explored. The authors use different research methods, such as analysis and synthesis, scientific substantiation and comparison of the main indicators of the activity of the investigated enterprise, and correlation and regression analysis. The method of correlation and regression analysis is used to determine the effect of changes in the average annual cost of fixed assets, and of investments in their modernization, on the motivation to increase revenue from the sale of products (goods, products, services), as well as to characterize the functional relationship between income from product sales and capital, expenses and investments. The results of the study indicate a close relationship between the indicators: the increase in the volume of income from product sales depends strongly on attracting financing, in the form of lending, for the innovative products of the investigated enterprise. Lending to innovative activity contributes to increased sales volumes and the emergence of new products, and also serves to strengthen the motivation for enterprise development. As a result of the research, the theoretical principles of using a Ukrainian enterprise's motivational space, with lending attracted for the introduction of innovations, have been substantiated.
INTRODUCTION
The main factors holding back the development of the lending system are the lack of economic interest in the implementation of investment projects and the inability to mobilize a sufficient amount of long-term financial resources at acceptable interest rates. The current state of investment support for innovation is generally unsatisfactory, so there is a problem of financing innovative activity in the form of lending.
The development of innovative activity at enterprises makes it possible not only to change the nature of production activity, but also to give it new content and value. Innovation activity is aimed at creating innovations, moving towards a rational organizational and technological structure of production and ensuring the competitiveness of products on the markets. This circumstance requires not only a reasonable choice of innovations, but also effective financial and credit support for the innovation activity of enterprises.
For the stable functioning of the production process, domestic enterprises should hold certain stocks of material and financial resources. It is natural for Ukrainian enterprises to form resources, first of all, from their own sources; however, due to various circumstances, both general and specific to each production enterprise, the need for additional financial resources can increase suddenly and rapidly, which creates the need to obtain loans. Therefore, issues of the lending process as a form of stimulating the innovation activity of enterprises remain controversial.
LITERATURE REVIEW
For effective development and functioning in a competitive environment, a company faces the need to introduce innovations, which are directly related to financing. First of all, there are the enterprises' own financial resources, but the issues of increasing the share of lending in financing the innovative activity of Ukrainian enterprises are relevant.
Botvina (2016) rightly notes that, in practice, the lending of enterprises by domestic banks has not yet become widespread. This is due to many reasons, among which the most significant are the financial condition of the borrower and his ability to secure loan repayment. Therefore, there is a need to search for new segments for lending to the investment and innovation development of enterprises.
Mykytiuk's (2008) research covers the problem of the financial provision of investment and innovation activity; in particular, the volumes of long-term lending to the economy show that Ukrainian banks prefer projects that can make a profit in the shortest possible time, while short-term bank loans have no investment and innovation orientation. The work of Pschyk and Sukharevich (2008) is worth pointing out: they consider the sources and problems of the financial and credit provision of innovation activity in Ukraine. The authors suggest a system of measures to stimulate the activities of financial and credit institutions aimed at the development of innovative processes. Their proposed structure of the financial and credit provision mechanism for innovation activity deserves attention. Pashova (2012) explores the place of bank investment lending in the overall structure of sources of financing for innovation activities, performs a thorough analysis of the volume of bank investment loans for innovation activities and notes the main obstacles to the development of bank investment lending for innovation in Ukraine, which require urgent resolution through the introduction of effective stabilizing measures.
Considering the great attention being paid by economists, it should be noted that financial and credit support for innovation requires new approaches. Thus, Palcevich (2010) reveals the essence of the sources of financial support for innovation activities and the mechanisms for attracting them. The author describes the peculiarities of state financing of innovation development, the lending of innovative programs, and the use of leasing as a form of innovation financing. Koroljova-Kazanska (2010) explores the sources of funding for innovative enterprise projects and identifies the features of attracting borrowed financial resources as the main source of financing for innovation projects. The author focuses on long-term commercial loans, which are provided for the period of implementation of the innovation project, with the terms of lending agreed directly between the bank and the borrowing company.
Within this spectrum of research, Fedorenko and Pinchuk (2011) identify alternative financing options, namely long-term lending without state guarantees and financial leasing. The authors substantiate that one of the most important arguments in the selection of innovative projects is their compliance with the priority directions of foreign loan use and economic development.
One should pay attention to Professor Maznev's (2014) profound studies, which devote much attention to the problems of financial support for innovation development, examining the volumes of lending to enterprises, average interest rates on loans, and the financing of investment and innovation projects. Chemodurov (2013) also devotes his works to the problems of financing the innovation activity of enterprises, focusing on the possibilities of expanding funding and using it effectively. He reveals significant potential for attracting additional financial resources that remains unused today, and substantiates proposals for its use. Klimova (2009) evaluates the financial and credit support of innovation activity in Ukraine and abroad, using financial and credit leverage for state support of innovation, supporting enterprises and facilitating the development of enterprises in Ukraine. Komelina (2009) suggests directions for improving the financial mechanism for ensuring innovation and investment activity in Ukraine under the conditions of a deepening global financial and economic crisis.
In turn, Korniychuk (2014) notes the use, in commercial banks' lending practice, of a loan product such as the credit line, which can increase the volumes of investment and innovative lending. The requested credit line contributes to interest savings for the borrower, while the bank obtains reliable borrowers for the long term. Boyarinova (2009) examines the financial support of Ukraine's innovation development and distinguishes the sources of funding for innovative lending activity. The author notes that the share of such financing is low because of the high risk of lending to innovative projects, so banks provide short-term or medium-term loans.
Maidanevich and Rudenko (2016) consider the issues of investment lending for enterprise development and suggest the introduction of special conditions for compensating interest rates on long-term loans, stimulating enterprises to develop and implement long-term investment and innovation projects, which would result in an increase in gross output, as well as in the quality and competitiveness of products and the creation of new product types. Pshyk (2003) highlights a series of stages through which a bank works on lending to innovative projects and notes that one mechanism that would be appropriate for revitalizing the processes of lending to innovation in Ukraine may be the application of subsidies to interest rates on loans, granted by the state to banks on the condition that the funds are invested in the scientific, technical and innovation activities of priority industries and enterprises.
In the context of the study of topical issues of bank lending for innovation, a group of authors led by Podderyogina (2009) proposed areas of financial support for the innovative development of Ukraine's food industry, pointing out the need for targeted state support and for attracting bank loans and loans from international financial institutions. Emanuele Brancati (2015) states the need for, and effectiveness of, establishing a close relationship between the lending bank and the company in order to overcome the financial barriers to innovation. Savchuk and Grydzhiuk (2017) investigate the tendencies of banking system development in the Ukrainian economy, noting the increase in the level of lending to business and the population and the lowering of the key interest rate of the National Bank of Ukraine, which creates positive conditions for raising the level of the economy, reducing interest rates on deposits, increasing retail lending and increasing the loan portfolios of individuals.
The group of authors Girma Sourafel, Gong Yundan and Görg Holger (2008) explores the relationship between FDI and enterprise innovation. Their findings show that enterprises with foreign capital participation, or with possible access to domestic bank loans, implement more innovations than others.
Nick Rees (2017) conducts applied research on lending, defining a strategy to achieve long-term capital growth while seeking to minimize the risk of loss through the strategic investment of capital in an actively managed portfolio of private loans to companies in Latin America. The portfolio consists mainly of medium-term current assets for export, medium-term asset-based loans, import financing and working capital loans. All loans are secured by various assets, including export contracts, warehouse receipts and accounts receivable.
Despite the highly positive evaluation of scientific research on this issue, some aspects of future investment in innovation related to lending remain controversial and require comprehensive scientific study.
The purpose of the article is to study theoretical and methodological approaches to solving the problem of lending as a means of ensuring effective activity by motivating the innovation activity of business entities.
METHODS
In order to achieve the research goal, the following methods have been used: the analysis and synthesis method, for the research of credit support for investing in innovations; the scientific substantiation method and the comparison of the main indicators of the activity of the investigated enterprise; and the method of correlation and regression analysis, to characterize the functional relationship between income from product sales and capital, costs and investments.
RESULTS
Considering the development of the economy under the conditions of European integration, the innovative component "requires large-scale investment, which is impossible without sufficient credit provision" (Mayorova & Urvantseva, 2014, p. 30). The authors support Krupki's (2009) opinion that the lending of innovation is an important element in national economy development, and that credit is an incentive to work and a source of investment.
The authors conducted thorough research on the innovative enterprise PJSC "Plasmatek", which formed its motivational space through the implementation of the need to avoid over-spending during the period 2012-2016; this allows conclusions to be drawn for the entire research period of 2008-2016, based on the policy of investing in innovations in the form of the modernization and acquisition of new equipment, with loans attracted. Although information on significant investment in human capital is officially provided (Stock Market Infrastructure Development Agency of Ukraine (SMIDA)), real calculations show the motivating advantage of the renewal of fixed assets, which takes the form of upgrading and acquiring real estate, upgrading equipment and purchasing new equipment.
Thus, until 2016, innovations related to the production and sale of five types of welding electrodes, as well as the production of kaolin equipment and products.
The motivational space of PJSC "Plasmatek" is an orientation towards the production and sale of products through the use of both conventional and upgraded equipment, but without taking into account the motivation of hired workers (Table 1).
Table 1. Influence of changes in the average annual cost of fixed assets and of investments in their modernization on the motivation to increase revenue from the sale of products (goods, works and services) of PJSC "Plasmatek". * Source: formed by the authors on the basis of the Stock Market Infrastructure Development Agency of Ukraine (SMIDA).
Proceeding from the fact that 0.66 < 1 and, conversely, 2.35 > 1, one can conclude that innovative activity allows production to be motivated at a much faster pace; therefore, we evaluate the activity of the enterprise up to 2009 approximately by line 2, and from 2010 to 2016 by line 1 (Figure 1).
Thus, using the orientation factor, one can determine the likely achievements in using the potential of the motivational space due to changes in the volume of fixed assets and sales of the enterprise (Table 2).
The change in the amount of capital (fixed assets) is the difference (K1 − K) for the first two years and (K − K1) for the following years. The change in income from the sale of products (goods, works, services) is the difference (Y1 − Y) for the first two years and (Y − Y1) for the following years.
Motivation for reaching the benchmark is the motivation to use (overcome) the space as a result of changes in the orientation factor. It is calculated as a percentage: the ratio of the change in the amount of capital (or sales) to the cost of capital (or to the income (revenue) from the sale of products (goods, works and services)).
In 2008 and 2009, the need to use investments to motivate the achieved capital (fixed assets) is estimated at 24.5% and 3.9%, with a corresponding orientation of 18.0% and 24.0%. In subsequent years, the need to motivate the amount of capital (fixed assets) increases due to the intensification of investments: in all those years except 2012, the value of the indicator exceeds 100%, against a target of 12-29%.
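Under our reading of the definitions above, the benchmark-motivation indicator is simply a year-on-year change expressed as a percentage of the base level. A minimal sketch follows; the variable names and the capital series are hypothetical (chosen to reproduce the 24.5% and 3.9% figures quoted in the text), not the paper's Table 2 data.

```python
def motivation_pct(delta, base):
    """Motivation for reaching the benchmark: change relative to the base level, in %."""
    return 100.0 * delta / base

# Hypothetical capital (fixed assets) series K, in thousand UAH, for 2007-2009.
K = {2007: 12000.0, 2008: 14940.0, 2009: 15522.7}

print(round(motivation_pct(K[2008] - K[2007], K[2007]), 1))  # 24.5 -> 2008
print(round(motivation_pct(K[2009] - K[2008], K[2008]), 1))  # 3.9  -> 2009
```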
The estimated motivation to achieve income (revenue) from the sale of products (goods, works and services) through the use of investments during the period 2008-2016 ranges from 2.8% to 37.4%, with orientation rates of 12.0-29.0%. There is no excess over 100%, which indicates the problem of increasing the motivation to raise revenue from the sale of products (goods, works and services) through investments, and the justified compliance of the orientation indicator. The negative values of the indicators of the motivation of income from product sales in 2009 and 2012 indicate the exhaustion of investment opportunities in the existing development of the enterprise.
The investigation of PJSC "Plasmatek" was carried out with the help of a correlation and regression analysis of the influence of motivational tools on the motivators of activity (Table 3). Taking into consideration the peculiarities of determining the reference point in motivating revenue from the sale of products (goods, works and services) of PJSC "Plasmatek" for the period 2008-2016 due to investing in innovation activities (Table 4), it is possible to scale the correlation coefficients and regression equations down to the level of influence on the connections between motivational tools and motivators (Figures 2-5). Table 4 pairs each motivator with its motivation tool; the variables considered are revenue from the sale of all products (Y), revenue from the sale of invested products (Y1), the capital (fixed assets) of the enterprise (C) and purchased investments (C1).
Figure 2 shows the high dependence of the increase in the volume of income from sales on the need for growth of investment in the innovative products of the investigated enterprise, since the correlation coefficient in this case is the largest, indicating a close connection. The determination coefficient R² = 0.9711 shows that 97.11% of the total fluctuation in sales revenue is due to differences in investment in innovative products, while the remaining 2.89% is due to other factors not taken into consideration in this case. The motivation for capital (fixed assets) growth due to the need to increase the investment of PJSC "Plasmatek" is shown in Figure 3. The analysis shows that the strength of the relationship is moderate. The determination coefficient is 0.5814; therefore, the growth of capital (fixed assets) due to the need to increase the investment of PJSC "Plasmatek" is explained to 58.14%, with the remaining 41.86% attributable to other factors.
The motivation to increase the volume of income from product sales due to the need to increase the cost of capital of PJSC "Plasmatek" is shown in Figure 4. Figure 4 indicates a high dependence, that is, a high level of motivation to increase the volume of income (revenue) from product sales due to the need to increase the cost of capital of the enterprise.
The motivation to increase the volume of sales of innovative products due to the need to increase the investment costs of PJSC "Plasmatek" is shown in Figure 5. The dependence of the sales volume of innovative products on the need to increase investment costs is high, and therefore the motivation for the production of innovative products is high, which affects the economic development of the enterprise.
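For reference, a minimal sketch of the correlation and determination coefficients used above (for simple linear regression, R² equals the squared Pearson correlation). The data points are hypothetical; the paper's underlying SMIDA series are not reproduced here.

```python
import numpy as np

# Hypothetical series: investment in innovative products (C1) and
# revenue from sales (Y), e.g. in million UAH over 2008-2016.
c1 = np.array([1.2, 1.5, 2.1, 2.8, 3.0, 3.9, 4.6, 5.2, 6.1])
y = np.array([10.4, 12.0, 15.8, 19.9, 21.5, 27.0, 31.2, 35.9, 41.0])

r = np.corrcoef(c1, y)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(c1, y, 1)  # least-squares regression line

print(f"r = {r:.4f}, R^2 = {r**2:.4f}")
print(f"y = {slope:.2f} * c1 + {intercept:.2f}")
```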
CONCLUSION
There is a need for the effective use of bank loans in the interest of the economic development of enterprises in modern conditions. The insufficiency of enterprises' own funds for financing innovation activity leads to the search for additional external sources of funding, the most accessible of which are loans.
Lending is a motivating factor for introducing innovations at an enterprise and, in particular, opens a new direction of research, which involves the use of such developments in the practical activity of the enterprise to ensure the most effective types of activity, divided into innovation and investment incentives in the development of effective activity.
The conducted research on PJSC "Plasmatek" gives grounds to assert that the company is effective and, as a result, is motivated for innovation, which, in turn, stimulates additional investments that allow a new financial policy to be formed, in particular with loans as a means of stimulating investment.
Investing in innovation activity contributes to increasing the volume of sales of innovative products and the emergence of new product types, and lending acts as a form of enhancing investment motivation.
Figure 1. Orientation of PJSC "Plasmatek" in motivating income from the sale of products (goods, works and services) during the period 2008-2016.
Figure 2. Motivation to increase the volume of income from sales due to the need to increase investment in innovative products of PJSC "Plasmatek".
Table 2. Motivation to achieve the amount of capital (fixed assets) and income (revenue) from the sale of products (goods, works and services) of PJSC "Plasmatek" according to the orientation factor.
Table 4. Motivators and motivational tools in terms of reaching the benchmark of PJSC "Plasmatek". Source: authors' own development.
2019-05-30T23:44:41.729Z
2018-05-31T00:00:00.000
{ "year": 2018, "sha1": "9a5cc75d51bb7a6029c22fe14dd5d0e1474a7959", "oa_license": "CCBY", "oa_url": "https://businessperspectives.org/images/pdf/applications/publishing/templates/article/assets/10395/imfi_2018_02_Polishchuk.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "89837c12a46b48a83eb195f90bc3ce64ce6d5fea", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [ "Business" ] }
209655083
pes2o/s2orc
v3-fos-license
Health-care investments for the urban populations, Bangladesh and India
Abstract
Objective: To estimate the costs and mortality reductions of a package of essential health interventions for urban populations in Bangladesh and India.
Methods: We used population data from the countries' censuses and the United Nations Population Division. For causes of mortality in India, we used the Indian Million Death Study. We obtained cost estimates of each intervention from the third edition of Disease control priorities. For estimating the mortality reductions expected with the package, we used the Disease control priorities model. We calculated the benefit-cost ratio for investing in the package, using an analysis based on the Copenhagen Consensus method.
Findings: Per urban inhabitant, total costs for the package would be 75.1 United States dollars (US$) in Bangladesh and US$ 105.0 in India. Of this, prevention and treatment of noncommunicable diseases account for US$ 36.5 in Bangladesh and US$ 51.7 in India. The incremental cost per urban inhabitant for all interventions would be US$ 50 in Bangladesh and US$ 75 in India. In 2030, the averted deaths among people younger than 70 years would constitute 30.5% (1027/3362) and 21.2% (828/3913) of the estimated baseline deaths in Bangladesh and India, respectively. The health benefits of investing in the package would return US$ 1.2 per dollar spent in Bangladesh and US$ 1.8 per dollar spent in India.
Conclusion: Investing in the package of essential health interventions, which addresses the health-care needs of the growing urban populations in Bangladesh and India, seems beneficial and could help the countries to achieve their 2030 sustainable development goals.
Introduction
Cities promote national economic growth and prosperity, innovation and overall national welfare. The United Nations (UN) has pointed out that modern cities exhibit contrasts between wealth and poverty, opportunity and deprivation, and vibrant potential and systemic decay. 1 Cities have natural advantages in providing all kinds of services, not least because they are national economic drivers, with access to proportionately greater financing mechanisms than rural areas. Their size allows for a greater variety of services and economies of scale compared with sparsely populated areas. However, they also face challenges, for example, in establishing new health facilities where real estate is expensive and scarce, and in incorporating the long-neglected urban poor into comprehensive planning. 2
According to the UN, the proportion of the world's population living in urban areas will increase from an estimated 55% in 2018 to an estimated 68% by 2050. 3 Bangladesh and India are experiencing some of the highest urban population growth rates in the world. The UN projects that the urban population of Bangladesh will grow from 48 million in 2011 to 84 million in 2030. In India, the projected increase is from 377 million to 612 million. 3 This increase will be largely due to internal migration and natural population growth. Slums will account for an ever-greater proportion of urban inhabitants. In 2015, 62 cities in Bangladesh and India had populations of more than 1 million and five had more than 10 million; by 2030, 77 cities will exceed a population of 1 million and eight will exceed 10 million. 3
To meet the health needs of the growing urban population, health-care services need to expand. Currently, both countries have a mix of public and private health-care provision.
In India, publicly-financed health services have been provided exclusively by public sector facilities, with little formal attention either to regulating the private sector or to delivering publicly-financed services through private providers. 4 The Bangladeshi government has for the past two decades used some public financing to fund services through nongovernmental organizations. In both countries, health infrastructure and services have steadily improved, but are still inadequate to serve the population need. Few efforts to improve urban health have been made over the last several decades, either by national governments or external partners. 5,6

To assess the cost and benefit of providing interventions for major public health, prevention and treatment needs for populations in Bangladesh and India, we identified interventions from the nine volumes of the third edition of the Disease control priorities. 7 The 208 interventions identified constitute a package of essential health services covering the most common causes of visits to doctors and admission to hospitals during the life course. This package includes almost all of the 218 interventions included in Disease control priorities, omitting only those that are not relevant to South Asian populations, e.g. prevention and treatment of African trypanosomiasis. Box 1 presents some examples of the interventions; the full list is available in the data repository. 8 Here we estimate the costs, mortality reduction and the benefit–cost ratio of providing this package for the urban populations in Bangladesh and India.

Demography and disease burden
For both countries, we used population data from the 1991, 2001 and 2011 censuses. 9,10 21

We based costs on a population of one million and an 80% coverage level for all included interventions. We calculated two cost estimates of the package, incremental and total costs. Incremental cost, that is, the additional cost that would be needed to provide 80% population coverage, was calculated as:

$$C_{\text{incremental}} = \sum_{i} P_i \cdot \Delta co_i \cdot c_i$$

where $P_i$ is the population in need of intervention $i$, $\Delta co_i$ is the additional proportion of individuals needed to reach 80% coverage (that is, 80% minus the current coverage level), and $c_i$ is the yearly cost per person of intervention $i$. The total cost for 80% coverage, that is, the total cost of current spending plus incremental cost, was calculated as:

$$C_{\text{total}} = \sum_{i} P_i \cdot t \cdot c_i$$

where $t$ is the target coverage level of 80%. 22 To estimate the population in need of an intervention, we used national surveys, ministry reports and population-based registries to obtain incidence or prevalence data of relevant conditions.
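To make the two cost formulas concrete, here is a minimal Python sketch. The intervention names, populations in need, coverage levels and unit costs below are hypothetical placeholders, not values from the study; the 40% markup applies the infrastructure excess described later in the Methods.

```python
# Illustrative sketch of the two cost formulas (hypothetical inputs).
# P: population in need, cov: current coverage, c: yearly cost per person.
TARGET = 0.80        # target coverage level t
OVERHEAD = 0.40      # infrastructure/support markup described in the Methods

interventions = [    # hypothetical example values, not from the study
    {"name": "hypertension screening", "P": 120_000, "cov": 0.35, "c": 4.0},
    {"name": "skilled birth attendance", "P": 18_000, "cov": 0.55, "c": 30.0},
]

# Incremental cost: sum_i P_i * (target - current coverage) * c_i
incremental = sum(iv["P"] * max(TARGET - iv["cov"], 0.0) * iv["c"]
                  for iv in interventions)
# Total cost at 80% coverage: sum_i P_i * t * c_i
total = sum(iv["P"] * TARGET * iv["c"] for iv in interventions)

# Add 40% of direct costs for infrastructure, surveillance and regulation.
print(f"incremental cost: US$ {incremental * (1 + OVERHEAD):,.0f}")
print(f"total cost at 80% coverage: US$ {total * (1 + OVERHEAD):,.0f}")
```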
For various reproductive, maternal and child conditions, and gender-based violence, we used urban-specific data from the India National Family Health Survey, 23 Bangladesh Demographic and Health Survey, 24 and Bangladesh Report on Violence Against Women Survey. 25 For cancer incidence, we used 2012 data from the International Agency for Research on Cancer GLOBOCAN database for Bangladesh 26 and data for urban India (2012–2014) from the National Population-Based Cancer Registry. 27 For conditions where incidence and prevalence data were not reported by urban and rural sectors, we used national estimates from published literature, government reports, World Health Organization (WHO) reports and the Global Burden of Disease model-based estimates for South Asia for 2016. 28 The earliest data were from 2011. Where epidemiological data were not available, we used estimates from the third edition of Disease control priorities for lower middle-income countries. 22

For baseline coverage data, we used the Indian National Family Health Survey and the Demographic Health Survey in Bangladesh, which provide data on the urban population for most reproductive, maternal and child health, and household sanitation interventions. 23,24 For other interventions, we used data from the published literature. For interventions similar to an intervention with available coverage data, we used that intervention as a proxy. We used WHO coverage estimates for malaria and tuberculosis diagnosis and treatment. 29,30 For missing data (e.g. for mental health disorders) we used baseline coverage estimates for lower middle-income countries from the third edition of Disease control priorities. 7,22

Box 1. Examples of interventions included in the suggested urban package of essential health services for Bangladesh and India
Maternal, perinatal and childhood conditions
• Management of labour and delivery in low-risk women by skilled attendants, including basic neonatal resuscitation following delivery, and in high-risk women, including operative delivery.
Infectious diseases
• Active case finding of high-risk individuals (e.g. people living with HIV) with tuberculosis symptoms and linkage to care.
• In all malaria-endemic areas, diagnosis with rapid test or microscopy followed by treatment with artemisinin-based combination therapy (or current first-line combination). Where rapid test and microscopy are unavailable, patients with febrile illness receive presumptive treatment with artemisinin-based combination therapy and patients with severe illness receive antibiotics in addition.
Noncommunicable diseases (such as cardiovascular disease, cancer, mental health, rehabilitation and palliative care)
• Substantial increases in the excise taxes on manufactured cigarettes.
• Opportunistic screening for hypertension for all adults and initiation of treatment among individuals with severe hypertension and/or multiple risk factors.
• Long-term management of ischaemic heart disease, stroke and peripheral vascular disease with aspirin, β blockers, blood-pressure-lowering pills and statins (as indicated) to reduce risk of further events.
• Management of acute exacerbations of asthma and COPD using systemic steroids, inhaled β-agonists and, if indicated, oral antibiotics and oxygen therapy.
• Early detection and treatment of early-stage breast, cervical and childhood cancers.
• Management of depression and anxiety disorders with psychological and generic antidepressant therapy.
• Rehabilitation programmes for cardiac and pulmonary conditions.
• Essential palliative care and pain control measures, including oral immediate-release morphine, and medicines for associated symptoms.
Injuries
• Trauma-related surgical procedures, such as laparotomy and amputations.
• Rehabilitation for patients following acute injury or illness.
• Gender-based violence care, including counselling, provision of emergency contraception and rape-response referral.
BCG: bacillus Calmette-Guérin; COPD: chronic obstructive pulmonary disease; DPT: diphtheria, pertussis and tetanus; HIV: human immunodeficiency virus. Note: All 208 interventions included in the package are available from the data repository. 8

To account for the costs of infrastructure, surveillance, regulation and other support activities, we added 40% of the total direct cost, that is, personnel, drugs and equipment costs. This infrastructure markup was based on an earlier detailed costing analysis for India. 4 Using the above inputs, we estimated overall current annual spending, current annual spending per capita, and the incremental and total annual cost needed to achieve 80% coverage of the package. We also allocated the cost across major disease groups, by platform of the health system (that is, population-based interventions, community services, health centres, first-level hospitals, and referral and specialized hospitals) and by the type of care provided (that is, urgent, recurrent for chronic diseases and other, such as childhood immunization).

Mortality reduction
We estimated the number of premature deaths (before 70 years of age) averted by the package by first estimating the age and sex distributions of the urban populations of Bangladesh and India for 2030. We did so by applying the age and sex distributions of the urban population in the 2011 Bangladesh and India censuses to the UNPD urban population projection for 2030. 3,9,10 We projected 2030 baseline deaths using cause-specific mortality rates from the Million Death Study for urban India. 12 Such data were unavailable for Bangladesh, so we used the average cause-specific mortality rates in urban West Bengal and Assam, the major Indian states bordering Bangladesh, as proxies. We used the effect sizes of the package interventions on mortality reduction for lower middle-income countries from a published working paper, 31 assuming uniform effect sizes across all age groups and 80% efficiency in intervention delivery at baseline. We compared the estimated mortality reduction to the so-called 40x30 reduction target, which is a set of selected disease-specific targets to help achieve sustainable development goal 3. 32 This target aims for a 40% reduction in deaths among people younger than 70 years; a two-thirds reduction in child and maternal mortality and in mortality due to human immunodeficiency virus infection, tuberculosis and malaria; and a one-third reduction in premature deaths from other communicable diseases, injuries and noncommunicable diseases. 32
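A rough sketch of the deaths-averted logic, under strong simplifying assumptions: the causes, baseline death counts and effect sizes below are hypothetical, and the actual Disease control priorities model is considerably richer than this single multiplication. The same loop structure also illustrates the coverage-efficiency grid used in the sensitivity analysis described below.

```python
# Hypothetical illustration of deaths averted per million urban population:
# baseline deaths are reduced by effect size x coverage x delivery efficiency.
COVERAGE, EFFICIENCY = 0.80, 0.80

causes = [  # (cause, projected 2030 baseline deaths <70y, effect size) - made up
    ("cardiovascular disease", 900, 0.35),
    ("tuberculosis", 150, 0.60),
    ("maternal conditions", 60, 0.70),
]

averted = sum(d * e * COVERAGE * EFFICIENCY for _, d, e in causes)
baseline = sum(d for _, d, _ in causes)
print(f"averted: {averted:.0f} of {baseline} baseline deaths "
      f"({100 * averted / baseline:.1f}%)")

# The sensitivity analysis sweeps delivery efficiency and coverage over a grid:
for eff in (0.70, 0.80, 0.90, 0.95):
    for cov in (0.60, 0.70, 0.80, 0.90, 1.00):
        pct = 100 * sum(d * e * cov * eff for _, d, e in causes) / baseline
        print(f"efficiency={eff:.0%} coverage={cov:.0%}: {pct:.1f}% reduction")
```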
Benefit–cost ratio
To estimate the benefit–cost ratio for investing in the package, we used a published method 33 that is based on the Copenhagen Consensus method. 34 We converted the number of deaths averted in the age groups 0–4 years and 5–69 years to disability-adjusted life years (DALYs). For the age group 0–4 years we used a factor of 97 DALYs per death averted. For the age group 5–69 years, we used 97 DALYs per death averted for ages 5–49 years and 42 DALYs per death averted for ages 50–69 years. The conversion factors were derived by dividing the total all-cause DALYs by the total number of deaths in each age group in lower middle-income countries from the 2016 WHO global health estimates. 35 We monetized the DALYs conservatively by multiplying by twice the 2016 gross domestic product (GDP) per capita in each country. We obtained GDP per capita from the World Development Indicators. 21 We applied a 3% discount rate to costs and benefits over 15 years.

Sensitivity analysis
To examine the effect of the package on mortality reduction at different levels of delivery efficiency and coverage, we conducted sensitivity analyses at 70%, 80%, 90% and 95% efficiency in intervention delivery and at 60%, 70%, 80%, 90% and 100% coverage levels. All analyses were performed in Stata version 15.1 (StataCorp LLC, College Station, United States of America).

Demographics in 2030
As urban populations increase over the next decades, the population structure will shift towards middle and older ages, with the largest increases in the 30–69-year age group in both countries. The proportion of the population aged 30–49 years will increase from 26.7% (available in the data repository). 8 This shift reflects migration patterns, progress in preventing deaths in infancy and childhood, natural population growth and increased life expectancy due to income growth, education and better health-care services. 4

Causes of mortality
In urban India, noncommunicable disease deaths, such as those from cardiovascular disease and respiratory diseases, as well as injuries, are rising as a proportion of overall mortality (Fig. 1 and data repository). 8 However, infectious diseases are still a problem. Urban crowding, lack of clean water and sanitation, and mobility contribute to continuing transmission of infectious diseases. For example, India has a high tuberculosis burden, and Mumbai, a megacity of more than 18 million people, is a particular hotspot. 36

The intervention package
Cost (Table 1). In both countries, most of the incremental cost would be invested in health centres, followed by first-level hospitals and community- and population-based interventions. Investment in referral and specialized hospitals accounts for only 5.3% (US$ 1.9 million) of the total incremental cost in Bangladesh and 3.8% (US$ 2.0 million) in India (Fig. 2). Examining the distribution of package costs by type of provision showed that in both countries, more than half of the incremental cost, 50.9% (US$ 17.9 million) in Bangladesh and 58.6% (US$ 31.2 million) in India, would be invested in management of chronic conditions to reduce the risk of further events. Routine interventions would account for 28.2% (US$ 9.9 million) of incremental costs in Bangladesh and 26.5% (US$ 14.1 million) in India. Urgent conditions account for the remaining incremental costs (Fig. 3).

Mortality reduction
If the countries' governments were to implement the recommended package, we estimate that per million population, the number of premature deaths in the

Benefit–cost ratio
We estimated that in Bangladesh, the benefits of investing in the package would yield US$ 1.2 of benefits for each dollar spent; in India, the benefit would be US$ 1.8 (Table 3).
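The benefit side of this calculation can be sketched as follows. The deaths-averted counts, GDP per capita and annual package cost are hypothetical placeholders, and treating deaths averted as a constant annual stream is a simplifying assumption; the DALY factors, the 2×GDP monetization and the 3% discount rate over 15 years come from the Methods above.

```python
# Hypothetical benefit-cost sketch following the Methods described above.
GDP_PER_CAPITA = 1_500.0   # hypothetical 2016 GDP per capita (US$)
DISCOUNT, YEARS = 0.03, 15

deaths_averted = {"0-4": 200.0, "5-49": 400.0, "50-69": 300.0}  # made up
dalys_per_death = {"0-4": 97.0, "5-49": 97.0, "50-69": 42.0}    # from Methods

dalys = sum(deaths_averted[g] * dalys_per_death[g] for g in deaths_averted)
annual_benefit = dalys * 2 * GDP_PER_CAPITA   # monetized at twice GDP/capita

# Present value of constant annual streams over 15 years at a 3% discount rate.
pv_factor = sum(1 / (1 + DISCOUNT) ** t for t in range(1, YEARS + 1))
annual_cost = 50_000_000.0                    # hypothetical package cost
bcr = (annual_benefit * pv_factor) / (annual_cost * pv_factor)
print(f"benefit-cost ratio: {bcr:.2f}")
```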
Sensitivity analysis
In Fig. 4 and Fig. 5, we have plotted varying intervention delivery efficiencies and coverage levels to show whether they would achieve the 40x30 reduction target for a city of one million population. Both countries could only achieve the 40x30 target for under-five mortality with at least 90% coverage and at least 90% efficiency in intervention delivery. In the 5–69-year age group, we found that in Bangladesh, the 40x30 reduction target could be achieved if the package was implemented with at least 90% efficiency and at least 85% coverage, or at least 80% efficiency and at least 90% coverage. In India, while substantial progress could be made, the 40x30 target could not be achieved even at the highest coverage and efficiency levels tested.

Discussion
We estimated that investing in a hypothetical package of 208 cost-effective health interventions that addresses the health-care needs of the growing urban population in Bangladesh and India is beneficial. For example, the noncommunicable disease burden can be controlled with treatments that are low cost and feasible to deliver in primary care and hospital facilities, coupled with public health measures to reduce the impact of major risk factors, such as smoking and obesity. However, access to many of the most cost-effective health system interventions is currently limited, especially among the poorest population groups. Expanding universal coverage of essential health interventions for adults could have a similar levelling effect as seen for improving child health with free or inexpensive vaccines and primary care. In the last decades, the advance towards universal health coverage (UHC) and the recognition that a healthy population is cost-beneficial, with substantial welfare gains, 33,34 make a compelling case for public investment in urban health. In India, public expenditure on health was just over 1.0% of GDP in 2015 37 and in Bangladesh, this percentage was 0.8% in 2014. 38 In both countries, out-of-pocket spending on health accounted for more than two-thirds of total health expenditure. 38 Increasing public spending on health could reduce out-of-pocket payments, as shown in other countries, 39 while improving the quality of services. 4 Increased public health expenditure advances UHC and avoids the impoverishment that often results from out-of-pocket expenditures. 7

We estimated that to cover all million-plus cities in Bangladesh and India by 2030, governments must increase their current health spending about threefold. While this increase is large, this level of health spending is consistent with WHO recommendations, 40 India's Choosing Health Report 4 and the National Commissions on Macroeconomics and Health in both countries. 41,42 Expecting that the governments of Bangladesh or India can immediately increase their expenditures to the suggested levels would be unrealistic, but they can plan for a decade-long scale-up of health spending and deploy new tactics to increase revenue to finance health care. For example, they could require mandatory contributions from people with high income through taxation, and/or compulsory earmarked contributions for health insurance. 43 Innovative financing schemes, such as issuing diaspora bonds to expatriates and imposing taxes on foreign exchange transactions, could be adopted. 44 For instance, in 2017, the average daily turnover of foreign exchange amounted to US$ 58.0 billion in India; 45 a transaction levy of 0.005% on every working day would yield about US$ 630.0 million a year.
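To make the levy arithmetic explicit (a worked check; the figure of roughly 217 working days per year is an assumption implied by the quoted annual total):

$$0.00005 \times 58.0\times10^{9}\ \text{US\$} = 2.9\times10^{6}\ \text{US\$ per working day}, \qquad 2.9\times10^{6} \times 217 \approx 6.3\times10^{8}\ \text{US\$ per year}.$$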
Taxes on tobacco and other harmful substances and reduced government subsidies on fossil fuels have also been recommended as strategies to increase revenue available for the health sector. Although tobacco taxes would not by themselves provide enough to cover the financial needs of UHC, 46 they could make significant contributions. 47 Given the enormous gains in health and welfare from healthy populations in cities, countries could also responsibly take low-interest loans from the international market, with federal guarantees for on-lending or for grants to cities, given the large returns produced by investments in population health.

[Table notes: the 40x30 reduction target is defined as above; 32 the age group considered is 5–69 years; the package consists of the 208 interventions identified through the third edition of Disease control priorities, 7 with examples in Box 1 and the full list in the data repository; 8 estimations are based on cities of one million population. Implementation of the package would also reduce mortality in people older than 70 years, but these benefits are not included.]

Novel mechanisms to enable cities to borrow or spend with federal financing and support could also be developed. For example, in addition to providing loan financing, the Asian Development Bank provides technical assistance and advisory services to enhance and accelerate operationalization of governments' investments in health policies, programmes or projects. This investment will be returned through improved economic growth, by lifting many out of poverty and by maintaining the vibrancy and enhancement of many of the world's largest cities.

Our study has several limitations, stemming mainly from the limited reliable data available for many of the inputs. Data on intervention costs in low- and middle-income countries are particularly sparse. We believe the Disease control priorities cost information is the best currently available, but many of the costs are based on very few studies and, in some cases, on similar interventions because no reliable cost studies were found. Improved studies of local intervention costs in multiple sites are needed to improve estimates, 48 including the benefits that may come with economies of scale in urban areas. 49 Data on current coverage levels for several interventions are lacking. In our study, we used global estimates for lower middle-income countries for coverage and populations in need of mental health interventions and rehabilitation services from Disease control priorities, because country-specific data are missing for both countries. Reliable mortality data for Bangladesh are also missing.

Better population health is a profitable investment, resulting in increased productivity and economic stability. 50
Expanding health expenditure increases productivity and years lived in good health, and the health sector is a source of employment at every level, raising national GDP. 34 Sufficient funds for expansion of coverage of health interventions may not be immediately available, but future economic growth, driven by cities, justifies Bangladesh and India expanding their investments in urban health. ■
Cohort Profile: The Green and Blue Spaces (GBS) and mental health in Wales e-cohort

Daniel A Thompson, Rebecca S Geary, Francis M Rowney, Richard Fry, Alan Watkins, Benedict W Wheeler, Amy Mizen, Ashley Akbari, Ronan A Lyons, Gareth Stratton, James White and Sarah E Rodgers*

Population Data Science, Swansea University Medical School, Faculty of Medicine, Health and Life Science, Swansea University, Swansea, UK; Department of Public Health, Policy and Systems, University of Liverpool, Liverpool, UK; European Centre for Environment and Human Health, University of Exeter Medical School, Knowledge Spa, Royal Cornwall Hospital, Cornwall, UK; Department of Sport and Exercise Sciences, Applied Sports Technology, Exercise and Medicine A-STEM Research Centre, School of Engineering and Applied Sciences, Faculty of Science and Engineering, Swansea University, Swansea, UK; and Centre for Trials Research, School of Medicine, Cardiff University, Cardiff, UK

The cohort is constructed using data from the Welsh Demographic Service Dataset (WDSD). This dataset contains demographic characteristics of everyone registered with a general practitioner (GP) in Wales providing data to the SAIL Databank (80% population coverage 15 ). It is used as the primary population register in the SAIL Databank. The WDSD contains names and addresses with from–to dates of residency in each home; these are updated when patients inform their GP that they have moved home. Researchers accessed an anonymised version of the WDSD and calculated residency dates in each home as well as house moves. All members of the household are included in the cohort, with individuals nested within each household. The demographic dataset was used as the population spine, with additional data linked as follows:
• Welsh Longitudinal General Practice (WLGP): information on symptoms, diagnoses, prescriptions and referrals 1 ;
• Annual District Death Extract from the Office of National Statistics (ONS) mortality register 2 ;
• Welsh Index of Multiple Deprivation (WIMD), the Welsh Government's official measure of relative deprivation for small areas in Wales 3 ;
• Rural–urban ONS classifications at Lower Layer Super Output Area (LSOA) 4 ;
• National Survey for Wales (NSW), an annual, repeated, cross-sectional survey of about 12 000 adults in Wales (2016–17 16 and 2018–19 17 surveys), including responses on wellbeing and visits to outdoor spaces.
The cohort comprises 2 801 483 individuals: all persons aged 16 and over registered with a practice providing GP records to the SAIL Databank. We intentionally removed people who did not fit the cohort criteria (Figure 1). We excluded 839 063 individuals who had missing data, e.g. they were not registered with a GP providing data to the SAIL Databank, did not have a Welsh residential address between January 2008 and October 2019, or did not have sex or week of birth recorded in the WDSD.
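The derivation of house moves from the from–to residency dates described above can be sketched as follows; the table, column names (alf, ralf, start, end) and values are hypothetical stand-ins for the anonymised WDSD extract, not the real schema.

```python
# Sketch of deriving house moves from from-to residency spells, assuming a
# hypothetical anonymised table with one row per person-address spell.
import pandas as pd

wdsd = pd.DataFrame({           # illustrative data, not the real WDSD schema
    "alf": [1, 1, 2],           # anonymised person identifier
    "ralf": [10, 11, 12],       # anonymised residence identifier
    "start": pd.to_datetime(["2008-01-01", "2013-06-15", "2008-01-01"]),
    "end": pd.to_datetime(["2013-06-14", "2019-10-01", "2019-10-01"]),
})

spells = wdsd.sort_values(["alf", "start"])
# A move is counted each time the residence id changes within a person.
moves = spells.groupby("alf")["ralf"].apply(lambda r: (r != r.shift()).sum() - 1)
print(moves)  # person 1 moved once; person 2 never moved
```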
We created measures of GBS exposure and access for all homes in Wales, using several environmental datasets: (i) satellite data (Landsat TM 18–21 2008–19) to create annual greenness densities of the mean Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) within 300 m of each residence; (ii) Ordnance Survey MasterMap Topography Layer 22 (2018) to capture natural and man-made features, including the outlines of homes and parks; (iii) Ordnance Survey MasterMap-derived Greenspace dataset (2018) 23 ; (iv) local authority (LA) technical advice notes, legally required records of data on sport, recreation and open spaces managed by LAs; (v) open source portal data from Lle (forestry, urban tree cover) 22 ; and (vi) OpenStreetMap road/footpath data. 24 Environmental data were linked to the cohort at the individual level, using a residential version of the split-file linkage process. 25,26 A final GBS typology (Supplementary Table S1, available as Supplementary data at IJE online) was used to create GBS access metrics for each home in Wales.

A cohort subgroup responded to Natural Resources Wales (NRW) questions in the 2016–17 and 2018–19 National Survey for Wales (NSW). 16,17 The NSW is an annual, repeated, cross-sectional, government-sponsored, omnibus survey of a representative sample of the population of Wales (annual n ≈ 12 000). Topics include education, culture, health and wellbeing, and more detailed information on socioeconomic circumstances than administrative data. The NRW questions (sub-sample, n = 5312) 27,28 record whether respondents visited outdoor spaces in Wales, including time spent outdoors on leisure activities and the types of activities undertaken. NSW respondents aged 16 years,

Key Features
• The Green and Blue Spaces (GBS) e-cohort includes 2.8 million UK adults and was established to quantify the impact of natural environments on mental health and wellbeing in Wales, UK.
• This is the first e-cohort with national household-level longitudinal environment metrics (annual) for 1.4 million residences linked to longitudinal electronic health records (updated quarterly), with a subgroup of 5312 linked survey responses on visits to outdoor spaces and wellbeing.
• Baseline and follow-up information was extracted quarterly through electronic record linkage, including mental health service use and sociodemographic and economic characteristics.
• After almost 12 years' follow-up, 0.7% were lost to follow-up due to migration out of Wales and were replaced with in-migration and those reaching the age of 16 years (25%), 9.9% died and 28% had at least one common mental health episode recorded with their general practitioner (GP).

We derived environmental metrics for all potential residences in Wales (n = 1 498 120). Of these, 1 179 817 (78%) residences were linked to the cohort through the WDSD. There were 318 303 unlinked potential homes (likely holiday homes, caravans, guest-houses), either because they did not match an address of an individual registered with a GP in Wales or because they were inhabited by people not registered at a GP practice. Area-level characteristics of residences linked and unlinked to the cohort were compared to check for potential bias (see 'What has it found?'). Of the 2 801 483 individuals in the cohort, 622 025 (22.2%) moved home once between 2008 and 2019, and 567 877 (20.3%) moved home more than once. Exposures and outcomes are extracted/updated quarterly.
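As an illustration of the ambient-greenness metric above, here is a minimal NumPy sketch of computing mean NDVI within a 300 m circular buffer of a home. The function name, the pre-read 30 m-resolution band arrays and the home's pixel coordinates are hypothetical; the study's actual processing of Landsat imagery is not specified at this level of detail.

```python
import numpy as np

def mean_ndvi_buffer(red, nir, row, col, radius_m=300.0, pixel_m=30.0):
    """Mean NDVI within a circular buffer around a home's pixel."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero
    rr, cc = np.ogrid[:ndvi.shape[0], :ndvi.shape[1]]
    dist = np.hypot(rr - row, cc - col) * pixel_m        # distance in metres
    return float(ndvi[dist <= radius_m].mean())

# Hypothetical reflectance arrays standing in for Landsat red/NIR bands.
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.2, (100, 100))
nir = rng.uniform(0.2, 0.6, (100, 100))
print(mean_ndvi_buffer(red, nir, row=50, col=50))
```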
How often have they been followed up?
Health-related outcomes were extracted quarterly. Environmental metrics were calculated annually but updated quarterly if cohort members moved home (see 'What has been measured?'). The dynamic cohort design allows new people to enter the cohort each quarter as they reach age 16 years or move into Wales. Cohort sample size in each quarter is provided in Supplementary Table S2 (available as Supplementary data at IJE online). The current linkage of environmental and administrative data sources ended in September 2019, creating an 11-year cohort with annual follow-up for all, and quarterly follow-up for people moving home. Non-environmental datasets are routinely updated in SAIL, enabling health outcomes for the cohort to be followed up for longer. A total of 5791 cohort members completed NRW questions in the 2016–17 and 2018–19 NSW. Further waves of the NSW have been consented for data linkage in SAIL.

The GBS e-cohort was created from multiple data sources with varying levels of completeness across different variables. Known exclusions, due to missing data on age or sex (0.4%) or on at least one primary environmental measure (EVI, <0.01%), resulted in a cohort of 2 801 483 people (Figure 1). This cohort has 24.9 million person-years of follow-up. An additional average of 30 238 people joined the cohort annually through migration into Wales or reaching age 16 years (34 709 people annually),

What has been measured?
Cohort variables are presented in themes: (i) sociodemographic and economic characteristics; (ii) common mental health disorders/wellbeing; (iii) comorbidity index; (iv) social environment and life events (births/deaths in the household); (v) environmental metrics; and (vi) other administrative cohort information (Table 1). (Table note: Anonymised Linking Field (ALF) and Residential Anonymised Linking Field (RALF) are individual and household anonymised linking fields, respectively, within the Secure Anonymised Information Linkage (SAIL) Databank. 31,32) Key health metrics are (quarterly): Common Mental Health Disorder (anxiety and depressive disorders) and a count of all GP events (extracted from the WLGP). The WLGP is collated from the clinical information systems in use at each general practice around Wales, and uses Read codes recorded during a GP consultation. Test results are electronically transferred into the WLGP from secondary care systems.

To identify people with Common Mental Health Disorders (CMDs), we applied an existing validated prevalence algorithm with high sensitivity to detect cases of CMD (anxiety and depression). 33 We identified people with CMD each quarter when they had either a historical diagnosis(es) currently treated, and/or current diagnoses or symptoms (treated or untreated) from Read codes (detailed in Supplementary Table S3, available as Supplementary data at IJE online) in their GP record in the WLGP data (Algorithm 10). 33 The algorithm identifies 'current' diagnoses/symptoms as relevant Read codes in the preceding 1-year period. It identifies 'historical' diagnoses through a search for relevant Read codes in the cohort data outside the 'current' period. The length of retrospective data available varied between individuals in the cohort, depending on the length of their registration with a GP supplying data to SAIL. CMD treatment was identified as at least one prescription for an antidepressant, anxiolytic or hypnotic in the 1-year current period. 1
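A minimal sketch of this quarterly case definition, under strong simplifying assumptions: the event representation below is hypothetical, and the real Algorithm 10 operates on specific Read-code lists rather than abstract event kinds.

```python
# Simplified sketch of the quarterly CMD case definition described above.
from datetime import date, timedelta

def is_cmd_case(events, quarter_end):
    """events: list of (date, kind), kind in {'diagnosis','symptom','treatment'}."""
    current_start = quarter_end - timedelta(days=365)
    current = {k for d, k in events if current_start <= d <= quarter_end}
    historical_dx = any(k == "diagnosis" and d < current_start for d, k in events)
    # Case if: current diagnosis/symptom, or historical diagnosis currently treated.
    return bool(current & {"diagnosis", "symptom"}) or (
        historical_dx and "treatment" in current)

events = [(date(2010, 3, 1), "diagnosis"), (date(2018, 11, 5), "treatment")]
print(is_cmd_case(events, quarter_end=date(2019, 3, 31)))  # True
```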
We did not include cognitive behavioural therapies or other non-drug treatments in our CMD case definition, as this information was not available in the WLGP. The algorithm applied to identify probable cases of CMD has high specificity and positive predictive value for detecting CMD (anxiety and depression) but, as expected, has low sensitivity. 33 We identified adults (16+ years) with CMD in the GP dataset. We refer to people 'having a CMD', but we acknowledge that this only captures those who have sought care for their CMD in primary care. Community prevalence will be significantly higher, because only about one-third of people affected by CMD seek help in primary care. 4 GP-specific events were converted from daily counts to a binary variable and then aggregated to quarterly counts. This eliminated counting multiple test results. Each individual in the cohort also had quarterly measures for the Charlson comorbidity index 30 and a count of hospital admissions.

Environmental metrics
GBS exposure within 300 m of each home in Wales was measured yearly from open source satellite imagery. Three variables representing ambient green/blueness were linked to the cohort:
• mean EVI (minimum, mean, median, max);
• mean Normalised Difference Vegetation Index (NDVI) (minimum, mean, median, max);
• coastal and/or inland water (yes/no).
We used imagery with less than 20% cloud cover to estimate EVI/NDVI, resulting in 87.7% of homes having full coverage of EVI and NDVI values from 2008 to 2019. Where homes were missing an EVI/NDVI value for a given year, and neighbouring years were available, we imputed these values. The potential for an individual to access a range of types (Supplementary Table S1) of GBS, along a network of paths and roads within 1600 m of each home, was modelled for 2012 and 2018. Ambient green/blueness, and the potential to access GBS, were augmented by survey responses about leisure-time visits to outdoor spaces in Wales for the NSW subgroup.

Household-individual data linkage methods created a longitudinal dataset with the potential for a granular temporal examination of the impact of changes in green and blue space on health inequity for individuals. This design is more appropriate than previous studies for inferring causal links. 1-3 Cohort members have their home location linked to appropriately synchronized environmental data, with subsequent health outcomes extracted from their electronic health records. This provides the opportunity to construct natural experiments or pragmatic trials within the cohort. 5,6
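The year-gap imputation mentioned above could look like the following; linear interpolation between neighbouring years is an assumption, as the text does not specify the imputation method beyond its use of neighbouring years.

```python
import pandas as pd

# Annual mean EVI for one home, with 2013 missing (hypothetical values).
evi = pd.Series([0.28, 0.30, None, 0.33],
                index=[2011, 2012, 2013, 2014], name="evi")
evi_filled = evi.interpolate(method="linear")  # 2013 -> midpoint of neighbours
print(evi_filled)
```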
What has it found?
Using a combination of open source environmental and national mapping agency data, we have demonstrated the feasibility of creating individual-level, longitudinal, environment exposure data with national coverage for 2.8 million adults in Wales. Longitudinal linkage of national-level environmental data for 1.4 million homes with routinely collected electronic health records and socioeconomic data allows this cohort to be used to assess the impact of a changing environment on subsequent common mental health disorders, wellbeing and other health outcomes. 26

At an individual level, there was little variation in data completeness between those identified as having a CMD at least once and those without a CMD: 99.9% (n = 816 020) and 99.4% (n = 1 983 590), respectively. At a household level, 92.3% (n = 2 598 211) of the cohort were linked to a home address for every quarter they were in the e-cohort. Individuals were censored during a quarter if no place of residence could be linked, or if their GP did not provide data to the databank. Individuals with at least one CMD episode had 90.4% (n = 739 054) residential data completeness compared with 93.1% (n = 1 859 157) of those without a CMD. Full environmental data (EVI and NDVI) were linked for 85% of the cohort (n = 2 384 489) for their complete cohort duration.

We examined the linkages to check for bias by deprivation and rurality. The percentage of unlinked homes did not increase with deprivation. However, we found that a higher proportion of unlinked homes were in rural areas. We did not find a systematic bias with EVI; mean EVI values for unlinked and linked homes were similar (0.3, Table 2).

A total of 29% of the cohort (816 242) sought care for a CMD in general practice between January 2008 and October 2019. A total of 461 728 (16%) people in the cohort had a previously diagnosed CMD for which they had sought care in general practice before subsequently entering the e-cohort ('historical diagnosis'). For the more than 300 000 people newly seeking treatment for a CMD from their GP (i.e. who had no 'historical diagnosis', n = 305 779), a larger proportion (14%, n = 43 350) were living in more affluent, greener areas (measured by mean EVI) by the end of their time in the cohort (relative to when they entered the cohort), compared with only 8% (n = 23 795) who were living in deprived areas with less greenery immediately surrounding the home. In contrast, most people (75%, n = 267 446) who had a 'historical' CMD diagnosis and who also had a CMD during the cohort period (2008–19, n = 358 126) lived in greener areas by the end of their time in the cohort. People living in the most deprived areas had on average less ambient greenness around their home than those living in the least deprived areas (mean EVI 0.25 vs 0.31, respectively, Table 2).

The dynamic cohort captures abrupt GBS changes resulting from home moves as well as slower in situ changes in ambient greenness. More than one-fifth (22.6%) of the adult population in the most deprived quintile moved home at least once during the cohort period, with fewer moving in the least deprived (18.7%) and next-least deprived (18.2%) quintiles (Table 3). Younger people (<30 years old) and those living in the most deprived areas had the highest prevalence of moving at least once during their time in the cohort (48.9% and 22.6%, respectively, Table 3).

We will apply advanced analytical approaches to the longitudinal health and exposure cohort, with the aim of quantifying the impact of GBS on individual-level mental health and wellbeing. 1 The use of routinely collected historical data and established linkage mechanisms allows this e-cohort to be extended, either to include those under 16 years and/or to evaluate the impact of natural environments on further health, social and public health outcomes. Published cohort papers are listed

What are the main strengths and weaknesses?
The cohort is subject to minimal attrition due to the inclusion of all GP-registered individuals, unless individuals have opted out by making a request to their GP (see https://saildatabank.com/faq/). This minimizes the potential for selection bias. The cohort currently contains 2 801 483 adults. This will change with further follow-up years, because the dynamic e-cohort structure accommodates migration in and out of Wales, as well as deaths and ageing into the cohort (i.e. reaching age 16 years).
This large adult population cohort provides sufficient power to examine variations between subgroups to investigate inequalities. We reduced ecological fallacy by using privacy-protecting data linkage methods to construct household measures of GBS. 5,6 Longitudinal environmental metrics, and linkage methods, enable an objective assessment of environmental changes, with no research burden for individuals. 34-36 A strength of this cohort is the ability to disentangle health outcomes from 'greening gentrification' by anonymously 'tracking' individuals over time. 37 System-wide natural changes may be slowly evolving, and so the impact on population health requires longer follow-up. Over a long duration, place-based improvements may displace an area's original population with those who are more affluent and healthier ('gentrification'). Results of place-based intervention studies investigating area-level health effects over long periods of time are therefore likely to record the health outcomes of a different, healthier population.

Like other electronic health record cohorts, the GBS e-cohort data are predominantly routinely recorded and lack data on behaviour, some potential confounding factors, and outcomes such as wellbeing. There is no health-related quality-of-life instrument routinely used to assess changes in health status in general practice in Wales. The cohort is largely restricted to detecting changes in outcomes that involve health service use. However, through linkage to survey data, a subset of the cohort has information on wellbeing as well as on behaviours such as time spent visiting GBS (n = 5312 adults).

The validity and reliability of research using routinely collected data depend upon its quality and completeness. Overall, the validity of primary care diagnoses in the UK tends to be high. 38 Case-finding for CMD in routinely collected administrative health data can unobtrusively identify patients for mental health research, including on the effects of interventions. 39 Diagnostic coding can differ between clinicians/practices over time, which may influence the sensitivity and specificity of algorithms used to identify patients with a specific case definition in e-cohorts over time. A validation study, comparing Read codes and algorithms for CMD case-finding (including the algorithm we have used) with the five-item Mental Health Inventory, demonstrated that using diagnosis and current treatment alone to identify CMD in routinely collected GP data would miss a number of true cases, given changes in GP recording behaviour between 2000 and 2010. Including historical diagnoses with current treatment and symptoms, as in this cohort, increases sensitivity.

We captured annual ambient exposure to greenness and temporally matched it to subsequent health outcomes. This improves on previous studies that did not have the data or systems to achieve this. We were unable, however, to do the same with the access metrics, because several key data sources were not updated frequently and do not currently capture change in land use consistently. This has created a temporal mismatch between the (annual) greenness measures (EVI, NDVI) and the access measures (2018), which means we could not allocate a precise period in which access to a GBS (new or old) may have changed. We recommend that GBS data providers update data regularly using consistent standards, to capture changes in access to, and quality of, GBS through time.

Can I get hold of the data? Where can I find out more?
This cohort is stored and maintained in the SAIL Databank at Swansea University, Swansea, UK. This is a controlled-access cohort; all proposals to use SAIL data are subject to review by an independent Information Governance Review Panel. Where access is granted, it is gained through a privacy-protecting safe haven and remote access system (SAIL Gateway). The cohort data will be available to external researchers for collaborative research projects after 2022. For further details about accessing the cohort, contact [saildatabank.com] and Sarah Rodgers [ARCNWC@liverpool.ac.uk] for opportunities to collaborate with the original investigator team.

Ethics approval
This cohort is based on routinely collected administrative, environment and survey data. All data will be anonymised into a secure databank, and therefore there will be no mechanism for informing potential cohort participants of possible benefits and known risks. The cohort received approval from an independent Information Governance Review Panel, an independent body consisting of membership from a range of government, regulatory and professional agencies. We obtained informed consent to use the linked and anonymised NSW data within the SAIL databank. All routinely collected anonymised data held in SAIL are exempt from consent due to the anonymised nature of the databank (under section 251, National Research Ethics Committee).

Data availability
See 'Can I get hold of the data?', above.

Supplementary data
Supplementary data are available at IJE online.

Author contributions
S.E.R. designed and led the development of the cohort. D.T. produced the analysis and cohort linkage and drafted the paper with R.G. R.F. and A.M. produced the exposure metrics and reviewed the paper. A.W. provided input on analytical strategy. F.R. and B.W. produced the analysis and linkage for individuals linked to the NSW survey and reviewed the paper. R.L., G.S. and A.A. reviewed the paper. All authors contributed to cohort design through input to regular meetings. All authors reviewed the final submitted paper.

Funding
The GBS and Mental Health in Wales cohort was developed as part of independent research funded by the National Institute for Health Research (NIHR), project number 16/07/07, and the UK Prevention Research Partnership, GroundsWell (MR/V049704/1). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.
NLRC5 Might Promote Endometrial Cancer Progression by Inducing PD-L1 Expression

Aims: The NOD-like receptor (NLR) family, caspase recruitment (CARD) domain containing 5 (NLRC5) is dysregulated in endometrial cancer (EC). However, the potential regulatory mechanisms of NLRC5 in EC remain unclear. We aimed to explore whether NLRC5 could regulate programmed cell death protein ligand 1 (PD-L1) in EC. We also investigated the molecular changes that may lead to the inactivation of NLRC5 in EC. Methods: The expression of NLRC5 and PD-L1 in an endometrium tissue microarray was detected by immunohistochemistry. Pearson's correlation analysis was performed to assess the expression correlation between NLRC5 and PD-L1. Immunofluorescence staining, western blotting, and quantitative real-time PCR (qRT-PCR) were used to examine the role of NLRC5 in PD-L1 regulation in EC cell lines. Somatic mutations in EC patients were detected by whole-exome sequencing (WES). Results: NLRC5 was downregulated in the endometrium of EC patients compared with the normal endometrium. The level of PD-L1 in the endometrium of EC patients was higher than in the normal endometrium. There was a negative expression correlation between NLRC5 and PD-L1 in tissues. NLRC5 could promote the expression of PD-L1 in EC cell lines. Mutations of ANKRD20A2, C2orf42, ADGRB3, AVPR2, GOLGA6C, and IPPK may lead to the downregulation of NLRC5 in EC patients. Conclusion: NLRC5 could promote the expression of PD-L1 in EC. Mutations of ANKRD20A2, C2orf42, ADGRB3, AVPR2, GOLGA6C, and IPPK may lead to the downregulation of NLRC5 in EC patients. Future studies should investigate the mechanism of NLRC5 in PD-L1 regulation, as well as the mechanisms by which ANKRD20A2, C2orf42, ADGRB3, AVPR2, GOLGA6C, and IPPK affect NLRC5.

Introduction
Endometrial cancer (EC) is the most common type of uterine cancer and the fifth leading cause of cancer death in the United States. 1 In 2020, an estimated 79 420 women were newly diagnosed with uterine cancer and there were 16 880 deaths in the United States. 2 Approximately 80% of EC patients develop a symptomatic presentation, and the 5-year survival rate of these cases is 75% to 90%. 3 However, approximately 20% of EC patients are diagnosed with metastatic or recurrent disease, and these cases have considerably poorer outcomes, with a relative 5-year survival rate of 9% to 17%. 4 Therefore, it is imperative to elucidate new molecular targets and therapeutic approaches for EC.

NOD-like receptor family caspase recruitment domain family domain-containing 5 (NLRC5) is a newly identified member of the NLR family. 5 Human NLRC5 is located at the 16q13 locus and consists of 1866 amino acids (aa), while mouse NLRC5 is at chromosome 8 and contains 1915 aa. NLRC5 possesses 3 structural domains: the N-terminal atypical caspase activation and recruitment domain (CARD), which is completely distinct from the other NLRs; the central NACHT domain, which contains the nucleotide-binding domain (NBD); and 27 leucine-rich repeats (LRRs) at the C-terminal. 6 It has been demonstrated that NLRC5 is a negative regulator of the inflammatory response and type I interferon (IFN) production, and an activator and synergetic component of the inflammasome. 6 Moreover, NLRC5, known as a major histocompatibility complex (MHC) class I transactivator, is a novel target for immune evasion in cancer. Downregulation of NLRC5 in cancer is strongly correlated with decreased levels of MHC class I molecules and impaired cytotoxic T cell activities.
Recruitment of NLRC5 contributes to the presentation of tumor antigens to CD8+ T cells, which further increases antitumor immunity. 7 Intriguingly, in several cancer cell types, accumulation of NLRC5 contributes to cancer progression by promoting cancer cell proliferation, migration, and invasion. 8,9 This interesting and even conflicting evidence might arise from the fact that the exact function of NLRC5 could be cell type- and tumor microenvironment-dependent. Our previous study in EC cells showed that NLRC5 may contribute to EC progression by promoting cell migration and invasion. 10 Nevertheless, the mechanism underlying NLRC5 in EC progression needs to be further determined.

The programmed cell death ligand 1 (PD-L1) receptor is one of the important immune checkpoint proteins and is mainly expressed on mature cytotoxic T lymphocytes in the tumor microenvironment. PD-L1 is also expressed on cancer cells, 11 is regulated by tumor-intrinsic signaling pathways, 12 and is restricted by tumor suppressor genes. 13 Accumulating evidence shows that high expression of tumor cell-intrinsic PD-L1 contributes to cancer initiation, metastasis, development, and recurrence in multiple tumor types. 14,15 However, the mechanism underlying NLRC5 downregulation in EC patients remains unknown, as does whether NLRC5 promotes EC progression by regulating PD-L1.

In our study, we detected the expression of NLRC5 and PD-L1 in an endometrium tissue microarray. Furthermore, we analyzed the expression correlation between NLRC5 and PD-L1. We also determined the correlation of NLRC5 and PD-L1 expression levels with different clinicopathologic features in EC patients. In addition, we explored whether NLRC5 could regulate tumor cell-intrinsic PD-L1 in EC cell lines. Lastly, we explored the potential mechanism related to downregulated NLRC5 in EC patients. Our evidence showed that NLRC5 was downregulated and PD-L1 was upregulated in EC. Furthermore, there was a negative correlation between the expression of NLRC5 and PD-L1 in tissues. Additionally, NLRC5 promoted the expression of PD-L1 in EC cells. Mutations of ANKRD20A2, C2orf42, ADGRB3, AVPR2, GOLGA6C, and IPPK may lead to the downregulation of NLRC5 in EC patients.

Patients and Tissue Microarray
The Anhui Medical University Institutional Review Board approved the current study (Approval No.: 20180023), and all patients provided written informed consent. A tissue microarray composed of 60 EC endometrial tissue samples and 36 control endometrial tissues was analyzed. Clinical data collected for all EC patients included histological subtype, lymph node metastasis, tumor grade, tumor stage, and age. The 2009 Federation International of Gynecology and Obstetrics (FIGO) criteria were used to validate tumor staging. The World Health Organization (WHO) criteria for histological subtyping and tumor grading were followed. Age and body mass index (BMI) were collected for all 36 participants from whom normal endometrial tissue samples were collected. Participants had not undergone chemotherapy, radiotherapy, immunotherapy, or other treatments before sample collection.

Immunohistochemistry
PD-L1 and NLRC5 expression levels were detected via immunohistochemistry as in prior studies. Antigen retrieval was accomplished by microwaving tissue microarray samples for 15 min in a citric saline solution after deparaffinization with xylene and dehydration with ethanol.
After 15 min of treatment with 0.3% H2O2 to quench endogenous peroxidase activity, sections were blocked with 2% bovine serum albumin (BSA) and incubated overnight at 4°C with anti-NLRC5 (ab105411, Abcam, 1:100) or anti-PD-L1 (ab213524, Abcam, 1:100) antibodies. Sections were then washed, probed for 1 h at room temperature with a biotinylated secondary antibody (G1210-2-A, Servicebio), and protein detection was visualized with 3,3′-diaminobenzidine tetrahydrochloride. Following hematoxylin counterstaining, slides were dehydrated, mounted, and 5 random fields of view per slide were imaged for quantification (200× magnification) using a fluorescent microscope. Background lighting was consistent for all samples. Image-Pro Plus 6.0 (Media Cybernetics, Inc.) was used to examine staining intensity; dark brown staining indicated a positive staining reaction.

Cell Culture
The American Type Culture Collection (ATCC) provided the HEC-1A human EC cell line (Accession number: HTB-112), while the European Collection of Authenticated Cell Cultures supplied the Ishikawa cells (Accession number: 99040201). Cells were cultured in Roswell Park Memorial Institute (RPMI)-1640 medium (Invitrogen) containing 10% fetal bovine serum (FBS, HyClone) at 37°C in a 5% CO2 incubator.

Immunofluorescence Staining
HEC-1A and Ishikawa cells were permeabilized for 20 min with 0.1% Triton X-100 (Thermo Fisher Scientific) after being fixed for 30 min with 4% paraformaldehyde. Samples were then stained for 45 min at 37°C with primary anti-NLRC5 and anti-PD-L1 antibodies, rinsed (5 min/wash at room temperature) with an immunohistochemical washing solution (Beyotime), and stained with appropriate secondary antibodies for 45 min at 37°C. Immunofluorescence blocking solution (Sigma) was applied after 3 further washes as described above, and slides were mounted and photographed using laser scanning confocal microscopy (Nikon).

Cell Transient Transfection
The sequences of the NLRC5 plasmid and siRNA-NLRC5 were as in our previous study. 16 HEC-1A and Ishikawa cells were transfected with these constructs using Lipofectamine™ 2000 (Invitrogen) based on the provided directions.

Whole-exome Sequencing (WES)
Genomic DNA was extracted from peripheral blood samples of 3 EC patients. All gDNA samples met the purity requirements (OD260/280 > 1.8) and concentration levels (50 ng/mL) necessary for sequencing. Upon fragmentation with NEBNext dsDNA Fragmentase into 100 to 800 bp segments with a peak size of ∼250 bp, samples were subjected to end repair, dA-tailing, and adaptor ligation with the Illumina NEBNext DNA Library Prep Reagent Set (acquired from New England Biolabs). DNA fragments were separated using 2% agarose gel electrophoresis following adaptor ligation, and 300 to 400 bp fragments were excised for further study. Ten cycles of PCR amplification with PE primers (Illumina) and Phusion DNA polymerase (New England Biolabs) were performed on the samples. The amplified sequences were then used to capture exome sequences with an Illumina TruSeq Exome Enrichment kit V6, which contained a 31.3 Mb CCDS (97.2% of the US NCBI CCDS Library) region covering approximately 20 794 genes within 62 Mb of coding exons (Illumina). After enrichment, 10 additional rounds of PCR amplification were performed as above.
The amplicons were size-checked and quantitated using a BioAnalyzer 2100, and then subjected to 2 × 150 bp paired-end massively parallel sequencing on a HiSeq 2500 Sequencing System (Illumina).

Statistical Analysis
All data were analyzed using SPSS 23.0 (IL, USA). Data are presented as means ± SEM and were analyzed using one-way ANOVA tests. Duncan's multiple range tests were performed to find significant variations across groups, and Pearson's correlation analysis was used to investigate correlations between samples. P < .05 was the significance threshold.
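As a small illustration of the Pearson analysis just described (here in Python rather than SPSS, and with hypothetical paired staining-intensity values rather than the study's data):

```python
from scipy import stats

# Hypothetical paired staining intensities for NLRC5 and PD-L1.
nlrc5 = [0.82, 0.75, 0.60, 0.55, 0.40, 0.35]
pdl1 = [0.20, 0.30, 0.45, 0.50, 0.70, 0.75]

r, p = stats.pearsonr(nlrc5, pdl1)
print(f"r = {r:.3f}, P = {p:.4f}")  # a strongly negative r, as reported below
```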
The Proactive Molecular Risk Classifier for EC instead reclassified EC tumors, based on IHC findings, into p53 WT, p53 abnormal, polymerase ϵ-mutated, and mismatch-repair-deficient, with these subtypes being better able to predict patient tumor responses to immunotherapeutic treatment. 19 Clinical immunotherapy trials have explored the use of immune checkpoint inhibitors for the treatment of EC, 20 but most patients were found to exhibit unsatisfactory responses owing to the incidence of immune escape. 21 PD-L1 is a key immune checkpoint that can mediate immune evasion in a broad range of cancers. Moreover, PD-L1 can serve as an oncogene to enhance initial tumor development, growth, and metastatic progression through the activation of intrinsic signaling pathways within these tumor cells. 15,22 The expression of PD-L1 on the surface of tumor cells enables them to more effectively evade antitumor immune responses, 23 interacting with PD-1 expressed on T cell surfaces to restrict the effector functions of these cells, thus shielding tumors from immune-mediated rejection. 24 PD-L1 inhibitor treatment has been linked to improved outcomes in many cancers, including EC, indicating that PD-L1 represents a promising target for immunotherapy in EC. 25,26 Overall, PD-L1 expression has been reported in 83% of primary EC cases and 100% of metastatic EC cases. 27 Here, significantly increased PD-L1 expression was detected in EC patients when compared to normal endometrial tissues. PD-L1 overexpression may therefore contribute to EC formation, invasiveness, metastasis, and/or immune evasion. The specific mechanisms underlying PD-L1-mediated immune evasion, however, remain to be fully clarified.

NLRC5 serves as an MHC class I transactivator that modulates MHC class I-dependent immunity. Yoshihama et al analyzed 7747 patients with 21 different types of solid cancers and determined that NLRC5 represents a potential immune evasion-related target, with the upregulation of this molecule being correlated with augmented MHC class I expression, CD8+ T cell activation, and improved survival outcomes in cancer patients. 7 From a mechanistic perspective, somatic mutations, copy number deletion, and promoter methylation can drive reductions in NLRC5 expression or activity, whereas NLRC5 recruitment can restore the immunogenicity of tumors and drive enhanced antitumor immune responses by rescuing MHC class I expression and thereby improving tumor antigen presentation to CD8+ T cells. 7,28 No NLRC5 somatic mutations were detected among the EC samples in the present study, however. An analysis of the muTarget database suggested that mutations in the ANKRD20A2, C2orf42, ADGRB3, AVPR2, GOLGA6C, and IPPK genes may contribute to NLRC5 downregulation in patients with EC.

Both clinical HCC samples and HCC cell lines have been found to express NLRC5 at high levels. In nude mice, NLRC5 knockdown was sufficient to dramatically reduce the proliferation, motility, and invasiveness of these cells, as well as their ability to form tumors, promoting G0/G1 phase cell cycle arrest. NLRC5 overexpression, in contrast, was sufficient to enhance HCC cell proliferative, migratory, and invasive activity through the downstream activation of the Wnt/β-catenin signaling pathway. 9 NLRC5 has recently been demonstrated to enhance the proliferation of HCC tumor cells via activation of the AKT/VEGF-A pathway, whereas NLRC5 knockdown was linked to decreased in vivo HCC tumor development. 29
Wang et al discovered that overexpression of the NLRC5 gene was connected to more advanced staging and poorer prognostic outcomes in patients with clear cell renal cell carcinoma (ccRCC). From a mechanistic perspective, NLRC5 was able to promote ccRCC cell proliferative, migratory, and invasive activity through Wnt/β-catenin pathway activation, while silencing NLRC5 led to the suppression of in vivo tumor growth through the inhibition of Wnt/β-catenin signaling. 8 Recent work also suggests that NLRC5 is a miR-4319 target, with NLRC5 overexpression promoting the positive miR-4319-related effects on esophageal squamous cell carcinoma (ESCC) cell growth and cell cycle progression. 30 These data further support a role for NLRC5 as a driver of tumorigenesis, with NLRC5 inhibition representing a promising approach for the treatment of HCC, ccRCC, and other cancer types. NLRC5, however, may exhibit tissue-specific regulatory roles. In a recent study, NLRC5 was found to be downregulated in EC tissues relative to normal endometrial tissues, while promoting AN3CA cell invasion and migration through PI3K/AKT pathway activation. 10 Even so, the role that NLRC5 plays in the context of EC development, and whether this role is regulated by PD-L1, has yet to be established.

Prior studies showed that PD-L1-mediated tumor immune evasion was intimately implicated in reduced T cell activity, and that PD-L1 inhibitors facilitated the depletion of tumor cells by rescuing CD8+ T cells. 31,32 Furthermore, it has been demonstrated that there is a negative correlation between MHC class I molecules and PD-L1 expression in tumor tissues. 33 Of special note, a recent study indicated that NLRC5 variants could interact with PD-L1 variants in colorectal cancer, providing novel biological information to improve colorectal cancer risk management and immunotherapy. 34 These studies point to a potential regulation of PD-L1 by NLRC5. In our study, we found that NLRC5 and PD-L1 were co-localized in the cytoplasm of HEC-1A and Ishikawa cells, and that there was a negative correlation between NLRC5 and PD-L1. Importantly, further investigation found that NLRC5 could promote the expression of PD-L1 in EC cells. We suggest that the downregulation of NLRC5 in EC patients is owing to mutations of ANKRD20A2, C2orf42, ADGRB3, AVPR2, GOLGA6C, and IPPK. Unlike NLRC5-mediated immune surveillance in the tumor microenvironment, NLRC5 may contribute to cell growth in individual EC cells by promoting PD-L1.

Conclusion
Our study identified a novel regulatory role for NLRC5 in EC cells, whereby NLRC5 promotes PD-L1 expression, suggesting that inhibiting NLRC5 in combination with PD-L1 blockade may contribute to EC therapy. Nevertheless, the sample size in our study was relatively small, and our study focused on the role of NLRC5 in individual EC cells. Because NLRC5 plays a critical role in immune activation, which is essential for tumor immunotherapy, simply increasing NLRC5 expression may lead to tumorigenesis by promoting cell proliferation, migration, and invasion, whereas simply reducing NLRC5 expression may inhibit NLRC5-mediated cancer immune surveillance. Further experiments are needed to demonstrate the effect of promoting or inhibiting NLRC5 expression as an adjuvant in EC therapy, including immunotherapy.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical Approval
Our study was approved by The Institutional Review Board of Anhui Medical University (approval No. 20180023). All patients provided written informed consent prior to enrollment in the study.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural Science Foundation of China, the Research Fund of the Anhui Institute of Translational Medicine, and the Natural Science Foundation of Colleges and Universities (grant numbers 81802586, 81871216, ZHYX2020A001, and KJ2017A197).
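For readers who want to reproduce the statistical workflow described in the Methods above (one-way ANOVA followed by Pearson's correlation analysis), a minimal Python sketch with scipy is given below. The group arrays are hypothetical placeholder measurements, not data from this study, and Duncan's multiple range test is omitted because scipy does not provide it.

# Minimal sketch of the analyses described in the Methods: one-way ANOVA
# across groups and Pearson's correlation between paired measurements.
# The numbers below are hypothetical placeholders, not data from this study.
import numpy as np
from scipy import stats

# Hypothetical staining-intensity scores for three groups of samples
control = np.array([4.1, 3.8, 4.5, 4.0, 3.9])
ec_low = np.array([2.9, 3.1, 2.7, 3.0, 2.8])
ec_high = np.array([2.1, 2.4, 1.9, 2.2, 2.0])

# One-way ANOVA: do the group means differ?
f_stat, p_anova = stats.f_oneway(control, ec_low, ec_high)
print(f"ANOVA: F = {f_stat:.3f}, P = {p_anova:.4f}")

# Pearson correlation between paired NLRC5 and PD-L1 scores per sample
nlrc5 = np.concatenate([control, ec_low, ec_high])
pdl1 = 5.0 - nlrc5 + np.random.default_rng(0).normal(0, 0.2, nlrc5.size)
r, p_corr = stats.pearsonr(nlrc5, pdl1)
print(f"Pearson: r = {r:.3f}, P = {p_corr:.4f}")  # significant if P < .05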
Depression and anxiety disorders among patients with human T-cell lymphotropic virus type-1: a cross-sectional study with a comparison group

Introduction: Studies have linked human T-cell lymphotropic virus type-1 (HTLV-1) to psychiatric disease. Methods: Patients with HTLV-1 were compared to patients seen by family doctors using a semi-structured questionnaire and the Hospital Anxiety and Depression Scale. Results: Participants with (n=58) and without (n=340) HTLV were compared. Anxiety and depression were associated with greater age, being a woman, spastic paraparesis (depression: PR=4.50, 95% CI: 3.10-6.53; anxiety: PR=2.96, 95% CI: 2.08-4.21), and asymptomatic HTLV (depression: PR=4.34, 95% CI: 3.02-6.24; anxiety: PR=2.81, 95% CI: 2.06-3.85). Conclusions: Symptomatic and asymptomatic patients with HTLV-1 experienced more anxiety and depression than uninfected patients.

Human T-cell lymphotropic virus type-1 (HTLV-1) infects approximately 20 million people worldwide and 2.5 million people in Brazil 1 . HTLV-1 causes HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP) and is associated with adult T-cell leukemia. Between 2 and 5% of those infected with this virus will also develop neurological symptoms, though predicting which patients will develop them is not currently possible 2 .

Patients with HTLV experience greater psychological distress than do those without HTLV. Some studies have shown that patients with HTLV infections have higher frequencies of psychiatric diseases, especially depression disorders. The prevalence of depression disorders ranges from 5 to 52% [3][4][5][6][7][8][9] and that of anxiety disorders from 5 to 40% 5,8,9 in this population. There are divergences in the literature regarding whether patients with HAM/TSP have more symptoms of depression than do those with asymptomatic HTLV [4][5][6][7][8]10 .

There is no treatment for HTLV-1 that prevents disease progression. Current treatments aim toward relieving symptoms such as spasticity, bladder symptoms, and pain. Co-morbid psychiatric disorders worsen these patients' quality of life [3][4]11 and are sometimes not addressed during the management of these patients. Through understanding how to improve depression and anxiety symptoms more broadly, better approaches for caring for HTLV patients specifically will be developed.

The present study aimed to compare the frequency of depression and anxiety disorders between patients with HTLV (symptomatic and asymptomatic) and non-HTLV patients attended to within primary care, and to further identify which characteristics of these patients are associated with depression and anxiety.

This was a cross-sectional study with a comparison group. Patients who were attended to at the HTLV outpatient clinic of the infectious-contagious and parasitic diseases service of the Oswaldo Cruz University Hospital between March 2014 and June 2015 were included. This outpatient clinic provides care for symptomatic and asymptomatic patients with HTLV-1. Most asymptomatic patients were identified after blood donations and referred to this outpatient clinic.
The comparison group consisted of patients who sought medical care for a variety of reasons, described previously 12 , between January and August 2013 (uninfected patients) at the Alto do Maracana primary care unit in the City of Recife, Brazil. Primary care units in Brazil are responsible for providing outpatient care for common, less complex diseases, and medical care is provided by family doctors 13 . None of the patients had been previously diagnosed with HTLV; however, this had not been confirmed via a serological test for HTLV.

A semi-structured questionnaire was used to obtain sociodemographic data (age, sex, and schooling level) and, among HTLV patients only, information about the HTLV infection and its clinical form.

To evaluate depression and anxiety symptoms, the Brazilian version of the Hospital Anxiety and Depression Scale (HADS) was used 14 . This scale has one subscale for depression (seven questions) and another for anxiety (seven questions). Each subscale is scored between zero and 21. The cutoff points used for designating depression or anxiety were scores greater than or equal to nine points on each respective subscale 14 .

Ethical considerations
All patients gave their informed, written consent. The study was approved by the Research Ethics Committee of the Oswaldo Cruz University Hospital.

Statistical analyses
Continuous variables are presented as means with standard deviations (SD) and categorical variables as absolute distributions with percentages. To ascertain whether any associations existed between categorical variables, chi-square tests were used. Continuous variables were compared between groups using Student's t-test when the distribution was normal, or the Mann-Whitney U test in cases of a non-normal distribution. The significance level for all statistical tests was fixed at 5%. Statistical analyses were performed using the Stata/SE 12.0 software (StataCorp, USA). Poisson regression via the Enter method was used, with anxiety and depression as the dependent variables.

Patients with HTLV had been diagnosed for a mean of 5 years (SD: 4.5). Only four patients knew how they had become infected (three via sexual transmission and one via an infected blood transfusion). Twenty patients with HTLV-1 had a diagnosis of HAM/TSP. These patients had experienced symptoms for a mean period of four years (SD: 3.9). Fourteen patients with HAM/TSP had anxiety and 14 had depression.

Table 1 shows the association between the patients' characteristics and depression. Patients over the age of 40, women, those with HTLV-1 and spastic paraparesis, and those with asymptomatic HTLV-1 had depression significantly more often.

Table 2 shows the association between the patients' characteristics and anxiety. Patients over the age of 40, women, those with HTLV-1 and spastic paraparesis, and those with asymptomatic HTLV-1 had anxiety significantly more often.

As far as we are aware, our study is the first to compare patients infected with HTLV with those seen by family doctors for another medical complaint. We chose to use patients with low-complexity diseases in the comparison group because people who are ill for any reason have a greater chance of having depression or anxiety than do healthy individuals. This allowed us to assess whether the presence of HTLV-1, or simply being ill, was related to an experience of anxiety and/or depression.
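To make the HADS scoring described in the Methods concrete, here is a minimal Python sketch. The odd/even item split follows the conventional HADS layout, and the responses are hypothetical, not study data.

# Scoring sketch for the HADS as described above: two 7-item subscales
# (anxiety, depression), each item scored 0-3, subscale range 0-21, with
# a score >= 9 taken as the cutoff. Item responses below are hypothetical.
ANXIETY_ITEMS = [1, 3, 5, 7, 9, 11, 13]      # conventional HADS item split
DEPRESSION_ITEMS = [2, 4, 6, 8, 10, 12, 14]
CUTOFF = 9

def score_hads(responses: dict[int, int]) -> dict[str, object]:
    anxiety = sum(responses[i] for i in ANXIETY_ITEMS)
    depression = sum(responses[i] for i in DEPRESSION_ITEMS)
    return {
        "anxiety_score": anxiety,
        "depression_score": depression,
        "anxiety_case": anxiety >= CUTOFF,
        "depression_case": depression >= CUTOFF,
    }

# Hypothetical participant: items 1-14 each scored 0-3
participant = {i: (i % 4) for i in range(1, 15)}
print(score_hads(participant))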
Studies that have evaluated anxiety and depression among patients with HTLV-1 have generally compared individuals with symptomatic HTLV to those with asymptomatic HTLV 4,6,10 or individuals infected with HTLV to blood donors without HTLV 7,8 . Comparison between symptomatic and asymptomatic patients does not allow for determining whether patients with HTLV have a greater risk for anxiety or depression than the general population. Some of these studies have shown that there is a greater prevalence of anxiety 4 and depression 4 among symptomatic patients, while others have shown equal prevalence in the two groups 6,10 .

Comparisons with blood donors have the limitation that donors are generally volunteers. Voluntary participants are often healthier than the general population 8 , and this may thus lead to an overestimation of the risk for depression or anxiety in patients with HTLV. Even with this limitation, studies using such comparisons have found conflicting results 7,8 .

We found that patients infected with HTLV-1 (both symptomatic and asymptomatic individuals) had anxiety and depression significantly more often than did those with other diseases. The chance that asymptomatic patients with HTLV-1 will become symptomatic is relatively low, and the latency period between contamination and the development of symptoms is normally a few years 2 . Nonetheless, this is a disease without any cure that has severe functional repercussions, and it is impossible to predict which individuals will become symptomatic. Patients who are followed up in HTLV-1 outpatient clinics are given information about the natural history of the disease, and asymptomatic individuals often come into contact with symptomatic individuals during clinical consultations. We therefore raise the hypothesis that this insecurity regarding whether the disease will develop increases the risk for developing depression and/or anxiety in asymptomatic individuals.

In the present study, we found that women experienced more depression and anxiety than men, which is in accordance with the broader literature 15 . Older patients also had a higher risk for these conditions in our study. There is no clear consensus on the cause of this elevated risk in the existing research 15 .

Our study has some limitations. Because it was a cross-sectional study, we cannot conclude a causal relationship between HTLV-1 and depression and/or anxiety disorders. As we did not perform serological testing for HTLV, we cannot be sure that there were no patients infected with HTLV in the comparison group. Patients with HTLV who were being

TABLE 1: The association between multiple characteristics and depression*.
TABLE 2: The association between multiple characteristics and anxiety*.
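The prevalence ratios (PRs) in Tables 1 and 2 come from the Poisson regression described in the statistical analyses. A minimal sketch with statsmodels follows; the data frame and variable names are hypothetical placeholders, not the study dataset, and robust (sandwich) standard errors are assumed here, a common choice when estimating PRs from cross-sectional data.

# Sketch of a Poisson regression for prevalence ratios (PRs), as used in
# Tables 1 and 2. The data below are hypothetical, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "depressed": rng.integers(0, 2, n),          # binary outcome (HADS >= 9)
    "woman": rng.integers(0, 2, n),              # covariates
    "over_40": rng.integers(0, 2, n),
    "htlv": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["woman", "over_40", "htlv"]])
# Poisson GLM with robust (sandwich) standard errors yields prevalence
# ratios for a binary outcome in a cross-sectional design.
model = sm.GLM(df["depressed"], X, family=sm.families.Poisson())
result = model.fit(cov_type="HC0")

pr = np.exp(result.params)                       # PR = exp(coefficient)
ci = np.exp(result.conf_int())                   # 95% CI for each PR
print(pd.concat([pr.rename("PR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))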
Active Site Phosphopeptides from Pea Seed Nucleoside Diphosphate Kinase

Nucleoside diphosphate kinase from pea seed was incubated with (32P)ATP, inactivated with alkali and digested with trypsin. From the digest the main part of the bound phosphorus was isolated as two phosphopeptides, both containing 14 amino acid residues. These phosphopeptides had an identical amino acid sequence, Asx-Val-Ile-His-Gly-Ser-Asx-Ala-Val-Glx-Ser-Ala-Asx-Lys, as determined by the dansyl-Edman technique. The only difference found between the two phosphopeptides was a different lability to acid of the phosphoryl bond. The possibility that the appearance of two phosphopeptides was due to a specific migration of the phosphoryl bond within the peptide chain, from 1-phosphohistidine to 3-phosphohistidine, is discussed.

INTRODUCTION
Pea seed nucleoside diphosphate kinase (ATP: nucleoside diphosphate phosphotransferase, EC 2.7.4.6) has been shown to consist of four subunits which probably have an identical primary structure and are phosphorylated by ATP (1, 2). Data from rapid-mixing experiments and initial velocity analysis indicate that the phosphoryl enzyme is an intermediate of the enzyme reaction (3).

The phosphoryl binding site of nucleoside diphosphate kinases has hitherto been studied by alkaline hydrolysis of the phosphorylated enzyme. In an alkaline hydrolysate of phosphorylated nucleoside diphosphate kinase from baker's yeast, the main part of the bound phosphate is isolated as 1-phosphohistidine, while in alkaline hydrolysates of other nucleoside diphosphate kinases a few phosphopeptides accounted for most of the bound phosphate (4, 5, 12). The phosphopeptides from bovine liver nucleoside diphosphate kinase are probably 1-phosphohistidine peptides (13). Similar phosphopeptides are obtained from pea seed nucleoside diphosphate kinase (5), suggesting similarities in the amino acid sequence around the phosphoryl binding site of the two enzymes.

Since a main part of the bound phosphate of these two enzymes cannot be isolated as a single phosphoamino acid after alkaline hydrolysis, alternative degradation methods are of importance in order to establish the nature of the phosphoryl binding. One approach seemed to be the proteolytic degradation of the phosphorylated enzyme. By analysing the amino acid sequence of these fragments, a rational basis may be obtained for the choice of methods for further degradation of the phosphopeptides to a free phosphoamino acid accounting for most of the bound phosphate. Sequence data would also give further support to previous results on the identity of the subunits of the nucleoside diphosphate kinase.

In the present study, alkali-inactivated, 32P-labelled pea seed nucleoside diphosphate kinase was digested with trypsin and the main part of the bound radioactivity was isolated as two phosphopeptides. The amino acid sequence was the same for both phosphopeptides, as determined by the dansyl-Edman technique. A transformation of one of the phosphopeptides into the other is discussed on the basis of phosphoryl group migration.
MATERIALS
The enzyme from pea seed was purified as described earlier (5). (32P)ATP, labelled at the γ-P, was prepared according to Engstrom (6). The specific radioactivity of the (32P)ATP ranged from 0.2 to 1.0 × 10⁶ counts·min⁻¹·nmole⁻¹. Sephadex (G-50 and G-25) and DEAE-Sephadex (A-50 and A-25) were purchased from Pharmacia Fine Chemicals. Trypsin (EC 3.4.21.4), code TRTPCK, was obtained from Worthington. Precoated polyamide thin layer sheets for identification of dansyl amino acids were obtained from Cheng Chin Trading Company, Taipei, Taiwan. They were washed with formic acid before use (11). Dansyl chloride, phenyl isothiocyanate and trifluoroacetic acid were obtained from Pierce Chemical Company. The phenyl isothiocyanate was stored under nitrogen at −20°. Pyridine was purchased from Mallinckrodt; it was refluxed for 3 hours over phthalic anhydride, distilled off at 114° to 117°, and then stored under nitrogen at −20°. Butyl acetate from Merck was distilled from K2CO3 at 127°. Dansyl amino acids were obtained from BDH Chemicals Ltd., Poole, England, with the exception of ε-dansyllysine and α-dansylhistidine, which were obtained from Pierce Chemical Company. Histidyllysine was from Sigma. All chemicals were of the highest grade available.

METHODS
All preparations were carried out at 5° unless otherwise stated. Ninhydrin analysis was performed with alkaline hydrolysis essentially according to Hirs (8). 400 µl of sample, corresponding to 20 to 40 nmoles of amino acid residues, were diluted with 200 µl of 12 M sodium hydroxide and heated on a boiling water bath for 2.5 hours. After chilling, 200 µl of concentrated acetic acid were added, followed by 400 µl of a ninhydrin reagent (10). The solution was then heated in a boiling water bath for 15 min, chilled and diluted with 500 µl of 50% (v/v) ethanol, and the absorbance at 570 nm was measured. Glycylglycine was used as a standard. Radioactivity and stability to acid of the phosphoryl bond were assayed as described previously (14). Amino acid analysis was carried out on a BioCal 2000 amino acid analyzer using the two-column system. About 50 nmoles of peptide were hydrolyzed for 24 and 72 hours in 6 M HCl at 110° under reduced pressure in sealed ampoules.

The tryptophan content of the peptides was assayed by measuring the fluorescence intensity at 350 nm after excitation at 280 nm using an Aminco Bowman fluorescence spectrophotometer. Quantitation was achieved by using a standard solution of tryptophan equimolar to the peptide solution. Amino acid sequence determinations were made essentially according to Hartley (17). For the identification of dansylhistidine, a hydrolysate of dansylated histidyllysine was used as a reference. All spectrophotometric measurements of absorbance were made using a Zeiss PMQ II spectrophotometer.

Phosphorylation of the enzyme with (32P)ATP and digestion of the alkali-inactivated 32P-labelled enzyme with trypsin
About one µmole, i.e.
70 mg, of pea seed nucleoside diphosphate kinase in 0.01 M triethanolamine-acetic acid buffer (pH 7.4) was diluted in the same ice-cold buffer (pH 7.4) to a final concentration of 0.2 mg per ml. The solution was kept in an ice-water bath. To one tenth of this solution were added 25 µl to 50 µl of a 1 mM solution of (32P)ATP in the same buffer (pH 7.4). These conditions were chosen to optimize 32P-phosphate incorporation from (32P)ATP into the enzyme. To the rest of the nucleoside diphosphate kinase solution was added a solution of ATP in the same buffer (pH 7.4), giving a molar excess of ATP to nucleoside diphosphate kinase of 50 to 100 in different preparations and a final concentration of about 0.2 mM ATP. This was done to obtain optimal conditions for phosphate incorporation. The incubations were stopped after 40 seconds by the addition of 2.5 M sodium hydroxide to a final concentration of 0.1 M in each incubation mixture. The two incubation mixtures were pooled and kept in an ice-water bath for 2.5 hours to obtain complete denaturation.

A 2 M potassium hydrogen carbonate solution was then added in a four-fold molar excess over the sodium hydroxide added, giving a pH of 9.3 to 9.4 at 25°. The solution, containing 0.3 mg enzyme per ml, was then kept at 25°. A fresh solution of trypsin, 1 mg per ml in 1 mM hydrochloric acid, was added, giving a trypsin to nucleoside diphosphate kinase ratio of 1:10 (w/w). The mixture was gently stirred for 3 hours.

Purification of phosphopeptides
First chromatography on Sephadex G-50. The digestion was interrupted by chromatography of this mixture on a (6.5 × 48 cm) Sephadex G-50 column, equilibrated and eluted with a 5 mM potassium hydrogen carbonate buffer (pH 9.4) and collected in 30 ml fractions. After elution of about 0.6 column volume, the main part of the radioactivity appeared as one broad peak corresponding to about 60% of the total radioactivity of the (32P)ATP used. The rest of the radioactivity was eluted after about one column volume, and only background activity could be found with the void volume. The peak fractions from the peak appearing after 0.6 column volume were pooled. Two to three moles of phosphate were incorporated into the pooled material per mole of enzyme incubated with (32P)ATP of known specific activity.

Chromatography on DEAE-Sephadex A-50. The pooled material was applied to a (2.0 × 15 cm) DEAE-Sephadex A-50 column, equilibrated with the same buffer (pH 9.4).
[Figure 1 legend: In A, a (2.0 × 17.5 cm) DEAE-Sephadex A-50 column and, in B, a (1.4 × 17.5 cm) DEAE-Sephadex A-25 column, eluted first with 100 ml and 50 ml, respectively, of 0.01 M potassium hydrogen carbonate buffer (pH 9.4) and then with a linear gradient (total volume 1 l) formed from 0.01 M and 0.2 M potassium hydrogen carbonate buffer (pH 9.4). 15 ml fractions were collected after starting the gradient. ●—●, 32P radioactivity.]
At two other preparations, taking 7 and 22 days, the ratio of the radioactivity of fraction I to fraction II was 0.1 and 1.5, respectively. The overall yield of radioactivity in fractions I and II together, in relation to the pooled material from the first chromatography on Sephadex G-50, varied between 60% and 80% (mean value 70%) in four different preparations.
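The incorporation figure above ("two to three moles of phosphate per mole of enzyme") follows from simple arithmetic on the measured counts and the known specific radioactivity. A minimal sketch, with hypothetical numbers chosen within the ranges quoted under Materials:

# Converting measured radioactivity into incorporated phosphate, as done
# above. All numbers are hypothetical but lie within the ranges quoted
# in the Materials section.
specific_activity = 0.5e6   # counts · min^-1 · nmole^-1 for the (32P)ATP used
measured_cpm = 1.2e6        # counts · min^-1 in the pooled peak fractions
enzyme_nmol = 1000.0        # about 1 µmole (70 mg) of enzyme incubated

phosphate_nmol = measured_cpm / specific_activity
ratio = phosphate_nmol / enzyme_nmol
print(f"{phosphate_nmol:.0f} nmol 32P incorporated "
      f"({ratio:.2f} mol phosphate per mol enzyme)")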
Chromatography on Sephadex G-25. The pooled fractions were freeze-dried and then dissolved in water to a volume of 5 to 10 ml. The solutions were then desalted by chromatography on a (2.1 × 180 cm) Sephadex G-25 column equilibrated and eluted with 0.01 M ammonium hydrogen carbonate. Almost all of the radioactivity pooled from the DEAE-Sephadex A-25 chromatography appeared after about 0.4 column volume and was pooled and concentrated. The pooled material was freeze-dried and dissolved in water. From this solution, samples were taken for amino acid and amino acid sequence analysis. Calculated from data obtained with amino acid analysis (Table I) on aliquots from these pools, about 2.5 moles of peptide were obtained per mole of enzyme incubated with ATP.
[Figure legend: Ninhydrin-positive material assayed as described under Methods. The figures correspond to the amount of material present in 1 ml. Fractions were pooled as indicated in the figure.]

Properties of the phosphopeptides
The stability to acid of the phosphoryl bond. In samples from the peak fractions of the DEAE-

Amino acid sequence analysis. An identical amino acid sequence, Asx-Val-Ile-His-Gly-Ser-Asx-Ala-Val-Glx-Ser-Ala-Asx-Lys, was found for both fractions I and II. In the identification system used, some problems were encountered in demonstrating α-dansylhistidine in the presence of ε-dansyllysine. Therefore, amino acid analysis was also performed on aliquots after the third and fourth Edman degradation steps. The results are given in Table II and were in accordance with the findings obtained with the chromatographic method, i.e. histidine was demonstrated as the fourth amino acid residue. From amino acid analysis data and the known sequence of the thirteen additional amino acid residues, lysine was assumed to be C-terminal. This would also be expected from the known specificity of trypsin.

DISCUSSION
One phosphoryl group is bound to each subunit of pea seed nucleoside diphosphate kinase, presumably as 1-phosphohistidine, during its action (1, 2, 3, 5). In the present investigation, the aim was to isolate and determine the structure of a part of the active site of the enzyme containing this histidine residue. When 32P-labelled pea seed nucleoside diphosphate kinase was digested with trypsin after inactivation with alkali, the main part of the bound radioactivity was recovered as two phosphopeptides, both having the same sequence.

As far as the amino acid sequence is concerned, the present work supports the previous suggestion of the existence of identical subunits in the pea seed nucleoside diphosphate kinase. The appearance of two phosphopeptides with phosphoryl linkages of different stability to acid at first seemed to contradict this conclusion. However, during the isolation procedure, the most labile phosphopeptide, with a lability similar to 1-phosphohistidine, gave rise to a more stable form. It was also found that the more rapidly the isolation was carried out, the more the most labile form dominated. It is therefore suggested that the most labile phosphopeptide represents the phosphoryl binding site of the phosphorylenzyme and that the phosphoryl group is bound as 1-phosphohistidine.
The most plausible explanation for the appearance of the more stable phosphopeptide is a phosphoryl group migration from 1-phosphohistidine. Hultquist found that 3-phosphohistidine is formed at alkaline pH in a solution containing 1-phosphohistidine, and therefore the more stable phosphopeptide may be a 3-phosphohistidine peptide (12). However, since N-ε-phospholysine has also been a product of alkaline hydrolysis of phosphorylated nucleoside diphosphate kinases (4, 5, 12), the presence of this phosphoamino acid in the more stable phosphopeptide cannot be excluded. In the present work the nature of the phosphoryl linkage has not been further investigated. This problem is preferably approached by further proteolytic degradation, as discussed below.

The dansyl-Edman technique does not discriminate between an acidic amino acid residue and its amide. The transition between the two phosphopeptides may therefore be explained by deamidation, making the more stable phosphopeptide the more acidic. This explanation is, however, not valid, since the stable phosphopeptide was found to be the less acidic one, as judged by its migration on DEAE-Sephadex chromatography.

Mainly 32P-orthophosphate and only small amounts of 1-(32P)phosphohistidine were obtained when the 32P-phosphopeptides obtained from an alkaline hydrolysate of the 32P-labelled bovine liver nucleoside diphosphate kinase were further hydrolyzed in alkali (13). Since the tryptic phosphopeptides correspond to the main part of the radioactivity bound to the enzyme, it is evident that proteolytic degradation does not labilize the phosphoryl bond to the same extent as alkaline hydrolysis does. It therefore seems reasonable to believe that further proteolytic degradation is, at present, the more suitable way to definitely establish the nature of the phosphoryl bond of the phosphorylated nucleoside diphosphate kinase of pea seed.

Earlier studies of the phosphoryl bond of phosphorylated pea seed and bovine liver nucleoside diphosphate kinase by alkaline hydrolysis pointed to similarities in the amino acid sequence of the phosphorylated binding site (5, 12). The approach used in the present work would then be useful in the search for ways of studying other nucleoside diphosphate kinases, as well as other phosphorylated enzymes containing the same type of phosphoryl bond.
[Figure 1 legend, continued: The radioactive material was eluted as described in the legend of Fig. 1A, which shows a typical chromatogram.] Almost all of the radioactive material appeared as one peak and was pooled.

Chromatography on DEAE-Sephadex A-25. The pooled material was rechromatographed on a (6.5 × 48 cm) Sephadex G-50 column in 5 mM potassium hydrogen carbonate buffer (pH 9.4) in order to decrease the ionic strength of the sample before its application to a (1.4 × 17.5 cm) DEAE-Sephadex A-25 column, equilibrated with the above-mentioned buffer (pH 9.4). The main part of the radioactivity appeared in two peaks, called fractions I and II in the order they were eluted from the column (Fig. 1B). The fractions were pooled as indicated in the figure. 30% of the radioactivity applied to the column appeared in fraction I, and 66% in fraction II. The duration of the preparation was eight days from the first chromatography on Sephadex G-50 to the chromatography on DEAE-Sephadex A-25.

Table I. Amino acid composition of the phosphopeptide fractions I and II from the second chromatography on DEAE-Sephadex. The values given are mean values from two different preparations hydrolyzed for 24 and 72 hours. Each value is calculated as moles of amino acid per mole of peptide (tryptophan could be detected neither in fraction I nor in fraction II). a: The values were obtained by extrapolation to zero time of hydrolysis. b: 72-hours hydrolysis value.

Table II. Amino acid analysis data for samples from fractions I and II after the third and fourth Edman degradation steps. The peptides were hydrolyzed for 24 hours. Fraction number is indicated by Roman and degradation step by Arabic numerals. Numbers given are moles of amino acid per mole of peptide, with the nearest integer given within brackets (14). The values given are the mean values of four different preparations.
Anticancer activity of biostabilized selenium nanorods synthesized by Streptomyces bikiniensis strain Ess_amA-1

Selenium is an important component of the human diet, and a number of studies have declared its chemopreventive and therapeutic properties against cancer. However, very limited studies have been conducted on the properties of selenium nanostructured materials in comparison to other well-studied selenospecies. Here, we have shown the anticancer property of biostabilized selenium nanorods (SeNrs) synthesized by applying a novel strain, Ess_amA-1, of Streptomyces bikiniensis. The strain was grown aerobically with selenium dioxide and produced stable SeNrs with an average particle size of 17 nm. The optical, structural, morphological, elemental, and functional characterizations of the SeNrs were carried out using techniques such as UV-vis spectrophotometry, transmission electron microscopy, energy dispersive X-ray spectrometry, and Fourier transform infrared spectrophotometry, respectively. The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay revealed that the biosynthesized SeNrs induce cell death of Hep-G2 and MCF-7 human cancer cells. The lethal dose (LD50) of SeNrs on Hep-G2 and MCF-7 cells was recorded at 75.96 μg/mL and 61.86 μg/mL, respectively. It can be concluded that S. bikiniensis strain Ess_amA-1 could be used as a renewable bioresource for the biosynthesis of anticancer SeNrs. A hypothetical mechanism for the anticancer activity of SeNrs is also proposed.

Introduction
Selenium (Se) is an important trace element that plays a crucial role in human health and regulates many crucial cellular functions mediated through its incorporation into selenoproteins. 1,2 This inorganic element also occurs in trigonal and monoclinic phases of crystalline microstructures. The monoclinic phase is less stable and occurs in α, β, and γ forms, which differ only in the way that the crystals are packed. 3 The antioxidant function of Se is conferred by some of these selenoproteins, which directly or indirectly protect against oxidative stress. Extensive experimental evidence, based on in vitro, animal, geographic, and prospective studies, indicates that Se supplementation reduces the incidence of various types of cancers. Since as early as the 1960s, geographical studies have shown a consistent trend for populations with low intake of Se compounds to have higher cancer mortality rates. 2,4 However, at elevated doses, Se compounds usually turn into a prooxidant with well-established cell growth inhibiting/killing properties. 1 Thus, the use of Se compounds for anticancer therapy has been greatly explored during the last decade, and results of studies have shown that Se compounds reduce the risk of various types of cancers, such as mammary, prostate, lung, colon, and liver cancers. 1,[4][5][6][7] The research findings also suggest that the concentration, chemical species, and redox potential of Se compounds are critical for their anticancer activity. 1,5 These Se compounds have more promising anticancer activity at high dosage; however, high doses of Se compounds give rise to greater concerns about toxicity. 1,5 In this regard, selenium nanostructured materials (SNMs) could reduce the risk of Se toxicity and be widely used in cancer biology due to their promising anticancer activity and lower toxicity compared to Se compounds (inorganic and organic). [6][7][8][9]
SNMs also exhibit unique physical, chemical, and biological properties compared to those of Se compounds. 10 Various types of SNMs, which are stabilized and modified with different kinds of biological macromolecules, are reported to possess excellent anticancer activities. 1,[11][12][13] Based on this, some researchers suggest that SNMs stabilized and modified with biological macromolecules may have potential applications as anticancer agents for the killing of human cancer cells. For these reasons, the study of SNMs has gained considerable importance in recent years, and various types of SNMs have been obtained by employing physico-chemical methods, ie, amorphous, 3,11 trigonal, nanorods, 12,13 nanoribbons, 14 hexagonal prisms, 13 nanoplates, 15 nanotubes, 16 and spheres. 13 Therefore, SNMs are being widely used in basic and applied areas of chemistry, physics, environmental science, material science, and biomedicine. 7,13,17,18 However, concern is now growing regarding the environmental impact of SNM synthesis processes based on physico-chemical methods, which require high pressures, temperatures, and toxic chemicals. These methods have some drawbacks: (i) production of stable SNM dispersions only at lower concentrations and unsuitability for large-scale production; (ii) requirement of additional stabilizing agents; (iii) production of hazardous byproducts; and (iv) increased pollution in the environment. Consequently, significant efforts are ongoing toward the development of novel nontoxic methods for the synthesis and surface modification/stabilization of SNMs. [19][20][21] Biogenic methods offer a renewable, clean, nontoxic, and environmentally friendly procedure for the synthesis of these types of nanomaterials. 7,22,23 Recently, biogenic methods have been utilized for the synthesis of a variety of SNMs at ambient conditions. 6,19,[24][25][26] It is well established that biosynthesized SNMs have several important characteristics, including greater stability and higher biological activity, due to surface functionalization by biological macromolecules secreted by fungi and bacteria. 7,9,17,23,27 Among the different microorganisms, actinomycetes are less explored for the synthesis of SNMs. Actinomycetes are a diverse group of filamentous true bacteria found in a variety of habitats in terrestrial and aquatic environments. 28 Reports have shown that actinomycetes are efficient bioagents for the intracellular and extracellular synthesis of metal nanostructured materials (MNMs). 22 Most of the studies have been done on species of the Streptomyces genus due to their inherent capability for the production of redox-active macromolecules/secondary metabolites. 29,30 The capability of the Streptomyces genus for the biosynthesis of MNMs has previously been reported. 18,22,[31][32][33][34] However, the synthesis of biostabilized selenium nanorods (SeNrs) using any strain of actinomycetes has not been reported yet. In the present study, we report: (i) a simple, environmentally friendly, and renewable biogenic method for synthesizing disperse and stable SeNrs by Streptomyces bikiniensis strain Ess_amA-1, and (ii) evidence that the biosynthesized SeNrs have anticancer activity. To the best of our knowledge, this is the first report on the synthesis of SeNrs by S. bikiniensis as a novel renewable bioresource, and it opens up the possibility of commercially viable biogenic production of SeNrs for novel anticancer nanostructured materials.

Isolation of S. bikiniensis
An insect, Tapinoma simrothi, was collected from Eldrieh, Riyadh, Saudi Arabia (24.7 N latitude and 46.7 E longitude), and used for the isolation of S. bikiniensis. A suspension of T. simrothi was prepared in normal saline solution (NSS) for the isolation of saccharolytic actinomycetes, and an appropriate dilution was spread on starch casein agar (SCA) medium (pH 7.2±0.2) supplemented with antibiotics (cycloheximide [40 mg/L], nystatin [30 mg/L], and nalidixic acid [10 mg/L]). 35,36 The inoculated Petri plates were incubated aerobically at 30°C until the appearance of powdery-texture colonies with branching filaments and aerial mycelia. The selected colonies were subcultured and further purified by streaking; among them, the strain Ess_amA-1 was selected and maintained on International Streptomyces Project 2 (ISP-2) agar medium by periodical subculturing.

Morphological and physiological characterization of S. bikiniensis strain Ess_amA-1
The color of aerial mycelium was determined from mature and sporulating aerial mycelia of the actinomycete colonies on different media such as ISP-2, ISP-4, ISP-6, ISP-7, Czapek Dox, and SCA. The color was determined using color name lists. 37 The color of the soluble pigments was determined visually by observing the color changes in the medium due to the diffusing pigments produced by strain Ess_amA-1. 38 Carbohydrate and physiological tests were performed using the specific media and methods. [39][40][41] All the cultures were incubated at 30°C for 7 days. The assay for enzymatic activity was performed according to Bibb et al. 42

Genomic DNA extraction and purification
Total genomic DNA of strain Ess_amA-1 was isolated from the mycelium biomass (0.1 g), which was harvested from a freshly grown culture in ISP-2 medium as described. Briefly, the collected mycelium biomass was crushed with liquid nitrogen and the powder was mixed with 500 µL lysis buffer (containing 50 mM Tris-HCl, pH 8.0; 5 mM EDTA, pH 8.0; 50 mM NaCl; and 20 µL lysozyme, 10 mg/mL). The cells were lysed by vigorous vortexing and the lysate was incubated at 37°C for 30 minutes. Subsequently, 20 µL SDS (10% w/v) and 20 µL of proteinase K (10 mg/mL) were added into the Eppendorf tube and incubated at 55°C for 30 minutes. The cell lysate was cooled down and extracted once with an equal volume of phenol and chloroform (1:1 v/v). The aqueous phase was collected by centrifugation at 10,000 rpm for 5 minutes. Total genomic DNA was precipitated from the obtained aqueous phase by the addition of two volumes of chilled isopropanol. The precipitated genomic DNA was pelletized by centrifugation at 13,000 rpm for 30 minutes and the pellet was washed with 70% ethanol. The washed pellet was air dried under laminar flow and dissolved in 50 µL TE buffer (containing 50 mM Tris-HCl and 1 mM EDTA; pH 7.2).

Multiple sequence alignments and phylogenetic analysis
The obtained 16S rRNA gene sequence was compared with homologous sequences retrieved from GenBank using the Blastn tool. 43 Multiple sequence analysis with the sequences of different actinomycete groups was performed using CLUSTALW with default parameters. 44 A phylogenetic tree was constructed by the neighbor-joining method, with nucleotide pairwise genetic distances corrected by the Kimura two-parameter method, 45 using the TreeCon tool.
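As an illustration of the distance correction named above, here is a minimal Python sketch of the Kimura two-parameter model applied to a pair of aligned sequences; the toy sequences are hypothetical, not the strain's 16S rRNA data.

# Kimura two-parameter (K2P) distance between two aligned DNA sequences,
# the correction used here for the neighbor-joining tree. The toy
# sequences below are hypothetical, not the strain's 16S rRNA data.
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1: str, seq2: str) -> float:
    # Compare only ungapped positions
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    transitions = sum(
        1 for a, b in pairs
        if a != b and ({a, b} <= PURINES or {a, b} <= PYRIMIDINES)
    )
    transversions = sum(
        1 for a, b in pairs
        if a != b and not ({a, b} <= PURINES or {a, b} <= PYRIMIDINES)
    )
    p, q = transitions / len(pairs), transversions / len(pairs)
    # K2P formula: d = -1/2 * ln((1 - 2P - Q) * sqrt(1 - 2Q))
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

print(k2p_distance("ACGTACGTGCAT", "ACGTACGAGCGT"))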
The reliability of the tree topology was subjected to a bootstrap test, and numbers at the nodes indicate bootstrap support values as a percentage of 1,000 replications. All branches with below 50% bootstrap support were judged as inconclusive and were collapsed, and branch lengths for all trees were normalized to 0.02% divergence. Based on biochemical and molecular characterization, the characterized strain was designated as S. bikiniensis strain Ess_amA-1.

Biosynthesis of SeNrs
Briefly, sterile 100 mL ISP-2 medium containing 1 mM selenium dioxide (SeO2) was inoculated with 1 mL of fresh inoculum (OD600, 0.5) of strain Ess_amA-1 and incubated in an orbital shaker incubator (150 rpm) at 30°C for 48 hours. A control flask containing ISP-2 without SeO2 was inoculated with the test strain and incubated under the same conditions. The reduction of SeO2 into elemental selenium (Se0) and the nucleation/growth of SeNrs were monitored by sampling an aliquot of the medium at different incubation times (6 hours, 12 hours, and 48 hours). The cells were then removed by filtration and the resulting cell-free filtrate was centrifuged at 14,000 rpm for 15 minutes to obtain the biosynthesized SeNrs.

Characterization of biosynthesized SeNrs
The optical, structural, morphological, elemental, and functional characterizations of the SeNrs were carried out using a UV-Vis spectrophotometer, transmission electron microscope (TEM), energy dispersive X-ray (EDAX) spectrometer, and Fourier transform infrared (FTIR) spectrophotometer, respectively. In order to ascertain the optical characteristics of the synthesized SeNrs, the absorption spectrum was recorded on a Lambda 35 double-beam UV-Vis spectrophotometer (Hitachi, Japan) in the wavelength range of 200-800 nm using a quartz cuvette. The size and structure of the biosynthesized SeNrs were analyzed by JEM-1010 TEM (JEOL, Tokyo, Japan) at an accelerating voltage of 80 kV. For this analysis, the sample was prepared by placing drops of SeNrs aqueous solution on carbon-coated copper grids and air drying under dark conditions. The elemental analysis of SeNrs was done using energy dispersive X-ray spectroscopy (EDS) equipped with a JSM-6380 LA scanning electron microscope (SEM) (JEOL, Tokyo, Japan). For functional characterization of SeNrs, the FTIR spectrum was recorded in the range of 400-4,000 wave numbers (cm−1) on a Nicolet 6700 FTIR spectrometer in transmittance mode at 4 cm−1 resolution.

3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay
The anticancer activity of biosynthesized SeNrs was tested on the human breast adenocarcinoma cell line (MCF-7) and the human liver carcinoma cell line (Hep-G2) (ATCC, Manassas, VA, USA) using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) dye reduction assay. This assay is based on the reduction of MTT dye to a blue-colored formazan product by mitochondrial dehydrogenase. The cells were cultured in a humid environment at 37°C and 5% CO2 in cell culture minimum essential medium (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 15% fetal bovine serum and 1% penicillin/streptomycin (Thermo Fisher Scientific). At 85%-90% confluence, cells were harvested using 0.25% trypsin/EDTA solution and subcultured into a 96-well plate. The MTT colorimetric assay developed by Mosmann, 46 with modification, was used to screen the cytotoxic activity of SeNrs.
Briefly, the MCF-7 and Hep-G2 cells (1×10^4 cells/well) were grown overnight in 96-well flat-bottom culture plates and then exposed to seven different concentrations (1.0, 2.0, 5.0, 10, 25, 50, and 100 µg/mL) of SeNrs for 24 hours. In addition, a negative/vehicle control and a positive control (doxorubicin) were also used for comparison. After the completion of the desired treatment, 10 µL of MTT reagent (Thermo Fisher Scientific) was added to each well and further incubated for 3 hours at 37°C. Finally, the medium with MTT solution was removed, and 100 µL of DMSO (Sigma-Aldrich Co., St Louis, MO, USA) was added to each well and further incubated for 20 minutes. The optical density (OD) of each well was measured at 570 nm using a microplate reader (Synergy; BioTek, Winooski, VT, USA). The percentage of cytotoxicity compared to the untreated cells was determined, and triplicates were maintained for each treatment. The lethal concentration (LC50) was determined from the calculated cell viability.

Statistical analysis
The triplicate sets of data for the various parameters evaluated were subjected to analysis of variance in accordance with the experimental design (completely randomized design) using SAS statistical packages (version 6.12; SAS Institute Inc., Cary, NC, USA) to quantify and evaluate the least significant difference (LSD). The values were calculated at the P = 0.05 level.

Results and discussion
Characterization of strain Ess_amA-1
In the present study, the actinomycete strain Ess_amA-1 was isolated from the insect T. simrothi with the aim of exploiting its SeNrs-synthesizing potential for anticancer therapy. The presumptive taxonomic identification of the strain was done by its morphological and biochemical characteristics. For the identification of morphological characteristics, a specimen of the strain was examined under a bright-field microscope, and the analysis revealed that the strain Ess_amA-1 produces a light brown and gray substrate (Figure 1A). These morphological characteristics of the strain were precisely confirmed by SEM. The image showed that the smooth-surfaced spores are held in straight chains (rectiflexibiles) (Figure 1B). 47,48 The morphological characteristics of the strain Ess_amA-1 were further substantiated by physiological and biochemical tests, the results of which are summarized in Tables 1 and 2. The strain showed fast growth behavior on several media (ISP-2, 4, 6, and 7) and moderate growth behavior on ISP-5 and Czapek-Dox agar medium; melanin pigment production was determined on ISP-6 medium. Thus, based on its morphological and biochemical characteristics, strain Ess_amA-1 was identified as a member of the Streptomyces genus. 38 For the authentic taxonomic characterization of the strain, 16S rRNA gene sequencing was performed and the obtained data were analyzed carefully. The strain was identified as a member of S. bikiniensis by 16S rRNA gene sequencing and in silico analysis, and the sequence has been deposited in NCBI GenBank (Accession Number: KF588366). Phylogenetic analysis indicated a close genetic relatedness of strain Ess_amA-1 with S. bikiniensis, and we therefore designated the strain as S. bikiniensis strain Ess_amA-1 (Figure 2). The members of the Streptomyces genus have been largely exploited for the production of bioactive secondary metabolites (ie, antimicrobials, antitumorals, antihypertensives, and immunosuppressants) with wide uses in medicine and agriculture. 30,49,50
Thus, the species of the genus Streptomyces are well-established bioresources for the production of valuable nanostructured materials. 18

Biosynthesis of SeNrs
The bioreductive capability of the strain was utilized for the synthesis of SeNrs. The strain, when challenged with 1 mM SeO2, exhibited a time-dependent change in the color of the ISP-2 liquid culture medium from light gray to red after a 6-hour incubation period. The intensity of the red color of the culture medium increased upon further incubation up to 48 hours (Figure 3). The emergence of a red-brick color in the culture medium after 48 hours of incubation was a clear indication that the strain biogenically reduces selenite ions to the insoluble elemental Se (Se0) form. 6,8,25,27,58 The yield of SeNrs was determined to be approximately 7.74 mg/100 mL. The reaction mixture for SeNrs biosynthesis can be optimized to increase yield and purity by altering the physico-chemical and cultural conditions of the medium: (i) precursor salt, (ii) carbon and nitrogen source, (iii) pH and oxygen supply, and (iv) addition of electron donor, etc. 59

Characterizations (optical, morphological, elemental, and functional) of SeNrs
The synthesis of SeNrs in liquid culture medium was monitored by UV-Vis spectroscopy, which showed a strong and broad surface plasmon resonance (SPR) peak at ~620 nm, characteristic of SeNrs (Figure 4A). However, no absorption peak corresponding to SeNrs was observed in the control flask (without SeO2). It is well known that, due to Mie scattering, SeNrs exhibit absorption at a wavelength of ~620 nm. As evident from previous reports, the presence of a single SPR peak evokes biogenic synthesis of SeNrs by S. bikiniensis strain Ess_amA-1, and this was further confirmed by TEM and EDS techniques. 58 The time-dependent increase in the red color intensity of the culture medium indicated gradual growth in the size and shape of the SeNrs during the incubation period. Hence, the SeNrs were analyzed at three different incubation times, 6 hours, 24 hours, and 48 hours, employing TEM. A time-dependent change in the shape and morphology of the SeNrs was noticed: during the incubation, the shape of the reduced elemental selenium (Se0) gradually changed from a spherical structure to a rod-like structure. Figure 4Ba shows the spherical shape of irregular Se0 nanospheres, which possess an average diameter of 50-100 nm after 6 hours of incubation. However, these spherical structures start to lose their integrity as relatively low-aspect-ratio anisotropic structures (rods) emerge. After 12 hours, aggregates of higher-aspect-ratio rods emerging from a few growth centers were observed (Figure 4Bb). After 48 hours, the length of the structures increased in one dimension and they were converted into rod-like structures (Figure 4Bc). SeNrs with an average length of 600 nm, an average diameter of 17 nm, and an aspect ratio of 35:1 were observed. Nevertheless, most biosynthesis methods have reported the production of spherical selenium nanoparticles (SeNPs). 24,26,27,60 However, a few recent studies reported the biosynthesis of Se nanorods via spherical SeNPs that acted as seeds for growth. 6,17 The high free energy of SeNPs evoked the Ostwald ripening process, which may be responsible for the growth of spherical SeNPs into SeNrs. 17,61
The TEM data also revealed the nucleation/growth of the SeNrs (6 hours and 12 hours incubation), which is probably due to the presence of aromatic amino acids produced by the strain Ess_amA-1, thus indicating possible adhesion of biological macromolecules on the surface of the SeNrs. These data are consistent with the previously documented occurrence of biological macromolecules associated with SNMs of microbial origin. 7,9,22 The EDX analysis revealed the presence of an Se peak at 1.37 keV, confirming that the SeNrs were successfully synthesized. 23 However, the peaks of carbon (C) and oxygen (O) were believed to be derived from biological macromolecules present on the surface of the SeNrs (Table 3). These biological macromolecules may be responsible for the reduction, growth, and stabilization of the biosynthesized SeNrs. 25 FTIR spectroscopy was performed to identify the functional groups of the biological macromolecules responsible for the reduction of SeO2 into elemental selenium (Se0) and the nucleation/growth of the SeNrs. The FTIR spectrum shows the characteristic stretching vibration bands of proteins on the surface of the biosynthesized SeNrs (Figure 4C). The band positions at 3,430 cm−1 were assigned to the stretching vibrations of -N-H and C=O from the amide A and amide I bands of proteins/enzymes, respectively. 62 The free amine or cysteine groups of proteins have a strong ability to bind to metal NPs. 63,64 The FTIR data suggest that the proteins/enzymes produced by the strain Ess_amA-1 are primarily responsible for the synthesis of the SeNrs. The proteins/enzymes present on the surface of the SeNrs act as natural capping agents, preventing agglomeration and conferring promising anticancer activity. Taken together, the data obtained from EDX and FTIR analyses revealed the purity of the SeNrs, based on the absence of signature peaks of other SNM species such as SeO2 NPs.

Assessment of anticancer activity of SeNrs
Nanostructured selenium materials have attracted substantial attention due to their excellent biological activity and low toxicity. 17 With promising applications in cancer nanotechnology, SNMs are being touted as new anticancer and chemopreventive agents. 65 Very recently, the cytotoxic effect of inorganic and organic Se compounds on the MCF-7 cell line was assessed. 1,9 Various types of SNMs stabilized and surface-modified with different types of biological macromolecules/functional groups are reported to possess excellent anticancer activities via mechanisms including induction of reactive oxygen species (ROS) production, cell cycle arrest, mitochondrial dysfunction, DNA fragmentation, and cell apoptosis. 1,66 These SNMs have also been shown to augment the anticancer properties of chemotherapeutic drugs like adriamycin and doxorubicin. 67,68 Therefore, the SeNrs were evaluated for their anticancer properties against the Hep-G2 and MCF-7 cell lines. The biosynthesized SeNrs showed growth inhibition of Hep-G2 and MCF-7 cells in a dose-dependent manner (Table 4). The inhibitory effect of the SeNrs was significantly higher on the MCF-7 cells than on the Hep-G2 cells. For instance, SeNrs at 10 µg/mL, 25 µg/mL, 50 µg/mL, and 100 µg/mL reduced MCF-7 cell viability to 69.1%, 54.4%, 44.3%, and 37.5%, respectively. All of those values were significantly lower than those of the Hep-G2 cells: 86.9%, 72.5%, 56.4%, and 42.3%, respectively, at LSD (0.05) = 5.7% (Figure 5A).
The median inhibitory dose (ID50) of the SeNrs was 75.96 µg/mL for Hep-G2 cells and 61.86 µg/mL for MCF-7 cells, as shown in Figure 5B. The data indicated that the effect of SeNrs on MCF-7 cells was significantly greater than on Hep-G2 cells. Moreover, microscopic observation of cell morphology after MTT staining revealed that Hep-G2 and MCF-7 cells treated with SeNrs showed a dose-dependent reduction in cell numbers, loss of cell-to-cell contact, cell shrinkage, and formation of apoptotic bodies (Figure 5C; MCF-7 cell data not shown). These results collectively suggest that the biosynthesized SeNrs have anticancer activity against cancer cells and can serve as potential anticancer agents. Notably, a few recent studies have reported lower toxicity of SNMs toward normal cells, that is, selectivity for cancer cells. 1,20 The mechanism underlying the selectivity of the SNMs remains unexplained. Therefore, we tried to explain the plausible mechanism by which SNMs selectively kill cancer cells.

Plausible anticancer mechanism of SeNrs
Inorganic and organic selenium compounds play an essential role in human life, and a number of them are considered to possess chemopreventive and therapeutic properties against cancer. 66 In situ surface-functionalized SNMs produced via the biosynthesis procedure have recently gained much attention as potential anticancer agents due to their excellent anticancer activity, biocompatibility, and low toxicity when compared to inorganic and organic Se compounds. 1,9,69 Conjugation with functional ligands/groups can indeed not only prevent the aggregation of SNMs via plus-to-minus charge interactions, but also enhance their anticancer efficacy. 1 These SNMs are established as promising antioxidants (redox modulating) but can also act as prooxidants, and thereby exhibit potential anticancer properties, in the presence of transition metal ions (Cu). It is well established that tissue, cellular, and serum copper levels are considerably elevated in various malignancies. 70,71 These SNMs are able to bind cell chromatin materials (both DNA and Cu[II]), forming a ternary complex. A redox reaction between the Se compound and Cu(II) in the ternary complex may occur, leading to the reduction of Cu(II) to Cu(I), whose reoxidation generates a variety of ROS. Therefore, cancer cells may be more susceptible to electron shuttling between copper ions and SeNrs to generate ROS, thereby exhibiting the killing effect on Hep-G2 and MCF-7 cells (Figure 6). Thus, our hypothesis is that the anticancer mechanism of SeNrs involves the mobilization of endogenous copper, possibly chromatin-bound copper, and the consequent prooxidant action.

Conclusion
The isolated S. bikiniensis strain Ess_amA-1 has the inherent potential to produce more stabilized, bioefficacious, and ecofriendly SeNrs than physico-chemically synthesized SNMs, and can be exploited for mass-scale production. The biosynthesized SeNrs showed anticancer activity against Hep-G2 and MCF-7 cells under in vitro conditions. SNMs are potent anticancer agents, with a modest effect on normal cells. The exact mechanism by which this anticancer activity is mediated remains unclear to the scientific community. In this paper, we suggest the hypothesis that the anticancer mechanism of SeNrs involves mobilization of the elevated endogenous copper of cancer cells and consequent prooxidant action. Nevertheless, in-depth studies should be conducted to investigate the anticancer action mechanism of SNMs.
2018-04-03T03:10:41.393Z
2015-05-06T00:00:00.000
{ "year": 2015, "sha1": "746eebd3fe07f63ee30a26c705ca3ead89bf77b5", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=24927", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f1ab8ae423100fc53b3644683c67257ad0c9c4aa", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
15172792
pes2o/s2orc
v3-fos-license
Plasmon-Enhanced Fluorescence Biosensors: a Review Surfaces of metallic films and metallic nanoparticles can strongly confine electromagnetic field through its coupling to propagating or localized surface plasmons. This interaction is associated with large enhancement of the field intensity and local optical density of states which provides means to increase excitation rate, raise quantum yield, and control far field angular distribution of fluorescence light emitted by organic dyes and quantum dots. Such emitters are commonly used as labels in assays for detection of chemical and biological species. Their interaction with surface plasmons allows amplifying fluorescence signal (brightness) that accompanies molecular binding events by several orders of magnitude. In conjunction with interfacial architectures for the specific capture of target analyte on a metallic surface, plasmon-enhanced fluorescence (PEF) that is also referred to as metal-enhanced fluorescence (MEF) represents an attractive method for shortening detection times and increasing sensitivity of various fluorescence-based analytical technologies. This review provides an introduction to fundamentals of PEF, illustrates current developments in design of metallic nanostructures for efficient fluorescence signal amplification that utilizes propagating and localized surface plasmons, and summarizes current implementations to biosensors for detection of trace amounts of biomarkers, toxins, and pathogens that are relevant to medical diagnostics and food control. Introduction Research in plasmonic confinement of light to volumes much smaller than wavelength paved new routes to powerful amplification schemes in optical spectroscopies. In particular, we witnessed rapid advancements in surface-enhanced Raman spectroscopy (SERS), surface-enhanced infrared spectroscopy (SEIRA), and surface plasmon-enhanced fluorescence spectroscopy (PEF) [1][2][3][4][5] over the last years. This progress was accompanied with the implementation of plasmonics to a range of analytical technologies for the detection of chemical and biological species that are relevant to important areas of medical diagnostics, food control, and security [6,7]. Among these, fluorescence is arguably the mostly spread optical method, and it has been already routinely used for readout of assays over several decades. In PEF, fluorophore labels are coupled with the tightly confined field of surface plasmonscollective oscillation of charge density and associated electromagnetic field on a surface of metallic films and nanostructures. This interaction can be engineered to dramatically enhance emitted fluorescence light intensity which is desired for detecting minute amounts of analytes with improved limit of detection and shorten analysis time. PEF was subject to a number of excellent reviews over the last years covering the fundamental research on the interaction of nanoscale emitters with metallic surfaces [8][9][10] as well as its implementation into advanced assays and applications for biological studies [4,5,[11][12][13]. This paper aims at updating these reviews and providing key leads for a design of plasmonic nanostructures for efficient amplification on realistic biochips. Firstly, fundamentals of surface plasmon-fluorophore interactions are introduced, and the performance characteristics of metallic nanostructures that are essential for strong enhancement of fluorescence signal are discussed. 
Afterwards, implementations of PEF biosensor devices for rapid detection of trace amounts of biomarkers and harmful compounds including toxins and pathogens are reviewed.

Interaction of Fluorophores with Surface Plasmons
The coupling of light with localized surface plasmons (LSPs, supported by metallic nanoparticles) and surface plasmon polaritons (SPPs, traveling along continuous metallic films) can provide strong confinement of electromagnetic field intensity. These fields can interact with fluorophores at their absorption λ_ab and emission λ_em wavelengths, which alters the respective transitions between the ground state and higher excited states (see Fig. 1). Surface plasmon-induced changes in the excitation and decay rates can be classically described by Maxwell's equations by using the fluorophore absorption μ_ab and emission μ_em electric dipole moments [8]. The excitation rate γ_e of a fluorophore that is irradiated by an incident wave with the electric field E at the absorption wavelength λ_ab can be expressed as

γ_e ∝ |μ_ab · E|^2. (1)

Let us note that Eq. (1) holds for small amplitudes of the electric field E, for which the excitation rate is far from saturation. After its excitation, the fluorophore can return to its ground state by emitting a photon at a higher wavelength λ_em (radiative decay rate γ_r) or without emitting a photon (nonradiative decay rate γ_nr). Further, we denote the intrinsic radiative decay rate as γ_r^0, the nonradiative decay rate as γ_nr^0, and the quantum yield η_0 = γ_r^0/(γ_r^0 + γ_nr^0) for a fluorophore in a homogeneous aqueous environment. When the fluorophore is brought into the vicinity of a metallic structure, the radiative decay rate γ_r and nonradiative decay rate γ_nr = γ_nr^0 + γ_abs are changed due to the increased local density of optical states (LDOS) at λ_em that is associated with the plasmon-enhanced field intensity |E|^2. This leads to the modified quantum yield η [14]:

η = γ_r / (γ_r + γ_nr^0 + γ_abs). (2)

For short distances from the metal surface, d < 15 nm, strong quenching of radiative transitions occurs due to Förster energy transfer between the fluorophore and the metal. This quenching is accompanied by the metal-enhanced nonradiative decay rate γ_abs, which competes with γ_r, shortens the lifetime of the fluorophore excited state τ = 1/(γ_r + γ_nr), and decreases the quantum yield η. At longer distances d that are below the decay length L_p of the surface plasmon field, the emission via surface plasmons becomes dominant. When these surface plasmons are out-coupled to the far field, such interaction can enhance the radiative decay rate γ_r and thus increase the quantum yield η. As Fig. 2 illustrates, this effect is particularly strong for fluorophores with low intrinsic quantum yield η_0. For instance, a factor of η/η_0 ∼ 4 was calculated for a fluorophore with η_0 = 0.05 at a distance of d = 10 nm from a gold disk nanoparticle. For a flat metallic surface, lower enhancement of the quantum yield is observed owing to the weaker field confinement of SPPs compared to LSPs. At a distance d ≫ L_p, the emission from fluorophores is decoupled from surface plasmons and becomes only weakly affected by interference with waves back-reflected from the metal surface [15]. Let us note that the emission via dipolar LSP modes on metallic nanoparticles is directly converted to the far field via scattering, and thus it contributes to γ_r. However, the emission via SPPs traveling along continuous metallic surfaces requires an additional coupler in order to extract such emitted radiation.
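To make Eq. (2) concrete, a minimal numerical sketch follows. The rate multipliers are assumptions chosen purely for illustration (loosely motivated by the gold-nanodisk case in Fig. 2, where η/η_0 ∼ 4 at d = 10 nm for η_0 = 0.05); they are not values taken from the reviewed works.

```python
# Minimal numerical illustration of Eq. (2): a plasmon-enhanced radiative rate
# lifts the quantum yield of a poor emitter far more than that of a good one.

def quantum_yield(gr, gnr0, gabs):
    """eta = gamma_r / (gamma_r + gamma_nr0 + gamma_abs), cf. Eq. (2)."""
    return gr / (gr + gnr0 + gabs)

for eta0 in (0.05, 0.5):
    gr0, gnr0 = eta0, 1.0 - eta0     # intrinsic rates in normalized units
    gr = 10.0 * gr0                  # assumed 10x plasmonic radiative boost
    gabs = 0.5                       # assumed metal absorption channel
    eta = quantum_yield(gr, gnr0, gabs)
    print(f"eta0 = {eta0:.2f}  ->  eta = {eta:.2f}  (gain x{eta / eta0:.1f})")
```

Under these assumed numbers the low-yield dye gains roughly fivefold while the high-yield dye gains well under twofold, mirroring the qualitative trend stated above.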
Similar to surface plasmon resonance (SPR) spectrometers, the reverse Kretschmann configuration of the attenuated total internal reflection (ATR) method or diffraction on a periodically corrugated metallic surface (grating) can be used. As Fig. 1a shows for the reverse Kretschmann configuration, the emission via SPPs is cross-coupled through a thin metallic film and forms a characteristic cone propagating in a high refractive index dielectric substrate [8,16]. The fluorescence light cone is centered at the polar SPR angle θ for which SPPs on the top metal surface are phase-matched with optical waves in the dielectric substrate (see Fig. 3a). Similarly, diffraction coupling of SPPs to propagating waves is possible through the additional momentum provided by a periodic grating, which allows for concentrating the emitted light towards a specific direction (see Fig. 3b). The ability to control the angular distribution of emission offers attractive means to increase the collection efficiency of fluorescence light in fluorescence devices by "beaming" it towards a detector. Moreover, the highly directional fluorescence emission is useful for suppressing background signal that originates from (typically isotropic) scattering and autofluorescence. For the majority of fluorescence detection schemes, less than a few percent of emitted photons are delivered to a detector. As illustrated in Fig. 3, most of the emitted radiation intensity can be emitted via surface plasmons and subsequently out-coupled to a specific angle. The directionality of surface plasmon-coupled emission can be quantified by the following factor f [17-19]:

f(θ,φ) = γ_r(θ,φ) / ∬ γ_r(θ,φ) sinθ dθ dφ, (3)

where γ_r(θ,φ) is the radiative decay rate density at λ_em, which is integrated over all polar θ and azimuthal φ angles in the denominator. In summary, the coupling of fluorophores with surface plasmons on metallic surfaces allows amplifying the intensity of detected fluorescence light by the combination of three effects: (1) increasing the excitation rate γ_e through the plasmon-enhanced field intensity at the absorption wavelength λ_ab, (2) enhancing the fluorophore quantum yield η, and (3) high directionality f of plasmon-coupled emission at the wavelength λ_em:

EF ∝ (γ_e/γ_e^0) × (η/η_0) × (f/f_0), (4)

where EF is the enhancement factor of detected fluorescence intensity with respect to that measured without the metallic structures (e.g., a free fluorophore in a homogeneous aqueous environment).

Fig. 2 a Simulated radiative rate γ_r (associated with emission to the far field, γ_r^ph, and via surface plasmons, γ_r^SP) and nonradiative rate γ_nr, and b respective changes in the quantum yield η for fluorophores with low (η_0 = 0.05) and high (η_0 = 0.5) intrinsic quantum yield. The rates were normalized by the total decay rate γ_r^ph + γ_r^SP + γ_nr. A flat gold surface supporting SPPs and a gold disk nanoparticle with a diameter of D = 110 nm and height of 50 nm supporting LSPs were assumed. Simulations were carried out for a randomly oriented fluorophore in water and an emission wavelength of λ_em = 670 nm.
Fig. 3 a Simulated and experimental angular dependence of surface plasmon-coupled emission via regular surface plasmon polaritons (SPPs) and long-range surface plasmon polaritons (LRSPPs) in the reverse Kretschmann configuration. b Angular distribution of emitted light from a dipole coupled with arrays of metallic nanoparticles supporting collective localized surface plasmons (reproduced with permission from [65] and [53]).
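A back-of-envelope sketch of how the three contributions in Eq. (4) multiply is given below; all factor values are hypothetical placeholders, not measurements from the review.

```python
# Illustrative evaluation of the three multiplicative contributions to the
# detected-fluorescence enhancement EF; the numbers are assumed placeholders.

excitation_gain = 20.0      # gamma_e / gamma_e0: plasmon-enhanced |E|^2 at lambda_ab
quantum_yield_gain = 4.0    # eta / eta0: e.g. a low-eta0 dye near the metal
directionality_gain = 2.5   # f / f0: collection gain from beamed emission

EF = excitation_gain * quantum_yield_gain * directionality_gain
print(f"Estimated fluorescence enhancement EF ~ {EF:.0f}")  # ~200
```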
Let us note that the enhancement factor strongly depends on the fluorophore orientation due to the polarization sensitivity of surface plasmon resonance. As the orientation of fluorophores is typically random, the enhancement factor measured for an ensemble of emitters is averaged across all possible orientations of the absorption and emission dipole moments μ ab and μ em , respectively. In addition, the PEF amplification is highly surface sensitive and occurs only at distances d below the surface plasmon probing depth L p . Therefore, it can provide means to better distinguish between specific fluorescence signal and background that originate from bulk effects including auto-fluorescence or scattering. Surface Plasmon Field Intensity Enhancement PEF is directly related to the strength of the field E generated in the vicinity of metallic surfaces. Therefore, the design of metallic nanostructures providing maximum field intensity enhancement upon the excitation of surface plasmons is of key importance. Various materials exhibit plasmonic characteristics including noble metals, transparent conducting oxides, graphene, and semiconductors [20]. Among these, noble metals are preferably used for PEF as they support surface plasmons in the visible and near infrared part of the spectrum, and they exhibit low damping associated with inter-and intraband transitions. The electromagnetic field intensity enhancement |E| 2 /|E 0 | 2 that is accompanied with the coupling to surface plasmons strongly depends on the (complex) metal refractive index n m . For LSPs, one can show that the field enhancement is approximately proportional to the figure of merit |Re{n m 2 }|/ Im{n m 2 }. The coupling to SPPs on a continuous film is accompanied with the field enhancement that scales with a similar term (Re{n m 2 }) 2 /Im{n m 2 }. The SPP figure of merit is plotted for gold, silver, and aluminum in Fig. 4 and shows that aluminum can be the preferable metal of choice for PEF at wavelengths in the blue and UV region [21]. In the visible and near infrared part of the spectrum, surface plasmons on silver and gold surfaces provide higher field enhancement which increases with the wavelength. Silver is known to provide stronger field intensity enhancement than gold (particularly at wavelengths λ<600 nm); however, gold is more often used due to its better chemical stability. Further, the field intensity enhancement |E| 2 /|E 0 | 2 that is associated with the excitation of SPPs and LSPs on most commonly used metallic nanostructures is discussed. As the field intensity enhancement factors are difficult to measure directly, we provide an overview of |E| 2 /|E 0 | 2 values obtained from simulations (a brief summary can be found in Table 1). We preferably selected works where the near-field simulations are supported by experimentally obtained data on far-field properties of studied metallic nanostructures. Let us note that further detailed information on plasmonic properties of metallic nanostructures can be found in numerous specialized review papers [22][23][24][25][26]. Continuous Metallic Films Characteristics of SPP modes traveling along metallic surfaces can be tuned by their mutual interaction. For instance, a thin metallic film supports SPP modes at each of its two surfaces. 
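As a rough illustration of the SPP figure of merit (Re{n_m^2})^2/Im{n_m^2} discussed above, the sketch below evaluates it for gold using rounded Johnson & Christy-type optical constants. The (n, k) values are approximate literature numbers, not data from this review, so the trend (growth with wavelength) matters more than the absolute values.

```python
# Sketch of the SPP figure of merit (Re{eps})^2 / Im{eps} for gold,
# using rounded (n, k) literature values at three wavelengths.

n_k_gold = {550: (0.43, 2.46), 633: (0.18, 3.07), 800: (0.15, 4.91)}

for wl, (n, k) in n_k_gold.items():
    eps = complex(n, k) ** 2                 # eps = n_m^2
    fom = eps.real ** 2 / abs(eps.imag)      # SPP figure of merit
    print(f"{wl} nm: eps = {eps.real:.1f}{eps.imag:+.1f}i, FoM = {fom:.0f}")
```

Consistent with Fig. 4, the figure of merit grows steeply towards the red and near-infrared, which is why SPP-based PEF enhancements quoted later are larger at longer wavelengths.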
These modes become coupled when the thickness of the metal film d_m is comparable with the plasmon penetration depth into the metal (typically up to 10 nm) and when the film is surrounded by dielectrics with similar refractive indices (as shown in the respective figure in Table 1). The spatial overlap and phase matching between SPPs lead to the establishment of coupled symmetrical and antisymmetrical surface plasmon polariton modes [27]. The mode with the antisymmetrical profile of the parallel component of the electric field E_∥ is referred to as the long-range surface plasmon polariton (LRSPP), while the one with the symmetrical profile is the short-range surface plasmon polariton (SRSPP). LRSPPs are more weakly guided by the metal film than regular SPPs; thus, they can propagate over longer distances, exhibit decreased Ohmic losses, and their field probes larger distances L_p from the metal surface. Another type of coupled SPP mode can be excited on metallic surfaces with dense subdiffractive gratings [28]. Diffraction on such a periodic modulation lets counterpropagating SPPs interact, which opens a bandgap in the SPP dispersion relation. Two Bragg-scattered surface plasmon polariton (BSSPP) modes occur at the edges of the bandgap, with the field intensity localized either in the grating valleys or at the peaks of the periodic modulation. The coupling to SPP-like modes provides a field intensity enhancement |E|^2/|E_0|^2 that exponentially decays away from the metal surface. As calculated in Fig. 5 for a gold surface and a distance of d = 15 nm, the field intensity enhancement |E|^2/|E_0|^2 increases with the wavelength and follows the dependence of the figure of merit presented in Fig. 4. The enhancement for ATR and diffraction grating-based SPP couplers is similar, and it reaches |E|^2/|E_0|^2 ∼10 at λ = 550 nm and ∼85 at λ = 900 nm. The excitation of LRSPP modes on a gold film with d_m = 20 nm is accompanied by an enhancement that is stronger by a factor of 3-5 and allows reaching significantly longer distances L_p with respect to regular SPPs. The behavior of BSSPP modes is analogous to that of LRSPPs and SRSPPs and exhibits similar features [29]. (Fig. 5 caption, partial: Kretschmann configuration was assumed for the SPP and LRSPP modes, respectively; for the diffraction-based coupling, the period and modulation depth of the sinusoidal grating were adjusted for normal-incidence excitation.) In order to further boost the field intensity enhancement, the field of SPPs can be confined in the direction parallel to the surface. A continuous metal film that is perforated by arrays of nanoholes (see the respective figure in Table 1) represents a well-characterized system [30,31] that can act as a diffraction grating for the excitation of SPPs and at the same time supports laterally confined LSPs. In a different example, finite difference time domain (FDTD) simulations were carried out for a metallic grating with narrow, high-aspect-ratio grooves enabling diffraction-based excitation of SPPs that interact with LSP modes at the grooves [32]. This work predicted a large field intensity enhancement of |E|^2/|E_0|^2 ∼10^3 at an excitation wavelength of λ = 820 nm for a gold grating structure with 60-nm-wide and 90-nm-deep grooves arranged with a period of Λ = 560 nm. Another approach that takes advantage of the interplay between SPP and LSP modes utilized a relief concentric grating with a narrow hole in its center (see the respective figure in Table 1) [33,34].
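The grating-assisted coupling described above obeys the phase-matching condition k_SPP = k_0 sinθ + m·2π/Λ; the sketch below evaluates it for an illustrative case. The gold permittivity (~ −13.2 + 1.0i at 670 nm) and water superstrate are assumed rounded values, and the Λ = 400 nm period merely mirrors the Cy5 grating discussed later, so the resulting angle is indicative only.

```python
# Hedged sketch of grating phase matching for SPP excitation/extraction:
#   k_spp = k0 * sin(theta) + m * 2*pi / Lambda
import numpy as np

wl = 670e-9                  # emission wavelength (m), e.g. Cy5
Lambda = 400e-9              # grating period (m), illustrative
eps_m, eps_d = -13.2, 1.77   # assumed gold / water permittivities at 670 nm

n_eff = float(np.sqrt(eps_m * eps_d / (eps_m + eps_d)))  # SPP effective index
sin_theta = n_eff - wl / Lambda                          # first-order (m = -1)
theta = np.degrees(np.arcsin(sin_theta))
print(f"SPP effective index: {n_eff:.3f}")
print(f"First-order out-coupling angle: {theta:.1f} deg")
```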
FDTD analysis of a silver film with five concentric grooves (period Λ=440 nm) surrounding a nanohole (diameter of 140 nm) showed a field enhancement of |E| 2 /|E 0 | 2 ∼40 that was associated with the focusing of SPPs to the central nanohole supporting LSPs at the wavelength λ=585 nm [33]. Metallic Nanoparticles The plasmonic structure that has arguably become the most investigated in detail is the spherical metallic nanoparticle. If its diameter D is much smaller than the resonant wavelength λ, it supports only a dipole LSP mode with the field intensity decreasing away from the metal as ∼(D/[0.5D+d]) 3 [35]. This formula gives an estimate of the probing depth LSP field that roughly scales with the particle diameter L p ∼D. The excitation of LSPs on a gold spherical nanoparticle immersed in water provides a moderate maximum field intensity enhancement of |E| 2 /|E 0 | 2 ∼18 as calculated for D=20 nm at λ=521 nm. Localized surface plasmon resonance (LSPR) occurs at higher wavelengths on nanoparticles with a thin metallic shell capping a spherical dielectric core (nanoshell particles-see the respective figure in Table 1). The interaction of LSP modes at the inner and outer metal surfaces red shifts the LSPR wavelength and allows reaching higher field intensity strength [36]. For instance, a nanoshell nanoparticle with the outer diameter of D=54 nm and gold layer thickness of 14 nm was shown to enhance the field intensity by a factor of |E| 2 /|E 0 | 2 ∼10 2 at the resonant wavelength λ=617 nm [37]. Nanoparticles with decreased symmetry support multiple LSP modes at different wavelengths. For example, elongated rod metallic nanoparticles support LSP modes with a dipole moment oscillating parallel and perpendicular to the nanoparticle axis [38]. Higher enhancement occurs for the excitation of LSP with the parallel dipole moment which concentrates the field intensity at nanoparticle tips. For instance, a gold rod nanoparticle with a length of 77 nm and a diameter of 28 nm was reported to enhance the field intensity by a factor of |E| 2 /|E 0 | 2 ∼10 2 [39] at the resonant wavelength λ=780 nm. In general, sharper metallic tips allow for more efficient concentrating of the light intensity. For example, gold triangle nanoparticles with a side length of 100 nm and a height of 20 nm were predicted to provide the field intensity enhancement |E| 2 /|E 0 | 2 >10 3 at the resonant wavelength λ=514 nm [40]. However, let us note that such field enhancement strongly decreases with increasing tip curvature and distance d from the metal. Therefore, the field intensity enhancement that can be experimentally achieved at distances d relevant to PEF is typically significantly lower. Metallic Nanoparticle Dimers Individual nanoparticles can serve as building blocks for the design of more complex metallic nanostructures with controlled LSPR properties. Near-field interaction of two spherical metallic nanoparticles brought in close proximity (nanoparticle dimer) leads to an establishment of a new LSP mode with a dipole moment aligned parallel to the dimer axis. This mode strongly confines the field intensity in the gap. For example, the maximum field enhancement of |E| 2 /|E 0 | 2 ∼1.8×10 3 was simulated by FDTD method for a gap LSP mode at a wavelength of λ=633 nm that was supported by gold nanoparticles with a diameter of D=30 nm and gap width of 3 nm [41]. 
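A quick evaluation of the quasi-static decay estimate quoted above, with the field intensity scaling as (D/(0.5·D + d))^3, shows how the probing depth grows with particle diameter; the diameters and distances below are arbitrary illustrative choices.

```python
# Evaluate the (D / (0.5*D + d))^3 decay estimate for a small metallic sphere,
# showing that larger particles (larger D) probe further from the surface.

def relative_intensity(D, d):
    """Field-intensity scaling at distance d (nm) from a sphere of diameter D (nm)."""
    return (D / (0.5 * D + d)) ** 3

for D in (20, 80):
    profile = {d: relative_intensity(D, d) for d in (0, 5, 10, 20)}
    print(f"D = {D} nm:",
          ", ".join(f"d={d} nm -> {v:.2f}" for d, v in profile.items()))
```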
Two end-to-end oriented gold rod nanoparticles were predicted to enhance the field intensity by a higher factor of |E|^2/|E_0|^2 ∼10^4 for a dimer gap width of 1 nm and a resonant wavelength between λ = 700 and 800 nm [38,42]. The employment of triangular nanoparticle dimers with sharp tips oriented towards each other allows for even tighter confinement of the field intensity. This system is referred to as a "bow tie" nanoantenna (see the respective figure in Table 1). A Green's tensor-based model predicted a field enhancement of |E|^2/|E_0|^2 >10^3 for a bow tie nanoparticle with a gap width of a few nanometers and an LSPR wavelength of λ ∼800 nm [42]. The enhancement rapidly drops with increasing gap width. For instance, an enhancement factor of |E|^2/|E_0|^2 = 2-3 × 10^2 was simulated for a gold bow tie nanoparticle with a gap width of ∼20 nm and realistic tip curvature at a similar resonant wavelength [43]. It should be noted that the majority of studies describe idealized nanoparticle geometries, and only recently have simulations appeared that take into account their roughness and shape irregularities [44].

Metallic Nanoparticle Arrays
Periodic arrays of metallic nanoparticles enable enhancing the field intensity through long- and short-distance coupling of the LSPs supported by individual nanoparticles [25]. For distances between nanoparticles that are close to the wavelength of incident light, long-distance (diffraction) interaction dominates, and it is typically manifested as narrowing of the LSPR absorption band [45]. For short distances that are comparable with the decay length of LSPs L_p, near-field interaction of LSPs builds up, which is accompanied by a shift of the LSPR wavelengths and an altered field intensity profile in the vicinity of the nanoparticles. For near-field interaction with gap widths between plasmonic nanoparticles of >10 nm, typically only moderate enhancement occurs. For instance, |E|^2/|E_0|^2 ∼10 was reported for dense square arrays of gold disk nanoparticles [46,47] at wavelengths of 530-630 nm. Similarly, an inverse structure of densely packed nondiffractive arrays of nanoholes yields intensity enhancements of |E|^2/|E_0|^2 ∼16 at a wavelength of λ = 600 nm [46]. The LSP field strength can be increased by using arrays of sharp nanoparticles such as nanotriangles that are arranged in a structure that resembles a bow tie nanoantenna. For instance, a field enhancement of |E|^2/|E_0|^2 ∼10^2 at λ = 780 nm has been simulated for closely packed arrays of silver triangle nanoparticles by FDTD [48]. Diffractive coupling between metallic nanoparticles provides an alternative mechanism to achieve larger LSP field intensity enhancements. Such interaction gives rise to collective (lattice) localized surface plasmons (cLSPs), and it should be noted that this type of interaction is particularly strong for a symmetrical geometry (i.e., the refractive index above and below the arrays is the same) [49]. It originates from phase matching of LSPs at wavelengths that coincide with the LSPR band of individual nanoparticles. With respect to regular LSPs, collective localized surface plasmons trap light at a surface more efficiently and exhibit decreased radiative damping, which consequently leads to strong enhancements [45,[50][51][52]. FDTD simulations of cLSP arrays of gold disk nanoparticles showed more than tenfold increased field strength compared to identical individual LSP nanoparticles [53].
The same work predicted an enhancement of |E|^2/|E_0|^2 = 2×10^2 for cLSPs at wavelengths λ = 630-670 nm and a relatively large distance of d = 20 nm.

Plasmon-Enhanced Fluorescence
Even though early investigations on surface plasmon-mediated fluorescence date several decades back [54,55], we have recently witnessed a rapidly increasing number of studies on this phenomenon, performed on ensembles of fluorophores and more recently also on individual fluorophores [14,56]. These efforts resulted in the development of plasmonic structures that enhance the fluorescence intensity by over three orders of magnitude (EF > 10^3) [43,57]. This section is devoted to deconvoluting the key factors at play in efficient PEF. We particularly focus on the choice of a metallic nanostructure that determines the strength of the surface plasmon field E, the spectral overlap of surface plasmon resonances with fluorophore excitation and/or emission wavelengths, the orientation and intrinsic quantum yield of fluorophores, and methods for extracting surface plasmon-coupled emission from a surface to the far field. A comparison of PEF performance characteristics for selected plasmonic structures is presented in Table 2. Let us note that, unless stated otherwise, the discussed studies were performed with ensembles of dye molecules randomly attached to the top of a spacer layer that controls the distance d from the metal.

Flat Continuous Metallic Films
SPPs on continuous metallic surfaces were mostly used for enhancing the excitation field strength at λ_ab and for exploiting the surface plasmon-driven emission at λ_em. A fluorescence signal increase of EF = 32 was measured for the excitation of the high quantum yield rhodamine-6G dye (η_0 = 0.95, λ_ab = 530 nm, λ_em = 550 nm) via SPPs at a distance of about d ∼10 nm [58]. In this work, the Kretschmann configuration with a thin silver film was used to generate SPPs at a wavelength of 543 nm. A similar value was obtained for the medium quantum yield Cy5 dye (η_0 = 0.28, λ_ab = 640 nm, λ_em = 670 nm) that was probed by SPPs on a gold surface at a higher wavelength of λ = 633 nm [11]. Layer structures that support LRSPPs allow a further increase of the excitation strength at λ_ab owing to the smaller damping of these modes and the associated stronger field intensity [59]. For Cy5 dyes attached to a gold surface at a distance of d = 15-20 nm, an additional two- to threefold increase of fluorescence intensity was reported compared to that for regular SPPs [60,61]. These values are lower than the field intensity enhancement predicted in Fig. 5, which is mostly caused by morphology changes of very thin metal films deposited on a low refractive index fluoropolymer (e.g., Teflon or Cytop with low surface energy are used to generate the symmetrical refractive index structure) [61]. As the probing depth L_p of LRSPPs can reach up to several microns, it allows for order-of-magnitude higher fluorescence signals for architectures where fluorophores are dispersed in an extended 3D matrix rather than attached on a surface or embedded in a thin dielectric film [60]. As Fig. 2a shows, SPPs can efficiently collect fluorescence light (more than 50 % of photons) emitted at the emission wavelength λ_em from close proximity to a metallic surface. Figures 1a and 3a illustrate that the surface plasmon-coupled fluorescence emission (SPCE) can tunnel through a thin metal film into a dielectric substrate, where the emitted light forms a highly directional characteristic cone propagating into the far field.
This type of emission at λ_em can be combined with the excitation via SPPs at λ_ab, which occurs at a slightly different angle [62]. In order to collect the SPCE signal, which is isotropic in the azimuthal angle φ, elements such as a hemispherical prism [62], a dielectric paraboloid element [63], and a concentric diffraction grating [64] were developed (see Fig. 6). The use of LRSPPs to collect fluorescence light is less efficient than regular SPPs (owing to the weaker field confinement) but offers the advantage of a narrower angular distribution and higher peak intensity of SPCE [65]. In addition, let us note that SPCE can be canceled by designing the SPP dispersion relation so that a bandgap occurs at wavelengths close to λ_em [66].

Periodically Corrugated Continuous Metallic Films
Diffraction on periodically corrugated metallic surfaces provides an alternative means for simultaneous SPP-enhanced excitation at λ_ab and extraction of SPP-driven emission of fluorescence light at λ_em [55,67]. For example, this combined approach allowed for the enhancement of the fluorescence signal by factors of EF = 40 and 10^2 for 1D and crossed 2D gratings, respectively [68]. These results were obtained for a medium quantum yield Cy5 dye immobilized on a gold grating with a modulation period of Λ = 400 nm, a depth of 20-25 nm, and a 20-nm-thick SiO2 spacer layer preventing quenching [68]. A metallic circular grating (so-called bull's eye) with a nanohole in its center was employed for the amplification of the fluorescence signal emitted by dyes that diffused in the nanohole cavity [34,69] (see the respective figure in Table 1). Compared to regular gratings, a larger enhancement factor of EF = 1.2×10^2 was reported for medium quantum yield Alexa Fluor 647 (η_0 = 0.33, λ_ab ∼650 nm, and λ_em ∼665 nm) relative to a reference flat gold film structure [34]. This amplification strategy took advantage of surface plasmon coupling at both the λ_ab and λ_em wavelengths. Figure 7 illustrates how the design of the periodic concentric grating allowed controlling the directionality of SPP-driven fluorescence emission by changing the phase of SPP modes scattered on the concentric grooves. (Fig. 7 caption, partial: concentric grating (see the respective figure in Table 1) with varied offset a between the first groove and the aperture center; reproduced with permission from [69].)

Metallic Islands and Nanoclusters
Fluorescence enhancement on substrates with metallic islands and nanoclusters has been the subject of research since the 1980s [70]. This approach offers the advantage of a relatively simple preparation procedure and provides moderate enhancement factors through the combined effect of the LSP field-enhanced fluorescence excitation rate γ_e and the increased quantum yield η. For instance, silver islands with sizes between 20 and 80 nm enhanced the fluorescence signal from adsorbed bovine serum albumin protein conjugated with Texas Red dye (η_0 = 0.2, λ_ab = 590 nm, and λ_em = 615 nm) by a factor of EF = 8-16 [71]. These structures exhibited a broad LSPR absorption band centered at a wavelength of λ ∼450 nm, which was below the dye excitation and emission bands. Annealing a thin stack of silver and gold films with varied thicknesses allowed for tuning the LSPR wavelength of bi-metal nanoclusters between λ = 450 and 550 nm [72]. These structures were coated with an amorphous silicon-carbon alloy, which simultaneously served as a protection and spacer layer. The obtained results showed that the enhancement increases when the LSPR wavelength is tuned towards the λ_ab and λ_em of the Cy5 dye used, and a maximum value of EF = 35 was achieved.
Chemically Synthesized Metallic Nanoparticles
A chemically synthesized spherical metallic nanoparticle was attached to a sharp glass tip and brought close to individual dyes on a glass substrate [14]. This arrangement allowed for precise control of the distance d between the nanoparticle and the fluorophore. The obtained results revealed an optimum distance of around d ∼10 nm for the high quantum yield Alexa Fluor 488 dye (η_0 = 0.92, λ_ab ∼495 nm, λ_em ∼519 nm) and a silver nanoparticle with a diameter of 80 nm. At this distance, the fluorescence intensity emitted into the glass substrate was enhanced by a factor of EF = 13-15 when the dye was excited via LSPs at a wavelength of 488 nm [14]. The same work reported a similar enhancement of EF = 8-9 for the medium quantum yield Nile Blue dye (η_0 = 0.27, λ_ab ∼627 nm, λ_em ∼630 nm) and a gold spherical nanoparticle supporting LSPR at a longer wavelength of λ = 637 nm. Gold nanoshell particles can be used for fluorescence enhancement in the near infrared (NIR) spectral range. Nanoshell particles with a 15-nm-thick gold capping layer, an outer diameter of 78 nm, and an LSPR wavelength of λ ∼800 nm were decorated with human serum albumin conjugated with the low quantum yield IR800 dye (η_0 = 0.07, λ_ab = 745 nm, λ_em = 795 nm) [73]. The measured fluorescence intensity (emitted per attached dye) was enhanced by a factor of EF = 40 with respect to that for an identically labeled protein in solution. The same study showed that a gold rod nanoparticle with an LSPR wavelength of λ ∼800 nm enhanced the fluorescence signal by a lower factor of EF = 9. This enhancement was increased when the transversal and longitudinal LSPRs were engineered to spectrally overlap with the fluorophore absorption λ_ab and emission λ_em wavelengths. An enhancement of EF = 20.8 was obtained for Oxazine-725 dye on gold rod nanoparticles with the transversal and longitudinal LSP modes tuned to wavelengths of 532 and 720 nm, respectively [74]. A significantly stronger enhancement of EF = 1.7×10^2 was observed for fluorophore molecules exposed to the more tightly confined field in gaps between plasmonic nanoparticles [75]. This approach was studied by using aggregates of spherical silver nanoparticles with a diameter of 37 nm and trapped medium quantum yield Atto-655 dyes (η_0 = 0.3, λ_ab = 663 nm, λ_em = 684 nm). An even larger enhancement factor of EF ∼1.1×10^3 was reported for a perylene diimide dye dispersed in a 2-3-nm spacer layer between a silver nanoparticle (diameter of 80 nm) and a flat silver surface supporting a confined gap LSP mode [76]. However, it should be noted that such a large EF value was partially obtained due to the fact that the reference measurement was performed for a dye at a very small distance of d = 2-3 nm from a silver surface (which leads to strong quenching; see Fig. 2b).

Metallic Nanostructure Arrays Prepared by Lithography
Modern lithography provides powerful fabrication tools for the preparation of metallic nanostructures that can be tailored for very efficient PEF studies on individual fluorophore molecules. A fluorescence enhancement of EF = 1.3×10^3 was reported for bow tie nanoparticles (see the respective figure in Table 1) and a low quantum yield TPQDI dye (η_0 = 0.025, λ_ab ∼790 nm, λ_em = 850 nm) [43]. These structures were prepared by electron beam lithography (EBL), and it is important to note that such a high EF was observed for individual molecules positioned in an approximately 30-nm-wide gap between the sharp nanoparticle tips.
A similar enhancement factor of EF = 1.1×10^3 was obtained for a low quantum yield Alexa Fluor dye (reduced to η_0 = 0.08 by a quencher) positioned in the gap between two half-cylindrical nanoparticles fabricated by focused ion beam milling (FIB) [57]. Even more complex metallic nanoparticle geometries, such as those resembling Yagi-Uda nanoantennas, were prepared by EBL for fluorescence measurements on single emitters including quantum dots [77] or dyes [18]. Nanoimprint lithography was used for the preparation of dense arrays of hierarchical structures comprising gold disk nanoparticles with a diameter of around 100 nm above a metallic backplane. Inside the narrow gap between the disk nanoparticles and the backplane, additional gold clusters with a diameter of 5-25 nm were formed [78]. This system exhibited a broad LSPR resonance centered at around 800 nm, and it was reported to allow for a large EF of fluorescence light emitted from the low quantum yield infrared dye indocyanine green (ICG, η_0 = 0.012, λ_ab = 783 nm, λ_em ∼850 nm). An enhancement factor of 1.1×10^3 was measured for an ensemble of dyes randomly attached at a distance of d = 5 nm from the gold surface. In addition, experiments on individual dyes indicated an enormous maximum enhancement of EF = 4.5×10^6. Periodic arrays of metallic nanoparticles with weakly interacting dipole LSPs were used in numerous investigations with ensembles of fluorophores. Rectangular arrays of silver disk nanoparticles with a diameter of 120 nm and a height of 27 nm were prepared by nanoimprint lithography (NIL) and showed a maximum enhancement factor of EF = 15.8 per attached low quantum yield Cy3 dye (η_0 = 0.04, λ_ab = 550 nm, and λ_em = 570 nm) [79]. The period of the structure was adjusted to Λ = 200 nm in order to match the LSPR wavelength to that of the focused excitation beam (λ = 543 nm) and the Cy3 dye absorption wavelength λ_ab. Similar fluorescence enhancement was reported for arrays of gold nanodisks and CdSe-ZnS quantum dots (λ_em ∼600 nm), which were excited by the broad wavelength spectrum of a mercury lamp [80]. Quantum dots were selectively attached to the gold nanoparticles with a spacer film of varied thickness that was prepared by successive coating with 1,4-dibiotinylbutane and streptavidin layers. The highest (area-compensated) enhancement was achieved for the structure with a spacer layer thickness of d = 16 nm, a disk diameter of 100 nm, and a period of Λ = 200 nm, which supported LSPs at λ = 580 nm. Dense arrays of silver nanotriangles produced by colloidal lithography revealed a fluorescence amplification of EF = 83 for Alexa Fluor 790 (η_0 = 0.04, λ_ab = 782 nm, and λ_em = 804 nm) [48]. Complementary structures with metallic films perforated by nanohole arrays were utilized for PEF that takes advantage of the interplay between SPP and nanohole LSP modes. For instance, EBL-fabricated nanohole arrays with a diameter of 100 nm and periods between Λ = 350 and 650 nm were reported to enhance the fluorescence signal by a factor of EF = 82 for Oxazine 720 (η_0 = 0.6, λ_ab = 620 nm, λ_em = 650 nm [81]) and an excitation wavelength of λ = 633 nm [82]. This enhancement was achieved for a period of Λ = 553 nm, which allowed simultaneous excitation and emission of fluorescence light by LSPs supported by the nanoholes and diffraction-coupled SPPs. Similar enhancements of EF = 1.1×10^2 have been reported for Cy5 on a silver nanohole array protected by a 20-nm SiO2 spacer film [68].
In summary, the coupling of fluorophores with intense fields of surface plasmons can amplify emitted fluorescence light intensity by several orders of magnitude. The highest enhancements were demonstrated in measurements with single fluorophores that were placed into plasmonic hotspots. In combination with low quantum yield dyes, several groups reported the fluorescence enhancement >10 3 for such configurations. Interface Architectures In order to exploit the amplification of fluorescence signal in detection assays, surfaces of metallic nanostructures have to carry biomolecular recognition elements (BREs) that can specifically capture the target analyte from a liquid sample. As such surface chemistries were already subject to thorough reviews [83,84], this section provides only a brief overview of commonly used building blocks. Rather, we focus on biointerfacial systems that were adopted for selective (local) attachment of biomolecules at plasmonic hotspots. We discuss some key implications for the sensitivity of fluorescence biosensors which utilize such structures. In particular, it is important to note that the local functionalization of plasmonic hotspots is on the one hand favorable as it assures high fluorescence signal associated with a binding event, but on the other hand it leads to lower average density of BREs on the sensor surface and potentially to smaller probability of analyte capture. These two effects may act counter each other and thus hinder the sensitivity of PEF biosensor technologies. Functionalization Building Blocks In biosensor applications, self-assembled monolayers (SAMs) represent a popular class of materials used for tailoring properties of interfaces between a transducer and liquid sample [85][86][87]. Alkanethiol SAMs offer a powerful toolbox for reliable attachment of biomolecules to noble metal surfaces via amine coupling, his-tag, and biotin-streptavidin interaction. Silanes-based chemistries are preferably used for the functionalization of oxide layers [72,88] which often serve as a protection layer and a spacer film for the control of a distance between a fluorophore and metal d. S-layer protein SAMs were employed for the modification of surfaces of plasmonic biosensors, and specific fusion proteins carrying functional groups that react with biotin tags [89] or immunoglobulin (IgG) Fc regions [90,91] were developed. Another important route for the functionalization of metallic surfaces utilizes synthetic or natural polymers. When attached to the metal surface, they can provide an open 3D structure that accommodates larger amounts of biomolecules than a 2D system relying on SAMs. For instance, poly(N-isopropylacryamide) [61] and dextranbased [92] cross-linked polymer networks and dextran-based brushes [93] were successfully utilized in PEF biosensors that took advantage of high-binding capacity matrices (see Fig. 8a). In order to control the distance between fluorophores and a metallic surface d, layer-by-layer deposition of polymer spacer layers was commonly used [4,94]. Local Functionalization Precise attachment of BREs to areas where electromagnetic field is confined (plasmonic hotspots) is crucial in order to harness the large fluorescence signal amplification enabled by PEF on metallic nanostructures. The reason is that only those molecular binding events occurring in plasmonic hotspots contribute to a strongly amplified fluorescence signal while the binding taking place outside plasmonic hotspots does not. 
EBL was proposed for the selective functionalization of gold nanorod arrays by using a PMMA mask with clearance windows for selective access to the nanorod tips [95]. Another, potentially simpler, approach based on material-selective surface modification was reported for arrays of metallic nanoholes [96]. In this work, colloidal lithography was used to etch nanoholes through a stack of TiO2-Au-TiO2 films (see Fig. 8b). The gold nanohole walls were modified with thiol-PEG carrying a biotin terminal group, while the TiO2 oxide surface was passivated by poly(L-lysine)-graft-PEG (PLL-g-PEG). On this structure, selective binding of neutravidin to the gold nanohole walls was observed with LSPR. Near-field lithography was suggested for the selective attachment of molecules close to plasmonic nanoparticle hotspots by using a polysiloxane layer containing a nitroveratrylcarbonyl (NVoc) group [97]. The excitation of LSPs at a wavelength of 780 nm by a pulsed laser beam locally amplified two-photon absorption of NVoc, which leads to its cleavage. This approach was envisaged to open new ways for preparing nanoscale windows around the metallic particles for subsequent selective modification with proteins or synthetic functional polymers. Selective functionalization of gold nanorods prepared by wet chemical synthesis is possible due to the high crystallinity of such nanoparticles [98]. For instance, cetyltrimethylammonium bromide (CTAB), which is used to stabilize gold nanorod particles, preferentially binds to the {100} faces of the nanorods, leaving the {111} nanorod tips available for the attachment of other moieties such as biotin disulfide. A similar approach was employed in PEF studies for the covalent linkage of fluorophores at the preferred longitudinal axis of the gold nanorods [99].

Affinity Binding at Plasmonic Hotspots
In biosensors, the measured sensor signal is calibrated against the concentration of target analyte in an analyzed sample, c_α. For fluorescence-based heterogeneous assays, the measured fluorescence signal F is proportional to the product of the enhancement factor EF and the number of specifically captured molecules on a sensing spot. The relation between the number of captured molecules and the concentration of analyte in a sample c_α depends on a range of parameters, including the means of analyte transfer from the sample to the surface, the density of biomolecular recognition elements c_β, the dissociation affinity binding constant K_d, and the reaction time. By using the Langmuir isotherm, one can show that the fluorescence signal can be described by the following equation:

F ∝ EF (ξ S c_β V c_α) / (K_d V + ξ S c_β). (5)

This equation holds for analyte concentrations much smaller than the dissociation constant, c_α ≪ K_d, and for a surface reaction in equilibrium. V denotes the volume of the analyzed sample with an analyte concentration of c_α, S is the surface area of a sensing spot, and ξ is the fraction of this area that is occupied by plasmonic hotspots and functionalized by BREs with a surface density of c_β. For large sample volumes V and small functionalized surface areas S, the sensor response is proportional to the term ∼EF ξ S c_β c_α.

Fig. 8 a Example of a three-dimensional high binding capacity matrix utilizing a cross-linked polymer network (reproduced with permission from [92]) and b local modification of the inner walls of cylindrical metallic nanoholes with a two-dimensional SAM by using material-selective local chemistries (reproduced with permission from [96]).
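The worked antibody example discussed in the next passage can be checked numerically. The sketch below uses the form of Eq. (5) reconstructed above from its two limiting cases, so it should be read as a consistency check under that assumption rather than as the authors' own derivation.

```python
# Consistency check of the IgG-monolayer example: the crossover Kd at which
# the two terms in the denominator of Eq. (5) are equal, i.e. xi*S*c_beta = Kd*V.

c_beta = 2.5e-14   # mol/mm^2, fully packed IgG monolayer
S = 1.0            # mm^2 sensing spot
xi = 0.1           # fraction of the spot occupied by plasmonic hotspots
V = 10e-6          # L (10 uL sample)

hotspot_capacity = xi * S * c_beta      # mol of binding sites in hotspots
crossover_Kd = hotspot_capacity / V     # Kd at which both regimes balance
print(f"Hotspot binding capacity: {hotspot_capacity:.2e} mol")
print(f"Crossover Kd: {crossover_Kd:.2e} M")  # ~2.5e-10 M, i.e. 0.25 nM
```

The printed crossover of ~0.25 nM matches the K_d = 0.25×10^-9 M quoted in the example below, supporting the reconstructed form of Eq. (5).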
In this limit, the effect of strong PEF amplification at sparsely distributed hotspots will not provide substantially improved sensitivity. The reason is that large EF factors are typically associated with strong electromagnetic field confinement, which occurs for a low density of hotspots ξ. However, Eq. (5) indicates that PEF on locally functionalized hotspots would be highly favorable for the analysis of small sample volumes V when high affinity BREs are used. For ξ c_β S ≫ K_d V, Eq. (5) yields F ∝ EF V c_α, which corresponds to the situation in which virtually all present molecules are captured at plasmonic hotspots and contribute to the amplified fluorescence signal. For example, let us assume that IgG antibodies serve as the biomolecular recognition elements and that they are immobilized on the hotspot surface with the fully packed monolayer density of c_β ∼2.5×10^-14 mol/mm^2. The sensing spot area is S = 1 mm^2, and plasmonic hotspots occupy 10 % of it (ξ = 0.1). Then, the above condition is fulfilled when a sample volume of 10 μL is analyzed and the dissociation constant of the BREs is better than K_d = 0.25×10^-9 M. Even though most antibodies in use exhibit K_d in the nanomolar range, numerous antibodies with K_d as low as 10^-12 M have become available [100,101], which opens room for highly sensitive immunoassay detection schemes with spatially confined plasmonic hotspots. It should be noted that another important parameter is the time needed for collecting the analyte on a surface. In order to speed up this process, PEF biosensors can be combined with ultrasound sonication [102] or microwave heating [103], or they can rely on microfluidic devices [104].

Biosensor Applications
Over the last years, we have witnessed numerous implementations of PEF into already established laboratory technologies such as fluorescence microarray scanners, fluorescence microscopes, and microtiter plate readers, as well as the development of entirely new compact devices that utilize this amplification scheme. PEF nanostructures were mostly combined with immunoassays, which offer the advantage of commercial availability of antibodies against a large variety of analytes (see Table 3). In order to avoid direct labeling of the target analyte, sandwich [105,106] or competitive [105] assay formats with a detection antibody conjugated to a fluorophore are typically used (see Fig. 8a). The first biosensor implementation of PEF was reported in the beginning of the 1990s [58], and a decade later, it was reintroduced in the form of a method named surface plasmon-enhanced fluorescence spectroscopy (SPFS) [104]. (Table 3 Overview of PEF biosensors for the detection of chemical and biological compounds, with information on analyzed matrix, limit of detection, analysis time, and assay format [134].) This approach takes advantage of the enhancement of the fluorescence signal by probing the metal sensor surface with SPPs that are resonantly excited at the absorption wavelength λ_ab of the fluorophore labels used. Another configuration utilizing SPPs for collecting fluorescence light at the fluorophore emission wavelength λ_em was developed based on surface plasmon-coupled emission (SPCE) [62]. The most common implementations of the SPFS method utilize an optical setup with angular interrogation of SPR and an additional module for the collection and detection of the emitted fluorescence intensity (see Fig. 9a). In this scheme, analyzed samples are flowed over the sensor chip with an SPR-active layer modified by biomolecular recognition elements.
The capture of the target analyte occurring on the sensor surface can be observed by combined SPR and fluorescence measurements, with the fluorescence light emitted through the sample above the metal surface (see Fig. 9b). Both the SPR and fluorescence signals can be monitored in real time, which allows for advanced biomolecular interaction analysis (BIA) studies [88,93,104]. SPFS was shown to detect molecular analytes such as immunoglobulin G (IgG) at a concentration as low as 0.5 fM [4]. SPCE is implemented by using a Kretschmann configuration similar to that of SPFS, but the intensity of fluorescence light emitted into the substrate below the metal film is measured. An SPCE detection format with a disposable biochip carrying arrays of embossed paraboloid elements was reported [106] (see Fig. 6). By utilizing SPP-driven excitation and emission of fluorescence light on a thin metallic film deposited on top of such elements, an IgG assay with a limit of detection as low as 1 pg/ml (6 fM) was demonstrated. Diffraction gratings supporting SPPs [88] and substrates with metallic nanoparticles exhibiting LSPR [72] were applied for amplified fluorescence measurements performed with commercially available fluorescence microscopes and microarray scanners. Typically, an end-point fluorescence signal is measured after the reaction of the analyte with the BREs on the surface. In conjunction with commercially available fluorescence scanners, limits of detection between femtomolar and picomolar concentrations were most often reported [72,88]. So far, the best limit of detection of 0.3 fM was achieved for direct detection of IRDye-800cw dye-labeled IgG molecules on a dense grating combining NIL-prepared metallic gaps and random metallic clusters [107]. In general, sensor chips with metallic nanostructures that can be fabricated by mass production-compatible technologies (such as colloidal lithography, NIL, annealing of thin films, or wet chemical synthesis) are better suited for practical PEF biosensors than techniques that require slow and expensive nanofabrication tools (EBL or FIB).

Detection of Biomarkers
Prostate-specific antigen (PSA) is an established biomarker for the diagnosis of prostate cancer, and new technologies for its analysis at concentrations below picomolar are expected to provide a valuable tool for point-of-care (POC) diagnosis of female breast cancer [108], early identification of prostate cancer relapse [109], and forensic applications [110]. A biosensor for detection of free prostate-specific antigen (f-PSA) using long-range surface plasmon-enhanced fluorescence spectroscopy (LRSP-FS) and a photo-cross-linked carboxymethylated dextran hydrogel matrix (shown in Fig. 8a) was reported [111]. As shown by the fluorescence kinetics in Fig. 10a, the analyzed sample was first flowed over the sensor surface functionalized with capture antibodies, followed by the binding of fluorophore-labeled detection antibodies. The in situ measured increase of the fluorescence signal was proportional to the amount of captured analyte. The sensor allowed the detection of f-PSA in buffer and human serum with limits of detection (LOD) of 34 fM and 0.33 pM, respectively, in 35 min. This LOD was about four orders of magnitude better than that for SPR-based detection, as can be seen from the calibration curves presented in Fig. 10b.

Fig. 9 a Optical setup of surface plasmon-enhanced fluorescence spectroscopy (SPFS) utilizing angular modulation of SPR, with an example of a sensor chip supporting LRSPPs and an E. coli O157:H7 sandwich immunoassay format. b Fluorescence signal measured upon changing the angle of incidence of the excitation laser beam in the vicinity of the resonance, after binding of the target analyte (E. coli O157:H7) and reaction with a dye-labeled detection antibody on the surface (reproduced with permission from [119]).

Metallic nanoparticle-enhanced fluorescence assays were developed for the analysis of PSA in female serum in order to perform diagnosis of breast cancer [111]. A sandwich assay format was used for the detection of the concentration ratio of f-PSA and PSA conjugated with α-1-anti-chymotrypsin (PSA-ACT) in diluted female serum from healthy individuals and patients with breast cancer. The limit of detection of f-PSA in PBS buffer and diluted female serum was 0.4 pg/ml (12 fM) and 1.8 pg/ml (52 fM), respectively, and the analysis required 2 h. Another cancer biomarker, C-reactive protein, was detected by SPFS with a sandwich immunoassay. A limit of detection of 26 ng/ml (248 pM) in human serum diluted 1:20 was reported for 30 min of analysis time [112]. PEF detection of the pancreatic cancer biomarker UL16-binding protein 2 was implemented in microtiter plate arrays by using a sandwich immunoassay [113]. In this work, the detection antibody was attached to a 25-nm gold nanoparticle and labeled with Atto633 dyes in order to increase the fluorescence signal associated with the binding event. A limit of detection of 18 pg/ml (0.75 pM) in 1:10 diluted human serum and a detection time of about 4.3 h were reported. A fluorescence immunoassay for the analysis of human tumor necrosis factor alpha (TNF-α, an immune-modulator agent) was performed by phase-modulation fluorometry amplified by substrates with silver islands [114]. First, the analyzed sample was incubated with a detection antibody labeled with the dye DY488. Afterwards, the mixture was brought into contact with capture antibodies attached to the silver islands, and the fluorescence signal that accompanied the affinity binding was measured. An LOD of 3 pM was reported with a detection time of 2 h. A similar approach was adopted for the detection of troponin I (TnI), which is used as a biomarker of myocardial damage [115]. In this work, the sensor chip with silver nanoparticles was sequentially modified with protein A and a capture IgG antibody against TnI, and blocked with bovine serum albumin (BSA). Then, the buffer or whole blood sample with TnI was incubated with a fluorophore-labeled detection antibody and reacted with the sensor surface. TnI detection in buffer was performed with and without 3-min microwave heating, which provided LODs of 5 pg/ml (0.22 pM) and 0.1 ng/ml (4.3 pM), respectively. For whole blood samples, an LOD of 50 pg/ml (2.2 pM) was obtained when microwave heating was applied. In another example of PEF implementation in microtiter plates, human IgG was detected by using sandwich immunoassays, which for a 1-h incubation time provided an LOD of 0.086 ng/ml (0.57 pM) [116].
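Several of the LOD figures above pair a mass concentration with a molarity; the small helper below reproduces two of those conversions. The molecular weights (~33 kDa for f-PSA, ~150 kDa for IgG) are rounded literature values assumed for illustration.

```python
# Convert the pg/mL limits of detection quoted above into molar units,
# using assumed rounded molecular weights.

def mass_to_molar(pg_per_ml, mw_g_per_mol):
    """Convert a mass concentration in pg/mL to mol/L."""
    return (pg_per_ml * 1e-12) / mw_g_per_mol * 1e3   # g/mL -> mol/L

examples = {
    "f-PSA (~33 kDa), 0.4 pg/mL": mass_to_molar(0.4, 33e3),   # ~12 fM
    "IgG (~150 kDa), 1 pg/mL":    mass_to_molar(1.0, 150e3),  # ~6.7 fM
}
for label, molar in examples.items():
    print(f"{label}: {molar * 1e15:.1f} fM")
```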
For the immunoassay detection of whole bacteria, PEF amplification based on tightly confined LSP or SPP fields is not possible due to the large (around a micrometer) size of this type of analyte. Therefore, the SPFS detection principle was combined with the excitation of LRSPPs, which exhibit a large penetration depth Lp (see Fig. 9a). This fluorescence readout principle was applied to an E. coli O157:H7 sandwich immunoassay and provided a limit of detection as low as 6 colony-forming units (cfu)/ml [119]. The assay was highly specific and required 20 min. Moreover, LRSPP-enhanced fluorescence spectroscopy was adopted for the detection of aflatoxin M1 (AFM1) [105]. This harmful low molecular weight analyte is a metabolite of the mycotoxin aflatoxin B1, produced mainly by the Aspergillus flavus and Aspergillus parasiticus pathogens. The gold sensor surface was functionalized with a conjugate of AFM1 and BSA for an inhibition competitive immunoassay. A monoclonal rat antibody against AFM1 was incubated with a sample containing AFM1, and the unreacted antibody was flowed over the sensor surface and detected by amplified fluorescence spectroscopy. The limit of detection of AFM1 present in milk was determined to be 0.6 pg/ml (1.8 pM), and the analysis time was 53 min. An assay for the severe acute respiratory syndrome (SARS) coronavirus (SARS-CoV) nucleocapsid (GST-N) protein was developed with a localized surface plasmon-coupled fluorescence fiber-optic readout [120]. The sandwich immunoassays enabled the analysis of recombinant SARS-CoV N protein in buffer at concentrations as low as 0.1 pg/ml. A similar biosensor platform was employed for the detection of swine-origin influenza A (H1N1) viruses (S-OIV) with a detection limit of 13.9 pg/ml [121].

Conclusions

PEF has pushed forward the sensitivity and shortened the analysis time of assays for the detection of important analytes including biomarkers, pathogens, and toxins. These compounds were detected at low femtomolar concentrations, and the analysis often required only several minutes. We have witnessed numerous implementations of this amplification scheme in novel biochips that are compatible with existing microscopy and microarray technologies, as well as in entirely new biosensor devices. Up to now, PEF biosensors have mostly taken advantage of metallic nanostructures providing amplification of the fluorescence intensity by factors below 10^2. However, current advances in plasmonics have paved the way towards much stronger amplification, which can reach factors above 10^3. In order to harness such fluorescence enhancement in practical biosensor technologies, these efforts need to be complemented by the development of new methods for precise and cost-effective fabrication of metallic nanostructures and their selective functionalization in plasmonic hotspots. This review article has addressed these challenges and discussed possible future directions in this rapidly developing biosensor field, which aims to impact important areas of point-of-care medical diagnostics, food control, and safety.
An Overview of Mixture Models

With the advancement of statistical theory and computing power, data sets are providing a greater amount of insight into the problems of today. Statisticians have an ever-increasing number of tools to attack these problems, some of which can be implemented in the area of mixture modeling. There is a great deal of literature on mixture models, and this work attempts to provide a general overview of the subject, including a discussion of relevant issues and algorithms. The reader can hope to gain a broad understanding of concepts in mixture modeling and to find the references cited within a valuable resource for the next stage of their research.

Introduction

Mixture models arise naturally when a population is composed of several subpopulations mixed in unknown proportions. For example, consider inferring the strategies young children employ when presented with a cognitive task. The children's performance on the task may be modeled using finite mixtures with the components pertaining to the different strategies (Thomas and Horton (1997)). Or consider attempting to classify a group of individuals by the way they each speak the same word, a challenging problem due to context dependencies (i.e., different points in time, gender, etc.) in speech recognition. Here, a finite mixture may be used with the components pertaining to different 'vowel classes' of spoken words (Peng et al. (1996)). Or finally, consider assessing the service quality of banking institutions. The institutions may be modeled using finite mixtures with components pertaining to different market segments (Wedel and DeSarbo (1994)).

Studies such as those mentioned above have the roots of their analyses grounded in a seminal work by Pearson (1894). In that article, Pearson was one of the first to incorporate the use of mixture models as well as to note some of the issues surrounding them, in particular estimation and identifiability. These issues are still prominent in today's mixture research, and they will be addressed in this work. Since the time of the Pearson (1894) article, a great deal of literature has emerged in many disciplines regarding mixture models. In addition to the numerous technical and cross-disciplinary articles on mixture modeling, monographs on the subject include Everitt and Hand (1981), Titterington et al. (1985), Lindsay (1995), and McLachlan and Peel (2000). This article addresses many of the issues presented in these monographs, as well as current work, but as a high-level overview.

A General Mixture Model

Suppose we have n subjects on which we take a series of m measurements, say $Y_i = (Y_{i,1}, \ldots, Y_{i,m})^T$ on the i-th subject for $i = 1, \ldots, n$. Furthermore, take $y_1, \ldots, y_n$ as realized values of the $Y_i$'s, which are independent and identically distributed (iid) according to a distribution F. In this scenario, standard multivariate techniques (Johnson and Wichern (2002) and Anderson (2003)) can be employed to estimate the common population mean vector, $\mu$, and the population variance-covariance matrix, $\Sigma$.

Suppose, in addition to the above scenario, there is an assumed heterogeneity with respect to the response tendencies of the subjects. One way to account for this heterogeneity is by suggesting k different classes to which the subjects could essentially belong. Assuming fixed k, the distribution of the $Y_i$'s has the k-component mixture density

$$f_k(y_i; \psi) = \sum_{j=1}^{k} \lambda_j g_j(y_i; \theta_j), \qquad (2.1)$$

where $\lambda_j > 0$ and $\sum_{j=1}^{k} \lambda_j = 1$ are the weights (or mixing proportions) for the components of the model. (The subscript for f will be suppressed except to stress the dependence of f on k.) Furthermore, define

$$\Lambda_{k-1} = \left\{ (\lambda_1, \lambda_2, \ldots, \lambda_{k-1}) : \lambda_j > 0, \; \sum_{j=1}^{k-1} \lambda_j < 1 \right\},$$

where $\lambda_k$ has been arbitrarily omitted since $\lambda_k = 1 - \sum_{j=1}^{k-1} \lambda_j$. The $g_j$'s are known component densities, parameterized by $\theta_j \in \Theta_j \subseteq \mathbb{R}^{q_j}$, where $\Theta_j$ represents the specified parameter space for the $\theta_j$'s. The mixture density f is parameterized by $\psi \in \Psi$, where $\Psi$ represents the specified parameter space for all unknown parameters in the mixture model; note that $\Psi \subset \mathbb{R}^r$ with $r = (\sum_{j=1}^{k} q_j) + k - 1$. We will take F as the corresponding k-component mixture distribution whose components are composed of the distributions $G_j$. For the scenarios presented in this work, the $G_j$ differ only in $\theta_j$; thus we will take $g_j \equiv g$ and $q_j \equiv q$, which yields $\Psi = \Theta^k \times \Lambda_{k-1}$ and $r = kq + k - 1$.
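To make (2.1) concrete before turning to estimation, here is a minimal sketch with normal components and hypothetical parameter values; the sampler makes the latent component membership of Section 3 explicit:

```python
import numpy as np
from scipy.stats import norm

def mixture_pdf(y, lam, theta):
    """Evaluate the k-component normal mixture density of (2.1):
    f(y; psi) = sum_j lam_j * g(y; theta_j), with g a normal density."""
    return sum(l * norm.pdf(y, mu, sd) for l, (mu, sd) in zip(lam, theta))

def sample_mixture(n, lam, theta, seed=0):
    """Draw n observations: pick a component with probability lam_j,
    then draw from that component's density."""
    rng = np.random.default_rng(seed)
    z = rng.choice(len(lam), size=n, p=lam)   # latent component labels
    return np.array([rng.normal(*theta[j]) for j in z]), z

# Hypothetical two-component example: weights (0.3, 0.7), means (0, 4)
y, z = sample_mixture(1000, [0.3, 0.7], [(0.0, 1.0), (4.0, 1.5)])
print(mixture_pdf(0.0, [0.3, 0.7], [(0.0, 1.0), (4.0, 1.5)]))
```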
Estimation

In this section, we focus on estimation of the parameters of a mixture model, $\psi$, given Y and k. Early works employed a method of moments approach (for instance, Pearson (1894) and Quandt and Ramsey (1978)), but with the advent of more efficient computing, numerous algorithms have emerged as tools for estimation in the mixture setting. These techniques can be classified into two primary categories: likelihood methods and Bayesian methods. We will provide a brief literature review of some of the available techniques and a more complete description of a few commonly employed algorithms. We will close this section with one final issue, which concerns estimating the number of components when this is not known a priori.

Likelihood Methods

The likelihood for the parameters of a mixture model, $\psi$, can be easily formulated using the mixture density in (2.1) as $L(\psi) = \prod_{i=1}^{n} f(y_i; \psi)$. In dealing with likelihood methods, it is often easier to work with the log likelihood:

$$\ell(\psi) = \log L(\psi) = \sum_{i=1}^{n} \log \left( \sum_{j=1}^{k} \lambda_j g(y_i; \theta_j) \right). \qquad (3.1)$$

Then, an estimate $\hat{\psi}$ (the MLE) is provided by solving

$$S(y; \psi) = \frac{\partial}{\partial \psi} \ell(\psi) = 0, \qquad (3.2)$$

where $S(y; \psi)$ is called the score function. It is necessary to consider the possibility of multiple local maxima, since the likelihood will have multiple roots. Moreover, the likelihood function may be unbounded, which becomes a considerable concern when implementing various algorithms (as will be discussed). Focusing on local maxima in the interior of the parameter space (denoted by $\Psi^{\circ}$) helps circumvent this problem because, under certain regularity conditions, there exists a strongly consistent sequence of roots to the likelihood equation that is asymptotically efficient (see Ferguson (1996)). In fact, a $\sqrt{n}$-consistent estimator can be constructed using the method of moments estimator mentioned earlier. For a deeper treatment of the choice of root, as well as testing for a consistent root, refer to McLachlan and Peel (2000).

The rate of convergence for the likelihood methods to be discussed will also be considered. Consider a norm $\|\cdot\|$ on $\Psi$ and a sequence $c_t$ such that $c_t \to 0$ as $t \to \infty$. If the sequence of iterates $\{\psi^{(t)}\}$ draws sufficiently close to a solution $\psi^*$ of (3.2), then the rate of convergence is given by

$$\|\psi^{(t+1)} - \psi^*\| \leq c_t \, \|\psi^{(t)} - \psi^*\|^q, \qquad (3.3)$$

where $t = 0, 1, \ldots$ and $q \geq 1$. When replacing the sequence $\{c_t\}$ by a constant c, we refer to q in (3.3) as a local rate of convergence. An illustration of local linear (q = 1) and local quadratic (q = 2) convergence is given in Figure 1.

Newton Methods

An efficient way of solving (3.2) is to implement a Newton-type method. The Newton-Raphson method takes a linear Taylor series expansion of the score function about the current fit $\psi^{(t)}$ for $\psi$, which yields

$$S(y; \psi) \approx S(y; \psi^{(t)}) - I(\psi^{(t)}; y)(\psi - \psi^{(t)}), \qquad (3.4)$$

where $I(\psi; y) = -\partial^2 \ell(\psi) / \partial \psi \, \partial \psi^T$ is the negative of the Hessian of $\ell(\psi)$.
Then, finding a zero of the right hand side of (3.4) yields the update

$$\psi^{(t+1)} = \psi^{(t)} + I^{-1}(\psi^{(t)}; y) \, S(y; \psi^{(t)}). \qquad (3.5)$$

The Newton-Raphson method has the benefit of local quadratic convergence to a solution $\psi^*$ of (3.2), but this convergence is not guaranteed. Aside from some other computational issues (as noted in McLachlan and Krishnan (1997)), Newton-Raphson has the benefit of providing, as an estimate of the variance-covariance matrix of the solution, the inverse of the observed information matrix, $[I(\psi^*; y)]^{-1}$. Thus, standard error estimates, confidence intervals, and inference procedures are readily available.

One may also implement a quasi-Newton method by replacing $I(\psi^{(t)}; y)$ in (3.5) with A, an approximation to the negative Hessian matrix. This yields the update $\psi^{(t+1)} = \psi^{(t)} + A^{-1} S(y; \psi^{(t)})$. While evaluation of the Hessian is avoided at each iteration, yielding a lower cost of computation, drawbacks of this method are that the local quadratic convergence of the regular Newton-Raphson method is lost, convergence is not guaranteed, and erratic estimates of $\ell(\psi)$ may be obtained if a poor value of A is used.

One final Newton-type method is Fisher's method of scoring. This method replaces $I(\psi^{(t)}; y)$ in (3.5) with the expected (Fisher) information matrix, $\mathcal{I}(\psi)$, evaluated at the current fit $\psi^{(t)}$ for $\psi$. This yields the update $\psi^{(t+1)} = \psi^{(t)} + \mathcal{I}^{-1}(\psi^{(t)}) \, S(y; \psi^{(t)})$. Another version of Fisher scoring uses the empirical information matrix in place of the expected information matrix. With both methods, one is relegated to local linear convergence, and convergence is again not guaranteed.

EM Algorithms

As seen in the previous section, Newton methods can provide relatively 'speedy' convergence, but this convergence is not ensured, and calculations like inverting the Hessian may be rather difficult to perform. An alternative is the use of Expectation-Maximization (EM) algorithms, which were popularized in the mixture modeling literature after the article by Dempster et al. (1977). We will focus on developing an EM algorithm for the mixture case, but it should be noted that this algorithm is one member of a much larger class of algorithms (see McLachlan and Krishnan (1997) and McLachlan and Peel (2000) for a discussion).

We construct an EM algorithm for mixtures by first introducing the indicator random variable $Z_{i,j} = I\{\text{observation } i \text{ belongs to component } j\}$, for $i = 1, \ldots, n$ and $j = 1, \ldots, k$, where $I\{\cdot\}$ is the indicator function. We refer to the measurements, Y, from earlier as the incomplete or observed data, and (Y, Z) as the complete data. Here we use Y and Z to denote all of the $Y_i$'s and $Z_{i,j}$'s, respectively. The observed data log likelihood is simply $\ell(\psi)$ from (3.1), but it will be denoted by $\ell_o(\psi)$ when the meaning is not made clear by the context. The complete data log likelihood is given by

$$\ell_c(\psi) = \sum_{i=1}^{n} \sum_{j=1}^{k} z_{i,j} \left[ \log \lambda_j + \log g(y_i; \theta_j) \right];$$

$\ell_c(\psi)$ is introduced to deal with the intractability of maximizing $\ell_o(\psi)$ with respect to $\psi$. With the formal notation defined, we now construct an EM algorithm for mixture models.

Algorithm 3.1 (EM Algorithm).

1. Given a fixed $\psi^{(t)}$ at the t-th iteration, $t = 0, 1, \ldots$, calculate

$$Q(\psi; \psi^{(t)}) = E_{\psi^{(t)}}\left[ \ell_c(\psi) \mid y \right]. \qquad (3.6)$$

This step is referred to as the Expectation Step or E-Step.

2. Find $\psi^{(t+1)}$ such that $Q(\psi^{(t+1)}; \psi^{(t)}) \geq Q(\psi; \psi^{(t)})$ for all $\psi \in \Psi$. This step is referred to as the Maximization Step or M-Step.

3. Iterate until a stopping criterion is attained. The final estimate obtained will be denoted by $\hat{\psi}$.

Note that, marginally, $Z_{i,j} \sim \text{Bern}(\lambda_j)$, where $\text{Bern}(\lambda_j)$ is taken to mean the Bernoulli distribution with rate of success $\lambda_j$, and $Z_{i,j}$ is independent of $Y_{i^*}$ for all $i^* \neq i$. Since $E_{\psi^{(t)}}[\,\cdot \mid y]$ is a linear functional, the form of $\ell_c(\psi)$ together with (3.6) allows us to replace $Z_{i,j}$ by its conditional expectation $E_{\psi^{(t)}}[Z_{i,j} \mid y_i] = P_{\psi^{(t)}}(Z_{i,j} = 1 \mid y_i)$, which follows from an application of Bayes' rule and the law of total probability. Thus, when provided the estimate $\psi^{(t)}$, we get

$$\tau_{i,j}^{(t)} = P_{\psi^{(t)}}(Z_{i,j} = 1 \mid y_i) = \frac{\lambda_j^{(t)} g(y_i; \theta_j^{(t)})}{\sum_{l=1}^{k} \lambda_l^{(t)} g(y_i; \theta_l^{(t)})}.$$
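A minimal numpy sketch of Algorithm 3.1 for a univariate k-component normal mixture follows; the random initialization and the log-likelihood stopping rule are illustrative choices (the binning-based starting values and criterion (3.7) are discussed below):

```python
import numpy as np
from scipy.stats import norm

def em_normal_mixture(y, k=2, tol=1e-8, max_iter=500, seed=0):
    """EM (Algorithm 3.1) for a univariate k-component normal mixture:
    a sketch that stops on the change in the observed-data log likelihood."""
    rng = np.random.default_rng(seed)
    lam = np.full(k, 1.0 / k)
    mu = rng.choice(y, size=k, replace=False)   # crude random starting values
    sd = np.full(k, y.std())
    ll_old = -np.inf
    for _ in range(max_iter):
        # E-Step: posterior component probabilities tau_{i,j} via Bayes' rule
        dens = lam * norm.pdf(y[:, None], mu[None, :], sd[None, :])
        tau = dens / dens.sum(axis=1, keepdims=True)
        # M-Step: weighted-MLE updates that maximize Q(psi; psi_t)
        nj = tau.sum(axis=0)
        lam = nj / len(y)
        mu = (tau * y[:, None]).sum(axis=0) / nj
        sd = np.sqrt((tau * (y[:, None] - mu[None, :]) ** 2).sum(axis=0) / nj)
        ll = np.log(dens.sum(axis=1)).sum()
        if ll - ll_old < tol:   # stopping rule on the log-likelihood change
            break
        ll_old = ll
    return lam, mu, sd, ll
```

As advocated below, in practice one would run this from many different starting values and keep the fit with the largest log likelihood.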
Note also, as stressed in Flury and Zoppè (2000), that in the E-Step the expectation of the complete data log likelihood is conditioned on the observed data; the step does not strictly replace missing data by their conditional expectations. As can be seen, the structure of an EM algorithm is rather simple, and thus programming is relatively easy. While some technical issues with the Dempster et al. (1977) article have been addressed over the years (such as the convergence results noted by Wu (1983)), we will discuss a couple of issues concerning implementation of Algorithm 3.1.

One issue concerns selection of the initial values ($\psi^{(0)}$). Due to the multimodality of the mixture likelihood, there are multiple local maxima, and in some cases a poor choice of $\psi^{(0)}$ can lead to the sequence of EM estimates diverging. Due to such features, it is strongly advocated to start EM algorithms from many different initial values. We will use a simple binning procedure to determine hyperparameters for the distributions used in random generation of the starting values. For reviews of possible options for starting values, see McLachlan and Krishnan (1997) or McLachlan and Peel (2000).

Another issue concerns the stopping criterion. Usually an EM algorithm is run until

$$\ell(\psi^{(t+1)}) - \ell(\psi^{(t)}) < \epsilon \qquad (3.7)$$

or, when given a norm $\|\cdot\|$ on $\Psi$, until $\|\psi^{(t+1)} - \psi^{(t)}\| < \epsilon$, for some $\epsilon > 0$ chosen arbitrarily small. Schafer (1997) discusses the relative stopping criterion $|\psi_l^{(t+1)} - \psi_l^{(t)}| / |\psi_l^{(t)}| < \epsilon$ for $l = 1, 2, \ldots, r$, though this method fails when $\psi_l^{(t)} \approx 0$. Regardless, EM algorithms converge linearly, which can be very slow at times, and an inappropriate stopping criterion may cause one to claim convergence too soon. Certain methods, such as an Aitken-based acceleration technique, may be implemented to alleviate some of the difficulty with the slow rate of convergence (see Lindsay (1995) for a discussion). We use the criterion in (3.7) as our stopping criterion.

Numerous EM-type algorithms can be found in the literature (see McLachlan and Krishnan (1997) and McLachlan and Peel (2000) for references). A useful extension of the EM algorithm is the Expectation/Conditional-Maximization (ECM) algorithm of Meng and Rubin (1993). Consider a partition of $\psi$ into s subvectors, say $\psi = (\psi_1^T, \ldots, \psi_s^T)^T$.

Algorithm 3.2 (ECM Algorithm).

1. Given a fixed $\psi^{(t)}$ at the t-th iteration, calculate $Q(\psi; \psi^{(t)})$ as in (3.6), where $\psi^{(0)}$ is a specified initial value. This E-Step is the same step as in the EM algorithm of Algorithm 3.1.

2. For each $i = 1, 2, \ldots, s$, calculate $\psi_i^{(t+1)}$ by maximizing $Q(\psi; \psi^{(t)})$ over $\psi_i$ subject to $\psi_{i_1} = \psi_{i_1}^{(t)}$ for all $i_1 > i$ and $\psi_{i_2} = \psi_{i_2}^{(t+1)}$ for all $i_2 < i$. These steps are referred to as the Conditional Maximization-Steps or CM-Steps.

3. Iterate until a stopping criterion is attained. The final estimate obtained will be denoted by $\hat{\psi}$.

There is also a multicycle ECM algorithm, as given in Liu and Rubin (1994). This algorithm incorporates an additional E-Step between some or all of the CM-Steps.

MM Algorithms and Adaptive Barrier Methods

The EM algorithms of the previous section are special cases of MM algorithms, which are prescriptions for constructing such optimization algorithms. Since we present the EM algorithm in the context of maximization, MM here stands for minorize/maximize (in the context of minimization, MM stands for majorize/minimize). For our setting, MM algorithms operate by creating a surrogate function (which we denote by h) to drive an objective function uphill. In the context of our mixture model, $h(\psi; \psi^{(t)})$ minorizes $\ell(\psi)$ at $\psi^{(t)}$ if $h(\psi^{(t)}; \psi^{(t)}) = \ell(\psi^{(t)})$ and $h(\psi; \psi^{(t)}) \leq \ell(\psi)$ for all $\psi$. After choosing a minorizing function, we then maximize it.
In the EM setting, the minorizing function at $\psi^{(t)}$ (shifted by a constant) is $Q(\psi; \psi^{(t)})$. We do not go into much detail about MM algorithms here; one may refer to Lange (2004) or Hunter and Lange (2004) for discussion. Our brief definition of MM algorithms provides a segue from EM algorithms to adaptive barrier methods, which are used in constrained optimization. Suppose we have an ECM setting where one of the CM-Steps maximizes a function $m(\psi_l)$ with $\psi_l \in \mathbb{R}^{r^*}$ for some $r^* < r$, the total number of parameters; in other words, we focus only on the portion of the parameter vector over which we are actually maximizing. Suppose further that $m(\psi_l)$ is twice continuously differentiable and is to be maximized subject to linear inequality constraints. An adaptive barrier method maximizes a surrogate function $h(\psi_l; \psi_l^{(t)})$ formed by adding to $m(\psi_l)$ a barrier term scaled by a barrier constant $\mu > 0$. The barrier function (the portion of the surrogate function involving $\mu$) forces $\psi_l^{(t+1)}$ to remain within the interior of the feasible region (i.e., the region satisfying the constraints). As Lange (1999) points out, it is impossible to maximize $h(\psi_l; \psi_l^{(t)})$ explicitly in most problems, so maximization using a quadratic approximation is suggested. However, it is often sufficient to perform a few steps of Newton-Raphson, thus avoiding the seemingly more complex quadratic approximation method. Further details on adaptive barrier methods in convex programming may be found in Lange (1994).

Bayesian Methods

A Bayesian approach can be taken for estimation in mixture models provided a proper prior is used (i.e., a prior that sums or integrates to a finite value in the discrete and continuous cases, respectively). With the advancement in computing power, developments in Markov chain Monte Carlo (MCMC) algorithms have made Bayesian analyses an appealing method for analyzing mixture models. McLachlan and Peel (2000) provide many references to Bayesian mixture analysis. The discussion below is from the perspective where the data are continuous, but the discrete case is analogous.

Let $L_o(\psi)$ and $L_c(\psi)$ denote the observed data likelihood and complete data likelihood (the antilogarithms of $\ell_o(\psi)$ and $\ell_c(\psi)$, respectively). Let z denote the realization of the component indicator random variable Z. Denote the proper prior density for $\psi$ by $\pi(\psi)$ and the conditional density for Z given $\Psi = \psi$ by $\pi(z; \psi)$. The posterior density of $\psi$ is then given by

$$\pi(\psi \mid y) = K \, L_o(\psi) \, \pi(\psi),$$

where K denotes a normalizing constant. We are now treating the mixture parameters as random quantities. We partition $\psi$ and denote the (independent) prior distributions on $\Lambda_{k-1}$ and $\Theta^k$ by $\Pi_\Lambda(\lambda)$ and $\Pi_\Theta(\theta)$, respectively. For the prior on the mixing proportions, we will always use $\Pi_\Lambda(\lambda) = \text{Dir}_k(\alpha)$, where $\text{Dir}_k(\alpha)$ is taken to mean the Dirichlet distribution with parameter vector $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_k)^T$ and $\alpha_j > 0$ for all $j = 1, \ldots, k$. We use these priors in outlining two MCMC algorithms (in a mixture context) that are common for posterior simulation: Gibbs samplers and Metropolis-Hastings algorithms.

Using a Gibbs sampler (Geman and Geman (1984)), we may simulate from each element of $\psi$ by conditioning on the current values of the other elements of $\psi$. With the formal notation defined, we can now construct a Gibbs sampler for mixture models.

1. Choose initial values $\psi^{(0)}$ and $Z^{(0)}$.

2. For a given t, $t = 1, 2, \ldots$, simulate $\lambda^{(t)}$ from its full conditional distribution (a Dirichlet distribution updated by the current component counts) and $\theta_j^{(t)} \sim \Pi_\Theta(\theta; Z_j^{(t-1)})$ for all $j = 1, \ldots, k$. Here, $Z_i$ and $Z_j$ are vectors denoting the i-th row and j-th column of Z, respectively, and $\Pi_\Theta(\theta; Z_j^{(t-1)})$ is the conditional distribution of $\theta$ given the previous iteration's value of $Z_j$.

3. Simulate $Z_i^{(t)} \sim \text{Mult}_k(1, \tau_i^{(t)})$, where $\text{Mult}_k(1, \tau_i^{(t)})$ is taken to mean the multinomial distribution consisting of one draw from k bins with probability-of-success vector $\tau_i^{(t)} = (\tau_{i,1}^{(t)}, \ldots, \tau_{i,k}^{(t)})^T$, the posterior component probabilities for observation i evaluated at the current parameter values.

4. Increment t and repeat steps 2 and 3.
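A minimal sketch of the Gibbs sampler above for a univariate normal mixture with known common variance, so that the full conditionals are conjugate; the prior settings (normal prior on the means, symmetric Dirichlet on the weights) are illustrative assumptions:

```python
import numpy as np

def gibbs_mixture(y, k=2, iters=2000, alpha=1.0, mu0=0.0, tau0=10.0, sigma=1.0, seed=0):
    """Gibbs sampler for a k-component normal mixture with known variance
    sigma^2, Normal(mu0, tau0^2) priors on the means, and a
    Dirichlet(alpha, ..., alpha) prior on the mixing proportions."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(y, size=k)           # initialize means at random data points
    lam = np.full(k, 1.0 / k)
    draws = []
    for t in range(iters):
        # step 3: sample labels Z_i from Mult_k(1, tau_i) via Bayes' rule
        logp = np.log(lam) - 0.5 * ((y[:, None] - mu[None, :]) / sigma) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(k, p=pi) for pi in p])
        counts = np.bincount(z, minlength=k)
        # step 2a: lambda | Z from its Dirichlet full conditional
        lam = rng.dirichlet(alpha + counts)
        # step 2b: mu_j | Z, y from its normal full conditional
        for j in range(k):
            prec = 1.0 / tau0**2 + counts[j] / sigma**2
            mean = (mu0 / tau0**2 + y[z == j].sum() / sigma**2) / prec
            mu[j] = rng.normal(mean, np.sqrt(1.0 / prec))
        draws.append((lam.copy(), mu.copy()))
    return draws
```

Output from such a sampler is exactly the kind of MCMC sample whose label switching is discussed in Section 4.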
As for a Metropolis-Hastings algorithm (Metropolis et al. (1953) and Hastings (1970)), a little more programming is usually necessary, since there is the requirement of a proposal density $q(\psi^*; \psi)$ to effectively search the entire parameter space. Many times this density is chosen to be symmetric (i.e., $q(\psi^*; \psi) = q(\psi; \psi^*)$), but as Gill (2002) points out, this is not necessary. The decision about whether we accept a value $\psi^*$ from this proposal density is based on the acceptance ratio

$$\alpha(\psi^*, \psi) = \min \left\{ 1, \; \frac{\pi(\psi^* \mid y) \, q(\psi; \psi^*)}{\pi(\psi \mid y) \, q(\psi^*; \psi)} \right\}.$$

With the formal notation defined, we can now construct a Metropolis-Hastings algorithm for mixture models.

1. Choose an initial value $\psi^{(0)}$.

2. For a given t, $t = 1, 2, \ldots$, draw a candidate $\psi^*$ from the proposal density $q(\psi^*; \psi^{(t-1)})$.

3. With probability $\alpha(\psi^*, \psi^{(t-1)})$, accept the candidate and set $\psi^{(t)} = \psi^*$; otherwise set $\psi^{(t)} = \psi^{(t-1)}$.

4. Increment t and repeat steps 2 through 4.

Once we have an MCMC sample from the posterior, we may perform inference for the parameters. This is in contrast to likelihood methods, which give only maximum likelihood estimates and an estimate of their sampling distribution variance-covariance matrix. We should note some issues when implementing these and other MCMC methods, as found in Robert and Casella (2004). For instance, choosing a proposal density $q(\psi^*; \psi)$ may require one to incorporate some sort of tuning parameter (see Chib and Greenberg (1995) and Cappé and Robert (2000)). There are also practical issues such as thinning the chain, burn-in, and selecting initial values. A major problem in using MCMC methods to estimate parameters in the mixture setting is label switching, which will be addressed later.

Number of Components

Determining the number of components in (2.1) is still a major contemporary issue in mixture modeling. We address here some of the techniques used in assessing the number of components when this is not known a priori. A natural starting point is to test

$$H_0: k = k_0 \quad \text{versus} \quad H_1: k = k_0 + 1 \qquad (3.8)$$

using the likelihood ratio test (LRT) statistic

$$-2 \log \Delta = 2 \left[ \ell(\hat{\psi}_{k_0+1}) - \ell(\hat{\psi}_{k_0}) \right], \qquad (3.9)$$

where $\hat{\psi}_{k_0}$ and $\hat{\psi}_{k_0+1}$ are the MLEs under the $k_0$- and $(k_0+1)$-component models, respectively. It is well known that standard regularity conditions do not hold in the setting of (3.8), and thus the asymptotic distribution of (3.9) is not the usual chi-squared distribution (see Aitkin and Rubin (1985) and Lindsay (1995) for a discussion). However, model selection techniques are still used in assessing the overall number of components, as simulations have indicated relatively good empirical results (see McLachlan and Peel (2000) for references). We recommend not using these techniques solely to determine the number of components of a mixture model, but rather to give further supporting evidence for the number selected based on another method, such as a bootstrapping technique (to be discussed later).

Four common model selection criteria are Akaike's information criterion (AIC) of Akaike (1973), the Bayesian information criterion (BIC) of Schwarz (1978), the integrated completed likelihood (ICL) of Biernacki et al. (2000), and the consistent AIC (CAIC) of Bozdogan (1987). Given an estimate $\hat{\psi}$, each criterion takes the form of the maximized log likelihood $\ell(\hat{\psi})$ minus a penalty that grows with $r = kq + k - 1$, the number of parameters in the mixture setting; for example, AIC penalizes by r and BIC by $(r/2)\log n$, while ICL additionally penalizes the estimated entropy of the component memberships and CAIC uses a penalty slightly larger than BIC's. These values are calculated for a reasonable range of components, and the maximum of these values (for each criterion) corresponds to the number of components selected by that criterion.
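A minimal sketch of criterion-based selection using scikit-learn's GaussianMixture; note that scikit-learn reports AIC and BIC in the $-2\ell + \text{penalty}$ form, so there the smallest value wins, the opposite sign convention to the maximize form above (the data are simulated for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1.5, 300)]).reshape(-1, 1)

for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    # sklearn's aic()/bic() return -2*loglik + penalty: smaller is better here
    print(k, round(gm.aic(X), 1), round(gm.bic(X), 1))
```

For this two-component data set, both criteria should bottom out at k = 2.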
As an alternative to penalized likelihood methods, Chen and Kalbfleisch (1996) present a penalized minimum-distance estimate. They argue that the penalized likelihood approach tends to produce a fit with fewer components; however, it is unknown whether or not that approach produces a consistent estimate of the number of mixture components. Hence, they use the penalized minimum-distance estimate, which they show to be consistent for the number of mixture components as well as for the mixing distribution. Their method can be used with any of the common distances, such as the Kolmogorov-Smirnov distance, the Cramér-von Mises distance, and the Kullback-Leibler information. While the last is not symmetric, it can be viewed as a distance and used when two distributions have common support; in fact, one may use a symmetrized version of the Kullback-Leibler distance to avoid the symmetry issue. The interested reader may refer to Chen and Kalbfleisch (1996) for applications of the penalized minimum-distance method.

A commonly employed method for determining the number of components is a bootstrapping scheme proposed by McLachlan (1987). The algorithm attempts to approximate the null distribution of the LRT statistic values given in (3.9). We outline the algorithm for this parametric bootstrapping scheme, using an EM algorithm, as follows:

Algorithm 3.5 (Parametric Bootstrapping of the LRT for the Number of Components).

1. Fit mixture models with $k_0$ and $k_0 + 1$ components to the data $y_1, y_2, \ldots, y_n$, which leads to the EM estimates $\hat{\psi}_1$ and $\hat{\psi}_2$, respectively.

2. Calculate the (observed) log likelihood ratio statistic in (3.9). Denote this value by $\Xi_{obs}$.

3. Simulate a data set of size n from the null distribution (the model with $k_0$ components). Call this sample $y_1^*, y_2^*, \ldots, y_n^*$.

4. Fit mixture models with $k_0$ and $k_0 + 1$ components to the simulated data and calculate the corresponding 'bootstrap' log likelihood ratio statistic. Denote this value by $\Xi^*$.

5. Repeat steps 3 and 4 B times to generate the bootstrap sampling distribution of the likelihood ratio statistic, $\Xi_1^*, \Xi_2^*, \ldots, \Xi_B^*$.

6. Compute the bootstrap p-value $p_B$ as the proportion of bootstrap statistics at least as large as the observed statistic, $\#\{b : \Xi_b^* \geq \Xi_{obs}\} / B$.

Algorithm 3.5 is implemented by first testing 1 versus 2 components. A value of $p_B$ is obtained for this test, and if it is lower than some significance level $\alpha$, then claim statistical significance and proceed to test 2 versus 3 components. If not, stop and claim that there is no statistically significant evidence for a 2-component fit. Proceed in this manner until you fail to reject the null hypothesis.

Exact theoretical results for testing (3.8) have been obtained in numerous special cases. As Lindsay (1995) points out, some of these testing scenarios yield limiting distributions that either resemble mixtures of chi-squared distributions with different degrees of freedom (called a chi-bar-squared distribution) or can, in fact, be shown to be a chi-bar-squared distribution. One special case is when $k_0 = 1$ in (3.8); Lindsay (1995) shows the limiting distribution of $-2 \log \Delta$ in this case is

$$\tfrac{1}{2} \chi_0^2 + \tfrac{1}{2} \chi_1^2, \qquad (3.10)$$

where $\chi_p^2$ denotes the chi-squared distribution with p degrees of freedom (and $\chi_0^2$ is a point mass at zero). Notice that (3.10) is just a linear combination of chi-squares. Because of this fact, there is no guarantee that the parametric bootstrap outlined in Algorithm 3.5 will give a good approximation; an example where a statistic is asymptotically distributed as a linear combination of chi-squares and the parametric bootstrap approximation fails can be found in Babu (1984). While this does present theoretical difficulties, it does not appear to be an issue often encountered in practice. McLachlan and Peel (2000) present simulation results and cite many references that endorse this method by assessing the accuracy of $p_B$ as well as the overall power of the test.
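A minimal sketch of Algorithm 3.5 for Gaussian mixtures, using scikit-learn's GaussianMixture as the EM fitter; the number of bootstrap replicates B and the simulated data are illustrative choices:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def lrt_stat(X, k0, seed=0):
    """2 * (log L under k0+1 components - log L under k0 components), as in (3.9)."""
    ll = []
    for k in (k0, k0 + 1):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=seed).fit(X)
        ll.append(gm.score(X) * len(X))   # score() is the mean per-sample log-likelihood
    return 2.0 * (ll[1] - ll[0])

def bootstrap_p_value(X, k0=1, B=99, seed=0):
    """Parametric bootstrap of the LRT null distribution (Algorithm 3.5)."""
    xi_obs = lrt_stat(X, k0)
    null_model = GaussianMixture(n_components=k0, n_init=5, random_state=seed).fit(X)
    xi_star = []
    for b in range(B):
        Xb, _ = null_model.sample(len(X))   # step 3: simulate from the k0-component fit
        xi_star.append(lrt_stat(Xb, k0, seed=b))
    return np.mean(np.array(xi_star) >= xi_obs)   # step 6: bootstrap p-value

# Example: data from a 2-component mixture should reject k0 = 1
rng = np.random.default_rng(42)
X = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 150)]).reshape(-1, 1)
print(bootstrap_p_value(X, k0=1, B=49))
```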
Bayesian Approaches

In addition to the likelihood methods presented, there are a few Bayesian procedures for estimating the number of components. One approach uses the Dirichlet process (Ferguson (1973)). For a parametric mixture model, rewrite (2.1) as

$$f(y_i; P) = \int_{\Theta} g(y_i; \theta) \, dP(\theta),$$

where the mixing distribution P is defined as $P = \sum_{j=1}^{k} \lambda_j \delta(\theta_j)$. Here, $\delta(\cdot)$ represents the Dirac measure on the parameter space $\Theta$, meaning that P is a discrete distribution that puts mass $\lambda_j$ on $\theta_j$. Since the number of components is now random, the problem can be thought of as selecting a model out of the set of all possible mixture distributions on $\Theta$. Thus, it is necessary to specify some sort of prior on this set. One way of accomplishing this is by implementing the Dirichlet process to obtain the prior on the set of all distributions on $\Theta$. The focus is on P, a distribution on $\Theta$ drawn from the Dirichlet process with parameter $\alpha$, where $\alpha$ is a finite measure on $\Theta$. While the details of the Dirichlet process (Ferguson (1996)) are beyond the scope of this work, an easier way to think about it is through what is referred to as Sethuraman's representation:

Algorithm 3.6 (Sethuraman's Representation of the Dirichlet Process).

1. Take $\theta_1, \theta_2, \ldots$ as independent and identically distributed draws from the normalized base measure $\alpha(\cdot)/\alpha(\Theta)$.

2. Take $\gamma_1, \gamma_2, \ldots$ as independent and identically distributed draws from $\text{Beta}(1, \alpha(\Theta))$, chosen independently of $\theta_1, \theta_2, \ldots$, where $\text{Beta}(a, b)$ is taken to mean the beta distribution with shape parameters a and b.

3. Set the stick-breaking weights $\lambda_1 = \gamma_1$ and $\lambda_j = \gamma_j \prod_{l=1}^{j-1}(1 - \gamma_l)$ for $j \geq 2$, and take $P = \sum_{j=1}^{\infty} \lambda_j \, \delta(\theta_j)$.

Sethuraman (1994) showed that P is a realization from the Dirichlet process with parameter $\alpha$. In addition to Algorithm 3.6, Escobar (1994) presented a way to sample from the posterior when a Dirichlet prior is used with a location mixture of normals.

Green (1995) proposed a framework for constructing a reversible jump MCMC in order to "jump" between parameter subspaces of varying dimensionality. This is appealing for Bayesian model determination because prior information can now be placed on the number of components in the model (as well as on the component parameters) while providing effective exploration of the varying dimensions of the parameter subspaces. Since k is now a parameter, the parameter vector of interest becomes $\omega_k = (\theta_1^T, \ldots, \theta_k^T, \lambda_1, \ldots, \lambda_{k-1}, k)^T \in \Omega_k$, with $k \in \mathbb{N}$. The elements of $\omega_k$ are each regarded as being drawn from an appropriately defined prior distribution. A basic sketch of the reversible jump MCMC method is as follows:

1. Draw the value $\omega_k^*$ from the proposal density $g(\cdot \mid \omega_k)$, with target (posterior) distribution $\pi(\cdot)$. (Note that $\omega_k^*$ may be from a different subspace than $\omega_k$.)

2. Let $M = M_1 \cup M_2$ be a countable family of move types. (a) If move $m \in M_1$ is attempted with destination $\omega_k^* \in \Omega_k$, then the acceptance of this sample is governed by an appropriately defined acceptance probability $\alpha_m^{(1)}(\omega_k, \omega_k^*)$. (b) If move $m \in M_2$ is attempted with a destination in a subspace of different dimension, then the acceptance of this sample is governed by an appropriately defined acceptance probability $\alpha_m^{(2)}(\omega_k, \omega_k^*)$.

3. Iterate. Run I iterations of an MCMC sampler according to the current parameter space.
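Before moving on, a minimal numerical sketch of Sethuraman's representation (Algorithm 3.6), truncated at a finite number of sticks; the base measure and truncation level are illustrative choices, not part of the original text:

```python
import numpy as np

def stick_breaking(alpha_total, base_sampler, n_sticks=500, seed=0):
    """Truncated Sethuraman construction: returns atoms theta_j and weights
    lam_j of a draw P ~ DP(alpha), where base_sampler draws from the
    normalized base measure alpha(.)/alpha(Theta) and alpha_total = alpha(Theta)."""
    rng = np.random.default_rng(seed)
    theta = base_sampler(rng, n_sticks)            # step 1: iid draws from the base
    gamma = rng.beta(1.0, alpha_total, n_sticks)   # step 2: iid Beta(1, alpha(Theta))
    lam = gamma * np.concatenate(([1.0], np.cumprod(1 - gamma)[:-1]))  # step 3
    return theta, lam

# Example: DP with a standard normal base measure and total mass 5
theta, lam = stick_breaking(5.0, lambda rng, n: rng.normal(0, 1, n))
print(lam[:5], lam.sum())   # weights decay stochastically; the sum approaches 1
```

Truncating the infinite sum is the standard computational device here; the neglected tail mass shrinks geometrically with the number of sticks.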
Standard Errors

After generating an MCMC sample using the procedures discussed in Section 3.2, posterior standard deviations are easily obtained. With likelihood methods, it is possible to obtain standard error estimates by using the inverse of the observed information matrix when implementing a Newton-type method; however, this may be computationally burdensome. An alternative way to report standard errors in the likelihood setting is to implement a parametric bootstrap (Efron and Tibshirani (1993)). Efron and Tibshirani (1993) claim that the parametric bootstrap should provide estimates similar to the standard errors obtained from the information matrix. This procedure has become useful in the mixture case as well. We outline the algorithm for a parametric bootstrapping scheme in the mixture setting, using an EM algorithm, as follows:

Algorithm 3.9 (Parametric Bootstrap for Standard Errors).

1. Fit the mixture model to the data $y_1, \ldots, y_n$ with an EM algorithm to obtain the estimate $\hat{\psi}$.

2. Simulate a sample of size n from the fitted mixture density $f(y; \hat{\psi})$.

3. Refit the mixture model to the simulated sample with the EM algorithm.

4. Repeat steps 2 and 3 B times to obtain the bootstrap estimates $\hat{\psi}^{(1)}, \hat{\psi}^{(2)}, \ldots, \hat{\psi}^{(B)}$; the sample variance-covariance matrix of these estimates provides standard error estimates for the components of $\hat{\psi}$.

However, when performing a bootstrapping procedure in the mixture setting, one must be cognizant of the label switching problem described below.

Identifiability

In this section, we formally define identifiability for mixture distributions. This discussion and the definition of identifiability are adopted from McLachlan and Peel (2000). Let $\mathcal{F}_k$ denote a parametric family of k-component mixture densities as described in (2.1), and let $\mathcal{F}$ denote the class of all such $\mathcal{F}_k$. Permuting the component labels of the mixture density results in $\mathcal{F}$ being nonidentifiable in $\Psi$. We formalize this concept as a definition.

Definition 4.1. The class $\mathcal{F}$ is identifiable if, whenever two members of $\mathcal{F}$ satisfy $\sum_{j=1}^{k} \lambda_j g(y; \theta_j) = \sum_{j=1}^{k^*} \lambda_j^* g(y; \theta_j^*)$ for (almost) all y, it follows that $k = k^*$ and there exists a permutation $\sigma$ of the component labels such that $\lambda_j = \lambda_{\sigma(j)}^*$ and $\theta_j = \theta_{\sigma(j)}^*$ for all j.

Definition 4.1 states that no element of $\mathcal{F}$ can arise in two different ways except by trivial means, such as letting some $\lambda_j = 0$ or splitting a component by letting $\theta_{j_1} = \theta_{j_2}$.

Label Switching

In Section 3, we saw possible estimation methods used in mixture modeling. These methods included a parametric bootstrap using EM algorithms to obtain standard error estimates and Bayesian inference via MCMC samplers. During the implementation of such iterative methods, one must be cognizant of the solutions being calculated from one iteration to the next, since a given mixture component cannot be extracted from the likelihood. This situation occurs because the component labels cannot be distinguished from one another, owing to the nonidentifiability in $\psi$ established via Definition 4.1. Such a permutation of the component labels, as in this definition, is called label switching.

There are numerous methods in the literature for dealing with label switching (see Jasra et al. (2005) for a review of some of these techniques). One of the easiest methods, especially when the parameters are well separated within the parameter space, is to impose identifiability constraints on the parameters (such as $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_k$). However, this method comes with caveats heavily emphasized in the literature (for instance, see McLachlan and Peel (2000) and Stephens (2000b)). For example, consider fitting a mixture with k = 2 components with the mixing proportions close to 0.50. Imposing the identifiability constraint on the mixing proportions clearly influences the estimates of $\theta_1$ and $\theta_2$, thus creating a bias. Such a situation has been highlighted in the literature through "disturbing" results obtained when considering various ordering constraints on a k = 3 component mixture of normals using an MCMC sampler. Identifiability can instead be imposed after the simulations have been completed, as Stephens (1997) demonstrates for an MCMC sample of size N, by relabeling the sample $(\Psi^{(1)}, \Psi^{(2)}, \ldots, \Psi^{(N)})$ and applying permutations $\pi_1, \pi_2, \ldots, \pi_N$ such that the permuted sample $(\pi_1(\Psi^{(1)}), \pi_2(\Psi^{(2)}), \ldots, \pi_N(\Psi^{(N)}))$ satisfies the identifiability constraints. Since there is not always a clear choice of labeling, Richardson and Green (1997) stress post-processing the simulations under different permutations of the labels to determine an appropriate choice. One alternative method is to consider bootstrapping in mixtures.
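To make the post-hoc relabeling idea concrete, here is a minimal sketch that imposes an ordering constraint on the component means across MCMC draws (an illustrative choice of constraint; as noted above, ordering constraints can bias estimates when components are poorly separated):

```python
import numpy as np

def relabel_by_mean(mu_draws, lam_draws):
    """Apply, draw by draw, the permutation that sorts the component means.

    mu_draws, lam_draws: arrays of shape (N, k) holding an MCMC sample of
    the component means and mixing proportions."""
    order = np.argsort(mu_draws, axis=1)
    return (np.take_along_axis(mu_draws, order, axis=1),
            np.take_along_axis(lam_draws, order, axis=1))
```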
McLachlan and Peel (2000) point out that label switching can usually be avoided by setting the EM algorithm's starting values to the maximum likelihood estimates, since EM algorithms are (generally) very dependent on the starting values. Next, note that since the likelihood of a k-component mixture model is invariant under permutation of the component labels, it effectively has k! modes. Label switching is often presented in the context of Bayesian mixture modeling since, under a symmetric prior, the posterior distribution will also have this property.

The first Bayesian method we will consider is a decision-theoretic approach, as implemented in Stephens (2000b), Hurn et al. (2003), and Jasra et al. (2005). Consider estimating the parameters of the mixture model in (2.1). In a Bayesian framework, summarizing their posterior distributions will be viewed as choosing an action, a, from the action space, $\mathcal{A}$. Then, define a loss function $L: \mathcal{A} \times \Psi \to \mathbb{R}^+$. Since a will be a chosen vector of parameters, let $a = \hat{\psi}$. The objective is to find a value of $\hat{\psi}$ that minimizes the posterior expected loss, or risk. This results in the Bayes estimator, which is defined as

$$\hat{\psi} = \arg\min_{a \in \mathcal{A}} E_{\psi \mid Y} \left[ L(a, \psi) \right]. \qquad (4.1)$$

The expectation on the right hand side is the risk function, which is taken over the posterior distribution of $\psi \mid Y$. When considering only the class of loss functions that are invariant under permutations of $\psi$, the Bayes estimator in (4.1) becomes unaffected by label indexes. Once an appropriate loss function has been chosen, these procedures can be summarized in the following algorithm (adapted from Stephens (2000b)):

Algorithm 4.1 (Decision-Theoretic Relabeling).

1. Choose a loss function L that is invariant under permutations of the component labels.

2. Find the Bayes estimator $\hat{\psi}$ by minimizing a Monte Carlo estimate of the posterior expected loss in (4.1) over the MCMC sample.

This entire process hinges on the selection of an appropriate loss function, which may be quite challenging. Yet if the loss function L is chosen to be invariant to permutations of the component labels, then label switching will not hamper the resulting Bayesian estimates. Stephens (2000b) recommends running Algorithm 4.1 from several starting points and choosing the Bayes estimate that provides the best local optimum found.

Another procedure within the Bayesian framework is that of Chung et al. (2004), who suggest assigning as few as one observation to a component a priori. This amounts to using data-dependent priors in which one or more observations are assigned to each component with certainty. The point is to supply enough information to break the symmetry of the likelihood and flatten the posterior density over the k! − 1 nuisance regions, which are the duplicate modes resulting from the permutations of the components. The posterior density in the sampler then reflects a modified likelihood function which accommodates a density where one (or more) observations were assigned to each component. The major limitation of this approach is the extent to which one is willing to accept preclassifying certain observations.

Software

In this section, we briefly outline a few software packages capable of performing analysis of mixture models. Many packages specialize in fitting certain mixture models, but the packages mentioned here provide somewhat more versatility with respect to the selection of functions they offer. The SAS TRAJ procedure (Jones et al. (2001)) analyzes longitudinal data by fitting a mixture model. Specifically, PROC TRAJ fits semiparametric discrete mixture models to longitudinal data. Distributions available in this procedure include the Bernoulli, censored normal, Poisson, and zero-inflated Poisson. The R programming language also has a few packages available for analyzing mixture models.
The mclust package (Fraley and Raftery (2006)) provides model-based clustering, density estimation, discriminant analysis, and the analysis of mixtures of (multivariate) normals under various parameterizations of the component-specific variance-covariance matrices. EM algorithms are used for estimation, and BIC is used for determining the number of components. Another package available in R is the mixtools package (Young et al. (2008)). This package fits a wide array of mixture models, including mixtures of (multivariate) normals, mixtures of regressions, mixtures of Poisson regressions, mixtures of logistic regressions, and mixtures of multinomials. EM algorithms are used for estimation in all of these cases, and there is also a Metropolis-Hastings algorithm for the mixture-of-regressions setting. There are bootstrapping functions for testing the number of components as well as for estimating the standard errors, and there is also a stochastic semiparametric EM algorithm for estimating a nonparametric multivariate mixture model. One commercially available software program is Mplus (Muthén and Muthén (2008)). This program provides tools for mixture models, latent class analysis, and survival mixtures, to name a few. Mplus can carry out these analyses for observed variables that are continuous, censored, binary, ordinal, nominal, or any combination of these types. Mplus is a very flexible tool for researchers and provides many routines in addition to those for mixture analysis.

Conclusion

The area of finite mixture models has a rich literature demonstrating their applicability in a wide variety of fields. This article attempted to provide an overview of mixture modeling through an introduction to the topic, a discussion of relevant issues in estimation, and an outline of various algorithms. Researchers unfamiliar with mixture modeling will hopefully gain an appreciation of their utility as well as an understanding of their limitations. We hope the reader has gained a greater breadth of knowledge to aid them as they proceed to more specific literature on the various tools and issues regarding mixture modeling.
Rod-shaped microparticles: an overview of synthesis and properties

Micro particles come in a wide variety of architectural designs and shapes. It is time to look beyond the conventional spherical morphology and focus on anisotropic systems. Rod-shaped micro particles in particular exhibit numerous unique behaviors based on their structural characteristics. Because of their various shapes, architectures, and material compositions, which derive from the wide range of synthesis possibilities, they possess an array of interesting characteristics and applications. This review summarizes and provides an overview of the substantial amount of work that has already been published in the field of rod-shaped micro particles. Nevertheless, it also reveals limitations and potential areas for development.

Introduction

Colloidal particles have a wide range of applications, from paints [1], stabilizers in emulsions and dispersions [2], and structure-directing agents to sensor components [3]. Due to their rather small dimensions, the material properties are often secondary to structural features such as size or shape [4], and these particles can exhibit a diversity of shapes, including spherical, rod-shaped, dumbbell, cuboid, urchin, and hollow. In general, such particles can be attained through various strategies, including top-down and bottom-up approaches. Top-down methods, such as mechanical grinding and milling, laser ablation, focused ion beam milling, and electron beam lithography, involve the reduction of bulk materials to smaller particles. Conversely, bottom-up methods, such as vapor-liquid-solid growth, solvothermal synthesis, templated synthesis, and self-assembly, involve the assembly of smaller units to form larger structures. The selection of the appropriate synthesis method, and the design of the final shape of the particles, should take into account the desired properties and performance of the materials in the target application.

Although different shapes of materials have their own unique properties and functionalities, the synthesis of rod-shaped materials at the nano and micro scale is particularly noteworthy. For the lower size range, i.e., nano particles, synthetic approaches for rod shapes have been extensively studied and tuned. A large number of reviews describe concepts for synthesizing anisotropic nano materials [5] and how to achieve certain morphologies and optimize the aspect ratios (ARs), such as for absorption and scattering in plasmonic studies [6-10]. These nano scale entities are differentiated into nano rods (all dimensions smaller than 100 nm and typical ARs between 3 and 5) and nano wires, characterized by extended length values. A comprehensive review of the plethora of developments in this area is beyond the scope of the present discussion; we refer the interested reader to designated literature [5,11,12]. Highly relevant rod shapes in nature also occur on a slightly larger scale, with bacteria being the most prominent example, but fungi and spores also make use of the cylindrical morphology. For biological organisms, several rod-forming growth mechanisms have been discovered and summarized in a review [13]. While individual synthetic strategies [14,15] as well as engineering-based approaches [16,17] to produce elongated micro structures have been reported, our investigation revealed a lack of a thorough and didactic review of synthetic approaches to obtaining cylindrical micro objects.
Behaviors

The examination of colloidal particles is crucial in understanding the dynamics of complex systems in nature. While spherical particles have been extensively studied [18-21], it is imperative to also investigate anisotropic systems [22], not least for their biological relevance. These systems can display far richer and more intricate behavior, as they possess both translational and orientational degrees of freedom. The idea of dissipative coupling between the translational and rotational motion was first proposed by Perrin [23,24]. When the rotation of a uniaxial anisotropic particle is restricted, it exhibits two independent translational motions along its two principal axes. This results in distinct diffusion constants, D∥ and D⊥, for motion parallel and perpendicular to the long axis, as shown in Fig. 1a. The longitudinal diffusion coefficient is higher than the transverse diffusion coefficient, as the particle experiences more resistance along the transverse direction. However, when rotation is allowed, the rotational diffusion of the particle, characterized by a single diffusion coefficient, Dθ, and an associated diffusion time, τ = 1/(2Dθ), washes out the directional memory of the particle over time. This leads to a crossover from anisotropic to isotropic diffusion as the time scale becomes much longer than τ. As a result of the anisotropy of non-spherical particles, the probability distribution function of their displacements deviates from the Gaussian distribution typically observed in isotropic systems, such as spherical particles, to a non-Gaussian distribution [27-29]. This was also experimentally demonstrated for ellipsoidal PMMA particles confined in a quasi-two-dimensional environment [30]. The crossover from anisotropic to isotropic diffusion was also established in an earlier work for prolate ellipsoids through molecular dynamics simulations [31]. Additionally, the diffusion coefficients of both translational and rotational motion for ellipsoidal particles were recorded as a function of concentration [32]. Since then, a plethora of studies have been conducted using both experimental and simulated methods to investigate the behavior of anisotropic structures in various environments [33,34].

Due to the complexity of the environments in which rod-like structures are implemented in real-world applications, which differ from the bulk in terms of entropic and hydrodynamic interactions, a significant body of research has been conducted to replicate such conditions in constrained or confined geometries. Specific examples include, but are not limited to, the dynamics of single silica micro rods suspended in water microchannel flow [35], the diffusion of thin nano rods in polymer melts [38], the diffusion of iron-plated gold rods in corrugated channels [36], gold rods in confined quasi-2D porous media [39], and the diffusion of a silver nano wire through obstacles [37]. Some of these examples are illustrated in Fig. 2.

Another interesting feature of rod-shaped particles is their ability to display complex phase behavior compared to isotropic structures, as can be seen in Fig. 1(b-d). Whereas spherical particles show transitions between gas, liquid, crystal, and glass phases, rods can possess an additional intermediate phase between the liquid and crystal phases, termed the liquid-crystal phase.
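Returning to the translational-rotational coupling described at the start of this section, the crossover from anisotropic to isotropic diffusion can be illustrated with a minimal Brownian dynamics sketch; all coefficients below are illustrative values, not taken from any of the cited experiments:

```python
import numpy as np

# Brownian dynamics of a rod in 2D: translational steps are drawn in the
# body frame with different parallel/perpendicular diffusion constants,
# while the orientation itself diffuses with D_theta.
D_par, D_perp, D_theta, dt, steps, n_rods = 2.0, 1.0, 0.1, 1e-2, 20000, 500
tau = 1.0 / (2.0 * D_theta)   # directional memory time from the text

rng = np.random.default_rng(0)
theta = np.zeros(n_rods)
pos = np.zeros((n_rods, 2))
msd = []
for t in range(steps):
    # displacement in the body frame, rotated into the lab frame
    d_par = np.sqrt(2 * D_par * dt) * rng.normal(size=n_rods)
    d_perp = np.sqrt(2 * D_perp * dt) * rng.normal(size=n_rods)
    pos[:, 0] += d_par * np.cos(theta) - d_perp * np.sin(theta)
    pos[:, 1] += d_par * np.sin(theta) + d_perp * np.cos(theta)
    theta += np.sqrt(2 * D_theta * dt) * rng.normal(size=n_rods)
    msd.append((pos ** 2).sum(axis=1).mean())

# For t >> tau the motion is effectively isotropic: MSD slope -> 4 * D_avg
# with D_avg = (D_par + D_perp) / 2 = 1.5 for these parameters
print(tau, msd[-1] / (steps * dt) / 4.0)
```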
One of the earliest theoretical explanations for the formation of a nematic liquid-crystalline phase was provided by Onsager in 1949. He proposed that the transition from an isotropic to a nematic phase for long, hard rods could be purely entropy-driven [40]. Subsequently, numerical simulations showed that a transition from the nematic to the smectic phase can also be driven by entropy alone [41]. Since then, there have been notable advancements in the detailed study of the rich phase behavior of rods [42-46]. A variety of experimental techniques have been developed to probe these processes. Commonly employed methods such as depolarized light scattering [47], fluorescence anisotropy decay [48], dynamic light scattering [49], small-angle X-ray scattering [50], and nuclear magnetic resonance spectroscopy [51] have been used to study the diffusion of particles and molecules in liquids.

Fig. 1 Behavior of rods: a translational and rotational diffusion coefficients defined for a rod. b SEM image of a blue phase III assembled from dumbbell-shaped colloids (DBCs); reproduced with permission [25]. c, d Simulation of the phase behavior of short rods in 2D; reproduced with permission [26].

Fig. 2 Rods in confined environments: single silica micro rods suspended in microchannel flow [35]. d Trajectory of iron-plated gold rods in a corrugated channel, with the orientation of the rod color-coded (π/2 when the rod is perpendicular to the channel boundary, 0 when parallel to it); reproduced with permission [36]. A silver nano wire diffusing in different configurations: e a random repelling laser field; f randomly placed polymer pillars; reproduced with permission [37]. Diffusion of thin rods in g unentangled and h entangled polymer melts; reproduced with permission [38].

Theoretical description of methods

Synthesis of rod-shaped particles requires a driving force that guides the growth anisotropically in one direction. For the synthesis of micro rods, different concepts and driving forces have been developed; a schematic illustration of five important concepts is displayed in Fig. 3. However, not all reported synthesis procedures can be classified into one of these concepts. A commonly employed strategy is to utilize the anisotropy of the crystal structure of the material. As different crystallographic facets possess different surface energies, crystal growth occurs with different reaction rates. Additionally, the growth rates of the facets can be tuned by the addition of certain capping agents, which can selectively decrease the surface energies of specific facets [52]. It is to be noted, however, that this concept is limited to crystalline materials, preferably with hexagonal or tetragonal structure.

Another concept is based on the introduction of an additional phase in the form of a liquid droplet, from which the growth of the rod develops. Here, a precursor is transferred from a surrounding phase (gas or liquid) to the droplet, where it is converted to the desired material at the droplet-rod interface. An example where this solution-liquid-solid process is especially important for cylindrical micro particles is the synthesis of silica micro rods [14,53]. Asymmetry can also be induced by applying a shear force to an emulsion, leading to a linear deformation of the emulsion droplets; this concept has been applied to the synthesis of polymer micro rods [54]. Application of a magnetic field can likewise be a source of asymmetry for the synthesis of magnetic micro rods; in fact, it can lead to an assembly of primary particles into chains during growth [55]. Finally, the growth of micro structures can be carried out in a template.
Common templates include anodic aluminum oxide (AAO) [56] or polycarbonate membranes [57], in which materials can be deposited (e.g., by electrochemical reactions). Moreover, biological templates like bacteria or viruses have also been employed [58].

Metals

Synthesis of micrometer-sized metal rods can be carried out in different templates, including AAO and polycarbonate membranes. These templates are available with pore sizes ranging from a few nm to several μm. One common approach is to immerse the template in a solution of the metal salt, contact one side of it to an electrochemical cell, and apply a cathodic potential to reduce metal ions in the solution to the respective metal in the pores. While the diameter of the resulting rods is set by the diameter of the pores, the length can be controlled by the duration of the reaction and the applied potential (a back-of-the-envelope estimate of this relationship is sketched after this subsection). The rods can later be released by dissolving the template in a suitable solvent. A collection of different metals and alloys synthesized by template-assisted electrodeposition can be found in the work of Péter et al. [59]. This technique also offers the opportunity of growing rods with segments of different materials [60,61], which can, for example, be used for the synthesis of micro swimmers [62]. The concept can also be extended to tubular micro structures with layers of different compositions; common examples include polymer-metal composites with an outer polymer and an inner catalytically active metal layer, which are applied as bubble-propelled micro swimmers [63,64].

Besides templated systems, few other concepts can be applied for the synthesis of metal rods on the micro scale. One approach is to coat the metal onto a micro rod of another material (e.g., SiO2), leading to a core-shell structure with a metal shell [65]. Many more syntheses can be found on the nano scale, and they are a frequent study subject in physical chemistry. Even though these examples do not fulfill the size requirements we established above, we have nonetheless decided to include an overview of this research to incentivize the development of novel synthetic techniques in the interface area, resulting in metal micro rods.

Metallic rod-shaped nano structures have received significant attention due to their unique optical, electronic, and catalytic properties. Due to their small size and large surface-to-volume ratio, metallic nano structures display a range of extraordinary physical and chemical properties that are not observed in bulk materials. The properties and potential applications of metallic nano rods and metallic nano wires are distinct, owing to their different shape characteristics. Due to the ability to tune their AR, metallic nano rods are highly desirable for plasmonic applications, as they can exhibit strong absorption and scattering capabilities across a wide range of wavelengths from the visible to the infrared region [66]. The electrical conductivity of nano wires is higher than that of metallic nano rods [67]; this feature renders them particularly suitable for electronic applications, including interconnects and sensors [68]. Here, we are going to focus mainly on gold, silver, and copper.
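As flagged above, the deposited rod length in template electrodeposition follows from the charge passed via Faraday's law of electrolysis, L = QM/(zFρA). A minimal sketch with illustrative numbers: the pore geometry, current, and an assumed three-electron reduction of a gold precursor are all assumptions, not values from the text:

```python
import numpy as np

F = 96485.0                        # C/mol, Faraday constant
M, rho, z = 196.97, 19.3e6, 3.0    # gold: g/mol, g/m^3, assumed electrons per ion (Au3+)

def rod_length(current_A, time_s, pore_diameter_m, n_pores, efficiency=1.0):
    """Estimate the deposited rod length per pore from Faraday's law,
    L = Q*M / (z*F*rho*A), assuming uniform filling of all pores."""
    Q = current_A * time_s * efficiency               # total charge in C
    area = n_pores * np.pi * (pore_diameter_m / 2) ** 2
    return Q * M / (z * F * rho * area)

# e.g. 1 mA for 10 min into 1e9 pores of 200 nm diameter
print(rod_length(1e-3, 600, 200e-9, 1e9))   # length in m (~0.7 um for these numbers)
```

The linear dependence on time (at fixed current) is the handle the text refers to: longer deposition gives proportionally longer rods, while the pore diameter fixes the rod diameter.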
Au

Seed-mediated growth is a widely used method for the synthesis of gold and silver nano and micro rods. The process involves the use of small seed particles as nucleation sites for the growth of nano rods. The seed particles are typically prepared by reduction of metal precursors, such as chloroauric acid or silver nitrate, with a reducing agent, such as sodium borohydride or ascorbic acid. Once the seed particles have been prepared, they are added to a solution containing a metal precursor and a capping agent. The metal precursor provides the atoms that are used to grow the nano rods, while the capping agent, such as cetyltrimethylammonium chloride or polyvinylpyrrolidone (PVP), helps to stabilize the seeds and control the growth of the nano rods. The first pioneering study on seed-mediated growth of gold nano rods (AuNRs) was done by Jana et al. [79], and further papers improving upon that study followed [80,81]. In order to produce a specific shape and cross-section, researchers have manipulated the capping and reducing agents during the synthesis process. For instance, a combination of CTAB and NaBH4 favorably produces Au nano rods that exhibit a pentagonal cross-section, commonly referred to as penta-twinned AuNRs. By switching the agent used to stabilize the seeds from CTAB to citrate or PVP, single-crystal AuNRs with an octagonal cross-section have been synthesized [82]. The AR of gold nano rods has been a subject of intense research due to its importance in various applications. In recent years, several studies have reported the use of diverse techniques, such as the introduction of aromatic compounds [66,83], binary surfactant mixtures [84], and temperature control [85], to precisely regulate the AR of AuNRs. Additionally, research has also focused on further modifications of the shape of AuNRs, such as tapered [86] and rice-shaped structures [87], adding to the versatility and potential of these nano materials. The first panel of Fig. 4 shows a general schematic of the seed-mediated growth of AuNRs and AuNRs synthesized through different techniques.

Ag

Silver NRs have also been prepared using a seed-mediated process; one of the first studies to achieve this was done by Jana et al. [88]. However, the polyol method became more popular for synthesizing Ag nano structures. It involves the use of metal precursors dissolved in a polyol solvent, such as ethylene glycol or glycerol. PVP acts as an excellent capping agent as well as a reducing agent and has been used extensively to synthesize Ag nano rods as well as nano wires [73,89]. This method has been used to synthesize Ag nano bars, which could subsequently be turned into Ag nano rice [89]. Silver nano bars could also be produced by site-selective treatment of Ag nano cubes [90]. The second panel of Fig. 4 shows a general schematic of the synthesis of Ag nano wires and some images of Ag nano rods as well as wires.

Cu

Compared to Au and Ag, there have been limited reports on the synthesis of Cu-based nano and micro structures. This can mainly be attributed to the difficulty of reducing Cu salts to metallic Cu. Moreover, the lack of effective capping agents and poor stabilization at ambient conditions remain a challenge [76]. In general, Cu nano rods and wires have been synthesized using seed-mediated [74,77] and template-based methods [91,92]. The third panel of Fig. 4 shows a schematic of the solution-phase synthesis of Cu nanostructures.

Fig. 4 (initial panels reproduced with permission [69]): c Schematic illustration of seed-mediated growth of Au nano rods; reproduced with permission [7]. d, e 2D STEM-HAADF images of an Au nano bipyramid coated with Ag; reproduced with permission [70]. f SEM image of an Ag nano rod; reproduced with permission [71]. g SEM image of an Ag nano bar; reproduced with permission [72]. h Schematic illustration of the growth of Ag nano wires with pentagonal cross-section; i SEM image of an Ag nano wire; j TEM image of microtomed Ag nano wires; reproduced with permission [73]. k TEM image of a Cu nano rod; reproduced with permission [74]. l HAADF-STEM image of a Cu nano rod; reproduced with permission [75]. m Schematic illustration of solution-phase synthesis of Cu; reproduced with permission [76]. n (i) TEM image of a Cu nano wire; (ii) SEM image showing the pentagonal cross-section of the nano wire; (iii) schematic of the Cu nano wire showing its different facets and growth direction; reproduced with permission [77]. o SEM image of a Cu nano wire; reproduced with permission [78].
Fig. 4 (caption excerpt): a, b reproduced with permission [69]. c Schematic illustration of seed-mediated growth of Au nano rods; reproduced with permission [7]. d, e 2D STEM-HAADF images of an Au nano bipyramid coated with Ag; reproduced with permission [70]. f SEM image of an Ag nano rod; reproduced with permission [71]. g SEM image of an Ag nano bar; reproduced with permission [72]. h Schematic illustration of the growth of Ag nano wires with pentagonal cross-section. i SEM image of an Ag nano wire. j TEM image of microtomed Ag nano wires; reproduced with permission [73]. k TEM image of a Cu nano rod; reproduced with permission [74]. l HAADF-STEM image of a Cu nano rod; reproduced with permission [75]. m Schematic illustration of the solution phase synthesis of Cu; reproduced with permission [76]. n (i) TEM image of a Cu nano wire; (ii) SEM image showing the pentagonal cross-section of the nano wire; (iii) schematic of the Cu nano wire showing its different facets and growth direction; reproduced with permission [77]. o SEM image of a Cu nano wire; reproduced with permission [78].

Metal compounds

Another major class of materials is metal oxides. Before discussing this category, various metal oxyhydroxides are reviewed, since they are widely used as templates for the production of metal oxide rods [93].

Metal oxyhydroxides

Oxyhydroxides of non-noble metals such as iron [94], cobalt [95], and manganese [96] commonly yield rods with various diameters, lengths, and structures, depending on the solvothermal or hydrothermal synthesis parameters. In a study from 2015, the impact of the pH value and the Fe3+ concentration on the synthesis of FeOOH nano rods was investigated; higher precursor concentrations cause an expansion of the rod length. Similar hydrothermal techniques based on a nitrate precursor were used to create FeOOH rods with a diameter of about 20 nm and a length of about 750 nm in an alkaline environment [97]. Other methods, such as a template synthesis process, can be employed to produce larger FeOOH rods [15]. Hollowed-out FeOOH micro rods were formed using MgO particles as a template and adding an aqueous solution of FeCl2; after 4 h of stirring at room temperature, the resulting rods were substantially larger than those produced by the hydrothermal process, measuring a few micrometers in width and tens of micrometers in length [96]. In 2008, rod-shaped MnOOH particles with diameters up to 200 nm and lengths up to tens of micrometers were produced using a hydrothermal technique, taking MnSO4 as a precursor and sometimes using β-cyclodextrin as an additive [96,98]. The size of the rods could be controlled within the previously specified ranges by varying the stoichiometric factor of the β-cyclodextrin additive and by modifying the temperature [98].

GaOOH rods with different properties were created by adjusting the parameters of the hydrothermal method, and their generation has been the subject of numerous works. In some studies, these rods were synthesized from Ga(NO3)3 employing low temperatures of 95 °C and short reaction times, producing rods with a diameter of 1 μm and a few micrometers in length [115]. The impact of the pH value is also discussed in the work of this group, which demonstrated that the AR is significantly influenced by the amount of the precursor [100]. When performed in a weakly acidic environment with GaCl3 as a precursor, the synthesis results in rhombic rods with a diameter of 300 nm and a length of around 1.5 μm [99]. At comparable conditions, this particle form is also observed for FeOOH on a smaller scale [116,117]. More inhomogeneous GaOOH rods with lengths ranging from 0.5 to 10 μm and diameters varying from 0.4 to 2 μm were produced by a hydrothermal synthesis carried out at a high temperature of 225 °C for 10 h [101].
By using a liquid reaction at low temperatures of 95 °C and adding urea, which continually decomposes during the reaction and causes the necessary hydrolysis, zeppelin-shaped rods with lengths of about 1 to 2 μm were produced; using pure water instead results in well-defined rods with lengths of about 3 μm [102]. Similarly, the formation of FeOOH rods by adding urea for hydrolysis has also been reported to yield zeppelin-shaped rods [118]. The fabrication of CoOOH rods with lengths ranging from 3 to 10 μm and a diameter of about 800 nm was the focus of another group applying a chemical bath deposition technique; the resulting rods, composed of stacked nano sheets, were produced on a stainless steel mesh from a Co(NO3)2 precursor solution at low temperatures [95].

Metal oxides

Metal oxyhydroxide rods are frequently utilized as precursors for their metal oxide equivalents, which are typically obtained through calcination. This also applies to the synthesis of MnO2 micro rods, which are produced by annealing hydrothermally produced MnOOH micro rods at 350 °C for 10 h. The resulting rods have diameters ranging from 0.10 to 0.62 μm and lengths ranging from 1.9 to 12 μm [110]. MnO2 rods with lengths ranging from 2 to 3 μm were produced using a similar procedure [108]. The hydrothermal process is another method used to directly produce MnO2 micro rods. Template-assisted electrodeposition using MnSO4 as a precursor offers the synthesis of MnO2 micro rods with tunable length and diameter [57]; the expected deposit length can be estimated from the charge passed, as sketched below.

Micro rods and other morphologies made from ZnO [119] are often formed using hydro- or solvothermal techniques. ZnO rods with diameters up to several micrometers and lengths of a few micrometers are produced via a low-cost hydrothermal technique based on a Zn(NO3)2 precursor; therein, the pH level and the precursor concentration are important factors in the development of micro rods, and the reaction time affects both the crystal shape and size [111,113,120]. Another method for producing ZnO micro rods is hydrothermal deposition on copper stripes, where the growth process can be controlled by adjusting the temperature and the reaction time [121]. It has also been shown that ZnO micro rods may be synthesized using a microwave-assisted hydrothermal technique [122]. Additionally, the use of additives affects the synthesis parameters and the shape of the rods [123]. For the hydrothermal method, it has also been reported that the cooling temperature affects the rods' morphology and characteristics [124]. Aside from the hydrothermal route, there are a few solvo-chemical synthesis techniques for producing ZnO micro rods. The synthesis of ZnO rods is often based on the transformation of ZnOOH to ZnO, and the concentration of additives such as HMTA affects the growth rate [125]. ZnO can also be deposited electrochemically into polycarbonate membranes, where H2O2 is electrochemically reduced to OH−, which leads to precipitation of Zn(OH)2 [126].

Next to ZnO, there are a few papers dealing with MoO3 rods, which are mostly formed with the hydrothermal method, using Na2MoO4 or (NH4)2MoO4 as a precursor in an acidic environment. Using a low temperature and a shorter reaction time generates rods that are larger in length and diameter than those obtained using high temperatures of 180 °C and a longer reaction time [106,107,127].
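For the template electrodeposition routes mentioned above, Faraday's law gives a first-order estimate of how the deposited rod length scales with current density and plating time. The sketch below uses copper as an example (M = 63.55 g/mol, ρ = 8.96 g/cm³, z = 2) and assumes a fully dense deposit at 100% current efficiency; the numerical inputs are illustrative and not taken from the cited studies.

F = 96485.0                      # Faraday constant, C/mol
M, RHO, Z = 63.55, 8.96, 2       # copper: g/mol, g/cm^3, electrons per ion

def deposited_length_um(j_mA_cm2, t_s, current_efficiency=1.0):
    """Rod length grown in a pore at current density j for time t,
    assuming a dense deposit (Faraday's law)."""
    q_c_cm2 = j_mA_cm2 * 1e-3 * t_s            # charge per unit area, C/cm^2
    h_cm = current_efficiency * q_c_cm2 * M / (Z * F * RHO)
    return h_cm * 1e4                           # cm -> um

print(deposited_length_um(5.0, 1800))   # 30 min at 5 mA/cm^2 -> ~3.3 um

This linear scaling in time (and in current) is exactly why the rod length is controlled by the reaction duration and the applied potential, as described above.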
MgO rods are typically made using a wet chemical process that starts with the synthesis of MgCO3 micro rods at room temperature and ends with calcination to MgO rods in the presence of air [15, 128-130]. The addition of dextrose is known to enhance anisotropic growth during the calcination process, which helps to obtain a rod-like form [131,132]. Additionally, Fe2O3 rods grown on top of other materials, such as MgO, are produced using MgO micro rods as templates. For this, a FeCl3 solution was mixed with the MgO micro rods, and after calcination, α-Fe2O3 hollow micro rods with diameters of several micrometers and tens of micrometers in length were obtained [15]. In addition, hematite rods were produced hydrothermally from FeCl2 [104] and via the thermal decomposition of iron acetate (FeAc) [133]. Furthermore, using a microwave-assisted technique and polyethylene glycol, Fe3O4 rods with diameters of 800 nm and lengths of 3 to 6 μm were created. A comparatively recent technique in this materials class forms rod-like shapes by applying an external magnetic field during the hydrothermal synthesis [134].

S-doped TiO2 micro rods can be synthesized via ultrasonication of TiOSO4 in water. The obtained rods consist of a polycrystalline anatase phase with a diameter of about 2 μm and a length of several tens of μm. TiO2 micro rods are also accessible by templated methods, including electrodeposition using TiCl3 [135] and sol-gel electrophoresis of positively charged TiO2 sol particles into a template [56]. The latter method has been applied to a variety of materials, including BaTiO3 and SrNb2O6 [136]. Finally, ink-jet printing could be optimized to produce TiO2 rods of various diameters [137].

Co3O4 can be synthesized either hydrothermally or with the use of a microwave, followed by calcination, to produce rods with lengths of around 6 to 30 μm and diameters of 0.7 to 1.5 μm [112,138,139]. Similar to how GaOOH is formed, Co3O4 rods may also be formed using a solvothermal process with urea as an additive and a final calcination step [103]. CuO rods up to 200 nm in diameter and 11 μm in length are the end product of an alkaline hydrothermal synthesis using NaNO3 and CuSO4 as precursors [109]. NH4VO3 is used as a precursor for a hydrothermal synthesis that yields 500 nm long V2O5 rods at high temperatures and extended reaction times [105]. At even greater temperatures, the precursor V2O5 produces VO2 micro rods that are 4 μm long [114]. An overview of different influences on metal oxide rod syntheses is given in Fig. 5.

Fig. 5 (caption excerpt): influences on the syntheses of metal oxyhydroxide rods (left; reproduced with permission [94, 96-102]) and metal oxide rods (right; reproduced with permission [103-114]).

Metal organic frameworks (MOFs)

Metal organic frameworks are a class of compounds introduced by Yaghi et al. [140]. Different units are linked together by strong bonds, achieving a combination of inorganic and organic properties: the organic part consists of negatively charged species, mostly carboxylates, which in combination with positively charged metals result in high-volume frameworks. When different di- or polytopic linkers with different geometries are used, the structure of the linker molecules determines the morphology of the final particles (Fig. 6a-c) [141]. Therein, especially ditopic linkers can cause rod-shaped growth [142,143].
Not only the shape but also the porosity and crystallinity benefit from the rod-shaped growth caused by the incorporation of rod-favoring, linear 1,4-benzenedicarboxylic acid linkers. A possible application of rod-shaped MOFs is the mimicking of bacterial shapes, using for example a Fe(III) carboxylate-based MOF named MIL-88A, which exposes Lewis acid sites and terminal carboxylic groups. These are available for surface modification, which allows tuning the internalization kinetics, the endocytosis pathway, and the intracellular fate of different MOF particles to a certain extent [144]. Furthermore, even if the MOF structure itself is polyhedral and not elongated, the geometrically perfect shapes and size distributions allow highly directional bonding, which can lead to rod geometries (Fig. 6d, e) [145].

Fig. 6 (caption excerpt): a-c, the linker geometry determines the particle morphology; reproduced with permission [141]. d, e 1D and 2D rod formation by assembly of individual building blocks; reproduced with permission [145].

Polymers

In contrast to the well-defined crystalline MOFs, the related class of infinite coordination polymers (ICPs) is mostly amorphous, which impedes the understanding of the mechanistic details of their formation. The team around Chad Mirkin developed Salen-based homochiral ICP particles, which are amorphous spheres or rod-shaped crystalline structures, depending on the solvent [146].

Different jetting-based techniques allow the fabrication of a variety of shapes, an approach especially suited to polymeric materials [17]. Light-structured photopolymerization, mold-based printing [16], and different 3D printing approaches will not be discussed here, despite their promise and the variability of sizes and materials that can be used; we consider these techniques more of an engineering approach and do not discuss them further.

For most polymeric materials, we must differentiate between de novo and shape modification-based approaches. An early shape modification approach relies on stretching liquefied isotropic particles (Fig. 7a), as described by Champion et al. [147] and followed up by several others [148,149]. In recent years, progress has also been made in the de novo synthesis of rod-like polymer particles. Polymer rods result if the polymerization of the monomers is directed, for example via emulsion polymerization of tetrafluoroethylene [150]; the rod-like particles are formed when the surfactant concentration is near or above the critical micelle concentration. A related approach leading to rod-shaped polymeric structures is termed mesophase polymerization, i.e., the use of surfactant mesophases as templates for "molecularly imprinted" micro rods [151,152]. Furthermore, the thermopolymerization of thiophene-based precursors on the micro scale, resulting in elongated conducting polymer rods/wires in water, was shown to be viable [153].

An efficient, scalable process for the formation of a new class of polymer micro rods was reported by the Velev group [54]. It is based on the liquid-liquid dispersion technique. The process begins by adding a small amount of a concentrated solution of SU-8 in gamma-butyrolactone to an organic liquid medium. A shear force, applied by stirring with an impeller, then deforms and elongates the emulsion droplets, resulting in a dispersion of rod-like particles (Fig. 7b) [54]; the shear-driven elongation can be rationalized via the capillary number, as sketched below. A more recent method to shape SU-8 into rods builds on the liquid-liquid dispersion technique: colloidal SU-8 polymer rods are prepared by shearing an emulsion of SU-8 polymer droplets, which are then broken into colloidal rods with ultrasonic waves [154].
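As context for the shear-elongation step above (this framing is standard emulsion physics, not something stated in the cited works): whether a droplet merely deforms or elongates strongly in shear flow is commonly characterized by the capillary number Ca = η·γ̇·R/σ. Values above an O(1) critical value, which depends on the viscosity ratio of the two liquids, favor large deformation and breakup. The numbers below are purely illustrative.

def capillary_number(eta_pa_s, shear_rate_per_s, radius_m, sigma_n_m):
    """Ca = eta * gamma_dot * R / sigma for a droplet in shear flow."""
    return eta_pa_s * shear_rate_per_s * radius_m / sigma_n_m

# Hypothetical values: 10 um droplet, 1 Pa.s medium, 1000 1/s shear,
# 5 mN/m interfacial tension -> Ca = 2, so strong elongation is expected.
print(capillary_number(1.0, 1000.0, 10e-6, 5e-3))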
Finally, conducting polymers have also been shaped into rods using templated methods such as electrochemical deposition, for example using nano porous coordination templates in which polythiophene micro rods with ordered chain alignment can be prepared [155]. A similar strategy is used to synthesize protein-imprinted magnetic polymer micro rods [156]. By selecting the template, this method facilitates controlling the shape and size of the particles, but the accessible materials are restricted by the necessity to remove the template.

Silica

A facile synthesis for SiO2 micro rods with tunable length was first reported by Kuijk et al. [14]. The synthesis takes place in an emulsion in pentanol using the silica precursor tetraethyl orthosilicate (TEOS). The hydrophobic TEOS is mainly dissolved in the continuous pentanol phase, where it is hydrolyzed, causing an increase in hydrophilicity and a transfer to the H2O emulsion droplets. There, further hydrolysis and condensation of the TEOS take place, which leads to nucleation of SiO2 at the droplet-pentanol interface. The change in solubility during the hydrolysis of TEOS enables a directed growth of the SiO2 from the H2O droplets, which causes the rod-shaped morphology of the product. The overall process is depicted in Fig. 8a, and the concept can be referred to as a solution-liquid-solid method [53]. The length of the rods is controlled by the amount of TEOS and the reaction time [14]; a simple stoichiometric estimate of this length control is sketched below. The resulting diameter is mainly influenced by the droplet size and the contact angle between the three phases (SiO2, H2O, pentanol) and lies in the range of 200-300 nm [158]. These properties can be changed by modifying the composition of the alcoholic phase or changing the temperature. The impact of the hydrophobicity on the resulting structures is summarized in Fig. 8b. Notably, these properties can also be changed during growth, enabling the synthesis of rods with segments of different diameters (Fig. 8c) [157,159]. Additionally, the diameter can be increased by Stöber growth of layers of silica around the rods [14].

Fig. 8 (caption excerpt): b SEM images of SiO2 rods with segments of different diameters, controlled by the reaction temperature; reproduced with permission [157]. c Schematic illustration of the impact of alcohol hydrophobicity on the morphology of SiO2 rods; reproduced with permission [158].

More complex morphologies can be obtained by adding seed particles to the medium. The emulsion droplets can attach to a seed and start the rod growth from there. By this, the diameter of the rods can be increased to about 800 nm, and depending on the choice of seed material, different functionalities like magnetic or optical properties can be introduced [160-162]. Theoretically, this synthesis concept could also be extended to materials other than silica. Hagemans et al. replaced TEOS by different titanium alkoxide precursors, which, similar to TEOS, react to TiO2 by hydrolysis and condensation reactions. However, it was found that the much higher reaction rates allow nucleation in the pentanol phase, and therefore no formation of rods was observed [163].
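The statement that the TEOS amount controls the rod length can be illustrated with a back-of-the-envelope mass balance: at a fixed number of droplets (one rod per droplet) and a fixed diameter, the total volume of silica produced is simply distributed over cylindrical rods. Every numerical input below is hypothetical, and the silica density (≈2 g/cm³ for dense amorphous silica) is an assumption.

import math

M_SIO2 = 60.08       # g/mol
RHO_SIO2 = 2.0       # g/cm^3, assumed density of the amorphous silica

def mean_rod_length_um(teos_mol, n_rods, radius_nm, conversion=1.0):
    """Mean rod length if teos_mol of TEOS converts into n_rods dense
    cylindrical SiO2 rods of the given radius."""
    v_total_cm3 = conversion * teos_mol * M_SIO2 / RHO_SIO2
    r_cm = radius_nm * 1e-7
    return v_total_cm3 / (n_rods * math.pi * r_cm**2) * 1e4   # cm -> um

# Hypothetical batch: 1 mmol TEOS shared over 1e11 rods of 125 nm radius
print(mean_rod_length_um(1e-3, 1e11, 125))   # ~6 um

Doubling the TEOS dose (or the conversion reached at a given reaction time) doubles the mean length, consistent with the length control described above.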
Other notable materials

There are several works reporting micro rods consisting of special or mixed materials like rare earth oxides. Examples include the 100 nm wide Eu(OH)3 rods produced hydrothermally [164]. Besides, solvothermal synthesis can yield other rare earth rods, such as the tens of micrometers long Y2O3 rods [165] or the up to 2 μm long Gd2O3 micro rods [166].

Furthermore, there are many rods consisting of mixed materials. To name two examples: Zn2SiO4 rods have been produced using a special hydrothermal diamond anvil cell and supercritical water [127], and several tens of micrometers large rods of CuNb3O8 have been made via flux synthesis [167]. Besides many other mixed phases, there are also calcium hydroxyapatite Ca5(PO4)3(OH) rods, with diameters up to 5 μm and tens of micrometers in length, obtained by hydrothermal synthesis [168].

The growth of magnetic materials can be guided towards one-dimensional structures by the application of a magnetic field during the synthesis. This concept has been applied for the synthesis of FeS2 and Fe3S4 micro rods consisting of aligned primary particles with different structures [55]. Analogously, micro rods consisting of Fe3O4 and carbon were synthesized in a solvothermal approach: the carbon, introduced by the addition of glucose, adsorbed on the formed Fe3O4 nano particles and enabled a binding of these particles into chains, guided by the magnetic field [169].

Apart from mono-metallic rods, more complex and intricate designs can also be synthesized through various methods. These include alloys (e.g., Cu-Au/Ag, Ag-Au, Cu-Ag-Au, Ni-Pd/Pt/Ag/Au) [170], core-sheath structures (e.g., Au@Pd, Ag@Au, Cu@Au) [171], metal-dielectric composites (Au@SiO2), and metal-semiconductor composites (e.g., Ag@TiO2, Au@Cu2O [172]).

Applications

The "Behaviors" section of this paper demonstrated that rod-shaped micro structures exhibit unique properties compared to their spherical counterparts. This section investigates how these behaviors can be utilized in potential applications. Micro rods are promising candidates for various applications, including wastewater purification [130-132] and catalysis [173], due to their larger surface-to-volume ratio (a simple geometric comparison is sketched below). This review also presented instances of micro rods operating in restricted geometries, since real-world settings are often intricate. In biomedical applications such as drug [174] and vaccine delivery [16], an advantage of the rod shape has been confirmed for nano particles due to increased cell internalization, tumor penetration, and retention in blood [175,176], especially concerning bio-distribution [177,178]. In one study, rods were selectively internalized by neutrophils compared to spherical structures, demonstrating that altering the shape of particles can be used to selectively target neutrophils for the treatment of different inflammatory conditions [179]. On the other hand, microfabricated rod arrays in the upper micron range were shown to enable bio-interfacing [180].

In highly specific scenarios, rods were found to be better suited for particular applications: for example, their one-dimensional structure can be used as optical wave guides to propagate light in tiny devices [181]. Another example is lithium-ion batteries, where the particles, because of their shape, can adjust well to the volume change in the charge-discharge cycles and rapidly transport electrons as well as ions [104,134,139,182]. Furthermore, when applied to or deposited on a surface, rods can modify it and imitate the effects of a lotus leaf, as was done with ZnO rods [183,184]. Additionally, it has been discussed how flexible LEDs and micro devices based on GaN micro rods may be made, owing to the regulated, controllable three-dimensional growth [185,186].
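The surface-to-volume advantage invoked above is pure geometry and is easy to quantify: a rod always exposes more surface per unit volume than a sphere of the same volume. The sketch below compares the two shapes for an illustrative rod size (the dimensions are arbitrary examples, not values from the cited studies).

import math

def sv_rod(r, length):
    """Surface-to-volume ratio of a cylinder (end caps included)."""
    return (2 * math.pi * r * length + 2 * math.pi * r**2) / (math.pi * r**2 * length)

def sv_sphere_same_volume(r, length):
    """Surface-to-volume ratio of a sphere with the cylinder's volume."""
    v = math.pi * r**2 * length
    rs = (3 * v / (4 * math.pi)) ** (1.0 / 3.0)
    return 3.0 / rs

r, L = 0.25, 5.0   # um: a 0.5 um thick, 5 um long rod
print(sv_rod(r, L), sv_sphere_same_volume(r, L))   # ~8.4 vs ~4.9 per um

For this example the rod offers roughly 1.7 times the specific surface of an equal-volume sphere, which is the basis of its appeal for adsorption-driven purification and for catalysis.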
As previously mentioned, rods possess the ability to exhibit additional liquid crystal structures compared to spherical particles. This feature makes them potential candidates for applications in photonics. In one study, it was shown that achiral dumbbell-shaped colloids (DBCs) can form various liquid crystal phases, including blue phase III with double-twisted chiral columns [25]. Blue phase liquid crystals can deliver sub-millisecond switching times, allowing LCDs to produce sharper images and compete with OLED displays [187]. They are also appealing for use in fast optical and electro-optical devices. Hence, this work opens up a path for creating blue phases from silica DBCs for use in photonic applications. Looking at niche applications like near infrared (NIR) obscurants for military uses, CuO rods were found to effectively diffuse NIR light [109]. The synthesized rods can frequently be utilized as templates for rods made of other materials, or for tubes, as was already mentioned in this study [15].

Comprehensive summary

In general, we can conclude that the formation of rod shapes requires a driving force that pushes the system away from the often-favored spherical symmetry. To achieve this, we identified and grouped some of the most important methods:

• When crystal structures favor growth along a particular direction, rod-like growth can result, which is frequently the case for metal oxides or hydroxides.
• Growth directed by templates, external fields, or interfaces.
• Pre-formed particles or droplets can be re-shaped into rods by external forces like shear.

We list some representative examples with the respective references in the accompanying table. While certain "fashions and trends", such as the interest in well-controlled shapes, have led to the availability of (mostly noble) metals in smaller and larger sized rods (nano rods and nano wires, respectively), intermediate sizes are yet largely missing. We had a particular interest in rod-shaped structures in order to explore the self-propelled rolling motion of micro rods, which has recently been observed on the macro scale for fiberboids [191] and in nature for the influenza virus on cell membranes [192]. However, a smart design of synthetic approaches, eventually combining different techniques, will probably overcome this restriction in the near future. There are a few examples, such as the magnetic assembly into iron oxide rods [134], where a new synthetic methodology has been developed for a single material, but the generic method is not yet explored. While the use of magnetic fields is certainly restricted, the approach could probably be extended to electric or acoustic fields, broadening the range of target materials significantly.

Despite the extensive research efforts to synthesize various micro structures, certain challenges still persist and require further investigation. The underlying mechanisms of growth are not fully understood, and methods for producing these structures at large scale with high efficiency remain elusive. Furthermore, the stability of these materials under ambient conditions, particularly for metallic materials, and their environmental impact must be thoroughly evaluated before considering their practical, commercial application. Overall, a general comparison in terms of the achieved homogeneities and reproducibilities is difficult.
Not only are the resulting quality factors highly dependent on individual skills and reagent purities; technical factors such as the experimental setup, including the heating rate, also contribute significantly. Templated methods are frequently more difficult to scale up, but result in more homogeneous structures. Synthetic techniques based on chemical equilibria can result in very narrow size distributions if optimized conditions are selected. Furthermore, we envision that a combination of different materials provides opportunities to tune properties. Examples here are core-shell metals [171] that allow tuning the plasmonic properties, or hybrid structures that use well-structured MOFs as templates and yield oxide materials after calcination [193-195]. We conclude by highlighting that the fascinating peculiarities of rod behaviors can be coupled to many specific material properties, paving the way towards a deeper understanding of biological systems, as well as advanced functionalities and practical applications at large scale.

Consent to participate: Not applicable.

Conflict of interest: The authors declare no competing interests.
COPPER INFLUENCE ON POLYPHOSPHATE METABOLISM OF CUNNINGHAMELLA ELEGANS

The aim of this work was to evaluate the physiological aspects of the polyphosphate metabolism of Cunninghamella elegans grown in the presence of copper. The growth profile was obtained by means of biomass yields, orthophosphate consumption, polyphosphate accumulation, and phosphatase activities. The results revealed the influence of copper on growth, as observed in the biomass yields. Orthophosphate consumption was faster in cells grown in the presence of copper, and the presence of copper in the culture medium induced polyphosphate accumulation. The polyphosphate level was almost constant at the beginning of growth in the control culture, which could be related to the exponential growth phase. On the other hand, the copper-treated cultures exhibited a significant reduction in polyphosphate levels, indicating an active metabolization of the polymer. Acid phosphatase activity was not detected under the conditions studied, but alkaline phosphatase activity was significantly lower in the treated cultures. The results suggest the potential use of this Cunninghamella elegans isolate in bioremediation and biosorption applied to environments polluted by copper.

INTRODUCTION

Copper is an essential micronutrient for most, if not all, living organisms, since it is a constituent of many metalloenzymes and proteins involved in electron transport, redox, and other important reactions; copper is also required as a cofactor in a variety of proteins. Despite its importance, relatively little is understood about the molecular details of how organisms acquire this trace metal from the environment. Copper requirements of microorganisms are usually satisfied by very low concentrations of the metal (1 to 10 µM). In contrast, copper present at higher levels in its free ionic form (Cu2+) is toxic to microbial cells. Microorganisms must therefore possess delicate mechanisms to maintain intracellular copper within such a restricted range that it neither interferes with normal metal homeostasis nor poses a risk of toxicity (9, 11, 12).

It is well recognized that microorganisms have a high affinity for metals and can accumulate both heavy and toxic metals by a variety of mechanisms. Microorganisms highly effective in sequestering heavy metals include bacteria, fungi, algae, and actinomycetes. These have been used to remove metals from polluted industrial and domestic effluents on a large scale. Microbial interactions with metals may have several implications for the environment: microbes may play a large role in the biogeochemical cycling of toxic heavy metals, and also in cleaning up, or remediating, metal-contaminated environments. There is also evidence of a correlation between tolerance to heavy metals and polyphosphate metabolism (6, 13, 15, 25).

In microbial cells, inorganic polyphosphate (poly P) plays a significant role in increasing cell resistance to unfavorable environmental conditions and in regulating different biochemical processes. Many functions have been related to cellular poly P, such as serving as an ATP substitute and energy source, a reserve of Pi, a chelator of metal ions, a buffer against alkali ions, a regulator of stress and survival, a channel for DNA entry, a regulator of development, and a component of the cell capsule (18, 22).
Heavy metal tolerance has been related to poly P degradation and the detoxification of heavy metals inside the cell. It has also been suggested that surface-associated poly P may be important in the chelation of cations on the cell surface (13, 16, 21).

Considering the role of Cunninghamella, a Mucoralean fungus, in xenobiotic metabolization/bioremediation, and the role of polyphosphate in microbial tolerance and resistance to heavy metals, the present study was carried out to evaluate the physiological aspects of the polyphosphate metabolism of Cunninghamella elegans during growth in the presence of copper.

Microorganisms and Culture Conditions

The isolate of Cunninghamella elegans (UCP 542), obtained from mangrove sediment, was kindly supplied by the Culture Collection of the Catholic University of Pernambuco, Brazil. The culture was maintained on Difco PDA (Potato Dextrose Agar) and incubated at 28 °C. SDA (Sabouraud Dextrose Agar) medium was used for spore production, for 5 days at 28 °C. A total of 10^7 spores/mL were collected and transferred to Erlenmeyer flasks containing 50 mL of SMM (Synthetic Medium for Mucoralean fungi) and incubated for 120 hours at 28 °C and 150 Hz. A solution of 2 mM copper sulphate was prepared in distilled and deionized water, with the pH adjusted to 6.0 with 1 N sodium hydroxide and 10% (v/v) acetic acid. All samples were prepared in five replicates.

Growth Curves

Samples collected at 12, 24, 36, 48, 72, 96, and 120 hours of culture were submitted to lyophilization and kept in a vacuum desiccator until constant weight. The final value corresponded to the arithmetic mean of the five replicates of each sample.

Analytical Procedure

The analytical procedures were performed using samples of the culture supernatant collected at 12, 24, 36, 48, 72, 96, and 120 hours of cultivation. The final value corresponded to the arithmetic mean of the five replicates of each sample. Phosphate consumption was evaluated by a spectrophotometric assay based on the Biosystems kit. A standard curve was produced using potassium phosphate solutions (0.5 to 5.0 g/L); a sketch of how such a curve converts absorbance readings into concentrations is given below.

Polyphosphate Determination

The total cellular polyP was extracted and measured by the method described by Kornberg, 1995 (22). Cells were harvested at 12, 24, 36, 48, 72, 96, and 120 hours of culture and washed twice in 1.5 M NaCl containing 0.01 M EDTA and 1 mM NaF (wash buffer). The cell pellet was resuspended in 1.5 mL of wash buffer and sonicated on ice for a 24-minute period, with 2-minute intervals, at 16 kHz. The resulting homogenate was centrifuged at 12,000 × g for 5 minutes to remove cell debris. To determine the total intracellular polyP, 100 µL of concentrated HCl was added to 0.5 mL of cell extract and heated at 100 °C for 45 minutes. The liberated phosphate was measured spectrophotometrically in a Spectronic Genesys 2 in the ultraviolet range. The polyP concentrations were expressed in grams of phosphate per gram of biomass, as means of five replicates. An unhydrolyzed sample was used as a control for the background level of polyP.
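A minimal sketch of the standard-curve step mentioned above: only the 0.5-5.0 g/L standard range comes from the text, while the absorbance readings below are invented for illustration, and a simple linear (Beer-Lambert) response is assumed.

import numpy as np

# Standards span the 0.5-5.0 g/L range given in the text; the absorbance
# readings below are hypothetical.
conc = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])          # g/L
absorbance = np.array([0.06, 0.12, 0.25, 0.37, 0.49, 0.62])

slope, intercept = np.polyfit(conc, absorbance, 1)        # A = m*c + b

def phosphate_g_per_l(sample_absorbance):
    """Invert the linear standard curve for an unknown sample."""
    return (sample_absorbance - intercept) / slope

print(phosphate_g_per_l(0.30))   # ~2.4 g/L with these invented readings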
Phosphatase Activity

The enzyme activities were evaluated by the method described by Joh et al., 1996 (19). Samples of 36 mg, collected at 12, 24, 36, 48, 72, 96, and 120 hours of cultivation from control and treated cultures, were washed in distilled water and suspended in 0.02 M sodium acetate, pH 4.5, for acid phosphatase detection, and in 50 mM Tris-HCl buffer containing glycerol, pH 7.5, for alkaline phosphatase detection. The samples were macerated for 5 minutes and homogenized for 2 minutes in an ice bath. The extracts were centrifuged at 12,000 × g for 10 minutes at 4 °C to remove the cellular debris. The activity was determined by the use of the Kit-Lab Test. The results corresponded to triplicates. The enzyme activity was expressed in International Units per Liter (UI/L), in which 1 UI/L represents the amount of enzyme that catalyzes 1 µmol of substrate/minute/liter of sample.

RESULTS

Figs. 1A and 1B exhibit the growth of C. elegans in SMM culture medium in the absence and in the presence of copper, respectively. Cellular growth was evaluated by biomass production during 120 hours of cultivation. The control culture presented logarithmic growth during the experimental period, and the maximum growth yield (122.9 mg/L) was obtained at 120 hours of growth (Fig. 1A). An orthophosphate consumption corresponding to 86.02% was observed during the first 12 hours of cultivation, and consumption was complete at 48 hours of culture (Fig. 1A). When C. elegans was grown in the presence of copper, the biomass yield was higher than that obtained in the control culture (288.2 mg/L). During the first 12 hours of cultivation, an orthophosphate consumption of 80.96% was detected, and in this treated culture the total consumption of the phosphorus source was likewise observed at 48 hours of culture (Fig. 1B). The results for cellular biomass production revealed a significant difference between control and treated cultures. In the treated culture, the highest orthophosphate depletion occurred in the first 24 hours of culture and corresponded to 26.0% of that of the control culture.

The behavior of the cellular polyphosphate is presented in Fig. 2. In the control culture, a progressive and slow decline in the polyphosphate content was detected during the experimental period, which could be related to the increase in biomass yield (Fig. 2A). The polyphosphate content in the mycelia was 2.3 mg/dL, 2.2 mg/dL, and 2.15 mg/dL at 12, 24, and 36 hours of growth, respectively; after 36 hours, a significant decrease in the polyphosphate content occurred. On the other hand, the analysis of the copper-treated cultures revealed the highest polyphosphate consumption during the first 36 hours of cultivation (Fig. 2B). The polyphosphate content in the mycelia of the copper-treated cultures was 2.3 mg/dL, 1.24 mg/dL, and 0.99 mg/dL at 12, 24, and 36 hours, respectively. The data revealed that the polyP content decreased significantly in relation to the control sample. However, an increase in the polyphosphate content was observed at 48 hours of growth (1.24 mg/dL), followed by a new decline. An analysis of the polyphosphate profiles of the control and treated cultures revealed that, in the presence of copper, the polymer content decreases in a faster and more continuous manner during the 120 hours of growth; in the control culture, a decrease of 92.1% in the polyphosphate content was observed over the whole period.
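The relative depletion figures discussed in the Results and Discussion can be recomputed directly from the time points quoted above. The sketch below only performs that arithmetic on the reported values; it does not add any new measurements.

polyp_mg_dl = {                 # values quoted in the Results section
    "control": {12: 2.3, 24: 2.2, 36: 2.15},
    "copper":  {12: 2.3, 24: 1.24, 36: 0.99},
}

for t in (12, 24, 36):
    c, cu = polyp_mg_dl["control"][t], polyp_mg_dl["copper"][t]
    print(f"{t} h: treated/control = {cu / c:.2f} "
          f"({(1 - cu / c) * 100:.0f}% lower in the copper culture)")

At 24 and 36 hours this gives roughly 44% and 54% lower polyP in the copper culture, of the same order as the ~57% depletion discussed later in the text.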
Fig. 3 exhibits the results for the alkaline phosphatase activity of the control and treated samples. In the control samples, a continuous decrease in the enzyme activity was observed during the experimental period: maximum activity was detected at 12 hours of cultivation, corresponding to 5.08 UI/L, while at 120 hours of culture the activity was 2.22 UI/L. The maximum enzymatic activity of C. elegans in the copper-treated culture samples was 1.28 UI/L at 24 hours of cultivation; after 36 hours the values decreased progressively. The method described by Joh et al., 1996 (19) was inadequate to detect acid phosphatase activity in either the control or the treated cultures.

A comparative analysis between the polyphosphate content and the enzymatic activity of the control and treated cultures is exhibited in Fig. 4. A progressive decrease in polyphosphate content and enzymatic activity was observed during the experimental period. A decrease of 56.29% in the alkaline phosphatase activity was observed for the control cultures during the 120 hours of growth. The comparison between control and treated cultures revealed a 75.2% lower alkaline phosphatase activity in the first 12 hours of cultivation. On the other hand, the treated culture showed a decrease of 35.71% in the enzymatic activity over the experimental growth period.

DISCUSSION

Microbial interactions with metals may have several implications for the environment and health. Microorganisms may play a large role in the biogeochemical cycling of toxic heavy metals, and also in the cleaning up, or remediation, of metal-contaminated environments. Other implications are not as beneficial, such as the presence of metal tolerance mechanisms, which may contribute to an increase in natural resistance. It is very important to remember that any disturbance of the environment can affect humans, the environment itself, and the microbial community on which other living cells depend (25, 26, 30).

The data presented in this work reveal the physiological aspects of the growth of C. elegans in the presence of copper. The addition of 2 mM copper to the growth medium was shown to have a significant effect on the growth of C. elegans. As determined by biomass yields, growth in the control medium was slower than in the presence of copper, where an increase of 57.3% in biomass production was observed.

Most results reported in the literature demonstrate the toxic effects of heavy metals or the appearance of microbial resistance to these heavy metals. The uptake and subsequent efflux of heavy metal ions in microorganisms usually include a redox reaction involving the metal, which some organisms can even use for energy and growth. This has an important implication for microbial tolerance to heavy metals, because the solubility and toxicity of the metal depend on its oxidation state (14, 17, 28, 29).

Microorganisms require metals in trace quantities for metabolism and growth, but higher concentrations can be toxic. The toxicity of metals is due to their ability to denature proteins: the blocking of functional groups, the displacement of an essential metal, or the modification of the active conformation of the molecule can cause this effect (11, 12).

The inhibitory copper concentration for bacteria is 1 mM, as for Co2+ and Ni2+. Copper toxicity is based on the production of hydroperoxide radicals and on interaction with the cell membrane (5, 6, 10). Thus, the intracellular concentration of heavy metal cations, especially copper, has to be tightly controlled.
Copper metabolism has already been studied in E. coli, some Pseudomonas-related species, Enterococcus hirae, and S. cerevisiae, shedding some light on copper metabolism (1, 2, 5, 6, 7, 8, 27, 30). The present work indicates that C. elegans is able to grow in copper-containing medium and that the metal has a stimulatory effect on biomass production.

In fungi, metabolic changes associated with the pH and composition of the culture medium, physical environmental aspects, and growth phases have been described. Indeed, carbon, nitrogen, and phosphorus sources are essential for the growth of microorganisms. The orthophosphate added to the culture media could regulate the cellular growth cycle, suggesting its utilization for the synthesis of cellular compounds (4, 17). In this work, it was shown that cultures of C. elegans submitted to copper treatment exhibited a faster removal of orthophosphate from the culture medium at the beginning of growth, when compared to control cultures without the metal. Some studies on the relationship between medium components and culture development in the presence of heavy metals demonstrated that microorganisms exhibited a reduced biomass production even when the metal concentration was low (10, 28).

The occurrence of polyphosphate has been reported for various eukaryotic and prokaryotic organisms, and some microorganisms can accumulate phosphate as poly P (21, 22). Some studies revealed that subsequent exposure to stress conditions promotes poly P degradation; this mechanism has been coupled to the bio-precipitation of heavy metals as cell-bound metal phosphates (13, 21). However, the aspects of polyphosphate metabolism in fungi as a response to the presence of a metal in the culture medium have not been reported yet.

In this report, the comparison of the polyphosphate content of control and treated cells revealed that cultures submitted to copper treatment presented a lower polyphosphate content in the initial stages of growth. In the presence of copper, 57% of the polyP content was possibly degraded, compared to the control samples. Some studies have pointed out the role of polyP in metal precipitation: in prokaryotic cells, poly P biosynthesis and degradation determine the appearance of metal resistance, whereas other results revealed that only intracellular polyP is related to this phenomenon (20, 21).

The results obtained in this study indicate an association between polyP and biomass yield in cultures submitted to copper treatment. The data suggest that the utilization of the polymer for cellular growth increases in response to the metal. Different enzymes are related to phosphate and polyP metabolism. Experimental models indicate that phosphatases, exopolyphosphatases, endopolyphosphatases, and polyphosphate kinases are the key enzymes, and that these enzymes could be induced as a response to the phosphate and nitrogen concentrations in the medium, its pH, and the growth phase (22, 24).

The acid and alkaline phosphatase activities in response to the presence of copper demonstrated that C. elegans did not present acid phosphatase activity. On the other hand, alkaline phosphatase activity was inhibited by the presence of the metal in the culture medium: compared to the control cultures, a reduction of 75% in the enzyme activity was determined for the treated cultures. Results in the literature for cadmium have shown that the activity of many enzymes associated with oxidative phosphorylation, photosynthesis, and cell membrane permeability and integrity was impaired by the metal (29).

The present work revealed the influence of copper on the alkaline phosphatase activity of C. elegans.
However, for survival in environments containing high concentrations of available metals, mechanisms to counter the inherent toxicity of the metal ions are required. The important role of fungi in biogeochemical cycles makes them candidates for studies of metal-microbe interactions. Fungal strains belonging to the class Zygomycetes are of particular importance due to the presence in their cell walls of polymers such as chitin, chitosan, and glucan, which are known to be efficient metal ion biosorbents (3, 13).

A better understanding of the factors responsible for metal resistance may help in the application of fungal biomass for the treatment of metal-contaminated water, and in the enrichment or recycling of valuable metals.

Figure 1. Growth curves and orthophosphate consumption profile of Cunninghamella elegans in Synthetic Medium for Mucoralean fungi: A, control culture; B, culture grown in 2 mM copper.

Figure 2. Polyphosphate content and biomass yields of Cunninghamella elegans grown in Synthetic Medium for Mucoralean fungi: A, control culture; B, culture grown in 2 mM copper.

Figure 4. Correlation of alkaline phosphatase (UI) and polyphosphate (mg/dL) contents in control and treated cultures of Cunninghamella elegans.
An estimator for Poisson means whose relative error distribution is known

Suppose that $X_1,X_2,\ldots$ are a stream of independent, identically distributed Poisson random variables with mean $\mu$. This work presents a new estimate $\hat\mu_k$ for $\mu$ with the property that the distribution of the relative error in the estimate ($(\hat \mu_k/\mu) - 1$) is known, and does not depend on $\mu$ in any way. This enables the construction of simple exact confidence intervals for the estimate, as well as a means of obtaining fast approximation algorithms for high dimensional integration using TPA. The new estimate requires a random number of Poisson draws, and so is best suited to Monte Carlo applications. As an example of such an application, the method is applied to obtain an exact confidence interval for the normalizing constant of the Ising model.

Introduction

A random variable X is Poisson distributed with mean µ (write X ∼ Pois(µ)) if P(X = i) = exp(−µ)µ^i/i! for i ∈ {0, 1, 2, . . .}. Suppose that X_1, X_2, . . . are independent, identically distributed (iid) Poisson random variables with mean µ. The purpose of this paper is to present a new estimator for µ that uses almost the ideal number of Poisson draws. Our estimate will not only use draws from X_1, X_2, . . . iid ∼ Pois(µ), but will make extra random choices as well. This external source of randomness can be represented by a random variable U that is uniformly distributed over [0, 1] (write U ∼ Unif([0, 1])). As is well known, a single draw U is equivalent to an infinite number of draws U_1, U_2, . . .

Definition 1. Suppose A is a computable function of X_1, X_2, . . . iid ∼ Pois(µ) and auxiliary randomness (represented by U ∼ Unif([0, 1])) that outputs µ̂. Let T be a stopping time with respect to the natural filtration, so that the value of µ̂ only depends on U and X_1, . . . , X_T. Then call T the running time of the algorithm.

The simplest algorithm for estimating µ just fixes T = n, and sets

µ̂_n = (X_1 + · · · + X_n)/n.

This basic estimate has several good properties. First, it is unbiased, that is, E[µ̂_n] = µ. Second, it is consistent: as n → ∞, µ̂_n → µ with probability 1. Third, it is efficient: using the Fisher information about µ contained in a single X_i together with the Cramér-Rao inequality, it is possible to show that this estimate has the minimum variance of any unbiased estimate that only uses n draws.

However, this estimate is difficult to use for building (ǫ, δ)-approximation algorithms, as the distribution of the ratio µ̂_n/µ depends strongly on µ. It is well known that X_1 + · · · + X_n ∼ Pois(nµ). Using techniques such as Chernoff bounds to bound the tail of a Poisson distribution, it is possible to bound the value of n needed to get an (ǫ, δ)-approximation. These bounds, however, are not tight, and inevitably a slightly larger value of n than is necessary will be needed to meet the (ǫ, δ) requirements. The goal of this work is to introduce a new estimate for the mean of the Poisson distribution whose relative error is independent of µ, the quantity being estimated.

1.1 Examples of estimates whose relative error is independent of the parameter

As an example of a distribution where the basic estimate is scalable, say that Z is normally distributed with mean µ and variance σ² (write Z ∼ N(µ, σ²)) if Z has density f_Z(s) = (2πσ²)^{−1/2} exp(−(s − µ)²/[2σ²]). As is well known, normals can be scaled and shifted, and still remain normal. Consider Z_1, . . . , Z_n iid ∼ N(µ, µ²). In this case, the sample average satisfies µ̂_n ∼ N(µ, µ²/n), and (µ̂_n/µ) − 1 ∼ N(0, 1/n).
Note that the distribution of the relative error does not depend in any way on the parameter µ being estimated.

For another example, say that Y is exponentially distributed with rate µ (write Y ∼ Exp(µ)) if Y has density f_Y(s) = µ exp(−µs)·1(s ≥ 0), where 1(·) is the indicator function that is 1 when the argument inside is true, and 0 when it is false. As with normals, scaled exponentials are still exponential; unlike normals, the rate parameter is divided by the scale. Say that T has a Gamma distribution with shape parameter k and rate parameter λ (write T ∼ Gamma(k, λ)). Adding iid exponentially distributed random variables together gives a Gamma distributed random variable: if Y_1, . . . , Y_k are iid Exp(µ), then Y_1 + · · · + Y_k ∼ Gamma(k, µ) (this is the Fact 3 referred to below). Given n draws Y_1, . . . , Y_n, the maximum likelihood estimator for µ in this context is the inverse of the sample average (see for instance [8]):

µ̂_MLE,n = n/(Y_1 + · · · + Y_n).

That gives

µ̂_MLE,n/µ = n/(µY_1 + · · · + µY_n).

By scaling, µY_1 ∼ Exp(µ/µ) = Exp(1), so µY_1 + · · · + µY_n ∼ Gamma(n, 1). Therefore the relative error in µ̂_MLE,n is independent of µ! Now, the distribution of 1/T, where T ∼ Gamma(k, µ), is called an Inverse Gamma distribution with shape parameter k and scale parameter µ (write 1/T ∼ InvGamma(k, µ)). Note that what was the rate parameter µ for the Gamma becomes a scale parameter for the Inverse Gamma. The mean of this InvGamma(k, µ) random variable is µ/(k − 1). That means an unbiased estimate for µ is

µ̂_n = (n − 1)/(Y_1 + · · · + Y_n).

What about discrete variables that are inherently unscalable? In [1], the author presented a method for turning a stream of iid Bernoulli random variables (which are 1 with probability p, and 0 with probability 1 − p) into a Gamma(k, p) random variable, where k is a parameter chosen by the user. This could then be used with the known relative error estimate for exponentials to obtain a known relative error estimate for Bernoullis. While the Bernoulli application has the widest use, Poissons do appear in the output of a Monte Carlo approach to high dimensional integration called the Tootsie Pop Algorithm (TPA) [3,4]. Therefore, to use TPA to build (ǫ, δ)-approximation algorithms, it is useful to have a known relative error distribution for Poisson random variables.

The remainder of this paper is organized as follows. Section 2 describes the new estimate and why it works, and also bounds the expected running time. Section 3 then shows how this procedure can be used together with TPA to obtain (ǫ, δ)-approximations for normalizing constants of distributions.

The method

The new estimate is based upon properties of Poisson point processes.

Definition 3. A Poisson point process of rate µ on R is a random subset P ⊂ R such that the number of points falling in any interval is Poisson distributed with mean µ times the length of the interval, and the counts over disjoint intervals are independent.

It is well known that there are (at least) two ways to construct a Poisson point process, which forms the basis of the estimate. The first method for simulating a Poisson point process is to take advantage of the fact that the number of points within a given interval has a Poisson distribution: for a ≤ b, #(P ∩ [a, b]) ∼ Pois(µ(b − a)). Moreover, conditioned on the number of points in the interval, the points themselves are uniformly distributed over the interval (Fact 4); that is, given #(P ∩ [a, b]) = n, the points of P in [a, b] are distributed as n iid draws from Unif([a, b]).

The second method to build a Poisson point process of rate µ is to use the fact that the distances between successive points are iid exponentially distributed with rate µ. Writing P_k for the k-th smallest point of the process in [0, ∞), Fact 3 gives that P_k has a Gamma distribution with shape parameter k and rate parameter µ.

So, this is how the estimate works. First, generate N_1, the number of points of the Poisson point process in [0, 1]. If this is at least k, then we know that P_k ∈ [0, 1]. Otherwise, generate N_2, the number of points in [1, 2]. If N_1 < k and N_1 + N_2 ≥ k, then P_k ∈ [1, 2].
Otherwise, keep going, generating more Poisson random variates until we know that P_k ∈ [i − 1, i] for some integer i. Let A = N_1 + · · · + N_{i−1}. Then we know that A < k points are in [0, i − 1], and A + N_i ≥ k. From Fact 4, the N_i points are uniformly distributed over [i − 1, i]. The (k − A)-th smallest of these points will be P_k. One more well known fact about the order statistics of uniform random variables will be helpful: the j-th smallest of n iid Unif([0, 1]) random variables has a Beta(j, n − j + 1) distribution.

Putting this all together gives the following estimate, called the Gamma Poisson Approximation Scheme, or GPAS for short: draw Poisson counts until P_k can be located as above, and output µ̂ = (k − 1)/P_k, which is unbiased because P_k ∼ Gamma(k, µ). (A code sketch of GPAS is given at the end of this section.)

Lemma 1. The expected number of Poisson random variables drawn by GPAS is bounded above by k/µ + 1.

Proof. The number of Poisson random variables drawn is ⌈P_k⌉ ≤ P_k + 1. Since P_k ∼ Gamma(k, µ), E[P_k] = k/µ, which shows the result.

Note that any fixed time algorithm would need a similar number of samples to obtain such a result.

Fact 7. The Fisher information of µ for X ∼ Pois(µ) is 1/µ.

Therefore, by the Cramér-Rao inequality, the variance of any unbiased estimate µ̂ that uses n draws is at least µ/n, so for k/µ draws the standard deviation will be at least µ/√k. Therefore, to first order and for the same number of samples, the resulting unbiased estimate achieves the minimum variance. Of course, the real benefit of using GPAS is that it provides an exact relative error distribution, thus allowing for precise calculations of the chance of error.

Example 2. What should k be in order to make GPAS a (0.1, 10^{−6})-approximation algorithm? Increasing the value of k in the previous example until we reach the first place where 1 − P((k − 1)/0.9 ≥ T′/µ ≥ (k − 1)/1.1) ≤ 10^{−6} gives k = 2561 as the first place where this occurs. In fact, in the previous example 1 − P(2560/0.9 ≥ T′/µ ≥ 2560/1.1) = 0.0000009970 . . . , and so is slightly smaller than the error bound requested. It is possible to create an algorithm with a failure chance of exactly 10^{−6} by running GPAS either with k = 2561 or with k = 2560, chosen with the appropriate probabilities. This gives an algorithm in which p_k denotes the cumulative distribution function of a Gamma distribution with shape k and rate k − 1.

Applications

So why approximate the mean of a Poisson in the first place? One of the applications is the Tootsie Pop Algorithm (TPA) [3,4]. Given sets A ⊂ B ⊂ R^n, the purpose of TPA is to estimate ν(B)/ν(A) for some measure ν. This is exactly the problem of approximating a high dimensional integral, which arises in such problems as finding the normalizing constant of a posterior distribution in Bayesian applications. The output of TPA (see [3,4]) is exactly a Poisson random variable with mean ln(ν(B)/ν(A)). Typically the situation is that ν(A) is known, and the goal is to approximate the other quantity. Let r = ln(ν(B)/ν(A)). Then if r̂ is an approximation for r, exp(r̂) is an approximation for ν(B)/ν(A), and ν(A) exp(r̂) is an approximation for ν(B).

TPA Approximation Scheme. This algorithm applies with the understanding that the Poisson-drawing step of the Gamma Poisson Approximation Scheme is replaced with T ← TPA; that is, the Poisson with mean µ is replaced by a call to TPA.

Table 1 shows the expected running time for the new algorithm versus the old, which used Chernoff inequalities to bound the tails of the Poisson distribution. The improvements are in the second order, which is why the improvement is lessened as δ shrinks relative to ǫ. Still, for reasonable values of (ǫ, δ), the improvement is very noticeable.
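The original pseudocode listing for GPAS did not survive extraction, so the following is a minimal Python reconstruction from the prose above. The function and parameter names (gpas, sample_poisson) are our own, and the randomized choice between k = 2560 and k = 2561 from Example 2 is omitted for brevity.

import numpy as np

def gpas(sample_poisson, k, rng=None):
    """Gamma Poisson Approximation Scheme (reconstruction from the prose).

    sample_poisson() must return one fresh draw from Pois(mu); the output
    mu_hat then satisfies mu_hat/mu = (k - 1)/G with G ~ Gamma(k, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    a = 0                                # A = N_1 + ... + N_{i-1}
    i = 0
    while True:
        i += 1
        n_i = sample_poisson()           # points of the process in [i-1, i]
        if a + n_i >= k:
            break                        # now P_k lies in [i-1, i]
        a += n_i
    # P_k is the (k - a)-th smallest of n_i uniforms on [i-1, i]; the j-th
    # order statistic of n iid Unif([0,1]) draws is Beta(j, n - j + 1).
    j = k - a
    p_k = (i - 1) + rng.beta(j, n_i - j + 1)
    return (k - 1) / p_k                 # unbiased, since P_k ~ Gamma(k, mu)

# Illustration with a known mu (unknown in real applications):
rng = np.random.default_rng(1)
print(gpas(lambda: rng.poisson(5.0), k=2561, rng=rng))   # close to 5.0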
Example 3. Consider the Ising model [5], where each node of a graph with vertex set V and edge set E is assigned either a 0 or a 1. For a configuration x ∈ {0, 1}^V, let H(x) = #{e = {i, j} ∈ E : x(i) = x(j)}. Then say that X is a draw from the Ising model if P(X = x) = exp(βH(x))/Z(β), where Z(β) = Σ_{y ∈ {0,1}^V} exp(βH(y)) is known as the partition function. The goal is to find the partition function for various values of β. Note that Z(0) = 2^{#V} is known, so finding Z(β)/Z(0) is sufficient to find Z(β).

Consider the Ising model on the 4 × 4 square lattice with 16 nodes, in order to keep the numbers reasonable. Then Z(1) ≈ 3.219 · 10^{11} and ln(Z(1)/Z(0)) ≈ 15.40. The method for using TPA on a Gibbs distribution is found on p. 99 of [4]. Methods for generating samples from the Ising model for use in TPA abound; see for instance [7,6,9,2]. As long as β is not too high, these methods are very fast. Using 100 calls with (ǫ, δ) = (0.2, 0.01) gives an estimate of 5200 ± 70 for the number of calls needed with the new Poisson estimate, while the old method requires 23249, making the new approach over 4 times as fast in this instance for the same error guarantee.
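As a cross-check of Example 2, the smallest k meeting the (0.1, 10^{−6}) requirement can be found numerically. The sketch below assumes, as in the text, that µP_k ∼ Gamma(k, 1), and uses scipy.stats.gamma; the helper name failure_prob is ours.

from scipy.stats import gamma

def failure_prob(k, eps=0.1):
    """P(|mu_hat/mu - 1| > eps) for GPAS, where mu * P_k ~ Gamma(k, 1)."""
    lo = (k - 1) / (1 + eps)     # mu_hat/mu <= 1 + eps  <=>  mu*P_k >= lo
    hi = (k - 1) / (1 - eps)     # mu_hat/mu >= 1 - eps  <=>  mu*P_k <= hi
    return 1.0 - (gamma.cdf(hi, k) - gamma.cdf(lo, k))

k = 2
while failure_prob(k) > 1e-6:
    k += 1
print(k, failure_prob(k))        # per Example 2: 2561 and about 9.97e-7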
Transcriptome analysis of Drosophila suzukii reveals molecular mechanisms conferring pyrethroid and spinosad resistance

Drosophila suzukii lay eggs in soft-skinned, ripening fruits, making this insect a serious threat to berry production. Since its 2008 introduction into North America, growers have used insecticides, such as pyrethroids and spinosads, as the primary approach for D. suzukii management, resulting in development of insecticide resistance in this pest. This study sought to identify the molecular mechanisms conferring insecticide resistance in these populations. We sequenced the transcriptomes of two pyrethroid- and two spinosad-resistant isofemale lines. In both pyrethroid-resistant lines and one spinosad-resistant line, we identified overexpression of metabolic genes that are implicated in resistance in other insect pests. In the other spinosad-resistant line, we observed an overexpression of cuticular genes that have been linked to resistance. Our findings enabled the development of molecular diagnostics that we used to confirm persistence of insecticide resistance in California, U.S.A. To validate these findings, we leveraged D. melanogaster mutants with reduced expression of metabolic or cuticular genes that were found to be upregulated in resistant D. suzukii to demonstrate that these genes are involved in promoting resistance. This study is the first to characterize the molecular mechanisms of insecticide resistance in D. suzukii and provides insights into how current management practices can be optimized.

Overexpression of metabolic genes suggests metabolic resistance in zeta-cypermethrin-resistant Drosophila suzukii

We performed short-read RNA sequencing (RNA-Seq) to identify the molecular mechanisms underlying zeta-cypermethrin resistance in D. suzukii. We sequenced two zeta-cypermethrin-resistant lines (S3 and S4) and two susceptible lines derived from the same population as controls (S7 and S8). Pearson's correlation confirmed that the biological replicates are highly correlated with one another (Suppl. Figure 1).

To determine whether gene expression changes underlie insecticide resistance, we identified differentially expressed genes (DEGs) between the resistant and susceptible lines. We observed a total of 2,120 downregulated genes, 1,708 upregulated genes, and 8,723 non-differentially expressed genes between line S3 and the susceptible lines (Fig. 2a; Suppl. Table 4). For line S4, we identified 3,686 downregulated genes, 4,240 upregulated genes, and 6,323 non-differentially expressed genes (Fig. 2b; Suppl. Table 5). Amongst the upregulated genes are those encoding classes of metabolic enzymes known to confer insecticide resistance in other insect species. For instance, we observed that at least one of the two resistant lines exhibited a significant increase in the expression of cytochrome P450 (cyp) 6a20 and cyp4d14, the carboxylesterase cricklet, heat shock proteins (hsp) 60B and hsp70Aa, and glutathione-S-transferase (gst) E3 (Fig. 2c; Suppl. Table 6). Our results suggest that zeta-cypermethrin resistance in the original field-collected population may be attributed to metabolic resistance. Furthermore, many of the genes downregulated in the resistant lines are genes related to cellular signaling, such as nicotinic acetylcholine receptors, acetylcholine transporters, and voltage-gated sodium channel (VGSC) subunits; notably, the VGSC paralytic (para), the gene encoding the target protein of zeta-cypermethrin, is among them (Fig. 2a-b, insert).
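As a concrete illustration of how such DEG counts are typically tallied downstream of DESeq2, here is a minimal pandas sketch. The column names (log2FoldChange, padj) follow DESeq2's standard output, and the thresholds are the ones stated in the Methods (Benjamini-Hochberg adjusted p < 0.05), but the file name and the snippet itself are illustrative, not the authors' actual script.

```python
import pandas as pd

# DESeq2 results exported to CSV: one row per gene with the standard
# log2FoldChange and padj (Benjamini-Hochberg adjusted p-value) columns;
# the file name is hypothetical
res = pd.read_csv("S3_vs_susceptible_deseq2_results.csv", index_col=0)

sig = res["padj"] < 0.05
up = res[sig & (res["log2FoldChange"] > 0)]      # upregulated in resistant
down = res[sig & (res["log2FoldChange"] < 0)]    # downregulated in resistant
ns = res[~sig]                                   # not differentially expressed

print(len(down), len(up), len(ns))  # e.g. 2120, 1708, 8723 for line S3
```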
Functional enrichment analyses were performed to identify the pathways in which these DEGs are involved (Fig. 2d-e; Suppl. Tables 7-8). Downregulated genes in line S3 were enriched in several pathways involved in neuronal signaling, while upregulated genes were enriched in pathways involved in RNA processing, protein expression, and metabolism (Fig. 2d; Suppl. Table 7). For line S4, downregulated genes were enriched in pathways involving neuronal signaling and metabolism, while the upregulated genes are enriched in pathways involved in protein degradation and the cell cycle (Fig. 2e; Suppl. Table 8).

Next, we performed Weighted Gene Co-expression Network Analysis (WGCNA), an unsupervised analysis pipeline that clusters genes into modules based on their expression profile across samples 31, to identify potential novel gene clusters highly correlated with resistance (Suppl. Figures 2-3; Suppl. Tables 9-10). For line S3, genes were clustered into 35 different colored modules, with turquoise being most correlated with resistance (R² = 0.89) (Suppl. Figure 2a; Suppl. Table 9). Of the 2908 genes in turquoise, we identified several metabolic genes within the classes of cyps, hsps, GSTs, and esterases (Suppl. Figure 2b). Functional analysis revealed that the genes in turquoise are enriched in metabolic, RNA processing, and protein expression pathways (Suppl. Figure 2c; Suppl. Table 10). For line S4, genes were clustered into 27 modules, with turquoise being most correlated with resistance (R² = 0.92) (Suppl. Figure 3a). Of the 4449 genes within turquoise, several are metabolic genes known to confer insecticide resistance in other species (Suppl. Figure 3b; Suppl. Table 11). Genes in turquoise are enriched in RNA processing, cell cycle, cell differentiation, and protein and gene expression pathways (Suppl. Figure 3c; Suppl. Table 12). Taken together, our results suggest that an upregulation of metabolic gene expression most likely confers the zeta-cypermethrin resistance observed in D. suzukii in California.

Overexpression of cuticular and metabolic genes suggests penetration and metabolic resistance in spinosad-resistant Drosophila suzukii

We sequenced the transcriptomes of two spinosad-resistant lines (C3, C4) and two susceptible lines (C2, C5) derived from the same population to determine the molecular mechanisms conferring spinosad resistance. Pearson's correlation coefficients revealed a strong correlation between biological replicates, confirming consistency between the replicates (Suppl. Figure 4).

Next, we assessed gene expression differences between each resistant line and both susceptible lines. For line C3, we observed 852 DEGs, with 492 downregulated genes, 360 upregulated genes, and 11756 non-differentially expressed genes (Fig. 3a; Suppl. Table 13). In line C4, there were 4233 DEGs, with 2132 upregulated genes, 2101 downregulated genes, and 8166 non-differentially expressed genes (Fig. 3b; Suppl. Table 14). Amongst the upregulated genes in line C3, we identified several expressed within the insect integument, including tweedle (twdl) F, twdlG, and twdlV as well as cpr35B and cpr66D 32, while several genes upregulated in line C4 are metabolic genes, including cyp4d8, cyp6d4, hsp68, and gstS1 (Fig. 3c; Suppl. Table 15). This suggests that penetration resistance may confer spinosad resistance in line C3, while metabolic resistance may confer resistance in line C4. This also suggests that alleles resulting in either metabolic resistance or penetration resistance are present in the same field-collected spinosad-resistant D. suzukii population.

Furthermore, we performed functional enrichment analyses and observed that genes downregulated in line C3 are enriched in metabolic pathways, including the metabolism of xenobiotics by the cyp pathway, while upregulated genes are enriched in pathways related to the insect cuticle and RNA processing (Fig. 3d; Suppl. Table 16). In line C4, on the other hand, genes downregulated in the resistant lines are enriched in pathways pertaining to cell cycle, DNA replication and repair, and cell division, while upregulated genes are enriched in metabolism and neuronal signaling pathways (Fig. 3e; Suppl. Table 17).

WGCNA was performed to potentially identify novel gene clusters strongly correlated with spinosad resistance (Suppl. Figures 5-6). In line C3, genes clustered into 47 different modules, with dark turquoise being most correlated with resistance (R² = 0.83) (Suppl. Figure 5a; Suppl. Table 18). Only 66 of the 79 genes within dark turquoise were functionally annotated and have a D. melanogaster homolog (Suppl. Figure 5b). Since this module consists of so few genes, no genes were enriched in any pathways; however, a few genes in dark turquoise are involved in chromatin organization, such as histone H2A 33, modifier of mdg4 34, and histone methyl transferase 4-20 35, as well as genes involved in hypoxia response (ecdysone induced protein 93F 36 and CG2918 37) and negative regulation of cell growth (La-related protein 4B 38 and Forkhead box subunit O 39). On the other hand, genes in line C4 were clustered into 42 colored modules, with green most correlated with resistance (R² = 0.96) (Suppl. Figure 6a; Suppl. Table 19). There are 371 genes in green, and of those, only 3 genes, cyp6d4, cyp305A1, and GstO1, belong to metabolic enzymes involved in insecticide detoxification 24 (Suppl. Figure 6b). Green module genes are enriched in pathways involving neuronal organization and signaling as well as metabolism (Suppl. Figure 6c; Suppl. Table 20). Therefore, genes most correlated with resistance in line C3 are genes that have not been previously implicated in conferring insecticide resistance in other insect species, whereas in line C4, 3 of the genes most correlated with resistance are within classes of metabolic enzymes known to promote insecticide resistance.

New field-collected Drosophila suzukii populations in 2022 show evidence of increased metabolic resistance as compared to flies collected in 2019

With the identification of genes of interest that may confer insecticide resistance in isofemale lines of D. suzukii, we were interested in determining whether any of these genes are also differentially expressed in resistant D. suzukii recently collected from similar locations in California. We assessed the resistance status of the F1 of D. suzukii collected in 2022 from two strawberry fields using discriminating dose bioassays. As we showed previously, the mortality rates observed in the Wolfskill and susceptible isofemale lines were 100% in discriminating dose bioassays (Fig. 1a,b). In comparison, the mortality of the F1 of both 2022 populations was significantly lower than 100% (Fig. 4a), suggesting that these populations are resistant to both zeta-cypermethrin and spinosad (zeta-cypermethrin: Population #1: t = 23.88, df = 9, p < 0.0001; Population #2: t = 20.82, df = 9, p < 0.0001) (spinosad: Population #1: t = 6.736, df = 9, p < 0.0001; Population #2: t = 6.708, df = 9, p < 0.0001).

Next, leveraging the results of our RNA-Seq experiment, we designed quantitative PCR (qPCR) primers to amplify five genes that were upregulated in at least one resistant D. suzukii isofemale line (Fig. 4b-f). Specifically, we detected cyp6a8, cyp4d14, and cyp6w1 to evaluate metabolic resistance (Fig. 4b-d) and twdlG and twdlF to evaluate penetration resistance (Fig. 4e-f). We observed that both populations show increased expression of cyp6a8 (Fig. 4b; Suppl. Table 21), cyp4d14 (Fig. 4c; Suppl. Table 21), and cyp6w1 (Fig. 4d; Suppl. Table 21) as compared to the susceptible controls. Moreover, the expression of all three cyp genes was significantly higher in Population #1 as compared to the resistant isofemale lines developed from 2019 field-collected populations (Fig. 4b-d; Suppl. Table 21). In fact, the expression of cyp6a8 was 17.3-fold higher in Population #1 as compared to the isofemale resistant line from 2019 (Fig. 4b). Additionally, cyp4d14 was 1.8-fold higher (Fig. 4c), while cyp6w1 was 15.2-fold higher in Population #1 than in the 2019 resistant line (Fig. 4d).

We next assessed whether either of these lines exhibits penetration resistance by detecting the cuticular genes twdlG (Fig. 4e) and twdlF (Fig. 4f). We observed a slightly lower expression of twdlG in Population #1 (Fig. 4e; Suppl. Table 21) and no significant difference in twdlF in either of the 2022 populations (Fig. 4f; Suppl. Table 21). Finally, we also detected expression of ecdysone receptor (ecR) as a negative control (Fig. 4g), because it was not previously observed to be differentially expressed in any of the isofemale resistant lines (Suppl. Tables 4-5, 13-14). There was no significant difference in ecR in either of the 2022 populations compared to the susceptible controls (Fig. 4g; Suppl. Table 21). Together, these results suggest that metabolic resistance, as opposed to penetration resistance, confers insecticide resistance in the 2022 field-collected populations. Additionally, this shows that these qPCR-based assays are feasible and represent a more efficient approach for monitoring potential insecticide resistance in field-collected samples than performing bioassays to assess resistance.

To further confirm that the expression of these genes can reflect the susceptible vs resistant status of D. suzukii, we assayed the expression of cyp6a8, cyp4d14, cyp6w1, twdlG, and twdlF in two additional D. suzukii populations that were collected in Georgia, USA in 2023 and were found to be susceptible to zeta-cypermethrin and spinosad (Suppl. Figure 7f), herein referred to as Populations #3 and #4. As expected, we observed that our pyrethroid-resistant line (S4) exhibited increased metabolic gene expression as compared to the susceptible populations.

Knockdown of cyp4d14, cyp4d8, and cpr66D increases susceptibility to insecticides

To determine whether the expression of metabolic and cuticular genes has a direct effect on insecticide resistance, we leveraged the genetic tools available in the closely related species D. melanogaster to manipulate the expression of these target genes and evaluate insecticide susceptibility using discriminating dose bioassays. We selected D. melanogaster mutant fly lines for three genes: cyp4d14, cyp4d8, and cpr66D. We chose a metabolic gene upregulated in both zeta-cypermethrin-resistant lines (S3 and S4), a gene associated with metabolic resistance in the spinosad-resistant line C4, and a gene implicated in penetration resistance that is upregulated in the spinosad-resistant line C3. We observed that reduced expression of cyp4d14 (Fig. 5a; Suppl. Table 22) increases susceptibility to zeta-cypermethrin (Fig. 5b; Suppl. Table 22) and spinosad (Fig. 5c; Suppl. Table 22). However, we observed that reduced expression of cyp4d8 (Fig. 5d; Suppl. Table 22) decreases susceptibility to zeta-cypermethrin (Fig. 5e; Suppl. Table 22) but increases susceptibility to spinosad (Fig. 5f; Suppl. Table 22). This is consistent with our differential expression analysis showing that cyp4d8 is only upregulated in zeta-cypermethrin-resistant flies (Figs. 2-3). Finally, reduced expression of cpr66D (Fig. 5g; Suppl. Table 22) did not affect the susceptibility of flies to zeta-cypermethrin (Fig. 5h; Suppl. Table 22), but it increased susceptibility to spinosad (Fig. 5i; Suppl. Table 22). This result agrees with our sequencing analysis (Figs. 2-3), which showed that high levels of cpr66D were present in flies resistant to spinosad but not in flies resistant to zeta-cypermethrin.

Sequence analysis reveals that mutations in para or nAChRα7 likely did not contribute to insecticide resistance

To investigate whether mutations within the target gene of each insecticide confer resistance, we assessed changes in allelic frequency between the resistant and susceptible populations of D. suzukii (Suppl. Figure 8). Within the gene that encodes the target of zeta-cypermethrin, the VGSC para, we identified a significant difference in allelic frequency in line S3 at nucleotide position 3,658,093, which is located within intron 30 in the gene body, and one difference in allelic frequency at nucleotide position 3,655,811, which is located within intron 29 in the gene body, in line S4 (Suppl. Figure 8b-c; 3,658,093: S3 p = 0.005171, S4 p = 0.6199; 3,655,811: S3 p = 0.4725, S4 p = 0.001131). Although we did not identify a mutation within the protein-coding region of para, we did observe that para is downregulated in zeta-cypermethrin-resistant D. suzukii (Fig. 2a-b). Therefore, it is possible that reduced expression of the target protein, rather than a site-specific mutation, contributes to resistance in these flies. This mechanism has not been previously evaluated in the context of pyrethroid resistance.

We then analyzed the spinosad target protein nAChRα7, the D. suzukii homolog of D. melanogaster nAChRα6 inferred by sequence similarity. We identified three nucleotide positions within the 5' untranslated region (UTR) that exhibit a significant change in allelic frequency in line C3 and no changes in line C4 (Suppl. Figure 8d-g; 6,579,905: C3 p = 0.0101, C4 p = 0.06667; 6,580,106: C3 p = 0.01515, C4 p = 0.2424; 6,580,194: C3 p = 0.04762, C4 p = 1). Given that we did not identify a mutation within the protein-coding region of the gene or any differential expression of nAChRα7 (Suppl. Tables 13-14) in the resistant lines, we suspect that target-site resistance is not an underlying mechanism for spinosad resistance in these lines. However, we cannot rule out that the allelic frequency changes we observed in the 5' UTR may affect nAChRα7 protein levels, given that the 5' UTR is important for translation initiation (reviewed in 40).

Discussion

Insecticide resistance in the invasive agricultural pest D. suzukii has been detected in California, U.S.A. over the past several years, but the molecular mechanisms driving these changes have yet to be identified [20][21][22]. We developed isofemale lines from field-collected populations of D. suzukii resistant to either zeta-cypermethrin or spinosad to identify the molecular mechanisms underlying insecticide resistance. We sequenced the transcriptomes of two resistant lines per population and found evidence of metabolic resistance in zeta-cypermethrin-resistant D. suzukii (Fig. 6). Specifically, we observed an upregulation of genes encoding metabolic detoxification enzymes in zeta-cypermethrin-resistant D. suzukii. Interestingly, we also observed decreased expression of the target gene para. This does not constitute conventional target-site resistance, as resistant lines do not have a mutation in the target gene; rather, an overall decrease in the target gene could render the insecticide less effective. This mechanism can be tested in a future mechanistic study. In D. suzukii resistant to spinosad, we identified evidence of penetration resistance in one line (C3), reflected by the upregulation of several genes expressed in the insect cuticle such as tweedle genes and cuticle proteins (cpr). In the other resistant line (C4), however, we observed evidence of metabolic resistance reflected by an upregulation of metabolic genes. Our results for the spinosad-resistant lines reveal the possibility for multiple mechanisms of insecticide resistance to be present in a single population.

We concluded from our differential gene expression (DEG) analysis and weighted gene co-expression network analysis (WGCNA) that metabolic resistance and penetration resistance are contributing to the pyrethroid and spinosad resistance observed in D. suzukii in California. However, it is important to note that we cannot rule out that other mechanisms are also contributing to the observed resistance, given that we detected many other differentially expressed genes and our WGCNA focused on the module with the highest correlation to resistance. Our DEG analysis uncovered other potential genes and pathways that may be important in D. suzukii resistance development, and future experiments will be necessary to explore the role of these differentially expressed genes. For example, we observed that genes involved in RNA processing and splicing are enriched among differentially expressed genes in zeta-cypermethrin- and spinosad-resistant lines (Fig. 2d; Fig. 3d). Splicing is a biological process that produces proteins with diverse structures and functions encoded by a single gene (reviewed in 41). Therefore, it is possible that resistant isofemale lines may undergo differential splicing, resulting in widespread differences in gene and isoform expression as compared to susceptible flies. It has been shown in other insect species that differential expression of various isoforms of nAChRα6 confers insecticide resistance 42,43, but at present it is not clear whether changes in splicing are limited to specific genes or observed more broadly in the transcriptome. Results from this study set the stage for future studies into other potential mechanisms of insecticide resistance, including the possibility of alternative splicing as a driver for resistance.

We leveraged our findings to design molecular diagnostics, specifically quantitative PCR (qPCR) assays, that could identify insecticide resistance in the field (Fig. 4). Thus far, insecticide resistance in D. suzukii has only been detected in California [20][21][22]. Therefore, our diagnostic tests can be used to monitor insecticide resistance development in California and to detect early development of resistance in locations where resistance has yet to be reported. This would allow growers to adjust spray programs and delay and/or prevent resistance development in the fly population. Utilizing a few genes that were differentially expressed between the resistant and susceptible lines, such as cytochrome P450 (cyp) and tweedle genes, we designed qPCR assays to monitor resistance development. A benefit of using molecular diagnostics to detect resistance, as opposed to insecticide bioassays, is that they require few individuals (as little as 5 flies) as input, whereas bioassays require a much larger number of flies. A similar molecular diagnostic detecting cyp expression to identify metabolic resistance has been previously developed and validated in mosquitoes 44. Moreover, beyond just validating our diagnostic, we observed significantly higher levels of cyp expression in a 2022 field-collected population (Fig. 4), suggesting that resistance has not only persisted but increased since the 2019 collection. This observation is consistent with the general trend of increased spinosad resistance in D. suzukii from 2018 to 2020 20. Additionally, we observed that reduced expression of a single cyp or cpr gene that we found to be differentially expressed in resistant D. suzukii renders D. melanogaster more susceptible to insecticides (Fig. 5), supporting the hypothesis that increased expression of metabolic and cuticular genes promotes resistance. It is important to note that this experiment leveraged the genetic mutants available in the closely related species D. melanogaster, as opposed to creating transgenic D. suzukii, which is much more challenging. Thus, it is possible that knocking down the expression of other genes we identified as differentially expressed in the resistant lines may not have the desired effect, if any, on susceptibility. This is because the proteins involved in insecticide resistance may differ, even between species of the same genus. For example, the substrate specificity of the various cyps may vary from species to species 45. We also observed that the effect of gene knockdown is more evident at lower concentrations of insecticide. Furthermore, we cannot rule out that altering the expression of one cyp gene affects the expression of other cyp genes. This may explain why the cyp4d8 mutant exhibits increased susceptibility to spinosad but decreased susceptibility to zeta-cypermethrin (Fig. 5e,f). Finally, it is possible that knocking down several metabolic genes simultaneously will produce a stronger phenotype, given that several metabolic genes are differentially expressed in our dataset. This may be because multiple cyps may target the same substrate 45.
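For readers who want to see what such a qPCR-based diagnostic computes, below is a minimal sketch of relative expression via the standard 2^-ΔΔCt (Livak) method. The extracted text cuts off before naming the exact analysis method, so the choice of ΔΔCt here, the reference gene, and all Ct values are illustrative assumptions.

```python
import numpy as np


def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs a susceptible control sample,
    normalized to a reference gene (standard 2^-ddCt method; this is an
    assumption, as the source text does not name its analysis method)."""
    d_ct = np.mean(ct_target) - np.mean(ct_ref)                  # test sample
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)   # control
    return 2.0 ** -(d_ct - d_ct_ctrl)


# illustrative Ct values (three technical replicates each)
fold = relative_expression(
    ct_target=[22.1, 22.3, 22.0],       # e.g. cyp6a8, field population
    ct_ref=[18.0, 18.1, 17.9],          # housekeeping reference gene
    ct_target_ctrl=[26.5, 26.4, 26.6],  # cyp6a8, susceptible line
    ct_ref_ctrl=[18.2, 18.0, 18.1],
)
print(f"{fold:.1f}-fold higher than the susceptible control")
```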
Our study also provides insights into the possibility of cross-resistance. Currently, in order to delay the development of resistance, there are restrictions on how many applications of a single type of insecticide are permitted at a site 46. As a result, organic berry growers alternate usage between other organically-approved insecticides and spinosads 12,47,48. Therefore, it is important to understand whether the mechanisms conferring resistance to one insecticide enable the insect to be resistant to other classes of insecticides as well. For instance, because the spinosad-resistant line C3 has increased expression of cuticular genes (Fig. 3c), suggesting a less penetrable cuticle, it is possible that the integument is less penetrable to other insecticides as well, but that remains to be tested. Furthermore, zeta-cypermethrin-resistant D. suzukii and the spinosad-resistant line C4 express high levels of many metabolic enzymes implicated in metabolic resistance (Figs. 2c and 3c). There are studies in house flies and onion thrips that attribute spinosad resistance to an upregulation in cyp expression 49,50. Notably, our 2022 collections of D. suzukii exhibit resistance to both zeta-cypermethrin and spinosad and appear to be resistant through an upregulation of metabolic genes (Fig. 4), and knockdown of cyp4d14 increases both zeta-cypermethrin and spinosad susceptibility (Fig. 5b,c). Thus, our results suggest a high likelihood of cross-resistance. Additional experiments are needed to verify this prediction of cross-resistance.

Finally, we anticipate the ability to leverage our results to optimize current D. suzukii management practices. For instance, we identified an upregulation of metabolic enzymes in resistant D. suzukii (Figs. 2 and 3c). Presumably, this upregulation will increase detoxification, rendering the insecticide less effective. To combat this effect, synergists can be applied in conjunction with insecticides. Synergists are metabolic enzyme inhibitors, so when used in conjunction with insecticides, synergists can increase insecticide efficacy 51. An alternative approach would be for growers to adopt an integrated pest management (IPM) strategy to control D. suzukii. IPM promotes increased control of a pest by adopting a combination of different strategies including genetic, biological, cultural, and chemical control 52. It is possible that alternating between the use of insecticides and the use of non-chemical methods of control will alleviate the pressure driving insecticide resistance such that it is selected out of the population. Based on our data, we speculate that there is a high fitness cost associated with insecticide resistance. For example, in zeta-cypermethrin-resistant flies, we observed differential expression of many genes involved in neuronal system development and signaling (Fig. 2d), suggesting that neuronal processing is affected, potentially compromising a wide range of behaviors such as mating and feeding 53,54. In the case of spinosad-resistant flies, we observed that many of the downregulated genes are enriched in metabolic pathways, suggesting that spinosad-resistant flies have energy usage deficiencies (Fig. 3d-e). It is possible that the fitness costs associated with resistance are the reason that it took eleven years for insecticide resistance to develop after the invasion of D. suzukii into California despite intense spray programs 46. Further experiments will need to be conducted to identify the costs associated with resistance in D. suzukii. Moreover, it is possible that in the absence of the selective pressure caused by spraying insecticides, in combination with the fast generation time of D. suzukii and their short lifespans 46, alternating between insecticide spraying and other forms of pest control can be more effective at controlling D. suzukii infestations in the field. In fact, a previous study 20 has demonstrated that resistance increases throughout the growing season, likely due to increased exposure to insecticides from multiple applications. Therefore, it is possible that a short-term halt in spraying of insecticides of a specific chemistry for a few generations would increase susceptibility, given that the selective pressure is removed. Experiments are currently in progress to assess how long resistance persists in a population after spraying has ceased. Further, to combat penetration resistance, insecticides can be administered in bait traps as opposed to spraying, such that the insecticide enters the flies through the digestive system rather than through the insect cuticle.

In conclusion, our study characterizes the mechanisms of insecticide resistance in D. suzukii collected in California, U.S.A. We provide evidence that metabolic and penetration resistance underlie insecticide resistance in the populations we sampled from. Additionally, we developed and validated molecular assays that can monitor resistance in field populations of D. suzukii. Finally, our study provides insights into the possibility of cross-resistance and information that can be used to improve D. suzukii management programs.

Field Drosophila suzukii populations and development of isofemale lines

To assess resistance to zeta-cypermethrin (Mustang® Maxx 0.8 EC, FMC Corporation, Philadelphia, PA), isofemale lines were established from D. suzukii adults reared from fruits collected in October 2019 from a strawberry field in Monterey County, CA, U.S.A. Sixty fruits were collected and transported to the laboratory of Dr. Frank Zalom at the University of California, Davis. Twenty of the fruits were transferred to a plastic container containing a layer of cotton topped with sand as a substrate for pupation, for a total of three containers. The containers were maintained at 23 ± 1 °C, 55-65% relative humidity (RH), and a 14-h light:10-h dark photoperiod in a walk-in growth chamber (Percival Scientific Inc., Perry, IA) and checked daily until fly emergence. Emerged D. suzukii flies were separated from non-target species and reared in bottles containing Bloomington standard Drosophila cornmeal diet (https://bdsc.indiana.edu/information/recipes/bloomfood.html).

To assess resistance to spinosad (Entrust® SC 22.5% spinosad, a mixture of spinosyns A & D, Corteva Agriscience, Indianapolis, IN), isofemale lines were established from D. suzukii adults collected from a caneberry field in Santa Cruz County, CA, U.S.A. in November 2019. Adult flies were live-captured using McPhail traps (Great Lakes IPM, Inc., Vestaburg, MI) baited with approximately 20 ml of a yeast (7 g)-sugar (113 g)-water (355 ml) solution. Traps were collected the next day and returned to the laboratory. Flies were anesthetized using CO2 to facilitate the removal of any non-target species (approximately twenty females and twenty males per bottle) and transferred into diet bottles (described above).

Field-collected D. suzukii were assessed for resistance as described below in the "Discriminating dose bioassays" section. Each isofemale line was established from a single wild-caught, gravid, non-insecticide-treated female from a resistant population 55. Crossing of siblings was repeated for eight generations for each isofemale line. A total of eight isofemale lines were established for each site, for a total of sixteen lines. Bioassays were performed once isofemale lines were established to identify resistant and susceptible isofemale lines.

Field-collected populations (referred to as Populations #1 and #2) used for quantitative PCR (qPCR) assays were reared from fruits collected from two open strawberry fields in Santa Cruz County, California in September 2022. A hundred ripe fruits were collected from each field, placed in plastic containers, and transported to the laboratory. Fruits were transferred to new plastic containers containing a layer of cotton topped with sand. For each site, five containers of twenty fruits were prepared and placed at 23 ± 1 °C, 55-65% RH, and a 14-h light:10-h dark photoperiod and checked daily until fly emergence. Emerged flies were aspirated into diet bottles. Twenty female and twenty male D. suzukii adults were then moved to new diet bottles for propagation to increase the total available flies, and the progeny (F1) from each site were used in bioassays. The susceptible field-collected line, Population #3, was collected from blueberries in Alma, GA, U.S.A. in June 2023, while Population #4 was also collected in Georgia in 2023.

RNA extraction, library preparation, and high throughput sequencing

Samples were homogenized in 300 μL TRI reagent (Sigma, St. Louis, MO). 60 μL of 100% chloroform (Sigma) was added and incubated at room temperature for 10 min. The upper aqueous layer was recovered after spinning down and transferred to a new microcentrifuge tube. RNA was precipitated with an equal volume of 100% isopropanol at −20 °C overnight. After spinning down, the RNA pellet was washed with 70% ethanol once and allowed to air dry. The pellet was then resuspended in 20 μL 1X Turbo DNA-free kit buffer (Thermo Fisher Scientific, Waltham, MA) and treated with Turbo DNase per the manufacturer's instructions. RNA quality was assessed with both the Agilent 2100 Bioanalyzer system (Agilent Technologies, Santa Clara, CA) and the Qubit RNA IQ kit (Invitrogen, Waltham, MA) on the Qubit 4 Fluorometer (Invitrogen). RNA purity was measured with the Nanodrop 1000 (Thermo Fisher Scientific), and RNA quantity was measured with the Qubit RNA HS (high sensitivity) assay kit (Invitrogen) on the Qubit 4 Fluorometer.

Illumina short-read sequencing libraries were prepared with 1 μg of high-quality RNA and the TruSeq Stranded mRNA Library Prep Kit (Illumina, San Diego, CA) according to the manufacturer's protocol. A total of twenty-four libraries were prepared: three biological replicates each of two zeta-cypermethrin-resistant lines, two zeta-cypermethrin-susceptible lines, two spinosad-resistant lines, and two spinosad-susceptible lines. Library insert size and quality were measured with the Agilent 2100 Bioanalyzer System. Library concentration was measured with the Qubit 4 Fluorometer. All libraries generated from zeta-cypermethrin-resistant and susceptible D. suzukii lines were pooled together, and all libraries generated from spinosad-resistant and susceptible D. suzukii lines were pooled together, such that there were twelve libraries per pool. Pooled samples were sent to Novogene (Sacramento, CA) for sequencing on the HiSeq X Ten platform (Illumina) using PE150.

Differential gene expression analysis

Differential gene expression analysis was performed using sequencing reads derived from Illumina short-read sequencing. First, rRNA reads were removed using SortMeRNA v2.1 64. Adapters (ILLUMINACLIP parameters 2:30:10) and low-quality ends (LEADING:10, TRAILING:10, MINLEN:36) were trimmed using Trimmomatic v0.35 65. Cleaned reads were aligned to the NCBI Drosophila suzukii Annotation Release 102, based on the LBDM_Dsuz_2.1.pri assembly (accession no. GCF_013340165.1) 66, using STAR v2.7.9a 67. Count data from STAR (--quantMode GeneCounts) served as input to the DESeq2 package 68 in R to perform differential expression analysis on each resistant line vs both susceptible samples. Each resistant line was compared to the susceptible samples separately, as each line might exhibit resistance due to different mechanisms. Genes with fold change differences between resistant vs susceptible populations with a Benjamini-Hochberg adjusted p-value < 0.05 were considered differentially expressed. Expression levels of genes were also measured as fragments per kilobase of exon per million mapped (FPKM) values calculated with Stringtie v2.0.4 69. The consistency between biological replicates was calculated with Pearson's correlation coefficient, which was determined with the 'stats' package in R version 4.2.1. Expression differences of key genes between the resistant and susceptible populations were calculated with two-way ANOVA followed by the two-stage linear step-up procedure of Benjamini, Krieger, and Yekutieli on GraphPad Prism.
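To illustrate the replicate-consistency check described above, here is a minimal sketch computing pairwise Pearson correlations between replicate FPKM profiles. The file name and layout (genes as rows, samples as columns, sample names like S3_rep1) are assumptions for illustration, since the authors performed this step in R.

```python
import pandas as pd

# FPKM matrix: rows = genes, columns = samples (e.g. S3_rep1 ... S8_rep3);
# the file name and column naming are hypothetical
fpkm = pd.read_csv("fpkm_matrix.csv", index_col=0)

# pairwise Pearson correlation between all sample columns
corr = fpkm.corr(method="pearson")
print(corr.round(3))

# biological replicates of the same line should show r close to 1
reps = [c for c in fpkm.columns if c.startswith("S3_")]
print(corr.loc[reps, reps])
```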
Weighted Gene Co-expression Network Analysis

Gene expression (in FPKM) served as input for Weighted Gene Co-expression Network Analysis (WGCNA). Genes with an expression value of zero for more than six samples were excluded from analysis. To explore the modules most correlated with insecticide resistance, a correlation analysis using resistance status was performed with the WGCNA package (Version 1.72.1) 31 in R. Modules with a p-value < 0.05 were considered significant. Functional enrichment analysis (described below) was performed on the module with the highest correlation with resistance.

Functional enrichment analysis

Genes were functionally annotated using BLAST+ (Version 2.12.0) against the NCBI Drosophila melanogaster Annotation r6.32, based on the Release 6 plus ISO1 mitochondrial genome assembly (accession no. GCA_000001215.4) 70. Gene Ontology (GO) enrichment of genes was performed using ShinyGO 0.76.3 71. GO terms and pathways were considered enriched if the false discovery rate (FDR) < 0.05.

Variant calling

To identify allelic changes between the resistant and susceptible populations in the target genes paralytic and nicotinic acetylcholine receptor subunit alpha 7, variants were called using Freebayes v1.3.5 72. Differences in allelic counts between the resistant and susceptible groups were compared using Fisher's Exact Test in R v4.2.1.

Quantitative polymerase chain reaction (qPCR) for gene expression analysis

Total RNA extraction from F3 of the 2022 collection (see "Field Drosophila suzukii populations and development of isofemale lines") was performed as described above (see "RNA extraction, library preparation, and high throughput sequencing"). cDNA synthesis was performed with the SuperScript IV Reverse Transcriptase kit (ThermoFisher Scientific) according to the manufacturer's instructions, using 4 μg of RNA as input. cDNA was then diluted ten-fold with nuclease-free water. qPCR was performed using SsoAdvanced SYBR green supermix (Bio-Rad, Hercules, CA) in a CFX384 (Bio-Rad). Primer sequences are listed in Suppl. Table 23. Cycling conditions were 95 °C for 30 s followed by forty cycles of 95 °C for 5 s and an annealing/extension phase at 60 °C for 30 s. The reaction was concluded with a melt curve analysis from 65 °C to 95 °C in 0.5 °C increments at 5 s per step. Three technical replicates were performed per biological replicate for a total of five biological replicates. Isofemale lines served as the susceptible and resistant controls. Resistant control lines were selected based on whether the line overexpressed the gene of interest in the differential gene expression analysis. Data were analyzed using the

Figure 1. Identification of insecticide-resistant isofemale D. suzukii lines. (a, b) Bioassays to identify isofemale lines resistant to (a) zeta-cypermethrin (Mustang® Maxx) or (b) spinosad (Entrust® SC). Eight isofemale lines (indicated as S# and C#), developed from two separate field-resistant populations collected in California, USA, were tested. Two isofemale lines developed from a population collected from an untreated orchard in California, USA (Wolfskill, W#) served as the susceptible control (white). Each point represents a biological replicate of 5 males and 5 females (n = 8), and error bars indicate ± SEM. Resistant lines used for subsequent experiments are indicated in maroon and blue, while the susceptible line is in black. Asterisks denote significant p-values as determined by one-way ANOVA followed by Tukey's multiple comparison test: * p < 0.05, *** p < 0.001, and **** p < 0.0001. Non-significant comparisons are omitted. (c) Dose-response relationship between zeta-cypermethrin-resistant isofemale lines (S3: maroon circle and solid line; S4: blue triangle and dashed line) vs a susceptible sibling line (S8: black cross and dotted line) (n = 8 biological replicates of 5 males and 5 females). The lethal concentration required to kill 50% of the population (LC50) is indicated by the yellow line. Each point represents a biological replicate. (d) Dose-response relationship between spinosad-resistant isofemale lines (C3: maroon circle and solid line; C4: blue triangle and dashed line) vs a susceptible sibling line (C5: black cross and dotted line) (n = 8). (e) LC50 values of D. suzukii isofemale lines for zeta-cypermethrin and spinosad.

Figure 2. Zeta-cypermethrin-resistant lines exhibit increased expression of genes involved in metabolic resistance. (a) Volcano plot of genes displaying fold change gene expression differences between the resistant S3 line vs 2 susceptible lines (S7 and S8). Genes upregulated in the resistant populations (green) are to the right of the dotted line, while downregulated genes (pink) lie to the left of the dotted line. Genes that exhibit no significant difference in expression between the two populations are in grey. Highly significant differences are higher up on the y-axis (where p_adj is the Benjamini-Hochberg adjusted p-value). Labeled points signify genes satisfying at least one of the following criteria: (1) have the largest fold change difference between the two groups, (2) have the lowest p_adj, and (3) are genes known to be involved in insecticide resistance. Labels contain the D. suzukii gene symbol (LOC#########) and the corresponding D. melanogaster gene symbol homolog. Genes known to be involved in conferring resistance are labeled in black, while genes that are not known to be involved in resistance are in grey. The black box denotes the region containing para, zoomed in in the insert. (b) Volcano plot of genes displaying fold change gene expression differences between the resistant S4 line vs 2 susceptible lines (S7 and S8). (c) Relative expression (FPKM) of cytochrome P450 genes (Cyp), heat shock proteins (Hsp), the carboxylesterase Cricklet (Clt), and glutathione-S-transferase E3 (GstE3) in the susceptible (S7 and S8: black) vs resistant (S3: maroon; S4: blue) groups, extracted from the RNA sequencing data. Each point denotes a biological replicate (n = 3 replicates of 8-10 females per line). Asterisks denote significant p-values as determined by 2-way ANOVA: ** p < 0.01, *** p < 0.001, and **** p < 0.0001. (d, e) Top 5 most significant enrichment pathways within the Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) Biological Processes (bio proc) categories for genes up- or down-regulated in line (d) S3 or (e) S4. The x-axis is Fold Enrichment, which is the percentage of differentially expressed genes that belong to each pathway. Point size represents the number of genes (nGenes) within the category, while color denotes the false discovery rate (FDR) correction of enrichment p-values.

Figure 3. Spinosad-resistant lines exhibit increased expression of genes associated with either penetration resistance or metabolic resistance. (a) Volcano plot of genes displaying fold change gene expression differences between the resistant C3 line vs 2 susceptible lines (C2 and C5). Genes upregulated in the resistant populations (green) are to the right of the dotted line, while genes downregulated in the resistant populations (pink) lie to the left of the dotted line. Genes that exhibit no significant difference in gene expression between the two populations are in grey. Highly significant differences are located higher up on the y-axis, where p_adj is the Benjamini-Hochberg adjusted p-value. Labeled points signify genes that satisfy at least one of the following criteria: (1) have the largest fold change difference between the two groups, (2) have the lowest p_adj, and (3) are genes known to be involved in insecticide resistance. Labels contain the D. suzukii gene symbol (LOC#########) and the corresponding D. melanogaster gene symbol. Genes known to be involved in conferring resistance are labeled in black, while genes labeled in grey are not known to be directly involved in conferring resistance. (b) Volcano plot of genes displaying fold change gene expression differences between the resistant C4 line vs 2 susceptible lines (C2 and C5). The black box denotes the region zoomed in in the insert. (c) Relative expression (FPKM) of metabolic and cuticular genes (twdl: tweedle; cpr: cuticular protein) in the susceptible (C2 and C5: black) and resistant (C3: maroon; C4: blue) groups, extracted from the RNA sequencing data. Each point denotes a biological replicate (n = 3 replicates of 8-10 females per line). Asterisks denote significant p-values as determined by 2-way ANOVA: ** p < 0.01, *** p < 0.001, and **** p < 0.0001. (d, e) Top 5 most significant enrichment pathways within the Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) Biological Processes (bio proc) categories for genes up- or down-regulated in lines (d) C3 and (e) C4. The x-axis is Fold Enrichment, which is defined as the percentage of differentially expressed genes that belong to each pathway. Point size represents the number of genes (nGenes) within the category, while color denotes the false discovery rate (FDR) correction of enrichment p-values.

Figure 4. Field-collected Drosophila suzukii populations in 2022 show increased expression of metabolic genes that were differentially expressed in 2019 resistant populations. (a) Discriminating dose bioassay to assess mortality of 2022 field-collected D. suzukii populations (Pop. #1 and #2) when exposed to zeta-cypermethrin and spinosad. Each point represents a biological replicate of 5 males and 5 females (n = 10), and error bars indicate ± SEM. Asterisks denote significant p-values as determined by one-sample t and Wilcoxon test compared to a hypothetical mean of 100 (denoted by the dashed red line): **** p < 0.0001. (b-g) Gene expression of (b-d) cytochrome P450 (cyp) genes, (e, f) tweedle (twdl) genes, and (g) ecdysone receptor (ecR) in susceptible (S7 and C2) and resistant (S3 and C3) isofemale lines (established from 2019 collections) as well as 2022 field-collected populations (n = 5 biological replicates of 8-10 females). Asterisks denote significant p-values as determined by one-sample t and Wilcoxon test compared to the average gene expression of the susceptible line (denoted by the red dashed line): * p < 0.05, ** p < 0.01 and *** p < 0.001. The hash marks (#) denote significant p-values as determined by one-way ANOVA followed by Holm-Sidak's multiple comparisons test to assess expression differences between the 2022 F1 populations and the resistant line. Non-significant comparisons are denoted as "ns".

Figure 6. Susceptible D. suzukii is permeable to insecticides, enabling the insecticide to enter the insect and bind to its target protein, ultimately killing the insect. However, in the case of zeta-cypermethrin-resistant D. suzukii, an increased expression of metabolic enzymes results in an increased breakdown of the insecticide before it can bind to its target protein. Spinosad-resistant D. suzukii have increased expression of cuticular genes such that the cuticle is less penetrable by insecticides, allowing them to survive. Additionally, spinosad-resistant D. suzukii can also exhibit an upregulation of metabolic enzymes to increase detoxification of the insecticide, promoting the survival of the flies. This figure was created with BioRender.com (license to laboratory of JCC).
Enhancing CFD predictions in shape design problems by model and parameter space reduction

In this work we present an advanced computational pipeline for the approximation and prediction of the lift coefficient of a parametrized airfoil profile. The non-intrusive reduced order method is based on dynamic mode decomposition (DMD) and is coupled with dynamic active subspaces (DyAS) to enhance the future state prediction of the target function and to reduce the parameter space dimensionality. The pipeline is based on high-fidelity simulations carried out by the application of the finite volume method for turbulent flows, and on automatic mesh morphing through a radial basis function interpolation technique. The proposed pipeline is able to save 1/3 of the overall computational resources thanks to the application of DMD. Moreover, exploiting DyAS and performing the regression on a lower dimensional space reduces the relative error in the approximation of the time-varying lift coefficient by a factor of 2 with respect to using only the DMD.

Reduced order modeling (ROM) is nowadays a quite popular and consolidated technique, applied to several fields of engineering and computational science thanks to the remarkable computational gain for the solution of parametric partial differential equations (PDEs). The goal of ROM is to reduce the dimension of the studied system while leaving some important properties of the original problem unaltered, resulting in a more efficient computation. Such methods are frequently applied when many solutions for different parameters are required, for example in the context of parametric optimal control problems, uncertainty quantification, and shape optimization.

For parametric reduced order models, the most common approach is to sample the solution manifold by creating a database of solutions corresponding to different parameters, using a high-dimensional discretization, and then combine the latter to identify the intrinsic lower dimension of the problem. For parametric reduced order models see [21,40,41], while for a more application-oriented overview we suggest [49,42,43].

For parametric time-dependent problems, a proper orthogonal decomposition approach can be applied to reduce the dimensionality of the system, as in [17,23]. In this work we propose a novel data-driven approach for parametric dynamical systems, combining dynamic mode decomposition (DMD) with the active subspaces (AS) property. These two relatively new methodologies provide a simplification of the dynamical system and an analysis of the input parameter space of a given target function, respectively. Exploiting the AS property we are able to obtain an estimation of the importance of the parameters of such a function, as well as a reduction in the number of parameters. Moreover, the methods are equation-free, being based only on input/output pairs, and do not make assumptions on the underlying governing equations.

We define a generic scalar output $s(\mu, t) \in \mathbb{R}$ that depends both on time $t$ and on the parameters of the model $\mu \in D \subset \mathbb{R}^k$, with $k$ denoting the dimension of the parameter space. We denote the state of the parametric system at time $t$ with $s_t(\mu) \in \mathbb{R}$. The solution manifold in time is approximated using the DMD in order to obtain an approximation of the linear map $A$ defined as

$$s_{t+1}(\mu) = A\, s_t(\mu). \qquad (1)$$

It is easy to note that using (1) we have the possibility to forecast a generic future state of the parametric system.
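In matrix form, the map in (1) can be fitted by least squares from snapshots of the scalar output collected over the parameter samples. The sketch below is a bare-bones illustration of this idea with synthetic data; all names and sizes are mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic snapshots: rows = parameter samples (Ns), cols = time steps (m)
Ns, m = 50, 40
S = np.cumsum(rng.normal(size=(Ns, m)), axis=1)   # stand-in for s_t(mu_i)

X, Y = S[:, :-1], S[:, 1:]            # s_t and s_{t+1} column pairs
A = Y @ np.linalg.pinv(X)             # best-fit linear map: Y ~ A X

# forecast: advance the last observed state a few steps into the future
s = S[:, -1]
for _ in range(5):
    s = A @ s                         # s_future(mu_i) for every sample i
print(s[:3])
```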
To numerically compute the linear operator $A$, we need to sample the parameter space $D$, and for each time store the quantity of interest for each parametric configuration. Formally, considering a set of parameter samples $\{\mu_i\}_{i=1}^{N_s}$ of dimension $N_s$, the discrete vector referring to the system state at time $t$ reads

$$\mathbf{s}_t = \begin{bmatrix} s_t(\mu_1) & s_t(\mu_2) & \cdots & s_t(\mu_{N_s}) \end{bmatrix}^T.$$

Collecting several time states $\mathbf{s}_i$ for $i = 1, \ldots, m$, we compute the operator $A$ with a best-fit approach such that $\mathbf{s}_{t+1} \approx A \mathbf{s}_t$. Once the future prediction is computed, we are able to exploit the relation between the input parameters $\mu_i$ and the related outputs $s_{\text{future}}(\mu_i)$ to approximate the output for any new parameter. In this work we use Gaussian process regression (GPR) [56,20], but any regression or interpolation method can be used. We underline that the chosen regression model has to be fitted for any forecasted time we want to analyse.

Since high dimensionality of the parameter space can cause a decrease in the accuracy of the solution approximation, we couple the regression with the AS property in order to perform a sensitivity analysis of the function $s_t(\mu)$. AS is indeed able to provide an approximation $g$ of a scalar function $f$, where the input parameters of $g$ are a linear combination of the original parameters of $f$. The coefficients of such a combination give information about the importance of the original parameters. In this work, we use this information to reduce the dimension of the parameter space in which we build the regression, by discarding the parameters whose AS coefficients are smaller than a certain threshold, that is, those which are almost zero.

The developed methodology is tested on an aeronautics application given by the flow past an airfoil profile. As output of interest we consider the lift coefficient, and the parameter vector $\mu$ describes geometrical transformations according to the morphing technique proposed in [22]. The fluid dynamics problem is described using the incompressible Navier-Stokes equations with turbulence modeling. These are discretized using a finite volume approximation. The deformed meshes corresponding to different input parameters are automatically obtained by exploiting a radial basis function (RBF) mesh morphing technique.

This work is structured as follows: in section 2 we present the general parametric problem over which we apply the proposed numerical pipeline, providing some information about the geometrical deformation. In section 3 and section 4 we present the DMD and AS methods, respectively, while in section 5 we show the numerical setting of the problem and the results obtained. Finally in section 6 we propose some final remarks and highlight possible future developments.

The parametric problem

Let be given the unsteady incompressible Navier-Stokes equations described in an Eulerian framework on the parametrized space-time domain $\Omega(\mu) \times [0, T]$, so that

$$\begin{cases}
\dfrac{\partial \boldsymbol{u}}{\partial t} + \nabla \cdot (\boldsymbol{u} \otimes \boldsymbol{u}) - \nabla \cdot \nu \nabla \boldsymbol{u} + \nabla p = 0 & \text{in } \Omega(\mu) \times [0, T],\\
\nabla \cdot \boldsymbol{u} = 0 & \text{in } \Omega(\mu) \times [0, T],\\
\boldsymbol{u}(\boldsymbol{x}, t) = \boldsymbol{f}(\boldsymbol{x}) & \text{on } \Gamma_{in} \times [0, T],\\
\boldsymbol{u}(\boldsymbol{x}, t) = 0 & \text{on } \Gamma_0(\mu) \times [0, T],\\
(\nu \nabla \boldsymbol{u} - p \boldsymbol{I})\, \boldsymbol{n} = 0 & \text{on } \Gamma_{out} \times [0, T],\\
\boldsymbol{u}(\boldsymbol{x}, 0) = \boldsymbol{k}(\boldsymbol{x}) & \text{in } \Omega(\mu) \times \{0\},
\end{cases}$$

holds. Here, $\Gamma = \Gamma_{in} \cup \Gamma_0 \cup \Gamma_{out}$ is the boundary of $\Omega(\mu)$ and it is composed of three different parts $\Gamma_{in}$, $\Gamma_{out}$ and $\Gamma_0(\mu)$ that indicate, respectively, the inlet boundary, the outlet boundary, and the physical walls. The term $\boldsymbol{f}(\boldsymbol{x})$ denotes the stationary non-homogeneous boundary condition, whereas $\boldsymbol{k}(\boldsymbol{x})$ denotes the initial condition for the velocity at $t = 0$.
Shape changes are applied to the domain $\Omega$, and in particular to its boundary $\Gamma_0(\mu)$ corresponding to the airfoil wall. Such shape modifications are associated with the numerical parameters contained in the vector $\mu \in \mathbb{R}^k$, which in the numerical examples shown in this work has dimension $k = 10$. As said, the only portion of the domain boundary subject to shape parametrization is the physical wall of the airfoil $\Gamma_0(\mu)$, which in the undeformed configuration corresponds to the 4-digit NACA 4412 wing profile [1,25]. To alter such geometry, we adopt the shape parametrization and morphing technique proposed in [22], where $k$ shape functions are added to the airfoil profiles. Let $y_u$ and $y_l$ be the upper and lower ordinates of a NACA profile, respectively. We express the deformation of such coordinates as

$$y_u = \bar{y}_u + \sum_{i=1}^{k/2} c_i s_i, \qquad y_l = \bar{y}_l + \sum_{i=1}^{k/2} d_i s_i,$$

where the bar denotes the reference undeformed state, which is the NACA 4412 profile. The parameters $\mu \in D \subset \mathbb{R}^{10}$ are the weight coefficients $c_i$ and $d_i$ associated with the shape functions $s_i$. The range of each parameter will be specified in section 5. The explicit formulation of the shape functions can be found in [22]; we report them in Figure 1.

After the reference profile is deformed, we also apply the same morphing to the mesh coordinates by using a radial basis function (RBF) interpolation method [7,38,36]. With this approach the movement $s$ of all the points which do not belong to the moving boundaries is approximated by an interpolatory radial basis function

$$s(\boldsymbol{x}) = \sum_{i=1}^{N_b} \beta_i\, \xi(\|\boldsymbol{x} - \boldsymbol{x}_{b_i}\|) + q(\boldsymbol{x}),$$

where $\boldsymbol{x}_{b_i}$ are the coordinates of the points for which we know the boundary displacements, in this particular case the points located on the wing surface, $N_b$ is the number of control points on the boundary, $\xi$ is a given basis function, and $q(\boldsymbol{x})$ is a polynomial. The coefficients $\beta_i$ and the polynomial $q(\boldsymbol{x})$ are obtained by the imposition of the interpolation conditions

$$s(\boldsymbol{x}_{b_i}) = d_{b_i},$$

where $d_{b_i}$ is the displacement value at the boundary points, and by the additional requirement

$$\sum_{i=1}^{N_b} \beta_i\, p(\boldsymbol{x}_{b_i}) = 0,$$

for all polynomials $p$ with degree less than or equal to that of $q$. In the present case, we select basis functions for which it is possible to use linear polynomials $q(\boldsymbol{x})$. For more information concerning the selection of the order of the polynomials see [3]. Finally the values of the coefficients $\beta_i$ and the coefficients $\delta_i$ of the linear polynomial $q$ can be obtained by solving the linear problem

$$\begin{bmatrix} M_{b,b} & P_b \\ P_b^T & 0 \end{bmatrix} \begin{bmatrix} \boldsymbol{\beta} \\ \boldsymbol{\delta} \end{bmatrix} = \begin{bmatrix} \boldsymbol{d}_b \\ 0 \end{bmatrix}, \qquad (9)$$

where $M_{b,b} \in \mathbb{R}^{N_b \times N_b}$ is a matrix containing the evaluation of the basis functions, $\xi_{b_i b_j} = \xi(\|\boldsymbol{x}_{b_i} - \boldsymbol{x}_{b_j}\|)$, and $P_b \in \mathbb{R}^{N_b \times (d+1)}$ is a matrix, where $d$ is the spatial dimension, whose rows contain the coordinates of the boundary points: $\text{row}_i(P_b) = \begin{bmatrix} 1 & \boldsymbol{x}_{b_i} \end{bmatrix}$. Once the system in (9) is solved, one can obtain the displacement of all the internal points using the RBF interpolation

$$s(\boldsymbol{x}_{in_i}) = \sum_{j=1}^{N_b} \beta_j\, \xi(\|\boldsymbol{x}_{in_i} - \boldsymbol{x}_{b_j}\|) + q(\boldsymbol{x}_{in_i}),$$

where $\boldsymbol{x}_{in_i}$ are the coordinates of the internal grid points. The computation of the displacement of the grid points entails the resolution of a dense system of equations of dimension $N_b + d + 1$. Usually, the number of boundary points $N_b$ is much smaller than the number of grid points $N_h$.
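In practice, this boundary-to-volume displacement interpolation can be prototyped with SciPy's RBFInterpolator, which solves exactly this kind of kernel-plus-polynomial system (degree=1 appends the linear polynomial term used above). The boundary points and displacements below are synthetic placeholders, not the morphed NACA data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# boundary (control) points on the airfoil wall and their prescribed
# displacements; synthetic stand-ins for the morphed profile coordinates
x_b = rng.uniform(0, 1, size=(200, 2))
d_b = 0.05 * np.sin(2 * np.pi * x_b[:, :1]) * np.array([[0.0, 1.0]])

# thin-plate-spline kernel with an appended linear polynomial (degree=1)
morph = RBFInterpolator(x_b, d_b, kernel="thin_plate_spline", degree=1)

# displacement of all internal grid points, then move the mesh
x_in = rng.uniform(0, 1, size=(5000, 2))
new_coords = x_in + morph(x_in)
print(new_coords.shape)
```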
Dynamical systems approximation by dynamic mode decomposition

Dynamic mode decomposition (DMD) is an emerging reduced order method proposed by Schmid in [44] for the analysis of dynamical systems. Approximating the linear infinite-dimensional Koopman operator [28], DMD decomposes the original system into a few main features, the so-called DMD modes, that evolve linearly in time, even if the original system has nonlinear behaviour. This means that, besides identifying recurrent patterns in the evolution of the system, DMD provides a real-time forecast of the output of interest. An important advantage of the method is its completely data-driven nature: the algorithm relies only on the system output, without the need for any information regarding the model or equations used. Dynamic mode decomposition has been successfully employed in naval hull shape optimization pipelines [13], for online real-time acquisitions in a wind tunnel experiment [59], and in meteorology [4], among others. We also mention the higher order DMD extension [31,32].

In the following paragraphs, we provide just an algorithmic overview of the method. For an exhaustive explanation of DMD, its applicability, and possible extensions, we suggest [29,6].

We define the linear operator A such that

x_{k+1} = A x_k,

where x_{k+1} ∈ R^N and x_k ∈ R^N are the vectors containing the system outputs at two sequential instants. Thus, the operator A : R^N → R^N expresses the dynamics of the system. In order to construct it using only data, we need to collect m outputs x_i for i = 1, . . . , m, equispaced in time - from now on called snapshots - and arrange them in two matrices:

X = [x_1  x_2  · · ·  x_{m−1}],    Y = [x_2  x_3  · · ·  x_m].

Since the corresponding columns in X and Y are sequential snapshots, we are able to use the relation x_{k+1} = A x_k to represent the relationship between X and Y, such that Y = AX. Minimizing the error ‖Y − AX‖ we obtain the linear operator, which however has a very large dimension, especially when the studied system requires a fine discretization. To reduce the dimensionality, a POD approach is adopted. The matrix X is decomposed using the singular value decomposition as

X = UΣV*,

where the matrix U contains the orthogonal left-singular vectors. We can then project the operator onto the space spanned by the left-singular vectors to get the reduced operator Ã. It is possible to note that the reduced operator does not require the construction of the high-dimensional one:

Ã = U* A U = U* (Y X†) U = U* Y V Σ^{−1},

where † refers to the Moore-Penrose pseudo-inverse. We can now reconstruct the eigenvectors and eigenvalues of the matrix A thanks to the eigendecomposition of Ã as ÃW = WΛ. In particular, the elements in Λ correspond to the nonzero eigenvalues of A, while the real eigenvectors, the so-called exact modes [53], can be computed as Φ = YVΣ^{−1}W. Thus, being A = ΦΛΦ†, we can approximate the evolution of the system as x_{k+1} = ΦΛΦ† x_k. Moreover, it is easy to demonstrate that the approximation of a generic future snapshot can be computed as

x_k ≈ ΦΛ^{k−1}Φ† x_1.

In this work we compute the DMD modes of the matrix composed of the values of the time-varying lift coefficient for a set of given geometrical parameters. We can then predict the future state of the coefficient and, using a regression method, approximate the target function at untried new parameters. All the DMD computations have been carried out with the Python package PyDMD [15].
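For concreteness, the algorithm just outlined can be transcribed into a few lines of NumPy; the snapshot data and the truncation rank r are illustrative assumptions, and the PyDMD package cited above wraps these same steps.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 200, 40
snapshots = rng.standard_normal((N, m))      # columns x_1 ... x_m (placeholder data)

X, Y = snapshots[:, :-1], snapshots[:, 1:]

# Truncated SVD of X and projected low-rank operator Atilde = U* Y V Sigma^-1.
r = 10
U, s, Vh = np.linalg.svd(X, full_matrices=False)
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)

# Eigendecomposition Atilde W = W Lambda and exact DMD modes Phi = Y V Sigma^-1 W.
lam, W = np.linalg.eig(Atilde)
Phi = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W

# Future-state approximation x_k ~ Phi Lambda^{k-1} Phi^+ x_1.
amplitudes = np.linalg.pinv(Phi) @ X[:, 0]

def forecast(k):
    # Scale each mode (column of Phi) by its eigenvalue raised to the power k-1.
    return (Phi * lam ** (k - 1)) @ amplitudes

print(np.linalg.norm(forecast(2).real - X[:, 1]))   # one-step reconstruction error
```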
Active subspaces have also proven to be a useful tool to enhance model order reduction techniques such as proper orthogonal decomposition (POD) with interpolation for structural and fluid dynamics problems [16], and POD-Galerkin methods for a parametric study of carotid artery stenosis [47].

Here we briefly introduce the active subspaces property for functions not depending on time; for the details and estimates regarding the method we refer to [8]. For the actual computations to find the AS we used the Python Active Subspaces Utility Library [12].

Let µ ∈ R^k be the parameters of our problem, f a parametric scalar function of interest f(µ) : R^k → R, and ρ : R^k → R^+ a probability density function representing uncertainty in the input parameters. Active subspaces are a property of the pair (f, ρ). They are defined as the leading eigenspaces of the second moment matrix of the target function's gradient and constitute a global sensitivity index more general than coordinate-aligned derivative-based ones [58].

The second moment matrix of the gradients C, also called the uncentered covariance matrix of the gradients of f with respect to the input parameters, is defined as

C = E[∇_µ f ∇_µ f^T] = ∫ (∇_µ f)(∇_µ f)^T ρ dµ,

where E[·] is the expected value. C is symmetric, thus it admits a real eigenvalue decomposition that reads

C = WΛW^T,

where W indicates the orthogonal matrix containing the eigenvectors of C as columns, and Λ is a diagonal matrix composed of the non-negative eigenvalues arranged in descending order. We can partition the two matrices as follows:

Λ = [Λ_1  0; 0  Λ_2],    W = [W_1  W_2],

where Λ_1 contains the first M eigenvalues and W_1 the corresponding eigenvectors, and M < k has to be properly selected by identifying a spectral gap. In particular, we define the active subspace of dimension M as the principal eigenspace corresponding to the eigenvalues prior to the gap. Then we can map the full parameters to the reduced ones through W_1. We define the active variable as y = W_1^T µ ∈ R^M, and the inactive variable as η = W_2^T µ ∈ R^{k−M}. In practice the matrix C is constructed with a Monte Carlo procedure.

AS stipulates that the directional derivatives along directions belonging to the kernel of W_1^T are significantly smaller than those along directions belonging to the range of the same matrix. Moreover, these assumptions are made in expectation rather than in an absolute sense [57]. Since in this way we are considering linear combinations of the input parameters, we can associate the eigenvector elements with the weights of such combinations, thus providing a sensitivity of each parameter. We underline that if a weight is almost zero, this means that f does not vary along that direction on average.

We can use the active variable to build a ridge function g [33] to approximate the function of interest, that is,

f(µ) ≈ g(W_1^T µ).

In this work we want to study the behaviour of a target function f(µ, t) : R^k × R^+ → R that depends on the parameters µ and on time t as well. This results in extending the active subspaces property to dynamical systems, which means dealing with a time-dependent uncentered covariance matrix C(t) and corresponding eigenvectors w_i(t). Efforts in this direction have been made in [9] for a lithium ion battery model, in [34] for a long term model of HIV infection dynamics, and more recently in [2] with an application of dynamic mode decomposition and sparse identification to approximate one-dimensional active subspaces. In these works the authors refer to dynamic active subspaces (DyAS) as the time evolution of the active subspaces of a time-dependent quantity of interest.
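To make the construction above concrete, the sketch below estimates C with a simple Monte Carlo, extracts the first eigenvector as the active direction, and repeats the procedure at a set of time instants to obtain the DyAS weight evolution. The test gradient, the sample sizes, and the threshold for "near-zero" weights are illustrative assumptions; the Active Subspaces Utility Library cited above automates these steps.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n_mc = 10, 500

def grad_f(mu, t):
    """Placeholder gradient of a time-dependent QoI; only a few inputs matter."""
    w = np.zeros(k)
    w[2], w[3] = 1.0, 0.5 * np.sin(t)      # 'important' parameters vary with time
    return w + 0.01 * mu                    # mild dependence on mu itself

def active_subspace(t):
    # Monte Carlo estimate of C(t) = E[grad f grad f^T] under uniform rho on D.
    mus = rng.uniform(0.0, 0.03, size=(n_mc, k))
    grads = np.array([grad_f(mu, t) for mu in mus])
    C = grads.T @ grads / n_mc
    eigval, eigvec = np.linalg.eigh(C)      # ascending order for symmetric C
    return eigval[::-1], eigvec[:, ::-1]    # descending eigenvalues, matching W

# DyAS: track the first eigenvector's components over equispaced times.
for t in np.linspace(6.0, 18.0, 4):
    eigval, W = active_subspace(t)
    w1 = W[:, 0] * np.sign(W[np.argmax(np.abs(W[:, 0])), 0])  # fix sign convention
    negligible = np.where(np.abs(w1) < 0.05)[0]
    print(f"t = {t:4.1f} s, near-zero weights for parameters {negligible}")
```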
DyAS are useful to assess the importance of each input parameter at given times and to study how the weights associated with the inputs evolve. In the following we compute the AS for a set of equispaced times t_i. If the weights of some parameters are almost zero over the entire time window, we can safely ignore those parameters in the construction of the Gaussian process regression.

Computational pipeline

In the present section we discuss the numerical experiments carried out to test the DyAS analysis and present the results obtained. As reported in section 2, each high-fidelity simulation is based on a parametric fluid dynamic model governed by the Reynolds Averaged Navier-Stokes (RANS) equations. Thus, a number of flow simulations have been carried out, selecting different samples in the parametric space to test the performance, in terms of lift coefficient, of different airfoil shapes. The simulations made use of both the RANS solver provided in the OpenFOAM [54] finite volume library and the DMD acceleration methodology described in section 3. Once the lift coefficient outputs were available for all the samples tested in the input parameter space, the DyAS analysis was applied to assess possible parameter redundancy. The elimination of the redundant parameters detected by the DyAS analysis allowed for the generation of a response surface model based on a lower-dimensional space, which was finally tested against the original RANS model accelerated through DMD, and against the response surface model based on the original input parameter space. The following sections further detail each part of the computational pipeline just outlined.

Parametric shape deformation

The fluid dynamics problem is solved using the finite volume method. The wing is immersed in a rectangular domain according to Figure 2. The reference mesh counts 46,500 hexahedral cells and is constructed using the blockMesh utility of the OpenFOAM library. Figure 2 depicts a detail of the grid in the proximity of the wing. The meshes in the deformed configurations have been obtained starting from the reference configuration using a radial basis function smoothing algorithm similar to the one implemented in [5]. A single deformation corresponds to a sample µ in the parameter space D := [0, 0.03]^10 ⊂ R^10. Therefore all the deformed meshes share the same number of cells and the same mesh topology. In particular, Wendland [55] second-order kernel functions with radius r_RBF = 0.1 m have been used. The control points of the RBF procedure have been placed on each mesh boundary point located on the wing surface. Since the outer boundary points are fixed, we decided to exclude them from the RBF computation, using a smoothing function defined in such a way that the RBF contribution reduces to zero beyond a certain distance from a focal point [26].
In particular, the focal point has been placed at the geometric center of the airfoil chord segment, and the distance from the focal point after which the RBF contribution is neglected is set to r_out = 7 m. In Figure 3 we depict the envelope of all the tested configurations and the flow velocity streamlines for a particular sample in the parameter space. A uniform and constant velocity u_in = 1 m/s is set at the inlet boundary, while the constant value of the kinematic viscosity is set to ν = 2e−5 m²/s. This configuration, considering a chord length D = 1 m, corresponds to a Reynolds number Re = 50,000. As is well known, a flow characterized by a Reynolds number of such magnitude requires turbulence modeling to be simulated numerically with reasonable computational effort. In the present work, turbulence has been modeled using a RANS approach with the Spalart-Allmaras turbulence model [45]. The pressure-velocity coupling is resolved in a segregated manner making use of the PIMPLE algorithm, which merges the PISO [24] and SIMPLE [39] algorithms. The time step used to advance the simulation in time is constant and equal to ∆t = 1e−3 s. The convective terms have been discretized using a second-order upwind scheme, while the diffusion terms are discretized using a linear approximation scheme with non-orthogonal correction. The time discretization is resolved using a second-order backward differentiation formula. The simulation is advanced in time until the flow has reached stationary behavior. For the present problem, a total simulation time T_s = 30 s is sufficient to reach a solution reasonably close to the steady-state one.

Parameter space reduction

The present section discusses the application of DyAS to the problem of two-dimensional turbulent flow past airfoil sections with parameterized shape. Such a fluid dynamic problem is relevant in several engineering fields, as it is encountered in a number of industrial applications, ranging from aircraft and automotive design to turbomachinery and propeller modeling.
A few plots describing the DyAS results for the lift coefficient output are presented in Figures 4, 5, 6, and 7. The plots in the figures are aimed at representing the evolution of the active subspace effectiveness and composition over the time-dependent flow simulations. More specifically, the left diagram in each figure plots the lift coefficient at each sample point tested, as a function of the first active variable obtained through a linear combination of the sample point coordinates in the parameter space, that is, f(µ, t) against W_1^T µ. Presenting the components of the first eigenvector of the uncentered covariance matrix, the right plot in each figure indicates the weights used in such a linear combination to obtain the first active variable. In summary, the right diagram in each figure suggests the impact of each of the original parameters on the first active variable, while the left diagram is an indicator of how well a one-dimensional active subspace is able to represent the input-output relationship. Following the evolution of these two indicators it is possible, at each time instant, to assess how effective the one-dimensional parameter reduction is, and what the sensitivity of the reduced lift coefficient output to variations of the original parameters is. The plots in Figures 4, 5, 6, and 7 show the results of the DyAS at the fixed time instants t = 6, 10, 14, 18 s, respectively. A first look at the right plots for each time step suggests that the contribution of the parameters corresponding to the bump shape functions s_1 and s_5, for both the top and the bottom part of the airfoil profile, is almost negligible. This means the lift coefficient is almost insensitive to variations of these 4 parameters. Alternatively, it can be said that the output function is on average almost flat along the directions corresponding to the parameters c_1, c_5, d_1, and d_5.
Figures 4 and 5 present the characterization of the one-dimensional active subspace at time t = 6 s and t = 10 s, respectively. We can clearly see that the lift coefficient is perfectly approximated along the identified direction, and that such a direction (the eigenvector elements) is almost the same at t = 6 s and t = 10 s. This should not be entirely surprising, as both time instants fall within an initial acceleration phase during which the air coming from the inflow boundary is reaching the airfoil. Given the domain arrangement described in Figure 2, the flow velocity around the impulsively started airfoil leading edge is expected to reach the inflow value at time t = 10 s. For this reason, we will focus the description on the plots for t = 10 s, although the considerations can be immediately reproduced for previous time steps. The left plot in Figure 5 suggests that at this meaningful instant, the first active subspace represents the input-output relationship with remarkably good accuracy. In fact, only a single output value corresponds to each active variable value. In other words, when plotted against the first active variable, the output appears like a curve - a line in the present case. A look at the right diagram suggests that the shape parameters having the most impact on the lift generated by the airfoil are c_3, c_4, d_3, and d_4, which are the ones associated with shape functions whose peaks are located around the middle of the airfoil chord. The positive values of the eigenvector components associated with c_3, c_4, d_3, and d_4, along with the positive slope of the curve in the left plot of Figure 5, suggest that, at this particular time instant, higher values of lift can be obtained by increasing the airfoil thickness in the mid-chord region. Similar considerations can be drawn from Figure 6, which refers to the DyAS analysis carried out at t = 14 s. Here, the points in the left diagram do not completely cluster on top of a single-valued curve as was the case for the previous time step considered. Compared to what has been observed at t = 10 s, the data clearly indicate that at t = 14 s an input-output relationship obtained using only a one-dimensional active subspace will lead to less accurate lift coefficient predictions. Yet, the points in the plot are still all located within a rather narrow band surrounding a regression line having positive slope. Thus, all the considerations on the lift coefficient sensitivity with respect to variations of the shape parameters that can be inferred from the right plot will still hold, at least from a qualitative standpoint. Here, the eigenvector components suggest that the most influential parameters on the lift coefficient are c_3, d_3, and d_4, while c_2 and d_2 affect the output in a lesser but not negligible fashion. Compared to the previous case, the importance of the coefficient c_4 on the output is significantly reduced. We recall that c_4 is associated with increased y coordinates of the airfoil suction side past the mid-chord region. Thus, we might infer that in the acceleration phase higher lift values are obtained not only by increasing the front thickness, but also by lowering the camber line in the region past mid-chord.
Figure 7 shows the results of the DyAS analysis at t = 18 s, when the flow approaches the final regime solution. Following the trend observed for t = 14 s, the left plot in the figure indicates that a one-dimensional active subspace is not completely able to represent the input-output relationship in a satisfactory fashion. With respect to the previous plots, the output values are here located in an even wider band around a regression line with positive slope. Again, on the one hand this increasingly blurred picture suggests that higher dimensional active subspaces are required to reproduce the steady state solution with sufficient accuracy; on the other hand, the diagram still suggests a quite definite trend in the output, which can be exploited for qualitative considerations. Quite interestingly, at the present time step the eigenvector component corresponding to the c_4 coefficient has negative sign. Given the positive slope of the input-output relationship in the left plot of Figure 7, this implies that increases in the airfoil ordinates on the top side in the region past the mid-chord result in lift loss. Thus, this seems to suggest that an airfoil with a higher camber line curvature, combined with a thicker leading edge region, might result in increased lift. This should not surprise, as a similar kind of airfoil would result in a higher downwash due to the increased camber line curvature, while being able to avoid stall by means of a thicker and rounder leading edge. Thus, the DyAS analysis at different time steps shows that as the impulsively started airfoil moves from an acceleration phase to a steady state regime solution, the shape modifications leading to increased lift transition from a purely symmetric increase of the thickness in the mid-chord region to a non-symmetric modification of the camber line combined with a symmetric leading edge thickness increase. Such behavior is indicated by the sign of the c_4 coefficient in the eigenvector characterizing the one-dimensional active subspace, which is likely detecting that at the steady state regime solution, airfoils with higher camber line curvature and thicker leading edges produce higher downwash.

We underline that the eigenvector components corresponding to the coefficients c_1, c_5, d_1, and d_5 are almost zero at all the time instants presented. This means that on average the lift coefficient is almost flat along these directions. We are going to exploit this fact by freezing these parameters and constructing a GPR on a reduced parameter space.

GPR approximation and prediction of the lift coefficient

The previous analysis pointed out the presence of several input parameters with minimal average influence on the target function. Making use of this observation, we construct a response surface which only depends on the remaining parameters. Both for the full parameter space and the reduced one, we use a Gaussian process regression with an RBF kernel implemented in the open-source Python package GPy [19].
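A minimal sketch of how the two response surfaces can be set up with GPy follows; the training arrays, the choice of the six retained input indices after freezing c_1, c_5, d_1, and d_5, and the error metric are illustrative assumptions consistent with the description below.

```python
import numpy as np
import GPy

rng = np.random.default_rng(4)
n_train, n_test, k = 70, 100, 10
X_train = rng.uniform(0.0, 0.03, size=(n_train, k))
y_train = rng.standard_normal((n_train, 1))       # placeholder lift values at one time
X_test = rng.uniform(0.0, 0.03, size=(n_test, k))
y_test = rng.standard_normal((n_test, 1))

keep = [1, 2, 3, 6, 7, 8]   # assumed indices remaining after freezing c1, c5, d1, d5

def fit_predict(Xtr, Xte):
    model = GPy.models.GPRegression(Xtr, y_train, GPy.kern.RBF(input_dim=Xtr.shape[1]))
    model.optimize()                              # maximize the marginal likelihood
    mean, _ = model.predict(Xte)
    return mean

for name, cols in (("full 10-d", slice(None)), ("reduced 6-d", keep)):
    pred = fit_predict(X_train[:, cols], X_test[:, cols])
    rel_err = np.linalg.norm(pred - y_test) / np.linalg.norm(y_test)
    print(f"{name}: relative error {rel_err:.3f}")
```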
We then compare the performance of the two regression strategies by computing the relative error over a test data set composed of 100 samples. The error is computed as the Euclidean norm of the difference between the exact and the approximated solution over the norm of the exact solution. The training set is composed of the same 70 samples, in 10 dimensions for the GPR over the original parameter space, and in 6 dimensions for the reduced one. Up to t = 20 s the training is done using the high-fidelity simulations. To speed up the convergence to the regime state (t = 30 s) we applied the DMD to get the future-state prediction of the lift. In Figure 8 we compare the performance of the two GPRs at each of the time steps analyzed in the simulations. Until 12 s, the regressions behave in a very similar fashion, while from 15 s the accuracy gain obtained by distributing the 70 samples in a lower-dimensional space becomes significant. The error gap between the 6- and 10-dimensional response surfaces, in fact, consistently increases from 1% at 15 s to more than 4% at steady state.

Conclusions and perspectives

We presented a computational pipeline to improve the approximation of the time-varying lift coefficient of a parametrized NACA airfoil. The pipeline comprises automatic mesh deformation through RBF interpolation, high-fidelity finite volume simulation of the turbulent flow past the airfoil, global sensitivity analysis exploiting AS, and future-state prediction via the DMD reduced order method. This resulted in a more accurate Gaussian process regression of the lift coefficient, even though it is built on a reduced parameter space.

After the creation of the high-fidelity solutions database, the application of AS highlighted a possible reduction of the parameter space due to the negligible contributions of 4 different parameters. We exploited this reduction to construct a GPR over a smaller parameter space, thus improving its performance. Since the training of the regression model is done over 6 dimensions instead of 10, given the same high-fidelity database size, the GPR is able to better approximate the solution manifold. This results in better lift coefficient predictions for new untried parameters. We also applied DMD to obtain future-state predictions of the target function up to 30 seconds, and showed that the effective gain of the new GPR is preserved for all times after the 20 seconds simulated with FV. In particular, from 13 seconds onward the actual gain is significant: at 15 seconds we have an increased performance of 1% in the relative error, and moving forward in time the error gap increases up to more than 4% at regime. This computational pipeline can, to some extent, be seen as a parametric dynamic mode decomposition. Moreover, the sensitivity analysis has a negligible computational cost with respect to the creation of the offline high-fidelity database.

Future developments can be the study of adaptive sampling strategies exploiting a generic n-dimensional active subspace, and the coupling of different model order reduction methods. It would be interesting to use this non-intrusive setting as a preprocessing tool to reduce the number of simulations required to build a reduced basis space which is later used in an intrusive manner [46]. We think this new computational pipeline can be of much interest in the context of shape optimization and dynamical systems.

Figure 1: Airfoil shape functions with respect to the profile abscissa. The leading edge corresponds to x = 0.
Figure 2: Sketch of the computational domain used to solve the fluid dynamics problem in its reference configuration. The left picture reports a schematic view of the domain with the main geometrical dimensions. The right plot reports a zoom on the mesh in the proximity of the wing.

Figure 3: The left picture reports in light blue the envelope of all the tested configurations used during the training stage. The right picture depicts the flow velocity streamlines for one particular sample inside the training set µ = [0.0071; 0.0229; 0.0015; 0.0015; 0.0087; 0.0107; 0.0033; 0.0130; 0.0247; 0.0280].

Figure 4: On the left, the sufficiency summary plot for the lift coefficient at time t = 6.0 seconds. On the right, the first eigenvector components for the corresponding parameters.

Figure 5: On the left, the sufficiency summary plot for the lift coefficient at time t = 10.0 seconds. On the right, the first eigenvector components for the corresponding parameters.

Figure 6: On the left, the sufficiency summary plot for the lift coefficient at time t = 14.0 seconds. On the right, the first eigenvector components for the corresponding parameters.

Figure 7: On the left, the sufficiency summary plot for the lift coefficient at time t = 18.0 seconds. On the right, the first eigenvector components for the corresponding parameters.

Figure 8: The relative error of the approximated outputs at different times. The relative error is computed on 100 test samples, using the high-fidelity lift coefficient to train the regression for t ≤ 20 s, while for t > 20 s the DMD forecasted states are used for the training.
Potassium Improves Drought Stress Tolerance in Plants by Affecting Root Morphology, Root Exudates, and Microbial Diversity

Potassium (K) reduces the deleterious effects of drought stress on plants. However, this mitigation has been studied mainly in the aboveground plant pathways, while the effect of K on root-soil interactions in the underground part is still underexplored. Here, we conducted experiments to investigate how K enhances plant resistance and tolerance to drought by controlling rhizosphere processes. Using three culture methods (sand, water, and soil), we evaluated two rapeseed cultivars' root morphology, root exudates, soil nutrients, and microbial community structure under different K supply levels and water conditions to construct a defensive network of the underground part. We found that K supply increased root length and density and the secretion of organic acids. The organic acids were significantly associated with available potassium decomposition, in the order formic acid > malonic acid > lactic acid > oxalic acid > citric acid. However, this mitigation showed a hormesis effect: within an appropriate range, increasing K supply facilitated the morphological characteristics and physiological functions of the root system, while excessive K input hindered plant growth. The positive effects of K-fertilizer on soil pH, available phosphorus and available potassium content, and microbial diversity indices were more significant under water stress. Structural equation modeling showed that rhizosphere nutrients and pH further promoted microbial community development, while non-rhizosphere nutrients had an indirect negative effect on microbes. In short, K application could alleviate drought stress on the growth and development of plants by regulating the morphology and secretion of roots and the soil ecosystem.

Introduction

Climate change induces abiotic stressors which are a major threat to plant growth and productivity in the natural environment. Plants respond to the changing climate to enhance persistence under new environmental conditions through ecological strategies such as phenotypic plasticity, or by evolving adaptations as rapidly as possible [1]. Climate change has led to frequent weather extremes and an unstable water supply, resulting in a trend toward the normalization of droughts [2]. The United Nations pointed out in the World Water Resources Integrated Assessment Report that water resources will become a significant limiting factor for global economic and social development, with the potential to trigger conflicts and contradictions among countries [3]. It is estimated that losses caused by

resistance with plants by affecting the microbial community, establishing a root-nutrient-microbial interaction.

The effects of water stress on the two cultivars differed: comparing CK with treatment-KII, drought stress significantly decreased the primary root length of CY36, as well as the total length, the number of tips and crossings, the primary root length, and the surface area of YY57 (Figure 1). Instead, the number of tips, forks, and crossings of CY36 (Figure 1) and the root average diameter of YY57 (Figure 1f) were significantly increased by drought. The water stress was relieved by K input: comparing the absence (KI) with the presence (KII-KIV) of K, K supply significantly increased all eight root morphological indices of CY36 shown in Figure 1, and the same indices of YY57 except the number of crossings and the average diameter.
These positive effects of K were enhanced with increasing doses of K supply, and treatment-KIV recorded the highest levels of total root length, number of tips, primary root length, and surface area in both CY36 and YY57 (Figure 1). Interestingly, treatment-KV significantly inhibited root morphological development in both cultivars. In contrast, the root-shoot ratio decreased with increasing K concentration, and treatment-KV recorded the maximum (Figure 1).

Figure 1. Effects of K supply level on the root system architecture: (a) total length, (b) the number of tips, (c) the number of forks, (d) the number of crossings, (e) primary root length, (f) average diameter, (g) surface area, (h) root-shoot ratio of two rapeseed cultivars. KI, KII, KIII, KIV, and KV indicate K supply levels (K2SO4) of 0, 0.1, 1, 10, and 100 mM under drought stress with 15% PEG, respectively, and CK indicates the control (without 15% PEG) at 0.1 mM K2SO4. Means and standard errors (n = 4). Different lowercase letters indicate significant differences by Duncan's multiple range test among treatments at the 5% level (p < 0.05).

Changes in the Quantity and Composition of Organic Acids with K Supply

In this experiment, nine organic acids were detected by HPLC in rapeseed root exudates: oxalic acid, lactic acid, citric acid, succinic acid, malonic acid, acetic acid, propionic acid, formic acid, and malic acid. There were more kinds of organic acids in the root exudates of YY57 than in those of CY36 (Figure 2). Drought inhibited the exudation of malic acid in YY57 and formic acid in CY36 when comparing CK with treatment-KII, and decreased the quantity of organic acids in YY57 (Figure 2). Compared to treatment-KI, organic acid content significantly increased with increasing amounts of K (excluding malonic acid in CY36), but the trend differed between the two cultivars: YY57 followed the order KIII > CK > KIV > KII > KI (p < 0.05), and CY36 followed the order KIV > KIII > KII > CK > KI, with maximum differences of up to 59.41% and 41.77%, respectively (Figure 2). Regarding the composition of organic acids, the absence of K inhibited the secretion of formic acid and propionic acid in YY57, while higher supply levels of K stimulated the exudation of propionic acid in CY36 (Figure 2).

Figure 2. Effects of K supply level on the organic acids in root exudates of two rapeseed cultivars. KI, KII, KIII, and KIV indicate K supply levels (K2SO4) of 0, 0.1, 1, and 10 mM under drought stress with 15% PEG, respectively, and CK indicates the control (without 15% PEG) at 0.1 mM K2SO4. The numbers indicate the average content of organic acids (n = 4). Different lowercase letters indicate significant differences by Duncan's multiple range test among treatments at the 5% level (p < 0.05).

Soil Available K Activated by Organic Acids

Compared to the blank, root exudate extracts significantly increased the activated content of AK by 87.3% and 129.2% on average in CY36 and YY57 (p < 0.05), respectively (Figure 3). Available K (AK) content in CK was higher than that in the drought treatments (KI-KIV), by 17.7% in CY36 and 11.1% in YY57 (Figure 3). Applying K increased the mobilization of soil AK by 8.2% in CY36 and 12.4% in YY57 on average when comparing the presence (KII-KIV) with the absence (KI) of K (Figure 3). The pattern search further indicated that all nine organic acids positively correlated with the activation of soil AK (Figure 4a).
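The pattern search used here essentially ranks each organic acid by its correlation with the measured AK increase; a sketch of that ranking step, whose output corresponds to the ordering reported in the next paragraph, is shown below. The data layout, acid names, and use of Pearson correlation are illustrative assumptions about what MetaboAnalyst computes.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
acids = ["oxalic", "lactic", "citric", "succinic", "malonic",
         "acetic", "propionic", "formic", "malic"]
n = 20                                          # replicate extracts (assumed)
conc = rng.random((n, len(acids)))              # acid concentrations per extract
delta_ak = conc[:, 7] * 2 + conc[:, 4] + rng.normal(0, 0.2, n)  # synthetic AK increase

results = []
for j, acid in enumerate(acids):
    r, p = pearsonr(conc[:, j], delta_ak)
    results.append((acid, r, p))

# Rank acids by correlation with AK activation, strongest first.
for acid, r, p in sorted(results, key=lambda t: -t[1]):
    print(f"{acid:>10s}: r = {r:+.2f}, p = {p:.3f}")
```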
Formic acid and malonic acid were the best predictors of AK mobilization (p < 0.01), followed by lactic acid, oxalic acid, and citric acid (p < 0.05) (Figure 4a).

Figure 3. Effects of organic acids on AK activation in two rapeseed cultivars. Here the value of AK is the difference before and after adding the organic acid extract. KI, KII, KIII, and KIV indicate K supply levels (K2SO4) of 0, 0.1, 1, and 10 mM under drought stress with 15% PEG, respectively, and CK indicates the control (without 15% PEG) at 0.1 mM K2SO4. The blank is a water treatment to compare with the extracts. The black dots represent the maximum, upper quartile, lower quartile, and minimum, and yellow dots represent the median. Means and standard errors (n = 4).

Figure 4. (a) The numbers indicate the correlation coefficients between organic acids and the activation of AK. ** p < 0.01; * p < 0.05. (b) The partial least squares discriminant analysis of restoration evaluation in two rapeseed cultivars. KI, KII, KIII, and KIV indicate K supply levels (K2SO4) of 0, 0.1, 1, and 10 mM under drought stress with 15% PEG, respectively, and CK indicates the control (without 15% PEG) at 0.1 mM K2SO4.

Partial Least Squares Discriminant Analysis

To determine the mitigative effect of K on drought stress, partial least squares discriminant analysis (PLS-DA) was conducted separately on each sample according to root morphological characteristics and organic acids. A clear separation of samples into four quadrants was achieved by the principal components, which explained 73.3% (PC1) and 7.3% (PC2) of the total variation (Figure 4b). The KII, KIII, and KIV treatments in YY57 were separated into the first quadrant (Figure 4b). The KI treatments of YY57 and CY36 were similar and were scattered throughout the second quadrant. The KII and KIII treatments of CY36 were separated into the third quadrant together with their CK (Figure 4b). Moreover, PLS-DA placed the KIV treatment of CY36 with the CK of YY57 in the last quadrant (Figure 4b).

2.2. Experiment II

2.2.1. Physicochemical Properties of Rhizosphere and Non-Rhizosphere Soil

From the results obtained in Figure 5, the soil pH was moderately acidic, and the pH levels of non-rhizosphere soil (5.65-6.37) were higher than those of rhizosphere soil (5.46-6.22). Soil moisture content, K-fertilizer, and their interactions had highly significant effects (p < 0.01) on pH values (Figure 5a). In CY36, treatment-K2 recorded the highest pH values under the water-limited condition in the rhizosphere and non-rhizosphere soil, amounting to significant increases of 0.41 and 0.52 units, respectively, compared to treatment-K1 (Figure 5a). Similar results were observed under the water-unlimited condition, where the pH levels of rhizosphere and non-rhizosphere soil increased by 0.57 and 0.24 units, respectively (p < 0.05) (Figure 5a). However, YY57 did not show these trends, and the trends of pH levels in the rhizosphere differed from those in non-rhizosphere soil (Figure 5a). Under the water-limited condition, treatment-K3 recorded the highest pH value (6.20) in the non-rhizosphere while also recording the lowest level (5.62) in rhizosphere soil (Figure 5a). Mostly, no differences were witnessed under water-unlimited conditions among K1, K2, and K3 in YY57. Soil available nitrogen (AN), available phosphorus (AP), and AK levels in the non-rhizosphere were also higher than in rhizosphere soil, by 42.8%, 30.7%, and 47.9% on average (Figure 5).
Soil moisture content significantly affected soil AN content in both rhizosphere (p < 0.05) and non-rhizosphere (p < 0.01) soil, while no significant impact of K-fertilizer on AN was demonstrated (Figure 5b). Despite no significant differences in the soil AP content of YY57, applying K-fertilizer significantly increased the content of AP in CY36 under the water-limited condition, amounting to increases of 29.9% and 9.4% in the rhizosphere and non-rhizosphere soil, respectively, when comparing K3 with K1 (Figure 5c). Additionally, there were significant (p < 0.01) impacts of soil moisture content and K-fertilizer, which resulted in increased values of soil AK (Figure 5d). This positive effect of K on soil AK content was stronger in the non-rhizosphere; namely, significant differences were shown among K1, K2, and K3 in the non-rhizosphere, while they were only witnessed between K1 and K3 in rhizosphere soil (Figure 5d).

Microbial Diversity Indexes under the Water-Potassium Combination

Analysis of variance indicated that soil moisture content, K-fertilizer, and their interaction significantly affected the Simpson index (D), richness index (S), and Shannon index (H), while the evenness index (E) was only significantly affected by soil moisture content (Figure 6). After 14 days of drought stress, the levels of D, S, and H decreased significantly by 0.27%, 4.4%, and 1.03% in CY36 and by 0.59%, 17.5%, and 2.50% in YY57 on average compared to the water-unlimited condition, but the level of E increased significantly by 0.67% and 2.76% in CY36 and YY57, respectively (Figure 6). Under the water-limited condition, the effect of treatment-K3 varied with cultivar (Figure 6). With the addition of K to soil, CY36 followed the order K2 > K1 > K3 for the values of D, S, E, and H (p < 0.05) (Figure 6). This amounts to increases of 2.45%, 1.2%, 2.50%, and 2.60% in the levels of D, S, E, and H when comparing treatment-K2 with K1 (Figure 6). Treatment-K2 also recorded the highest values of these indices in YY57, but the trend differed from CY36, following the order K2 > K3 > K1 for D, S, and H, with increases of 6.83% (D), 15.4% (S), and 3.46% (H) under treatment-K2 compared to treatment-K1 (Figure 6).

Figure 5. Effects of K supply level on the rhizosphere (values above the x-axis) and non-rhizosphere (values below the x-axis) (a) pH, (b) available nitrogen, (c) available phosphorus, and (d) available potassium content of two rapeseed cultivars under different soil moisture contents. W1 and W2 indicate 40% and 75% water-holding capacity, respectively, and K1, K2, and K3 indicate K (K2O) application rates of 0, 80, and 160 mg·kg−1, respectively. Means and standard errors (n = 4). Different lowercase letters indicate significant differences by Duncan's multiple range test among treatments at the 5% level (p < 0.05). Two-way analysis of variance (ANOVA) was performed to evaluate the effects of soil moisture content (W), K-fertilizer (K), and their interactions (W×K). NS means non-significant. * and ** indicate significant differences at the p < 0.05 and p < 0.01 probability levels, respectively.

Relationships in the Soil Ecosystem

Structural equation modeling (SEM) provided a way to visualize the direct and indirect impacts of K-fertilizer and the interactions between soil properties and microbes under the different soil moisture contents (Figure 7). There were more significant effects under the water-limited condition (Figure 7).
A positive effect of K-fertilizer on NR-nutrient and R-microbe and a negative effect on R-nutrient were observed under drought stress (Figure 7). NR-pH and R-nutrient revealed a significant positive correlation with NR-nutrient. R-microbe under the water-limited condition was influenced by many factors: direct positive impacts of R-pH and R-nutrient, and a negative impact of NR-nutrient (Figure 7). However, under the water-unlimited condition there was only a significant positive influence of K-fertilizer on R-pH, and we also observed that NR-pH showed a more considerable impact on R-pH than K-fertilizer (Figure 7).

Figure 7. Structural equation models under the water-limited and water-unlimited conditions. SEM paths are colored according to classification (blue for K-effect, red for R-region, and black for NR-region), while gray lines represent non-significant coefficients (p > 0.05). Full lines represent positive relationships and dotted lines negative relationships, with the width of the lines indicating the strength of the relationship. *** p < 0.001; ** p < 0.01; * p < 0.05.

Discussion

As predicted, K application was beneficial for drought-stressed plants. In our experiments, we found that K elongated the root length to obtain water from deeper and wider soil layers and simultaneously increased the root density and surface area to expand the contact surface with the surroundings, thus helping plants efficiently take up available water and in this way mitigating drought. Additionally, K also moved the root-shoot ratio towards 1, which could balance the ratio and function of whole plants to further improve the efficiency of resource utilization [25]. Interestingly, this mitigation varies with the K supply level, closely matching the concept of the hormesis effect originally described mainly for ions of unknown physiological function [26], which assumes that the effect of an element on a plant depends on its concentration [27]. K is an inorganic solute that plays an imperative role in the osmotic potential of roots [28]: too high a supply level can damage the turgor-pressure-driven translocation of solutes and break the water balance in plant organs [29]. The stressed root system developed healthily with increasing K concentration, and KIV was the gradient that achieved the best recovery of rapeseed across all treatments. However, a further increase of the supply level to KV shifted the effect from beneficial to negative, severely limiting root biochemical and physiological functions. In other words, an appropriate range of K facilitated root development and metabolism, whereas excessive input of K might trigger an imbalance of ionic homeostasis in the organism, producing toxic effects. Different cultivars adopt distinct drought response strategies to absorb as much water as possible and withstand adversity: the drought-sensitive cultivar (CY36) increased root density while the drought-tolerant cultivar (YY57) thickened the root diameter (Figure 1). Concomitantly, the alleviating effect of K was stronger in the drought-sensitive cultivar. The results of PLS-DA further verified that the KII and KIII supply levels could help the stressed root system of CY36 develop comparably to its control, and CY36 even recovered to the control state of YY57 under the KIV level, while the alleviating state was similar among KII, KIII, and KIV in YY57. These results also demonstrated that plants exposed to water deficit require more internal K [30].
Significant genotypic differences were also observed in the composition of organic acids. Across all treatments, malic acid, propionic acid, and formic acid were widespread in the drought-tolerant cultivar (YY57) compared to the drought-sensitive cultivar (CY36); in particular, malic acid was only detected in the control of YY57 (Figure 2). Song et al. demonstrated that a physiological adaptation might exist in drought-tolerant cultivars to enhance nutrient solubility in the rhizosphere and mitigate the toxic effects of water stress [16]. Malic acid has been reported as a reducing substance among the organic acids in root exudates, which can transform high-valence metal ions into low-valence ones to raise the efficiency of nutrients in soil [19]. Malic acid, propionic acid, and formic acid might be reasons why drought-resistant cultivars can utilize surrounding resources more efficiently, which was also supported by the lack of these three acids in the treatment without K supply in YY57 (Figure 2). To tolerate adversity, roots can regulate substances in the soil environment by actively or passively conditioning the composition and quantity of organic acids. We found that formic acid and malic acid were inhibited in CY36 and YY57 under drought stress, respectively, which results from plants with limited growth and development reducing the energy costs of secreting organic acids, no longer needing a nutrient-rich environment. Similarly, this is why a significant decline in organic acid content was observed under water stress in both cultivars compared with their controls. In addition to the root system architecture, K also affects the quantity and composition of organic acids in order to tackle water scarcity. K, a primary cellular osmoticum, plays a vital role in neutralizing negative charges [7]. On the one hand, K mediates the controlled release of organic acids through anion channels in roots, as factors affecting membrane integrity may affect organic acid exudation [17]. In the 1990s, researchers identified that anion channels mediate the root-controlled release of organic acids, with no association between their exudation and their levels within roots [31,32]. Large cytoplasmic K+ diffusion potentials and protons create positively-charged gradients through the extrusion of ATPase, thereby stimulating the release of carboxylate anions [33], which is the reason why propionic acid and formic acid were secreted extra under K supply in CY36 and YY57, respectively. On the other hand, the balance between anions and cations in the rhizosphere environment is one of the main factors shaping organic acids. The root process of taking up cations (especially K+) is accompanied by the need for negative charges to constantly maintain the ionic equilibrium in the soil environment, which is usually provided by organic acids such as malic acid, malonic acid, and citric acid [34]. Interestingly, the highest content of organic acids was recorded at the KIV supply level in both cultivars, which coincided with the maximum points of root tip number, total root length, primary root length, and surface area. This confirmed that organic acids are mainly secreted from the tips of primary and lateral roots by active translocation, and that the indirect effect on organic acids from changes in root morphology due to nutrient application is dominant [35].
The regulation of root growth and branching in nutrient-rich patches may be consistent with increased root exudates affecting nutrient dynamics and microbial communities [11], thereby virtually improving metabolic activities and defenses [18]. Clearly, the pattern search results showed that the significant effect of organic acids on AK activation was in the order formic acid, malonic acid, lactic acid, oxalic acid, and citric acid (Figure 4a), which confirmed that the release of non-exchangeable K can be accelerated by root exudates [9]. There were two modes of activation of AK in soil by organic acids: acidic hydrolysis and complexing dissolution. On the one hand, H+ dissociated from organic acids could not only promote the dissolution of insoluble minerals through acidic hydrolysis but also replace the K from the crystal lattice to release K+, since the size of the H3O+ formed by H+ is similar to that of K+ [36]. On the other hand, low-molecular-weight organic acids with hydroxyl (-OH) and carboxyl (-COOH) groups in the ortho-position tend to form metal-organic complexes with metal ions in the mineral structure [37,38], which accelerates the decomposition of soil minerals. Formic and malonic acids, with acid sites of medium strength, are prone to hydrolysis as they exert a weak electric field force on the ionized H+. Lactic acid with -OH and -COOH, and citric acid with -COOH, are prone to complexing effects, and oxalic acid, a medium-strength acid with -COOH, is endowed with both effects [38]. This suggests that the acid strength of organic acids influences the mobilization of AK in the soil as much as their complexing effect, but this conclusion is based on the drought condition, as the water-to-soil ratio is a key factor affecting the influence of organic acids [39]. In Experiment II, K-fertilized rapeseed showed a better-nourished and healthier soil environment with near-neutral pH and higher contents of AK and AP, which could support plants in maintaining the fundamental functions of metabolic processes under a water-limited condition [11]. Given that roots induce changes in the soil environment, it was improbable that soil properties in the rhizosphere would be similar to the non-rhizosphere. Organic acids with acidic pH secreted from roots decrease the average pH level in the adjacent soil, which is why the pH in the rhizosphere is lower than that in the non-rhizosphere (Figure 5a). Likewise, this trend was also observed for available nutrients, confirming that resources surrounding roots are consumed by plant growth, enriched microbes, and active soil animals [40]. The SEM analysis demonstrated that microbes under drought stress were positively affected by three factors: K-fertilizer with the most significant effect (p < 0.001), then the pH of R-soil (p < 0.001), and finally the nutrients of R-soil (p < 0.05) (Figure 7a). This result is in accord with previous studies [41-43] in which a favorable pH and abundant available nutrients provided an excellent environment for microbial development and reproduction. In contrast to what we expected, we did not see a progressive increase in the microbial indices with K inputs. The K2 level recorded the highest values of the Simpson index, Shannon index, and richness index in both cultivars, while K3 no longer improved those indexes and sometimes significant decreases were observed in CY36.
According to the SEM, this unexpected result might mean that high K application promotes plant and root growth better, which in turn intensifies competition between roots and microbes for limited R-nutrients [40,44], hindering microbial growth. Meanwhile, the pH of the rhizosphere under treatment-K2 was close to a neutral environment, which is more suitable for soil community development [42]. However, in the SEM there is a negative effect on R-microbe from nutrients in non-rhizosphere soil (p < 0.01) and a positive effect (p < 0.05) of the R-nutrient on the NR-nutrient (Figure 7a). This suggests that the NR-nutrient might be indirectly negative for the microbial community by competing for R-nutrients, mainly because nutrients are transported from the nutrient-rich rhizosphere to the non-activated non-rhizosphere areas through various pathways, such as water flow and soil microbial and animal activities. Interestingly, the effects of K-fertilizer and the water-K interactions that had been significant under drought stress were no longer significant under the water-unlimited condition, and there was only a significant effect of K-fertilizer on the R-pH (Figure 7). This proved that K-fertilizer is more effective under drought conditions, and that plants can regulate resource demand and nutrient acquisition strategies belowground when confronted with environmental change to recover biochemical and physiological functions after re-watering [11].

Experiment I: Effects of K Supply Level on Root Morphology and Root Exudates under Drought Stress

The first experiment consisted of two phases (Figure 8), a sand culture (to establish root system architecture) and a hydroponic culture (to obtain root exudates), conducted in 2019 at the agroecology laboratory of Southwest University, Chongqing, China (29°49′32″ N, 106°26′02″ E). Two rapeseed (Brassica napus) cultivars, CY36 (drought-sensitive) and YY57 (drought-tolerant), were selected as plant materials; they were screened from 15 rapeseed cultivars in previous work according to biomass under drought stress followed by water resupply in soil culture. Drought stress was simulated by adding 15% PEG6000 to the nutrient solution, and five K supply levels, 0 mM, 0.1 mM, 1 mM, 10 mM, and 100 mM K2SO4, were recorded as KI, KII, KIII, KIV, and KV, respectively, setting up a K application (0.1 mM) without drought stress as a control (CK). H2SO4 or Ca(OH)2 was used to adjust the pH to 6.0 as necessary. In the sand culture, seeds of both cultivars were disinfected by soaking in 3% NaOCl for 10 min and then thoroughly washed with deionized water. After the seeds dried naturally, 100 seeds of each cultivar were planted evenly in plastic germination boxes of 15 cm length, 13 cm width, and 10 cm height with 300 g quartz sand at the bottom, approximately 2 cm thick (Figure 8). Each germination box was supplied with 75 mL nutrient solution, which was the standard amount to cover the quartz sand at the bottom. All germination boxes, arranged randomly, were cultured for 10 days in an illumination incubator at 25 °C, 16 h·d−1 light cycles, 75% humidity, and 3000 lx light intensity. The nutrient solution level of all boxes was determined and maintained through the weighing method. After 10 days of treatment, two plants were harvested in four replicates to determine root traits.
In the hydroponic culture, five seedlings were randomly selected from each germination box cultured for 10 days and transferred to a black plastic pot of 120 mm diameter and 110 mm height to grow hydroponically (treatment-KV was not transferred due to stunting). Each pot was filled with 1 L Hoagland nutrient solution and all treatments were fixed, cultivated under the same conditions in illumination incubators (Figure 8). After 14 days of continued growth, the plants were transferred to light-proof bottles containing 1 L of 0.5% CaCl2 solution for 12 h to collect root exudates, after the residues attached to the root surface had been thoroughly washed off with deionized water. The extracting solution was concentrated to 50 mL (20-fold concentration) using a rotary evaporator (temperature 50 °C, 80-90 r·min−1) and stored frozen at −80 °C for the determination of organic acids and the activation experiment on soil AK [45].

Figure 8. Schematic illustrating the design of Experiments I and II. Two rapeseed cultivars (CY36 and YY57) were cultured in these ways in both experiments. In the sand culture, 100 seeds were planted evenly in a germination box to establish root system architecture. In the hydroponic culture, 10-day-old rapeseed seedlings were transplanted to each pot to obtain root exudates. Each treatment of Experiment I consisted of five replicated pots. In Experiment II, the dotted lines mark root bags that separate the rhizosphere and non-rhizosphere soil. The soil moisture treatments were started when the seedlings reached the six-leaf stage, each replicated four times.

Experiment II: Effects of the Water-Potassium Combination on Soil Nutrients and Microbes

The experiment was conducted in the glass greenhouse at Southwest University, Chongqing, from September 2018 to January 2019. The soil was a purple soil typical of southwest China, naturally dried and passed through a 3 mm sieve. Its chemical properties were as follows: 11.6 g·kg−1 organic matter, 0.5 g·kg−1 total nitrogen, 37.6 mg·kg−1 available nitrogen (AN), 17 mg·kg−1 AP, 84 mg·kg−1 available potassium (AK), and 22% maximum field capacity. A split-plot experiment was designed in the soil culture, in which soil moisture content (water-limited and water-unlimited) was the main factor and K-fertilizer supply level (three concentrations) was the secondary factor (Figure 8). The moisture level in each pot was kept at 40% water-holding capacity (WHC) for soil drought stress (W1) and at 75% WHC for the normal condition (W2). The K treatment used sulfate (K2SO4) as fertilizer and comprised 0 mg·kg−1 K2O (K1), 80 mg·kg−1 K2O (K2), and 160 mg·kg−1 K2O (K3), summing to six treatments: W1K1, W1K2, W1K3, W2K1, W2K2, and W2K3. Each treatment included the two cultivars (CY36, YY57) used in Experiment I, each replicated four times. Nitrogen and phosphorus fertilizers were applied at 800 mg N and 400 mg P2O5 per pot, respectively, both before sowing. A root bag made of 300-mesh nylon sieve (14 cm in length, width, and height) was placed in each pot to separate the rhizosphere and non-rhizosphere soil [46]. A pot with an inner diameter of 28 cm and a height of 18 cm contained 10 kg of the soil mentioned above, of which 2 kg was held in the root bag and 8 kg remained outside (see Figures S1 and S2 in the Supplementary Materials). Five rapeseed seeds were sown per root bag on 21 September 2018 and seedlings were thinned one month after emergence, keeping two seedlings per pot.
All treatments were maintained at a normal moisture supply during the early stages of rapeseed growth, starting the 14-day soil moisture treatment when the seedlings reached the overwintering stage (six-leaf stage). Samples were taken at the end of the treatment.

Root Morphological Traits

In Experiment I, the primary root length and shoot height were recorded. The remaining fresh sample was used for scanning root morphological parameters with an Epson Perfection V700 Photo scanner (Epson, Nagano, Japan), and root morphological analysis was carried out with the WinRHIZO system (Regent Instruments Inc., Quebec, QC, Canada). The root-shoot ratio was calculated using the lengths of the primary root and shoot.

Organic Acids in Root Exudates and Activation of Soil AK

An HPLC Shimadzu LC-20AD (Shimadzu, Kyoto, Japan) was used in Experiment I to determine the secretion of organic acids in root exudates, focusing on nine organic acids: oxalic acid, tartaric acid, formic acid, malic acid, malonic acid, citric acid, succinic acid, propionic acid, and lactic acid [47]. Activation of soil AK was determined by adding the root exudate extracting solution to 2.50 g of original air-dried soil at a ratio of 1:1.5, with deionized water as a blank control, and incubating at 26 °C for 10 days in a constant temperature incubator [48].

Soil Sampling and Physicochemical Properties

Soil samples were taken separately from rhizosphere and non-rhizosphere soil at the end of Experiment II. By this time, roots had occupied the root bag entirely, ensuring that the soil remaining after removing the top 2 cm layer of the bag was the rhizosphere sample and the soil outside the bag was the non-rhizosphere sample. All samples were quickly passed through a 2 mm sieve to remove rootlets and gravel; one part of the samples was stored after natural drying to measure soil nutrients, and the other part was stored at 4 °C in a refrigerator to determine microbes. Soil physicochemical properties were determined for both rhizosphere and non-rhizosphere soil. Soil pH was determined with a glass electrode at a soil:water ratio of 1:10 (w/v). AN was obtained by the alkaline hydrolysis diffusion method. AP was detected using the molybdenum-blue method after extraction with sodium bicarbonate. AK was measured by atomic absorption spectrophotometry after extraction with ammonium acetate [49].

Soil Microbial Community Functional Diversity

In Experiment II, soil microbial community functional diversity was determined for the rhizosphere only, using Biolog EcoPlate™ (Biolog Inc., Hayward, CA, USA) plates incubated at 28 °C for one week [50]. Microbial data from incubations up to 120 h were selected to calculate the Simpson dominance index (D), richness index (S), evenness index (E), and Shannon diversity index (H), as follows:

D = 1 − Σ Pi²,
S = the total number of carbon sources utilized,
E = H / ln S,
H = −Σ Pi ln Pi,

where Pi is the ratio of the relative absorbance value of hole i to the sum of those of the whole plate. S is the number of holes in the EcoPlate showing a color change; wells with absorbance values less than 0.25 are not counted [51].

Statistical Analysis

The data from both experiments were analyzed using one-way analysis of variance (ANOVA) for significant differences among treatments, with Duncan's new multiple range method for multiple comparisons, using IBM SPSS 25.0 software (SPSS Inc., Chicago, IL, USA).
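Returning to the diversity indices defined in the previous subsection, they reduce to a few lines of NumPy once the plate absorbances are in hand; in the sketch below the 31-substrate absorbance vector, the blank correction, and the restriction of Pi to utilized wells are illustrative assumptions about the EcoPlate readout.

```python
import numpy as np

rng = np.random.default_rng(6)
absorbance = np.clip(rng.normal(0.5, 0.3, 31), 0.0, None)  # 31 carbon sources, blank-corrected

used = absorbance > 0.25                  # a well counts only above the 0.25 threshold
S = int(used.sum())                       # richness: number of utilized sources
p = absorbance[used] / absorbance[used].sum()   # Pi: relative absorbance shares

D = 1.0 - np.sum(p ** 2)                  # Simpson dominance index
H = -np.sum(p * np.log(p))                # Shannon diversity index
E = H / np.log(S)                         # evenness
print(f"S = {S}, D = {D:.3f}, H = {H:.3f}, E = {E:.3f}")
```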
PLS-DA was used to assess drought mitigation by different concentrations of the K treatment, based on the root morphology and organic matter content data of Experiment I. The difference in AK content before and after addition of the root exudate extracting solution was used to indicate the ability of organic acids to mobilize soil AK. Pattern search was used to rank the contribution of the nine organic acids in root exudates to AK activation. PLS-DA and pattern search were conducted in MetaboAnalyst 5.0 (https://www.metaboanalyst.ca, accessed on 23 January 2021). In Experiment II, we carried out ANCOVAs in IBM SPSS 25.0 to explore soil moisture, K supply level, and their interactive effects on soil properties and the microbial community. Structural equation modeling (SEM) was used to determine the direct and indirect contributions of K fertilizer to rhizosphere and non-rhizosphere soil properties and microbial communities, and their internal structure and causal relationships with each other [52]. SEM fitness was assessed using a non-significant chi-square test (p > 0.05), the goodness-of-fit index (GFI), the comparative fit index (CFI), and the root mean square error of approximation (RMSEA), using the lavaan package in RStudio version 1.3.1093 (RStudio, Inc., Boston, MA, USA). Data were mean-centered or log-transformed to satisfy normality and homoscedasticity as necessary. Conclusions. Our study concluded that the underground part of the system, including the root system and the soil ecosystem, plays an important role in K-mediated mitigation of drought stress. Under suitable K supply, stressed rapeseed not only showed an improved potential for elongating root length, boosting root density, and balancing the root-shoot ratio to increase water uptake, but also stimulated organic acid secretion to enhance nutrient acquisition and utilization. However, this alleviation showed a hormesis effect: K promotes growth continuously within an appropriate range but severely inhibits plant growth once the application becomes excessive. Furthermore, the positive effects of K were more pronounced under water-limited conditions. K application contributed to microbial community development by enriching nutrients and neutralizing pH to establish a healthy soil environment, which could help plants maintain resistance and tolerance against drought. Our study quantified the potential ability of organic acids to activate AK and so withstand water deficit, but fully understanding the underlying mechanisms of root exudate responses to climate change will require deeper interpretation at the molecular level.
2021-03-07T06:16:21.633Z
2021-02-24T00:00:00.000
{ "year": 2021, "sha1": "cd7ec33c5ca35be6171c4cdc42e057ae87516904", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1989/11/3/131/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8233c2a18a58ad3b978fa5104ab39fc3ecccbd80", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
262125391
pes2o/s2orc
v3-fos-license
A conjugate self-organizing migration (CSOM) and reconciliate multi-agent Markov learning (RMML) based cyborg intelligence mechanism for smart city security. Ensuring the privacy and trustworthiness of smart city Internet of Things (IoT) networks has remained a central problem. Cyborg intelligence is one of the most popular and advanced technologies for securing smart city networks against cyber threats. Various machine learning and deep learning based cyborg intelligence mechanisms have been developed to protect smart city networks by ensuring property, security, and privacy. However, existing approaches suffer from high time complexity, high computational cost, difficulty of interpretation, and a reduced level of security. Therefore, the proposed work implements a group of novel methodologies for developing an effective cyborg intelligence security model for smart city systems. Here, the Quantized Identical Data Imputation (QIDI) mechanism is first applied for data preprocessing and normalization. Then, the Conjugate Self-Organizing Migration (CSOM) optimization algorithm is deployed to select the most relevant features for training the classifier, which also supports increased detection accuracy. Moreover, the Reconciliate Multi-Agent Markov Learning (RMML) classification algorithm is used to predict the intrusion together with its appropriate class. The original contribution of this work is a novel cyborg intelligence framework for protecting smart city networks from modern cyber threats, combining QIDI for data filtering, CSOM for feature optimization and dimensionality reduction, and RMML for categorizing the type of intrusion. Using these methodologies, the overall attack detection performance and efficiency of the proposed cyborg model are greatly increased. The main reason for using the CSOM methodology is to increase the learning speed and prediction performance of the classifier while detecting intrusions in smart city networks: CSOM provides an optimized feature set that improves the training and testing of the classifier with high accuracy and efficiency, and it is characterized by high searching efficiency, good convergence, and fast processing speed. During the evaluation, different types of cyber-threat datasets are considered for testing and validation, and the results are compared with recent state-of-the-art approaches. Related work on developing effective security systems includes the following. Hindy et al.38 deployed a machine learning based IDS framework for ensuring the security of IoT networks, using the MQTT-IoT-IDS2020 dataset to test its performance. The purpose of that work was to categorize normal and benign traffic using six different machine learning techniques; the analysis showed that the DT technique outperformed the other approaches with improved detection results.
Duraisamy et al.39 implemented a Krill-Herd (KH) optimization integrated Deep Learning Neural Network (DLNN) technique for improving the security of smart city networks. KH is a popular optimization technique extensively used for feature selection and dimensionality reduction; in addition, min-max normalization was used to preprocess the dataset. The key benefits of that work were increased detection accuracy, a high level of security, and minimal time consumption; its limitations were difficulty of interpretation, a reduced convergence rate, and complex mathematical calculations. Alsarhan et al.40 deployed a Support Vector Machine (SVM) classification technique for detecting intrusions in Vehicular Ad-hoc Networks (VANETs). Three different optimization techniques, Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Genetic Algorithm (GA), were evaluated separately to select the most suitable one; the study showed that the GA-SVM combination outperformed the other approaches, with the key benefits of reduced false positives and error rate and better convergence speed. Reference 41 integrated the federated learning model with smart city application systems to improve their security and privacy, conducting a comprehensive review of AIoT techniques for maximizing network security. Bangui et al.42 presented a comprehensive survey investigating recent machine learning techniques used to develop advanced IDS frameworks, including Recurrent Neural Networks (RNN), game theory, SVM, K-means, Self-Organizing Maps (SOM), Logistic Regression (LR), and Random Forests (RF); among these mechanisms, the RNN provided the highest detection accuracy and efficiency. Maseleno et al.43 deployed a Random Monarch Butterfly (RMB) optimization integrated RNN technique for protecting smart society networks against cyber threats. During optimization, migration and butterfly-adjusting operators were used to identify the best optimal solution with a reduced number of iterations, and the attack detection performance was validated against the parameters of detection level, f-measure, accuracy, and error rate. The primary advantages of this technique were the capability of handling large-dimensional datasets and reduced training and testing time. Table 1 reviews some of the recent state-of-the-art techniques used for smart city security and intrusion detection 44. The motivations behind the proposed work are given below: • To thoroughly investigate the research gaps in the network intrusion detection procedures of the linked devices of smart cities. • To create a more precise and effective network intrusion detection system for smart cities. • To develop the aforementioned mechanism in the framework of cyborg intelligence so as to gain from both machine and human intelligence. • To apply the proposed method to specific datasets for evaluating the effectiveness of the mechanism. • To evaluate the success rate of current machine learning techniques for network intrusion detection.
Therefore, the proposed work intends to develop a novel cyborg intelligence mechanism for securing smart city networks with reduced computational and time complexity. The major research objectives of this paper are as follows: • To design and develop a novel cyborg intelligence based security model for protecting smart city networks against cyber threats. • To normalize and preprocess the input cyber-threat datasets, the Quantized Identical Data Imputation (QIDI) mechanism is employed, which effectively improves the quality of the dataset by filtering the attributes. • To optimally choose the features for training the classifier model, an intelligent and advanced Conjugate Self-Organizing Migration (CSOM) based optimization algorithm is developed. • To accurately predict the intrusion with its category, a novel Reconciliate Multi-Agent Markov Learning (RMML) based classification approach is implemented. • To test and validate the results and efficacy of the proposed CSOM-RMML mechanism, different types of evaluation indicators are estimated. The remainder of this paper is organized as follows: Section "Methods" presents a clear description of the proposed CSOM-RMML based cyborg intelligence mechanism with its working flow and algorithms. The results of the proposed mechanism are validated and compared using different datasets and parameters in Section "Results". Finally, the entire paper is summarized with its findings, challenges, and future work in Section "Conclusion". Methods. This section presents a clear description of the proposed cyborg intelligence model for increasing the security of smart city systems. The original contribution of this work is to implement novel optimization and classification techniques to design an IDS framework that protects smart city networks against cyber threats. The overall working flow of the proposed system is shown in Fig. 1, which includes the following stages: • Dataset preprocessing and imputation • Feature optimization • Intrusion detection and classification. Here, different types of cyber-attack datasets are taken as inputs for processing, including UNSW-NB15, DS2OS, CICIDS-2017, BoT-IoT IDS 2020, and NSL-KDD; these are among the most popular benchmark datasets, widely used in many application systems. First, the Quantized Identical Data Imputation (QIDI) is used to preprocess the cyber-attack datasets by identifying the missing data vectors; it also helps to improve the overall quality of the input, which supports maximum detection performance. After that, the Conjugate Self-Organizing Migration (CSOM) based optimization algorithm is implemented to choose the optimal set of features for classifier training and testing; the primary purpose of this mechanism is to detect cyber threats accurately with increased computational efficiency. Consequently, the Reconciliate Multi-Agent Markov Learning (RMML) based classification methodology is used to predict and categorize the type of cyber threat against the smart city networks. The key benefits of the proposed CSOM-RMML based cyborg intelligence mechanism are a high security level, increased attack detection performance, ease of understanding, and minimal computational complexity. Preprocessing. Different types of data require different cleaning techniques. In machine learning, missing data must be treated with caution, since it is especially consequential. There are two common approaches to handling missing data, although both may produce less-than-ideal information: • Eliminating records with missing values: this is not the best course of action because it could discard informative records. • Imputing the missing values from previous observations: this is also not ideal and could introduce error, because the value was initially missing and has been added.
In this stage, the Quantized Identical Data Imputation (QIDI) mechanism is employed to preprocess the given cyber-threat datasets by normalizing the attributes. The main purpose of this preprocessing technique is to increase data quality by identifying missing fields and eliminating irrelevant attributes; the preprocessed data helps to obtain improved classification performance. Conventionally, various filtering and normalization techniques are used in existing works for dataset preprocessing, but they suffer from noise, inconsistency, and error values. Therefore, the proposed work employs the new QIDI mechanism, which has the key benefits of being simple to understand, easy to implement, fast to process, and producing high-quality data. During this process, the missing input data vector is generated first: X = (x_1, x_2, ..., x_N), (1) where N indicates the number of features. After that, the distance of each weight vector from the input is estimated: D_i = sqrt(Σ_{k=1..n} m_k (x_k − w_ik)²), (2) where D_i indicates the Euclidean distance between the input vector and weight vector i, x_k is the k-th element of the current input vector, n indicates the dimension of the input vector, and w_ik is the k-th element of weight vector i. The mask value m_k is derived from the input vector: m_k = 0 if x_k is missing and m_k = 1 otherwise, (4) so that missing elements do not contribute to the distance. Then, the Best Identical Unit (BIU) is identified as the unit minimizing D_i, and the weight vector of this winner neuron is adjusted; the BIU and its adjacent neurons move closer to the input vector in the space, which also increases the agreement between the input and weight vectors. This adjustment is carried out as w_i(t + 1) = w_i(t) + η(t) h_f(t) (x − w_i(t)), (5) where w_i(t) is the weight vector at time t, η(t) is the learning rate, and h_f is the neighborhood function. The learning rate decreases monotonically as the number of iterations increases, e.g. η(t) = η_0 (1 − t/T), (6) where η_0 is the initial learning rate and T the training length. After that, the quantization error is estimated as QE = (1/N) Σ_{i=1..N} ‖X_i − W_ib‖, (7) where N is the number of input vectors used to train the map, W_ib is the prototype weight vector of the best matching unit of X_i, and ‖·‖ denotes the Euclidean distance. Finally, the proportion of variance of one variable explained by another is predicted as the squared correlation r², with r = Σ_i (x_i − x̄)(y_i − ȳ) / sqrt(Σ_i (x_i − x̄)² · Σ_i (y_i − ȳ)²), (8) where x_i is the observed value of the i-th feature, y_i the trained value of the i-th feature, x̄ the mean of the observed values, ȳ the mean of the trained values, and n the number of observations. Based on this process, the given input cyber-threat dataset is preprocessed and its attributes normalized.
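QIDI is introduced in this paper, so no reference implementation exists; the update equations above mirror classical self-organizing-map training, and the following Python sketch illustrates that reading on mock data (the neighborhood term h_f is omitted for brevity, and `qidi_impute` is an illustrative name, not the authors' code):

```python
import numpy as np

def qidi_impute(X, n_units=10, eta0=0.5, T=100, seed=0):
    """SOM-style sketch of the QIDI idea: train prototype weight
    vectors on observed entries only (mask m_k = 0 for missing
    values), then fill gaps from each row's Best Identical Unit."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))     # prototype weights w_i
    M = ~np.isnan(X)                               # mask: observed entries
    X0 = np.where(M, X, 0.0)
    for t in range(T):
        eta = eta0 * (1 - t / T)                   # monotone decay eta(t)
        x, m = X0[t % len(X)], M[t % len(X)]
        d = np.sum(m * (W - x) ** 2, axis=1)       # masked distance D_i
        b = np.argmin(d)                           # BIU index
        W[b] += eta * m * (x - W[b])               # move BIU toward input
    biu = np.argmin(((X0[:, None, :] - W) ** 2 * M[:, None, :]).sum(-1), axis=1)
    return np.where(M, X, W[biu])                  # impute from BIU weights

X = np.array([[1.0, 2.0, np.nan], [0.9, 2.1, 3.0], [1.1, np.nan, 2.9]])
print(qidi_impute(X))
```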
Algorithm I - Quantized Identical Data Imputation (QIDI). Input: cyber-threat dataset. Output: preprocessed data. Step 1: The missing input data vector is initialized with the number of features, as represented in Eq. (1). Step 2: The Euclidean distance to each weight vector is estimated according to Eq. (2). Step 3: The mask value is computed for each feature column, as indicated in Eq. (4). Step 4: The Best Identical Unit (BIU) is identified and the weight vector of the winner neuron is adjusted using Eq. (5). Step 5: The learning rate η(t) is decreased as the number of iterations increases, as represented in Eq. (6). Step 6: The quantization error is estimated from the number of input vectors and the Euclidean distances, as shown in Eq. (7). Step 7: Finally, the proportion of variance is predicted with respect to the trained values, mean values, and number of observations, as represented in Eq. (8). Feature optimization. Furthermore, we employed a feature optimization approach to reduce the input dimension by choosing the optimal feature subset. After imputation, the Conjugate Self-Organizing Migration (CSOM) optimization algorithm is employed to select the features for classifier training and testing. In existing smart city frameworks, various nature-inspired and bio-inspired optimization techniques are used to reduce the dimensionality of data and improve the detection rate of the classifier. Nevertheless, they suffer from reduced convergence speed, a large number of iterations to reach the optimal solution, high time consumption, and complex computations. Thus, the proposed work develops a novel, intelligent optimization technique for selecting the relevant features from the normalized cyber-threat datasets. It is motivated by the smart, successful, and cooperative behavior of population members who use numerous migration loops to find the problem's ideal, global solution: a stochastic optimization technique that draws inspiration from the swarm intelligence of creatures such as birds and fish. The goal in numerical optimization is to search for globally optimal solutions. To do so, the technique starts by creating a population of a certain number of individuals, each of whom is a potential solution to the problem. Through numerous migration loops, further solutions superior to the initial ones are then produced based on rivalry and collaboration among these individuals, a crucial component of swarm-intelligent algorithms. This continues until the algorithm's stop criteria are met. The mechanism encompasses the following operations. Setup parameters. First, the setup parameters are initialized, including the controlling parameters Num_set, CPT, and pop_no, the stopping parameters Mig and Dist_m, and the iteration count cnt_m. • Num_set is a controlling parameter that defines the number of steps before the end of the movement.
• CPT is another controlling parameter that determines whether an individual will move along the chosen coordinate toward the leader; the suggested value is 0.3. • pop_no is a control parameter that sets the size of the population; suggested pop_no > 10. • Migration (Mig) is a stopping parameter giving the maximum number of iterations; suggested Mig > 10. • Dist_m is a stopping parameter determined from the value of the goal function: it reflects the average deviation among the three population leaders. The algorithm halts if this deviation falls below the target value. Once the value is entered, the condition is verified; if it is negative, the condition will never be satisfied and the search will end once the allotted number of migration cycles has been reached. • cnt_m is an iteration counter used to terminate the algorithm when it reaches the migration number; initially, cnt_m = 0. Generation of the individual population. After that, the individual population is generated by drawing each coordinate x_j randomly within the interval [α_j, β_j]: x_j = α_j + rand_j[0, 1] (β_j − α_j), j = 1, 2, ..., n, for all k = 1, ..., pop_no, (8) where n is the number of coordinates. Migration loop. Consequently, the migration loop is executed, in which the leaders are first selected according to the best values. This selection is carried out after evaluating each individual with the objective function: the population {x_1, ..., x_pop_no} is sorted in non-decreasing order of the target function, and the first three individuals, those with the lowest objective values, are selected as leaders. For every individual, two clones are created, i.e., individuals with the same coordinates. Moreover, before the individuals begin to travel toward a leader, a random number is created for each coordinate and compared with the controlling parameter CPT, yielding a perturbation vector PRT_j = 1 if rand_j[0, 1] < CPT and PRT_j = 0 otherwise. (9) Subsequently, all other individuals move toward the leader in steps until the final position of the iteration is reached, the number of steps being governed by Num_set. Following the standard self-organizing migration update, the movement toward the first leader is estimated as x_k(s) = x_k + (x_L1 − x_k) · (s / Num_set) · PRT, s = 1, ..., Num_set, (10) and it is estimated analogously for the second and third leaders, x_L2 and x_L3. After all movements, the best step is identified for each individual (the step at which the value of the objective function is smallest); the individual takes this location, assigning itself the corresponding coordinate values, and moves on to the next population. This is estimated relative to the first leader and, equivalently, for the second and third leaders. Convergence testing. Furthermore, the convergence of the optimization algorithm is checked with the stopping conditions: the average deviation among the three leaders ≥ Dist_m and cnt_m < Mig. If these conditions are satisfied, the maximum number of migrations has not been reached, so go to Step A; otherwise, go to Step B. Step A: • Update the population. • Sort all individuals by non-decreasing objective function, where P = 3 · pop_no after cloning. • Remove the worst individuals, leaving only x_1, ..., x_(2/3 · pop_no). • Generate (1/3) · pop_no new individuals, for k = (2/3) · pop_no, ..., pop_no and j = 1, 2, ..., n.
• Increase the iteration counter: cnt_m = cnt_m + 1, then return to the migration loop. Step B: • Refinement and stopping of the algorithm. • Increase the Num_set parameter and conduct a migration cycle for the second and third leaders relative to the first leader, predicting the new leader position and returning the best solution found during the search. Reconciliate multi-agent Markov learning (RMML). After feature selection, the novel Reconciliate Multi-Agent Markov Learning (RMML) technique is employed to predict and categorize the intrusion according to the selected features. It is a machine learning mechanism mainly used for accurately predicting cyber threats against smart city networks. In existing works, different machine learning techniques such as DT, RF, LR, SVM, and KNN have been used to build IDS security frameworks; yet they suffer from increased false alarm rates, error rates, complexity of interpretation, and high training and testing times. Hence, the proposed work develops an advanced cyborg intelligence mechanism by designing an optimization-incorporated machine learning classification methodology that helps to secure smart city networks against cyber threats. In the proposed work, the RMML-based machine learning model is mainly used to predict intrusions in smart city networks. The algorithm is developed from the conventional multi-agent Markov decision technique, which is well suited to prediction problems. Compared to other machine learning techniques, RMML has the primary advantages of low computational complexity, reduced time consumption, and high training speed. In the proposed technique, a probability density function is estimated for making accurate decisions while predicting intrusions from the intrusion data. Typically, deep learning techniques consume more time for training and testing data samples and involve complex computational operations to obtain the best classification results; machine learning techniques produce the classified label in less time, but their accuracy and efficiency are often not up to the mark. Hence, the proposed work implements a novel and effective machine learning technique for intrusion identification and classification. In this model, vector prediction, weight matrix formulation, coupling coefficient estimation, and probability density function estimation are performed to make accurate decisions at the time of intrusion detection, which lowers the complexity of classification while ensuring accuracy. Initially, the samples of the i-th label are represented as u_i and the samples of the j-th label as s_j. After that, vector prediction is performed by inferring all features in layer i: û_{j|i} = W_ij u_i + B_j, (11) where W_ij is the transformation matrix connected to the decision process, u_i is the prediction vector for the i-th label, and B_j indicates the bias of the j-th label. The prediction vector for the j-th label is then treated as a vote, and the weighted vote is estimated from the coupling coefficients: s_j = Σ_i Z_ij û_{j|i}, (12) where Z_ij is the dynamic coupling coefficient, computed as a softmax over the routing logits: Z_ij = exp(p_ij) / Σ_k exp(p_ik), (13) where p_ij indicates the probability of common features between the i-th and j-th labels. Consequently, a probability value δ_j is computed for each label category from the vote s_j.
The quantity δ_j obtained from voting is refined over multiple iterations of the model training, which update p_ij. Moreover, the backpropagation function is used to optimize the network parameters through a margin-style loss: L_c = I_c max(0, m+ − δ_c)² + ϑ (1 − I_c) max(0, δ_c − m−)², summed over the c = 1, ..., class_no categories of the training samples, (14) where I_c is the indicator function (I_c = 1 if the sample belongs to category c and 0 otherwise), m+ is the upper bound correcting false positives, m− is the upper bound correcting false negatives, ϑ is the scale factor that adjusts both bounds, and class_no is the total number of classes. Based on the Markov property and the theory of probability of moving estimation, the discriminant for a sample with respect to a category is estimated by Bayes' rule: Pr(Y|X) = Pr(X|Y) Pr(Y) / Pr(X), (15) where Pr(X|Y) is the probability density function of the data, Pr(X) is the prior probability distribution of the data of the particular category, and Pr(Y) is the prior probability distribution of the data of any category. The Markov theory term is then computed from a random field, where c denotes the subcategory of each label, C indicates the number of classes, τ is the controlling parameter of the spatial term, R is the normalization constant of the random field, and the potential function is f_c(x) = −1 if the samples belong to the same category and +1 otherwise. Furthermore, the probability density function is estimated over the training set {(x_i, y_i)}, i = 1, ..., M, where x_i is the i-th sample, y_i the category of the i-th sample, and M the total number of samples. Based on the Markov decision formula, the probability is obtained for each category; this probability function is then converted into negative-logarithmic form, so the problem of probability maximization is transformed into a minimization problem, where δ(a, b) is the Kronecker function, Pr(y_i|x_i) is the posterior probability of the output value from the neural network, and Pr(y_i) is the prior probability of the category. The latter is calculated from the proportion of the current category after each iteration and used as the input value for the next iteration. Based on this model, the proposed classifier predicts and categorizes the type of cyber threat with reduced training and testing time.
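The voting and coupling-coefficient steps above resemble dynamic routing between capsules; the sketch below illustrates that reading in Python on mock prediction vectors (the squash normalization, the array shapes, and the `rmml_vote`/`margin_loss` names are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def rmml_vote(u_hat, iters=3):
    """u_hat: (n_in, n_labels, dim) prediction vectors W_ij u_i + B_j.
    Iteratively updates the routing logits p_ij by vote agreement and
    returns per-label strengths delta_j (capsule-style reading)."""
    p = np.zeros(u_hat.shape[:2])               # routing logits p_ij
    for _ in range(iters):
        Z = softmax(p, axis=1)                  # coupling coefficients Z_ij
        s = (Z[..., None] * u_hat).sum(axis=0)  # weighted vote s_j
        v = s / (1.0 + np.linalg.norm(s, axis=-1, keepdims=True))
        p += (u_hat * v[None]).sum(-1)          # agreement updates p_ij
    return np.linalg.norm(v, axis=-1)           # delta_j per label

def margin_loss(delta, y, m_pos=0.9, m_neg=0.1, scale=0.5):
    """Margin-style loss with upper bounds m+ / m- as in the text."""
    I = np.eye(delta.shape[-1])[y]              # indicator I_c
    pos = I * np.maximum(0.0, m_pos - delta) ** 2
    neg = scale * (1 - I) * np.maximum(0.0, delta - m_neg) ** 2
    return (pos + neg).sum()

rng = np.random.default_rng(2)
d = rmml_vote(rng.normal(size=(8, 5, 16)))      # 8 features, 5 classes
print(d, margin_loss(d, y=2))
```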
Results. This section validates and compares the performance and results of the proposed cyborg intelligence mechanism for securing smart city networks. To test the security system, different cyber-threat datasets are utilized, including UNSW-NB 15, NSL-KDD, BoT-IoT IDS, and DS2OS. Moreover, the performance of the proposed optimization technique is validated with respect to the number of iterations, best score, objective space, average fitness value, and search history. In this study, the three distinct and well-known datasets UNSW-NB 15, BoT-IoT, and DS2OS are used for verifying and evaluating the proposed CSOM-RMML approach. These are among the most recent and widely used public datasets in security application systems and contain contemporary attack data that is helpful for analyzing network attacks; they were chosen for their recency, popularity, and ease of access. Additionally, ToN-IoT, another current dataset, is employed in this study to assess the superiority of the proposed work. These datasets are appropriate for large-scale application contexts such as smart cities, IoT, and IIoT. The results show that the proposed CSOM-RMML can handle these datasets with excellent accuracy and efficacy, and it can therefore handle very large intrusion datasets with superior performance and prediction rate. Figure 2 shows the estimated benchmark testing function of the proposed CSOM optimization technique; its corresponding search history and average fitness value are shown in Figs. 3 and 4, respectively. The best score obtained for a varying number of iterations is shown in Fig. 5. These results indicate that the proposed CSOM optimization technique is efficient, finding the best optimal solution in a reduced number of iterations: thanks to the proper parameter setup and migration loop execution, the best optimal solution is computed with an increased convergence rate. Figure 6a to e presents the confusion matrices of the proposed cyborg intelligence mechanism for the different datasets. The confusion matrix is mainly used to validate the detection performance of the classifier: improved TPR values indicate increased classifier accuracy. In this analysis, confusion matrices are validated for all cyber-threat datasets, and the estimated results prove that the proposed CSOM-RMML based cyborg intelligence mechanism produces accurate predictions by properly detecting intrusions and their appropriate classes. The detection results are quantified with standard confusion-matrix measures, e.g. Accuracy = (TP + TN) / (TP + TN + FP + FN), where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives. Among these parameters, accuracy is considered one of the key factors for assessing the detection efficiency of the classifier and must be high to ensure good system operation and performance. Figure 7 shows the accuracy of the conventional approaches and the proposed CSOM-RMML attack detection approach for securing smart city networks; the results show that the CSOM-RMML technique outperforms the other approaches with increased accuracy. Similarly, the classification accuracy is estimated for the conventional 50 and the proposed optimization-integrated classification techniques for the different classes of the NSL-KDD dataset in Fig. 8.
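As a concrete illustration of the confusion-matrix measures above, a small Python helper with mock counts (the counts are invented for the example and do not come from the paper):

```python
def ids_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics used in the evaluation."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # detection rate / TPR
    far = fp / (fp + tn)               # false alarm rate
    f1 = 2 * precision * recall / (precision + recall)
    return dict(accuracy=accuracy, precision=precision,
                recall=recall, far=far, f1=f1)

print(ids_metrics(tp=950, tn=920, fp=30, fn=50))
```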
In addition, the overall accuracy is validated for the multi-objective-optimization based classification techniques on the NSL-KDD dataset. The computed results clearly illustrate that the proposed CSOM-RMML technique provides increased accuracy for all types of attack classes, a clear improvement over the conventional approaches. Owing to proper feature identification, the classifier training and testing operations are enhanced, which supports maximum accuracy during intrusion detection and classification. The accuracy of various optimization-integrated classification techniques is represented in Fig. 9. Figure 10 presents the overall performance analysis of the conventional and proposed classification-based intrusion detection approaches; here, the results are estimated in terms of accuracy, detection rate, False Alarm Rate (FAR), and F1-score. According to the results, it is evident that the CSOM-RMML combination outperforms the other approaches with improved performance results. Consequently, the detection rate is validated against the state-of-the-art IDS mechanisms and standard machine learning techniques 51. In addition, the elapsed time and CPU time of the conventional and proposed security approaches are validated and compared in Figs. 13 and 14, respectively. Here, the time analysis is performed for the different attack classes of the UNSW-NB 15 dataset. Typically, the time cost varies between the training and testing operations of the classifier and is highly dependent on the type of predicted class; for instance, the normal class has the largest proportion during training and testing, so it takes more time at a low data frequency. From the observed results, the proposed CSOM-RMML technique requires a reduced time cost compared with the conventional approaches. Moreover, the accuracy of the standard machine learning models and the proposed classification model is validated on the UNSW-NB 15 dataset, as shown in Fig. 15. Similarly, the overall performance results of the conventional and proposed CSOM-RMML intrusion detection approaches are validated and compared on the DS2OS, UNSW-NB15, and CICIDS-2017 datasets, as represented in Figs. 16, 17 and 18; the results are estimated in terms of accuracy, precision, recall, and F1-score. To prove its superiority, the proposed security framework is validated and tested on these DS2OS, UNSW-NB15, and CICIDS-2017 datasets. Depending on the type of attack class, the detection rate and accuracy of the classifier can vary. These results show that the CSOM-RMML combination has an increased capability to handle all kinds of datasets with improved performance outcomes; compared with the other approaches, the results are markedly better in the CSOM-RMML system, which illustrates the superiority of the proposed model.
Figure 19 validates the log-loss value of the existing and proposed classification techniques on both the DS2OS and UNSW-NB-15 datasets. Typically, the log-loss value should be minimized to ensure accurate detection, because an increased loss value can degrade the performance of the entire security model. The estimated analysis shows that the proposed CSOM-RMML technique yields a reduced log-loss value for both datasets by properly handling the input data. Furthermore, the FAR of the standard machine learning techniques and the proposed technique is validated and compared on the BoT-IoT IDS dataset, as shown in Fig. 20. Owing to the proper training and testing of features in the classifier, the FAR of the proposed classifier is effectively reduced compared with the other approaches. In this study, several parameters including accuracy, detection rate, false alarm rate, F1-score, and time consumption have been estimated to assess the performance of the proposed model, using distinct and popular intrusion datasets. For the NSL-KDD dataset, the intrusion classification accuracy increases to 99% across the different attack types in the dataset. Similarly, the detection rate improves up to 99.5% for the UNSW-NB 15 dataset, with an accuracy of 99.6%; moreover, the elapsed time on UNSW-NB 15 is reduced to 0.2 s in the proposed system. Conclusion. This article introduces a new security paradigm based on cyborg intelligence to safeguard the networks of smart cities against cyber threats. The key contribution of this work is the creation of a computationally low-complexity and economical intrusion detection framework for smart city security. The security approach is put into practice using the most well-known and widely accessible benchmark datasets. The framework comprises the stages of data pretreatment and imputation, feature optimization, and intrusion detection and categorization. In the beginning, the QIDI technique is used to carry out the data imputation and normalization procedures, in which the identification of missing fields and the removal of undesired attributes are carried out to provide a high-quality input for classification. Figure 9. Accuracy of various optimization-integrated classification techniques. Figure 12. Detection rate of various machine learning techniques using the UNSW-NB 15 dataset. Figure 15. Accuracy of machine learning classifiers using the UNSW-NB 15 dataset.
Table 1. Recent state-of-the-art model analysis (excerpt). One study used an ensemble weighted-average approach to categorize normal and intrusive events in smart city networks, at the cost of computational complexity and overfitting; other machine-learning-based studies implemented different algorithms for intrusion detection, offering high reliability, fast processing, and a better prediction rate. The accuracy, precision, recall, detection rate, and F1-score are mainly used to validate the detection results of the classifier, estimated as follows: Precision = TP / (TP + FP), Recall (detection rate) = TP / (TP + FN), F1-score = 2 · Precision · Recall / (Precision + Recall), and FAR = FP / (FP + TN).
2023-09-23T06:17:40.097Z
2023-09-21T00:00:00.000
{ "year": 2023, "sha1": "d46c10ef77d0f996b9048e85d40fce53a922e636", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-023-42257-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "07e9c2dd140735be80854fd1a168ccc3af23796c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
58660551
pes2o/s2orc
v3-fos-license
Dysmorphic contribution of neurotransmitter and neuroendocrine system polymorphisms to subtherapeutic mood states. Abstract. Objective: From an evolutionary perspective, emotions emerged as rapid adaptive reactions that increase survival rates. Current psychobiology holds that genetic changes affecting neuroendocrine and neurotransmission pathways may also affect mood states. Following this hypothesis, abnormal levels of any of the aminergic neurotransmitters would be of considerable importance in the development of a pathophysiological state. Materials and Methods: A total of 668 students from the School of Medicine of the University of Malaga (average age 22.41 ± 3 years; 41% men) provided self-report measures of mood states using the POMS and GHQ-28 questionnaires, and buccal cells for genotyping 19 polymorphisms from 14 selected genes of neurotransmitter pathways (HTR1A, HTR2A, HTR2C, HTR3B, TPH1, SLC18A1, SLC18A2, COMT, MAOA, MAOB) and the neuroendocrine system (AVPR1B, OPRM1, BDNF, OXTR). Results: The MAOA rs3788862 genotype correlates with decreasing levels of tension among females (beta = −0.168, p-value = 0.003) but is neutral among males on this subscale. Conversely, it correlates with lower GHQ-28 depression scores among males (beta = −0.196, p-value = 0.008). Equivalently, SLC18A1 and HTR2A variants correlated with anger and vigor scores only among males. From the neuroendocrine system, OPRM1 rs1799971 correlated with increasing levels of females' anxiety, depression, and social dysfunction scores. Conclusion: Our findings suggest that these polymorphisms contribute to defining mood levels in the general population, although with a clear sexual dimorphism. | INTRODUCTION. The study of human personality, behavior, and mood has been addressed from multiple disciplines. Understanding the intimate nature of our emotions allows us to give a rational voice to the feelings that condition our behavior in society. Emotions have been explained from the evolutionary perspective as rapid adaptive reactions that increase survival rates among vertebrates (Nesse & Ellsworth, 2009). Anxiety and fear, for example, are triggered from the amygdala even before our frontal cortex processes the origin of the warning stimulus. This alarm system is tightly regulated but allows an overreaction: from a purely biological point of view, it is much more economical to be alarmed without reason than not to be once a situation deserves it (Marks & Nesse, 1994; Sanjuán & Casés, 2005; Garakani, Mathew, & Charney, 2006). Certain types of depression would emerge as a strategy of energy saving when facing the impossibility of achieving an objective, thereby reducing the risk posed by new stressors, a situation that would be reversed when those objectives are achieved (Sanjuán & Casés, 2005; Kinney & Tanaka, 2009). In this sense, depressed patients with a poor therapeutic response exhibit a significant improvement upon a favorable environmental change. This is also compatible with the hypotheses related to social competition, according to which the levels of serotonin (5-hydroxytryptamine or 5-HT) in the central nervous system are elevated upon the achievement of dominance, which is associated with decreased stress levels and mood enhancement (Raleigh et al., 1991; Price, Sloman, Gardner, Gilbert, & Rohde, 1994). Each element that contributes to the mood state is influenced by a wide spectrum of individual and collective factors; it might therefore be considered one of the most complex human traits to study.
Knowing the biochemical pathways that shape mood states is especially relevant in daily clinical practice, whenever a subject reaches a pathological level in one of the components of the mood state and exhibits an anxiety disorder, bipolar disorder, or major depression. We must take into account that the different dimensions of mood states are quantitative variables that affect the general population to different degrees. Currently, we have a diverse range of technical approaches to study mood states. The Goldberg General Health Questionnaire (GHQ-28) is an instrument originally designed to identify nonpsychotic mental disorders in general medical practice. It allows psychiatric patients to be differentiated in a simple way from those considered healthy (Goldberg, 1978; Retolaza et al., 2003). The GHQ-28 consists of four subscales: scale A refers to somatic symptoms, B to anxiety and insomnia, C to social dysfunction, and D to depression. The GHQ-28 can be applied to the general population and is suggested for the assessment of mental health. The Profile of Mood States (POMS) test consists of 65 items rated on a Likert-type format with five response alternatives ranging from 0 to 4 (McNair, Lorr, & Droppleman, 1971). It yields a general index of mood disturbance from seven partial measurements: tension, depression, anger, vigor, fatigue, confusion, and friendship. Initially, this test was used to evaluate the effects of psychotherapy and medication in psychiatric outpatients, although it was also tested with a variety of nonpsychiatric samples and has become a very popular instrument (Andrade et al., 2002). Current psychobiology includes the consideration that genetic changes affecting neurotransmission pathways may also affect mood states. Following this hypothesis, abnormal levels of any of the aminergic neurotransmitters, dopamine, norepinephrine, and serotonin, would be of considerable importance in the development of a pathophysiological state (Baldwin & Birtwistle, 2002). Serotonin has remarkable actions on the sleep-wake cycle, behavior, cardiac function, endocrine secretions, pain perception, appetite, and sexual activity. Tryptophan is the known precursor of serotonin. Functional mutations affecting the coding region of the tryptophan hydroxylase 2 gene (TPH2) have been found among families with bipolar disorder (Grigoroiu-Serbanescu et al., 2008). Other studies have analyzed the role of genes involved in neurotransmitter synthesis, transport, and degradation, such as SLC6A3, HTR2A, MAOA, COMT, and SLC6A4 (O'Donovan et al., 2008; Williams et al., 2011). Coding variants within the COMT gene, related to dopamine degradation, have been shown to be associated with bipolar disorder risk (Zhang et al., 2009). A single-nucleotide polymorphism (SNP) in the promoter region of the serotonin receptor gene HTR1A was also significantly associated with bipolar disorder risk (Kishi et al., 2011), as were different genomic variants of the monoamine oxidase genes (MAOA, MAOB) (Fan et al., 2010). Polymorphisms within SLC6A4 (5-HTTLPR) have been studied among major depressive disorder patients and included in several meta-analyses that demonstrated a small but significant association with bipolar disorder (Lasky-Su, Faraone, Glatt, & Tsuang, 2005; Cho et al., 2005).
Meta-analyses studying the different alleles of the TPH1 gene concluded that it is not associated with major depressive disorder but rather with bipolar disorder (Halmoy et al., 2010). Other genes have also been found to affect different neuropsychiatric disorders, such as the brain-derived neurotrophic factor (BDNF) gene, which is involved both in the pathogenesis of depression and in the mechanism of action of antidepressant treatments (Duman & Monteggia, 2006; Verhagen et al., 2010). However, in spite of the role of the aforementioned genes in the development of pathological states, the literature is scarce on how the different genetic configurations affect mood states among healthy subjects. In order to evaluate quantitatively the role of these genetic variants on the different dimensions of mood state within the general population, we initiated a study in which 20 genetic variants affecting different neuroendocrine biochemical pathways were analyzed in a series of volunteers from the University of Malaga, phenotyped using the POMS and GHQ-28 questionnaires. | DNA donors. The study subjects of this research were 668 healthy students of the University of Malaga who voluntarily decided to participate in the project. The inclusion criteria were being an adult and feeling healthy, without apparent psychiatric disease. The following demographic variables were recorded: weight, height, age, sex, and whether they were currently taking any drug treatment. DNA was extracted from buccal swabs according to standard procedures. This research was carried out with the approval of the Ethics Committee of the University of Malaga, all students signed an informed consent, and the work was carried out in accordance with the principles of the Declaration of Helsinki. | Single-nucleotide polymorphisms. Genotyping was outsourced to Genologica SL. SNP analysis was performed using the TaqMan OpenArray Genotyping System from Applied Biosystems, and the results obtained were processed with the TaqMan system. | Statistical analysis. The Kolmogorov-Smirnov test was used to determine the normality of the quantitative data series. For bivariate correlation studies, both Pearson's correlation coefficient and Spearman's rho were calculated. For models that included both the genetic variants and other covariates, linear regression models were used. The level of significance was 0.05. | RESULTS. The study comprised 668 students from the School of Medicine of the University of Malaga recruited between 2011 and 2015. The age of the study subjects was relatively homogeneous (22.41 ± 3 years), although it ranged between 18 and 51 years. The series comprised 41% men, all of Caucasian origin. All participants were sampled by buccal swab for subsequent determination of genetic polymorphisms. Call rates averaged 96%, ranging from a maximum of 98% for SNPs such as rs3813929, rs3027452, or rs2254298 to a minimum of 89% for rs6313. Hardy-Weinberg equilibrium (HWE) was determined for the SNPs mapping to autosomal chromosomes, and only those with p > 0.05 were used for further analyses (all but rs2254298, rs324981, and rs1800532; Supporting Information Table S2). When the volunteers were invited to fill in the POMS and GHQ-28 questionnaires, 601 of the initial 668 students (90%) completed both tests. A summary of the mood variables determined using POMS and GHQ-28 is shown in Supporting Information Table S3.
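As an illustration of the HWE filter applied above, a minimal Python sketch of the 1-degree-of-freedom chi-square test on mock genotype counts (the counts are invented; the study's actual counts are in Supporting Information Table S2):

```python
import numpy as np
from scipy.stats import chi2

def hwe_test(n_aa, n_ab, n_bb):
    """Chi-square test for Hardy-Weinberg equilibrium from observed
    genotype counts (1 df); SNPs with p <= 0.05 were excluded."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                        # freq of allele A
    exp = np.array([p * p, 2 * p * (1 - p), (1 - p) ** 2]) * n
    obs = np.array([n_aa, n_ab, n_bb])
    stat = ((obs - exp) ** 2 / exp).sum()
    return stat, chi2.sf(stat, df=1)

print(hwe_test(240, 310, 118))   # mock genotype counts
```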
We first determined the correlation between both tests and evaluated the effects attributable to age, sex, and BMI (Supporting Information Table S4). Gender exhibited statistically significant differences in the vigor T-score (lower among females, Spearman's p-value = 0.004) and chronic anxiety (higher among females, Spearman's p-value = 0.008). Age also correlated with parameters such as vigor, friendship, and new-onset depression, evidencing the need to use these as covariates when determining the potential role of the genetic variants under analysis. Beyond this, we found a relevant intercorrelation between the different variables within the same questionnaire (GHQ-28 chronic and new onset) as well as a significant correlation between equivalent variables interrogated in POMS and GHQ-28. For example, the POMS T-score measuring fatigue positively correlated with GHQ-28 chronic anxiety and depression levels (rho > 0.435, p-value < 0.001) (Supporting Information Table S4). Therefore, both tests might be considered, to a certain extent, an internal replica of each other. Next, we performed a multiple correlation analysis between the three genotypes of each genetic variant and the POMS T-scores; results are shown in Table 1 (Association between the genetic variants and the POMS variables measured). A particular haplotype captured by the two variants within the MAOA gene correlated with a lower degree of tension. HTR2A rs6313 also correlated with vigor (rho = 0.134, p-value = 0.004), suggesting that subjects harboring the mutant homozygous genotype reported greater vigor than those with the reference genotype. We might also mention the associations found for SLC18A1 variants and BDNF rs6265; however, the p-values obtained do not survive multiple-testing correction and should therefore be treated with caution. Of note, the MAOA haplotype, when quantified with GHQ-28, correlated with lower depression scores among males while being neutral among females. Equivalently, SLC18A1 and HTR2A variants correlated with increasing levels of anger and vigor, respectively, but only among males. Among the genes associated with the neuroendocrine system, we might highlight the association among females between the OPRM1 polymorphism and increasing levels of anxiety and somatization, concomitantly with lower social dysfunction scores. These associations were detected under conditions that did not give rise to a psychopathology; this sensitivity in detecting mood variability by genotype among euthymic subjects could be supported by the high homogeneity of the age and sociocultural features of the studied population. Perhaps because of this, some statistical associations may not survive multiple-testing correction. | DISCUSSION. Similar studies (Takeuchi et al., 2015) relate a DRD2 polymorphism to the POMS test and find differences between sexes in a similar population. Yarosh, Meda, de Wit, Hart, and Pearlson (2015) performed a multivariate analysis of polymorphisms from a whole-genome association study against the POMS test in healthy subjects treated with amphetamine, finding associations with SNPs related to genes of the glutamatergic signaling pathways, which seem to mediate behavioral and cardiovascular responses to amphetamine. On the other hand, the present study shows a sexual dimorphism both when we correlate general items such as age, sex, and BMI and when we examine their associations with genotypes. Among the general items, women have a lower BMI than men in our population, the reverse of the adult population at large, perhaps due to the low average age of the sample (Wells, 2007).
Male sex correlates positively with vigor. Age correlates positively with BMI (Livshits et al., 2012) and negatively with anger; overall, age correlates with less anger and with greater vigor, friendship, and somatization. Thus, the irritable attitude decreases with age as the sensation of activity and energy increases, which, taking into account the age range of the population, could be associated with a greater hormonal balance; the capacity for relationships increases, which translates into friendship, and somatic sensations increase, perhaps due to a greater recognition of one's own body. The correlations between the test items show that tension, nervousness, agitation, etc., correlate positively with depression, anger, fatigue, and TMD in the POMS questionnaire and with anxiety, somatization, depression, and social dysfunction in their chronic profile from the GHQ-28, and negatively with vigor and friendship (POMS). Women appear more predisposed to depression and less to tension, inversely to men, consistent with the higher MAO activity found among women (Jansson, 2005). According to the literature, the mutant allele of OPRM1 rs1799971 is related to increased pain, suggesting a compromised protein function (Slavich, 2014). In our study, we found an association with increased anxiety and somatization symptoms, together with decreased social dysfunction, but only among women. In terms of quality of life, this could reflect a greater sensitivity to pain and less pleasant rewards from intense stimuli: carriers of variant G are adapted to relaxation stimuli, or to lower endorphinic pleasure, than carriers of A. In fact, this could explain why this polymorphism, and in particular the G allele, is associated with a greater tendency to addiction and with variations in the pharmacological response to it (van den Wildenberg et al., 2007; Anton et al., 2008). We did not find in the literature any association study on emotional response; most studies refer to a predisposition to addiction, and some to depression, but as indirect processes consequent to pain. Our results show an association of the G allele, in the responses to the Goldberg test, with anxiety and insomnia and with somatization, and a strong negative association with social dysfunction among women. This result indicates that subjects in whom the degree of neurological reward mediated by the opioid effect is diminished are the most socially adapted, or those with less social dysfunction, provided they do not fall into addiction. It could be interpreted that their state of lower neurological pleasure leads to a more correct social response, perhaps supporting the hypothesis that a greater tendency to pleasure is associated with a greater rebellion against social restrictions (Slavich, Tartter, Brennan, & Hammen, 2014). Overall, it can be deduced that genetic variations within the neurotransmitter (HTR2A, SLC18A1, MAOA) and neuroendocrine (BDNF and OPRM1) systems, determined in a euthymic population, are associated with emotional traits quantitatively assessed using the POMS and GHQ-28 questionnaires, although gender must be considered as both a determining and an excluding criterion in the different associations. | Study limitations. Whenever a genetic association is reported, the suspicion of a false positive arises. Given the number of variables assayed, we might consider that an alpha of 0.05 is too permissive. Here, we analyzed 14 genes and conducted two mood tests, but the different variables under study are not purely independent.
The MAOA polymorphisms, for instance, are in linkage disequilibrium. All participants were Caucasian, and the Spanish population is largely homogeneous (Gayan et al., 2010). For these reasons, an independent replication of the current findings would be required to confirm them. ACKNOWLEDGMENT. We thank all the volunteers for their participation in this study. CONFLICT OF INTERESTS. The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. This work was financed using internal sources from the Department of Surgery, Biochemistry and Immunology, Universidad de Málaga.
2019-01-22T22:34:52.285Z
2019-01-17T00:00:00.000
{ "year": 2019, "sha1": "c92ffa6693e61daefbcf00c217ae2bc2ff0691f5", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.1140", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c92ffa6693e61daefbcf00c217ae2bc2ff0691f5", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
236743957
pes2o/s2orc
v3-fos-license
About the asymmetry structure of the leaf blade of the Common plantain Using the method of geometric morphometrics, the contour of the leaf-blade image of the Common plantain (Plantago major) was aligned along the axis of symmetry to answer the question of whether asymmetry and shape are influenced by environmental pollution from vehicles and by climatic conditions. Procrustes distances showed that fluctuating asymmetry was higher in roadside populations in 2019. In the control populations, a mixture of two types of asymmetry, fluctuating and directional, was observed. In 2020, with high precipitation (47% more than in 2019), the asymmetry was dominated by a higher directional asymmetry (p < 0.0001), although the overall asymmetry remained the same. The nonparametric Kruskal-Wallis test showed that only the climatic conditions of the year influenced the shape (p < 0.001). The geographical location of the populations and the combined effect of the factors year and leaf-gathering place did not affect the shape of the leaf blade. In 2020, no data were obtained showing an excess of asymmetry in roadside populations compared to the control; the authors therefore conclude that the Common plantain has weak bioindicative properties in response to traffic pollution. Introduction The high morphological plasticity of plants is a serious factor interfering with bioindication based on the fluctuating asymmetry index. Developmental stability, which is assessed on the basis of undirected, fluctuating deviation from bilateral symmetry, is considered a genetic and phenotypic property [1,2,3]. Some researchers believe that fluctuating asymmetry (FA) is associated with plastic variability and adaptation, that is, FA is included in the adaptation arsenal but does not serve as a measure of stress. Fluctuating asymmetry refers to a special type of phenotypic variation that depends on genetic properties. To determine the dose - stress - FA effect, it is important to take into account the stress-load gradient and the choice of a suitable control, as well as careful measurement and checking of various types of measurement error [4]. Studies in which the value of FA is associated with stress exposure from the environment have been carried out with various plant species, but they do not always show unequivocally positive results. A large number of publications on the indicative properties of silver birch maintain interest in the field of fluctuating asymmetry. For example, the influence on FA of the elevation of the relief and of the geographical position of populations, with their climatic features, has been reported [5][6][7][8][9][10]. The obstacles are the different genetic status of the studied populations, i.e., different canalized ontogenetic trajectories, and the unexplored effects of hormesis and the paradoxical effect of an inadequate reaction to toxins [11]. The development of the methods of geometric morphometrics makes it possible to determine the difference in the coordinates of the left and right halves of the leaf blade. For this, the configuration of the leaf-blade samples is aligned along the symmetry axis, and the FA value is determined in a two-way analysis of variance (Procrustes ANOVA). The present work took advantage of this analysis to study the fluctuating asymmetry and shape of the leaf blades of the Common plantain (Plantago major), a common ruderal plant that can serve as an indicator of environmental pollution.
Studies measuring dimensional characteristics have shown that plantain leaves often possess directional asymmetry [12][13], and that the high plasticity of the leaf shape depends on the salinity of the soil and the geographic characteristics of the population area [14]. Exceeding a certain threshold of the FA value indicates a deviation in developmental stability. The relationship between plastic variability and developmental instability has not been sufficiently studied and is probably based on a feedback principle [15][16]. Therefore, a comparison of shape and asymmetry, using the coordinates of the labels applied to the image of the leaf blades, can help answer the question of the relationship between the two types of variability. In the present paper, we studied the shape and asymmetry of the leaf blade depending on air pollution along highways over 2 years with different rainfall volumes, at three sites separated by a distance of 100 km along the same geographical latitude. At each sampling site, the experimental zone covered an area of 50-100 m × 1 km along the road. The control zone, of the same area, lay not less than 400 m from the experimental zone. The sampling areas had fairly uniform physicochemical properties, which could indicate a close trajectory of ontogenetic development. The two zones did not differ in illumination and represented an open area occupied by common ruderal urban vegetation (quinoa, chicory, bluegrass, wheatgrass); the projective cover of plantain was 30-50%. The leaf blades, with a half-leaf width of 6.0 ± 0.1 cm, two to three from each of 50 plants, were harvested in August-September 2019-2020 and photographed twice. From each site, 100-150 leaf blades were sampled, i.e., a total of about 2,000 leaf blades. The calculation of vehicle emissions was carried out according to the standard methodology, from the number of vehicles passing along the highways per unit of time. The total emission, taking into account all pollutants on the highways, was 0.002 g/sec (Orechovo and Moscow) and 0.003 g/sec (Vladimir; standard error ±0.0001). Measuring and statistics We considered both the plant and the leaf blade as a conventional experimental unit. The first two labels applied were not paired and formed the axis of symmetry; the other 25 pairs represented homologous paired traits. Labeling was carried out twice on each image in JPEG format. The software used (TPS and MorphoJ), as well as recommendations for its use, are freely available on the website of the morphogeometric group https://sbmorphometrics.org. The essence of generalized Procrustes analysis (GPA) followed by Procrustes ANOVA lies in constructing a consensus shape, removing the difference in the size of the leaf-blade samples, and evaluating the difference in the variance of the XY coordinates of paired labels [17][18][19]. First, the coordinates of the consensus centroid were determined. Then the difference in shape (classifier 'individual'), the value of FA ('individual × side'), and the directional asymmetry, DA ('side'), were statistically determined from the mean Goodall F ratio, correspondingly FI×S and FS. On the basis of the vector coordinates, covariance matrices of two types were created: symmetric matrices and asymmetry matrices. Canonical variate analysis (CVA) was carried out on the XY coordinate data. The symmetric matrix showed the difference in coordinates among the same landmarks in the samples and was used to test the shape of the leaf blade.
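As a rough illustration of the landmark-based asymmetry measure described above, the sketch below aligns reflected right-side landmarks onto left-side landmarks with a Procrustes superimposition and reports the residual disparity as a per-leaf asymmetry score. This is a simplification of the full Procrustes ANOVA done in MorphoJ, and the landmark data are simulated.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(1)
n_landmarks = 25
left = rng.normal(size=(n_landmarks, 2))                                  # paired landmarks, left half
right = left * np.array([-1.0, 1.0]) + rng.normal(scale=0.02, size=(n_landmarks, 2))

# Reflect the right half across the symmetry axis so both sides are comparable
right_reflected = right * np.array([-1.0, 1.0])

# Procrustes superimposition removes location, scale and rotation;
# the residual disparity serves as a per-leaf asymmetry score
_, _, disparity = procrustes(left, right_reflected)
print(f"asymmetry (Procrustes disparity): {disparity:.5f}")
```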
The asymmetry matrix showed the difference in homologous landmarks between the left and right sides and revealed differences in bilateral asymmetry. The difference was assessed as the distances between the centers of the sets of coordinates, the so-called Procrustes distances. An alternative method was the Kruskal-Wallis analysis. The statistical evaluation used a 95% confidence level (α = 0.05). Permutation of the samples was carried out in 10,000 rounds. The measurement error was calculated as a percentage of the MS value of fluctuating asymmetry. Bilateral asymmetry Procrustes analysis of the total set of leaves for the 2 years showed a high content of fluctuating asymmetry in the whole set of leaves, in the control, and in the leaves sampled near the road (Table 1). In all three cases, the consensus size did not differ (p > 0.05). The classifier "individual" showed a difference in shape (p < 0.0001). The control and experimental leaves showed the same consensus-size p-value and significant fluctuating asymmetry without directional asymmetry (the "side" factor was not statistically significant). The value of the Goodall F ratio was higher in the control than in the experiment (52.05 and 11.09, respectively). The analysis of populations carried out separately in 2019 and in 2020 showed the following results: in 2019, the FA value slightly prevailed in the roadside population, and in all cases directional asymmetry was present (Table 2). In 2020, compared to 2019, the content of directional asymmetry was 3.8 times higher near the road, and 1.8 times higher in the control (see values FS = 7.49 and FS = 17.31; p < 0.0001). The measurement error (residuals) ranged from 0.1% (control) to 9.01% (road). Accordingly, the FA value in plants near the road was lower. The Procrustes distances in 2020 between the set of coordinates of the control leaves and the leaves near the road did not differ (Procrustes distance 0.003; p > 0.05). In 2019, the difference was significant (Procrustes distance 0.005; p < 0.0001). Thus, the rainy growing season of 2020 reduced the differences in asymmetry. The asymmetry differed in the control-experiment pair in the summer of 2019, with an average precipitation value. The comparison of the two years of follow-up showed that there were no differences in overall asymmetry. This was demonstrated by the diagram of the canonical variate analysis for the first component CV1 (Fig. 1A). The difference in asymmetry between years was higher (0.008; p < 0.0001) than the difference between the total control and all leaves by the road (0.003; p = 0.002; Fig. 1B). Thus, in the wet year an increased directional asymmetry was observed, in contrast to the temperate climatic year, where fluctuating asymmetry occupied the larger share of the asymmetry. Larger samples neutralized the presence of directional asymmetry; this was true for both years of study. The sample size of n = 50, even with a fourfold increase in replicas, was clearly insufficient to obtain a representative result. Thus the plantain leaf blades had a high fluctuating asymmetry, detectable only at the maximum sample sizes, which should be taken into account when evaluating FA and developmental stability. Shape of leaf blade Non-parametric Kruskal-Wallis analysis showed the impact of the factor 'year' alone on the shape of the leaf blades (p < 0.001); the sampling site had no effect on the shape.
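The year-effect test reported above can be sketched with SciPy's Kruskal-Wallis implementation; the per-leaf asymmetry scores below are simulated stand-ins for the Procrustes-based scores, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical per-leaf asymmetry scores for the two sampling years
scores_2019 = rng.gamma(shape=2.0, scale=0.003, size=150)
scores_2020 = rng.gamma(shape=2.0, scale=0.004, size=150)

# H statistic and p-value for the null hypothesis of identical distributions
h, p = stats.kruskal(scores_2019, scores_2020)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```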
The differences in Procrustes distances between the centers of the control and experimental sets in 2020 were smaller than those within the same samples, which indicated accentuated variability in the morphology of leaf blades in a rainy summer. The symmetric component of shape variation varied pronouncedly from year to year in the CVA (Fig. 2). In the summer of 2020 the shape of the blade looked oval, while in 2019 the shape was close to an ovoid form with a narrowed proximal part. Thus, if in 2019 the plantain acted as a reliable bioindicator (75% of positive results), in 2020 the results did not allow us to classify this plant as an indicator of developmental stability. This is confirmed by the results obtained on metrical bilaterally symmetrical traits [13]. Note that bioindication as a property is not so often encountered in recent works, especially concerning the indication of developmental stability by the FA index. The transition of asymmetry from FA to DA, or vice versa, deserves attention as an interaction of genome and phenotype. The opinion about directional asymmetry as a bioindication property for assessing developmental stability remains controversial, although it is known that DA emerges under a stressful environment, implying a genotype effect on plant morphology [20]. The high morphological plasticity of this species is the serious reason it could not be considered a reliable indicator with 90-100% positive results. A weak correlation was obtained between the two components of the shape (r = 0.13-0.14; p < 0.001), i.e., shape was associated with asymmetric variability. Directional asymmetry, being latent, was expressed within a high number of degrees of freedom (df) only at small sample sizes and could depend on the frequency distribution of the sample. For example, the control leaf pool showed directional asymmetry at all levels of the classifier (individual, leaf, and image). Testing of the experimental leaves showed statistically significant FA (FI×S = 5.05; p < 0.0001) with statistically insignificant directional asymmetry. Subsequent testing at the image level revealed directional asymmetry (FS = 2.4; p < 0.0001). Such a "hidden" DA was revealed when testing the asymmetry of woody plants [21][22]. Further research will address the heterogeneity of the sample distributions. Multivariate factor analysis of shape and asymmetry over several years of research, as data accumulate, is also important. Conclusions a) Plantain leaf blades had a pronounced fluctuating asymmetry, which was determined with a significant volume of leaves (more than 150 leaf blades). b) The Common plantain, like other herbaceous perennial plants (for example, strawberry species), has dubious bioindication properties on the fluctuating asymmetry index, because climatic factors, for example the amount of precipitation, have a stronger effect than man-made pollutants. c) During the 2 years of observation, the structure of asymmetric variability, detectable at large sample sizes, changed from FA to directional asymmetry under a high-humidity environment. The shape of the leaf blade of the Common plantain varied significantly, correlating with the total asymmetry, which contained two varying components - FA and DA.
2021-08-03T00:05:36.221Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "77dcf0d563ebae2eb6775c44890fcf820bdb6139", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/38/e3sconf_iteea2021_04004.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "94b78c771265bfffdb4c93e17d34217f2542cdb8", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Mathematics" ] }
18351138
pes2o/s2orc
v3-fos-license
Small indels induced by CRISPR/Cas9 in the 5′ region of microRNA lead to its depletion and Drosha processing retardance MicroRNA knockout by genome editing technologies is promising. In order to extend the application of the technology and to investigate the function of a specific miRNA, we used CRISPR/Cas9 to deplete human miR-93 from a cluster by targeting its 5′ region in HeLa cells. Various small indels were induced in the targeted region containing the Drosha processing site and seed sequences. Interestingly, we found that even a single-nucleotide deletion led to complete knockout of the target miRNA with high specificity. Functional knockout was confirmed by phenotype analysis. Furthermore, de novo microRNAs were not found by RNA-seq. Nevertheless, expression of the pri-microRNAs was increased. When combined with structural analysis, the data indicated that biogenesis was impaired. Altogether, we showed that small indels in the 5′ region of a microRNA result in sequence depletion as well as Drosha processing retardation. Introduction MicroRNAs are a class of 22-25 nt endogenous non-coding RNAs that play pivotal roles in the regulation of gene expression. 1 They are processed from pri-miRNAs, their precursors with stem-loop structure, to form ~70 nt pre-miRNAs by the nuclear RNase III protein Drosha. The pre-miRNAs are then exported to the cytoplasm and cut by another RNase III family member, Dicer, to form a ~23 bp miRNA/miRNA* duplex before one of the strands is incorporated into the RNA-induced silencing complex. 2 MicroRNAs recognize their targets mainly through mechanisms mediated by the seed sequence (nucleotides 2-8 from the 5′ end of the miRNA) and induce translational repression or mRNA de-adenylation and degradation. 3 Loss-of-function methods have proved to be the best ways to delineate the biological roles of miRNAs in various model systems. Antisense technology has been widely used as a loss-of-function method in miRNA studies. 4 The drawbacks of this method are incomplete masking of the miRNA function and off-target effects on miRNAs sharing similar sequences, especially in the seed region. Different gene-knockout techniques have also been used to understand the function of miRNAs in animal models. [5][6][7][8] Homologous recombination-based miRNA knockout methodology has been limited due to its low efficiency and complicated procedure. 9 Genome-editing technologies using engineered endonucleases, such as zinc finger nucleases and transcription activator-like effector nucleases (TALENs), have been shown to be useful for specific gene targeting. 10 The clustered, regularly interspaced, short palindromic repeats (CRISPR)/CRISPR-associated (Cas) system provides a rapid and efficient technology that is quickly replacing TALENs and becoming the preferred platform for targeted genome editing. [10][11][12][13][14][15] The Cas9 protein from the type II CRISPR/Cas system of Streptococcus pyogenes relies on small CRISPR RNAs (crRNAs) to target chromosomal DNA, thereby triggering an error-prone repair process and producing targeted mutagenesis at the genomic level in cells. A single RNA molecule that combines the trans-acting RNA with the crRNA, termed gRNA, directs site-specific DNA cleavage, which occurs 3 base pairs upstream of NGG, the protospacer adjacent motif (PAM) sequence of target genes. 12 The miRNA database (miRBase release 20) has currently registered 2000 mature miRNAs in Homo sapiens. 16 Among them, a significant number are expressed as miRNA clusters.
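The targeting rule described above (a 20-nt protospacer immediately 5′ of an NGG PAM, with the blunt cut ~3 bp upstream of the PAM) can be sketched as a simple scan; the demo sequence below is hypothetical and does not reproduce the actual miR-93 locus.

```python
import re

def find_cas9_targets(seq: str):
    """Return (cut_site_index, protospacer, pam) for every NGG PAM on the + strand.

    SpCas9 cuts ~3 bp upstream (5') of the PAM; the 20-nt protospacer
    lies immediately 5' of the PAM.
    """
    targets = []
    # Lookahead so overlapping PAMs (e.g. in a GGG run) are all found
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start(1)
        if pam_start >= 20:                       # need a full protospacer upstream
            protospacer = seq[pam_start - 20:pam_start]
            cut_site = pam_start - 3              # blunt cut 3 bp 5' of the PAM
            targets.append((cut_site, protospacer, m.group(1)))
    return targets

# Hypothetical sequence standing in for a miRNA 5' region
demo = "ATGCAAAGTGCTGTTCGTGCAGGTAGTGTGGGATTTACCCTGG"
for cut, proto, pam in find_cas9_targets(demo):
    print(cut, proto, pam)
```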
During evolution, miRNA families were formed with miRNAs sharing the same 5′ seed region. Functional knockout of one such miRNA without affecting the expression of other cognate miRNAs is highly desirable. It has, however, never been established how few nucleotides have to be altered in order to diminish the expression of one miRNA in such a scenario. Functional knockout of a particular miRNA by genomic editing has proved promising. However, little is known about the mechanism and consequences of impairing a microRNA sequence. MicroRNA knockout by deleting fragments as large as possible is recommended, 11 but 2 targeted cutting sites are required in such cases. Destroying the processing sites was an effective strategy using the TALEN technique. 17 In this study, we attempted to deplete a single microRNA by targeting its 5′ region, including the Drosha processing site and seed region, using the CRISPR/Cas system as a novel tool. We chose miR-93, a critical onco-miRNA from a cluster, as the target. By establishing multiple cell lines carrying various mutants and exploring the consequences of the genomic alteration, we show that small indels in the 5′ region of this miRNA lead to a successful and specific gene knockout due to sequence impairment, and that biogenesis of the microRNA is blocked as well. Results and Discussion To knock out a particular single microRNA with the CRISPR/Cas system, we chose miR-93, a member of the miR-106b-25 cluster, as the target. The miR-106b-25 and miR-106a-363 clusters are paralogs of the miR-17-92 cluster (Fig. 1A). 5,[18][19][20] The miR-17 family members not only have identical sequences in the seed region but also share extensive similarities along the whole length of the mature miRNA, with only a few nucleotides differing from each other (Fig. 1B). Such a target gene provided a good example in which to demonstrate the feasibility of molecular ablation of one miRNA without affecting the expression and function of other miRNAs. The 5′-end of microRNAs is determined by precise cleavage by Drosha and contains the seed region, which is critical for target recognition. 1,3 We designed one gRNA targeting the PAM sequence at the 5′ region of miR-93. Indels therefore tended to occur around the Drosha cleavage site and in the seed region (Fig. 1C). After co-transfecting the expression constructs of the Cas9 protein and the gRNA into HeLa cells, we extracted whole genomic DNA for a T7 endonuclease I (T7EI) assay 21 to assess the mutation efficiency, and showed that the gRNA for mir-93 induced mutations at a frequency of 16% (Fig. 1D). We cultured and expanded the HeLa cells containing mutated forms of miR-93. They were separated into single cells by flow cytometry and cultured for one week until single colonies were formed. Seventy clones were obtained and their genomic DNA was extracted. The miR-93 gene regions were individually amplified by PCR and sequenced. The clones with overlapping sequencing peaks were confirmed by sub-cloning and re-sequencing. As a result, 9 mutant cell clones were identified out of the 70. Among the 9, 7 clones had indels at the seed region of miR-93 in both alleles and 2 others had one mutant and one wild-type allele (Fig. 2A). To evaluate whether these indels result in depletion of miR-93 in the mutated cells, we carried out quantitative reverse-transcription PCR (qRT-PCR) using stem-loop primers (Fig. 2B).
Astonishingly, miR-93 was almost undetectable in all the 7 clones with a disrupted seed sequence in both alleles, while in the 2 clones, miR-93-m1 and miR-93-m15, where one wild-type allele was still intact, the amount of miR-93 fell to half that of the wild-type control. It was a surprise that even very small indels in the seed region (such as those in miR-93-m4 and miR-93-m21) had the ability to totally abolish the production of mature miR-93. To assess whether the indels at the miR-93 locus would affect the expression of miR-25 and miR-106b, the other 2 closely located miRNAs in the miR-106b-25 cluster, qRT-PCR was performed with the total RNA of the 9 mutant clones, and it was found that the expression of miR-25 and miR-106b did not change in the miR-93 knockouts, with the exception of miR-93-m23 (Fig. 2C). We noted that the indel in one allele of miR-93-m23 extended to the miR-25 locus and disrupted this gene as well, and this is in good accordance with the fact that miR-25 showed a ~50% decrease. We further determined the expression of other miR-17 family members, which share the same seed sequence and a highly conserved mature sequence with miR-93. According to the small-RNA-sequencing results, miR-17 and miR-20a are highly expressed in HeLa, as are miR-93 and miR-106b. We then performed qRT-PCR of these 2 microRNAs in the 9 mutant clones and found their expression was unaffected by the miR-93 deletion (Fig. 2D). In brief, the CRISPR/Cas-mediated small indels in the 5′ region of a miRNA can serve as a very clean tool with which to dissect the functions of such miRNAs with high specificity. In order to further confirm the depletion of miR-93, small-RNA-sequencing was performed in the representative miR-93-m33 clone. This cell line had a Δ2 deletion in one allele and a Δ17 deletion in the other, i.e., it carried a small indel next to the Drosha processing site (Δ2) and a larger indel that completely deleted the Drosha cutting site (Δ17). As a result, we found that the sequence counts for this particular miRNA were 99.78% lower than in the wild-type control (from 7126 down to 16), representing a nearly complete depletion of miR-93. Meanwhile, we checked the other miRNAs of this family expressed in HeLa cells, and found that their counts did not change much (Fig. 2E). These results indicate that small indels generated by the CRISPR/Cas system in the 5′ region are enough to eliminate the mature miRNA with high specificity. miR-93 is a critical regulator of cell growth. 22 To determine the functional consequence of the miR-93 knockout by the CRISPR/Cas system, we assessed cell growth by counting cell numbers of the miR-93-m33 cell line at different time points. We found that cell growth was slowed in the mutant cells, consistent with the function of miR-93 as an oncogene (Fig. 3A). To further analyze the consequence of these miR-93 indels on the targetome, we assessed the expression of PTEN, E2F1 and p21, 3 well-known targets of miR-93 responsible for cell proliferation, 18,20 in the miR-93-m33 cell line. We found that all the targets were upregulated at both the mRNA and protein levels (Figs. 3B and C). In addition, we determined the molecular impact of miR-93 depletion on genome-wide gene expression using mRNA sequencing, followed by cuffdiff analysis. As a result, 140 genes were up-regulated 1.5-fold, 33 of which had the miR-93 seed sequence in their 3′UTR, and 7 of which have been reported to play important roles in cell proliferation (Fig. 3D). Next, we questioned whether these small indels could affect Drosha processing.
We measured the expression level of pri-miR-93 using qRT-PCR with primers residing outside of the Drosha cleavage site (Fig. 4A). We assumed that the expression level of pri-miR-93 would increase if Drosha processing was disrupted, and indeed, the expression level of pri-miR-93 increased markedly in the mutants with indels at both alleles but not in miR-93-m23, which had a Δ345 deletion disrupting the binding site of the PCR primer. As for miR-93-m1 and miR-93-m15, with a wild-type allele remaining, the level of pri-miR-93 only rose slightly compared to the other mutants (Fig. 4B). We analyzed the secondary structure of the alleles with small indels in all the mutants and found the stem-loop was not totally disrupted, but the free energy value (ΔG) in the predicted structure increased, indicating impaired thermodynamic stability of the pri-miRNA (one example is shown in Fig. 4C). As an example, we checked the small RNA sequencing data of miR-93-m33 to see whether there were newly generated miRNAs and found only one sequence related to the Δ2 allele, with a sequence count of 1 (Fig. 4C). These results indicate that very small indels at the 5′ region of a miRNA have the ability to disrupt Drosha processing. In this study, we depleted a single miRNA by introducing indels at the 5′ end of its mature sequence using the CRISPR/Cas system as a novel tool. We demonstrated that, for a targeted miRNA, alteration of a single or a few nucleotides in the specific genomic sequence not only depletes the mature sequence, but also retards Drosha processing. Of note, although mature miR-93 was undetectable in all mutant cells, the expression level of pri-miR-93 was not lowered, but increased. Analysis of the secondary structure indicates that very small indels lead to a new stem-loop structure that has the potential to be processed into a de novo miRNA, but small-RNA-sequencing failed to detect such novel small RNAs. A possible explanation is that Drosha cleavage was blocked by the structural alterations induced by the small indels. It is thought that Drosha processing is determined by the terminal loop as well as the stem-single-stranded RNA junction. [23][24][25] Our study showed that indels around the Drosha processing site inhibit its cleavage ability. This may be explained by the altered stem length, which is critical for recognition of a miRNA by Drosha, as shown in a previous study. 25 In summary, we disrupt, for the first time, a single microRNA from a cluster and a miRNA family by introducing small indels around its 5′ region to disrupt the seed sequence and impair Drosha processing using the CRISPR/Cas system, providing a strategy to knock out microRNAs, particularly those residing in or overlapping with functional genes, and the members of a tight cluster or of one miRNA family. Importantly, this approach enhances our understanding of genome-editing technology for miRNAs. Materials and Methods Cas9 and gRNA design The Cas9 and gRNA expression vectors were made as previously described. 14 In brief, we chose the PAM sequence nearest to the miR-93 seed sequence and the 20-bp sequence upstream as the targeting sequence of the gRNA (Fig. 1C). (Displaced caption of Fig. 4: The relative expression level of pri-miR-93 was normalized to GAPDH mRNA and compared with the wild-type cell line. (C) The secondary structure of the miR-93 precursor was generated by Sfold (http://sfold.wadsworth.org/cgi-bin/srna.pl). Right: wild-type allele; the red circles indicate the seed sequence of miR-93.
Left: the Δ2 allele of the miR-93-m33 cell line; the blue circles indicate the newly generated miRNA with the sequence CAAUGCUGUUCGUGCAGGUAGUGU. *P < 0.05, **P < 0.01, ***P < 0.001 compared to wild-type.) Cell culture and transfection Human cervical carcinoma cells (HeLa) were cultured in Dulbecco's modified Eagle's medium (DMEM; Hyclone) containing 10% fetal bovine serum (Hyclone) and 1% penicillin-streptomycin (Gibco) in 5% CO2 at 37°C. The HeLa cells were seeded into 6-well plates (Corning) and, after 24 h, the cells were transfected with the 2 plasmids expressing Cas9 and the gRNA, respectively, using Lipofectamine 2000 (Invitrogen) with 2.5 µg of each plasmid per well. Forty-eight hours later, the cells were harvested for genomic DNA extraction. Generation of the miR-93-deleted cell lines The transfected HeLa cells were cultured on a 60-mm dish. After washing and trypsinization, the cells were re-suspended in DMEM. Single cells were picked by flow cytometry (BD FACSCalibur) and then seeded in 96-well plates. After the colonies formed, the genomic DNA of each clone was extracted for target segment amplification and DNA sequencing. Cell proliferation assay Cells were seeded in 6-well plates at 50,000 cells per well. After 1, 3, and 5 d, the medium was discarded and the numbers of cells were counted after trypsinization. Independent experiments were performed 3 times. RNA extraction and quantitative reverse-transcription PCR Total RNA was isolated from cells in culture using TRIzol reagent (Sigma). For miRNA detection, 500 ng of total RNA was reverse-transcribed using specific stem-loop primers (RiboBio Co., Ltd) and SYBR Green-based qPCR was carried out using a specific forward primer and a universal reverse primer (RiboBio Co., Ltd). Independent experiments were performed 3 times. For pri-miR-93 and target mRNA qRT-PCR, 500 ng of total RNA digested with DNase I was reverse-transcribed with a random primer (TransGen Biotech) for pri-miR-93 or an oligo(dT) primer (TransGen Biotech) for target mRNA, and gene-specific primers (Table 1) were used for SYBR Green-based qPCR. Independent experiments were performed 3 times. Western blot Cell lysates were prepared by incubation in lysis buffer (Sigma). Fifty micrograms of protein from each lysate was separated by SDS-PAGE and transferred onto a PVDF membrane (Bio-Rad Laboratories). Primary antibodies were used to detect PTEN, E2F1, p21 (Cell Signaling Technology, Inc.) and GAPDH (EASYBIO). Anti-rabbit (for PTEN, E2F1, and p21) or anti-mouse (for GAPDH) secondary antibodies were used and visualized with the ECL substrate (CWBIO). RNA sequencing Total RNA from miR-93-m33 and wild-type controls was isolated from cultured cells using TRIzol reagent (Sigma) and treated with DNase I (NEB). Library construction and sequencing for mRNA and small RNA were both carried out at the high-throughput sequencing center of the Biodynamic Optical Imaging Center, Peking University. For mRNA sequencing, the RNA-seq reads were aligned to the human reference genome (hg19) using TopHat with default parameters. The bam files were used as input to Cufflinks and Cuffdiff to detect differentially expressed genes. The small RNA sequencing data were analyzed by BGI Tech Solutions Co., Ltd. Statistical Analysis All data are shown as mean ± SEM. Statistical analysis was performed by t-test to evaluate single-factor differences between 2 sets of data. P < 0.05 was considered statistically significant.
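The relative-expression measurements reported throughout (normalized to GAPDH and compared with the wild-type line) are conventionally computed with the 2^-ΔΔCt method; the sketch below assumes that convention, and the Ct values are invented for illustration.

```python
def relative_expression(ct_target, ct_ref, ct_target_wt, ct_ref_wt):
    """2^-ΔΔCt: fold change of a target relative to a reference gene
    (e.g. GAPDH) and a calibrator sample (e.g. the wild-type line)."""
    delta_ct_sample = ct_target - ct_ref            # ΔCt in the sample of interest
    delta_ct_calibrator = ct_target_wt - ct_ref_wt  # ΔCt in the calibrator
    return 2.0 ** -(delta_ct_sample - delta_ct_calibrator)

# Hypothetical Ct values for pri-miR-93 in a mutant clone vs. wild type
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_wt=26.3, ct_ref_wt=18.1)
print(f"pri-miR-93 fold change vs. wild type: {fold:.2f}x")
```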
2018-04-03T00:56:04.739Z
2014-10-01T00:00:00.000
{ "year": 2014, "sha1": "cc8e60c574125c61663e030e8d6b9550e08fdeed", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15476286.2014.996067?needAccess=true", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "cc8e60c574125c61663e030e8d6b9550e08fdeed", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
5787925
pes2o/s2orc
v3-fos-license
A New Algorithm for Initialization and Training of Beta Multi-Library Wavelets Neural Network Introduction The resolution of neural network training problems by gradient methods is characterized by a well-known inability to escape local optima [Mich93], [Fabr94] and, to a lesser extent, by slowness [Wess92], [Zhan92]. Evolutionary algorithms have brought a number of solutions in some domains: training of networks with variable architecture [With90], and automatic generation of Boolean neural networks for the resolution of a class of optimization problems [Grua93]. However, the research effort has mostly been devoted to the generation and training of discrete networks. In this chapter, we propose a new training algorithm for wavelet networks, based on gradient descent, that requires: • A set of training examples: wavelet networks are parametrizable functions, used to build statistical models from examples (in the case of classification) or from measurements (in the case of modeling); their parameters are calculated from these examples or {input, output} couples.
• The definition of a cost function that measures the gap between the output of the wavelet network and the desired output (in the case of classification) or the measured values (in the case of modeling) over the training set. • A minimization algorithm for the cost function. • An algorithm for selecting basis functions to initialize the network parameters. We then try to show the importance of the initialization of the network parameters. Since the output is nonlinear with respect to these parameters, the cost function can present local minima, and the training algorithms give no guarantee of finding the global minimum. We note that with a good initialization the local-minimum problem can be avoided: it suffices to select the best regressors (best with respect to the training data) from a finite set of regressors. If the number of regressors is insufficient, not only do local minima appear, but the global minimum of the cost function does not necessarily correspond to the values of the searched parameters; it is then useless to deploy an expensive algorithm to look for the global minimum. With a good initialization of the network parameters, the efficiency of training increases. A very important factor to underline is that, whatever the chosen algorithm, the quality of wavelet network training improves as the initialization approaches the optimum. 2. New wavelet network architecture 2.1 Presentation From a given wavelet network architecture, it is possible to generate a family of functions parametrized by the values of the network coefficients (weights, translations, dilations). The objective of the wavelet network training phase is to find, among all these functions, the one that comes closest to the regression (a Beta function, for example). The regression is unknown (otherwise it would not be necessary to approximate it with a wavelet network); we only know its observed values (values of the regression to which noise is added) for several values of the input (the points of the training set). We consider wavelet networks of the following form [Belli07]: ŷ(x) = a0 + Σ_{k=1..Ni} ak xk + Σ_{l=1..M} Σ_{i=1..Nl} w_{l,i} Ψ_l((x − t_{l,i}) / d_{l,i}), where ŷ is the network output and x = {x1, x2, ..., xNi} is the input vector; it is often useful to consider, in addition to the wavelet decomposition, that the output can have a component linear in the variables, with coefficients ak (k = 0, 1, ..., Ni). Nl is the number of selected wavelets for the mother wavelet family Ψl. The index l depends on the wavelet family and the choice of the mother wavelet. The network can be considered as consisting of three layers: • A first layer with Ni inputs. • A hidden layer constituted by NMw wavelets drawn from M mother wavelet families, each family of size Nl. • A linear output neuron receiving the weighted wavelet outputs and the linear part. This network is illustrated in Figure 1.
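To make the three-layer architecture concrete, here is a minimal sketch of the forward pass for a scalar input, with the Mexican hat standing in for one library member; all parameter values are illustrative, and the multivariate case would apply a product or radial extension of the 1D wavelets.

```python
import numpy as np

def mexican_hat(z):
    # A standard mother wavelet, used here as a stand-in for one library member
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

def wavelet_net(x, a0, a, weights, translations, dilations, mothers):
    """Sketch of the network output for scalar input x:
    y = a0 + a*x + sum_i w_i * psi_i((x - t_i) / d_i),
    where each hidden unit may draw psi_i from a different mother-wavelet family."""
    y = a0 + a * x
    for w, t, d, psi in zip(weights, translations, dilations, mothers):
        y += w * psi((x - t) / d)
    return y

x = np.linspace(-1, 1, 5)
y = wavelet_net(x, a0=0.1, a=0.5,
                weights=[1.0, -0.3],
                translations=[0.0, 0.4],
                dilations=[0.5, 0.2],
                mothers=[mexican_hat, mexican_hat])
print(y)
```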
Description of the procedure of library construction The first stage of the training procedure consists in the construction of the Beta library. We intend to construct a library containing several mother wavelet families for the network construction. Every wavelet has different dilations for different inputs. This choice has the advantage of enriching the library and of obtaining better performance for a given number of wavelets. The inconvenience introduced by this choice concerns the size of the library: a wavelet library having several wavelet families is more voluminous than one built from a single mother wavelet, which implies a higher computational cost during the selection stage. The wavelet Ψ with parameters ti and di is defined as Ψ_{ti,di}(x) = Ψ((x − ti)/di). The wavelet library W, generated from the mother wavelet families, is the set of such wavelets over all families and all parameters (ti, di). The points to address are: • the optimization of the network parameters, • the construction of the optimal library, • the construction of wavelet networks based on the discrete transform. The new architecture of wavelet networks founded on several mother wavelet families having been defined, we can ask how to construct a model, constituted of a wavelet network, for a given process. The parameters to determine for the construction of the network are: • The values to give to the different parameters of the network: structural parameters of the wavelets, and direct terms. • The number of wavelets necessary to reach a desired performance. The essential difficulty resides in the determination of the parameters of the network. Because the parameters take discrete values, we can profit from this to conceive methods of wavelet selection from a set (library) of discrete wavelets. The achieved performance depends on the initial choice of the wavelet library, as well as on a discriminating selection within this library. Principle of the algorithm The idea is to initialize the network parameters (translations, dilations and weights) with values near the optimal values. Such a task can be achieved by the "Orthogonal Forward Regression (OFR)" algorithm, based on Gram-Schmidt orthogonalization. Contrary to the OFR algorithm, in which the best regressors are first selected [Lin03], [Rao04], [Xiao04], [Angr01] and then fitted to the network, the algorithm presented here integrates the selection and the adjustment at every stage. Before every orthogonalization with a selected regressor, we apply a summary optimization of its parameters in order to bring it closer to the signal. Once optimized, this new regressor replaces the old one in the library and the orthogonalization is done using the new regressor. We describe this principle below in detail. Description of the algorithm The proposed algorithm comprises three stages: Initialization Let Y denote the input signal; we have a library that contains NMw wavelets. To every wavelet Ψ_i^j we associate a vector whose components are the values of this wavelet at the examples of the training sequence. We thus constitute a matrix Vw from the blocks of the vectors representing the wavelets of every mother wavelet family. We denote by ti, i ∈ [1,…,N], the translations and by di, i ∈ [1,…,N], the dilations. Selection The library being constructed, a selection method is applied in order to determine the most meaningful wavelet for modeling the considered signal. Generally, the wavelets in W are not all meaningful for estimating the signal. Suppose that we want to construct a wavelet network g(x) with m wavelets; the problem is then to select m wavelets from W.
At the first iteration, the signal is Y = Y1, and the regressor vectors are the Vw(t,d) defined above. The selected regressor is the one for which the absolute value of the cosine with the signal Y1 is maximal. The most pertinent vector from the family V1 carries the index ipert1, which can be written as ipert1 = argmax_i |V_i^T Y1| / (||V_i|| ||Y1||). Once the pertinent vector is selected, it can be considered as a parametrizable temporal function used for modeling Y. We calculate the weight Wi, i.e., the coefficient of the projection of the signal on the selected regressor. We define the normalized mean square error of training (NMSET) as the quadratic error between the desired and estimated outputs, normalized over the training set, where Y(k) is the desired output corresponding to example k and ŷ(k) is the wavelet network output corresponding to example k. Optimization of the regressor The optimization of the regressor is done using the gradient method. Let J(t, d, w) denote the quadratic cost between the desired output Yd and the network output Y. This optimization has the advantage of being fast, because only the three structural parameters of the regressor are optimized. After optimization, the parameters t, d and w are solutions of the minimization of J. Considering the optimal regressor, we reset the network with this regressor, which replaces the old one in the library, and the orthogonalization is done using the new regressor. After one iteration we thus obtain the first adjusted regressor. Orthogonalization The vectors V_i^j are always linearly independent and non-orthogonal (because N >> MW). The vectors V_i^j generate a subspace of dimension M*N. We orthogonalize the M*N − 1 remaining regressors, and the vector Y1, with respect to the adjusted regressor, and we then update the library. At the following iteration we increment the number of wavelets, Nw = Nw + 1, and apply the same stages described above. Suppose that i−1 iterations have been achieved: we have made i−1 selections, optimizations, and orthogonalizations in order to get the i−1 adjusted regressors, and we have set i−1 parameters of the network. The network g(x) can be written at the end of iteration i−1 as a sum over the selected, adjusted regressors. We have Nw − i + 1 regressors left to represent the signal Yi in a space of dimension N*M − i + 1, orthogonal to the already selected regressors. We apply the same selection principle as previously: the index iperti of the selected regressor is again the one maximizing the absolute cosine with the current residual Yi. We then update the library, optimize the regressor, and finally orthogonalize. Finally, after N iterations, we have constructed a wavelet network with N wavelets in the hidden layer that approximates the signal Y, and the parameters of the network are fixed. The obtained model g(x) can be written as the weighted sum of the selected wavelets plus the linear part. 1D data interpolation The mathematical formulation of 1D data interpolation can be presented in the following way: let E = {(xk, yk) | k = 1,…,K} be the set of known points. The set E represents the constraints of the problem. With this formulation, the function f(x) passes inevitably through the points of E. In practice, the constraints can contain noise. In this case, the signal that we want to rebuild does not necessarily pass through the points of the set E, and the interpolation becomes a problem of approximation. Once we know the function f(x) on the domain x ∈ [0,…,N], the problem of recovering f(x) for x > N will be called extrapolation of the signal f(x). The reconstruction of a signal from samples is an ill-posed problem, because an infinity of solutions pass through a given set of points (Figure 5). For this reason, supplementary constraints, which we will see in the presentation of the different interpolation methods, must be taken into consideration to get a unique solution.
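The selection-orthogonalization loop can be sketched as follows. This simplified version picks, at each step, the candidate column with the largest absolute cosine to the current residual and then Gram-Schmidt-deflates the library; it omits the chapter's intermediate gradient refit of each selected regressor, and the data are random placeholders.

```python
import numpy as np

def ofr_select(Y, V, n_wavelets):
    """Simplified orthogonal forward regression: repeatedly pick the column
    of V with maximal |cos| to the residual, then deflate the residual and
    the remaining columns against the chosen direction."""
    V = V.copy().astype(float)
    residual = Y.astype(float).copy()
    selected = []
    for _ in range(n_wavelets):
        norms = np.linalg.norm(V, axis=0)
        norms[norms == 0] = np.inf                 # skip exhausted columns
        cos = np.abs(V.T @ residual) / (norms * np.linalg.norm(residual))
        i = int(np.argmax(cos))
        v = V[:, i] / np.linalg.norm(V[:, i])
        selected.append(i)
        residual -= (v @ residual) * v             # project residual out of span(v)
        V -= np.outer(v, v @ V)                    # Gram-Schmidt deflation of the library
        V[:, i] = 0.0
    return selected

rng = np.random.default_rng(3)
Y = rng.normal(size=200)
V = rng.normal(size=(200, 40))                     # 40 candidate wavelet regressors
print(ofr_select(Y, V, n_wavelets=5))
```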
The function ξ(f) is the sum of a stabilizing function S(f) and a cost function C(f). The parameter γ ∈ [0, 1] is an adjustment constant between these two functions; when γ goes toward zero, the problem of interpolation turns into a problem of approximation. The stabilizing function S(f) imposes the smoothing constraint on the curve and is defined over the domain of interest D. The cost function C(f) characterizes the discrepancies between the rebuilt curve and the initial constraints, where Ep = {(xk, yk) | k = 1, 2,…, K} represents the set of known points, or constraints, of the signal. Discretization With regularization, it is difficult to get an analytic solution; discretization is therefore useful, and several methods can be used. Grimson [Grim83] uses finite differences to approximate the differential operators, while Terzopoulos [Terz86] uses finite elements to obtain and solve a system of equations (Grimson and Terzopoulos treated the 2D case). In the approach that follows, the function f is written as a linear combination of basis functions, f(x) = Σ_i W_i Ψ_i(x), where N is the domain dimension and the W_i are the coefficients. The basis functions Ψ_i(x) are localized at x = iΔ: Ψ_i(x) = Ψ(x − iΔ). Substituting this expansion into the expressions for S(f) and C(f), the function ξ(f) can be rewritten in terms of quantities t_ij that are functions of the basis functions. Several wavelet functions can be used as the activation function. Figure 6 gives the curve of a new wavelet based on the Beta function, given by the following definition. Definition The 1D Beta function, as presented in [Alim03] and [Aoui02], is a parametrizable function β(x) = β(x; x0, x1, p, q), with x0, x1, p and q real parameters verifying x0 < x1. Only the case p > 0 and q > 0 will be considered; in this case the Beta function is defined on ]x0, x1[ by β(x) = ((x − x0)/(xc − x0))^p ((x1 − x)/(x1 − xc))^q, with xc = (p·x1 + q·x0)/(p + q), and is zero elsewhere. Example: interpolation of 1D data using a classical wavelet network (CWNN) In this example, we want to rebuild three signals F1(x), F2(x) and F3(x), defined by equations (39), (40) and (41). The known samples are uniformly distributed with a step of 0.1. For the reconstruction, we used a CWNN composed of 12 wavelets in the hidden layer and 300 training iterations. We note that for the Beta wavelets we fix the parameters p = q = 30. Table 1 gives the final normalized root mean square error (NRMSE) of test, given by equation (42), after 300 training iterations for the F1, F2 and F3 signals, where N is the number of samples and y_i the real output. Example: interpolation of 1D data using MLWNN We intend to approximate F1, F2 and F3 using a MLWNN composed of a library of 6 mother wavelets (Beta1 to Beta3, Mexican hat, Polywog1 and Slog1), under the same conditions as in the CWNN approximation example. Table 2 gives the normalized root mean square error of test and the selected mother wavelets. To reconstruct the F1 signal with an NRMSE of 3.79841e-3 using 12 wavelets in the hidden layer, the best regressors for the MLWNN are: 4 wavelets from the Mexican hat mother wavelet, 0 wavelets from Polywog1, 3 wavelets from Slog1, 3 wavelets from Beta1, 0 wavelets from Beta2 and 2 wavelets from the Beta3 mother wavelet. When using a CWNN, the best NRMSE of reconstruction is obtained with the Beta2 mother wavelet and is equal to 1.2119e-2.
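A small sketch of the Beta function and of a test-error measure may help. The closed form below, with its maximum at xc = (p·x1 + q·x0)/(p + q), follows the Alimi definition cited above; the derivative is taken numerically as a stand-in for the Beta1 wavelet, and since equation (42) is not reproduced in the text, the NRMSE normalization shown is only one common choice.

```python
import numpy as np

def beta(x, x0=-1.0, x1=1.0, p=30.0, q=30.0):
    """Beta function of Alimi: supported on (x0, x1), zero elsewhere.
    Its n-th derivatives are the Beta_n wavelets used in the chapter."""
    xc = (p * x1 + q * x0) / (p + q)        # location of the maximum
    out = np.zeros_like(x, dtype=float)
    inside = (x > x0) & (x < x1)
    xi = x[inside]
    out[inside] = ((xi - x0) / (xc - x0))**p * ((x1 - xi) / (x1 - xc))**q
    return out

x = np.linspace(-1.2, 1.2, 2001)
b = beta(x)
beta1 = np.gradient(b, x)                   # numerical stand-in for the Beta_1 wavelet

def nrmse(y_true, y_pred):
    """Root mean square error normalized by the signal range;
    one common convention, assumed here in place of equation (42)."""
    return np.sqrt(np.mean((y_true - y_pred)**2)) / (y_true.max() - y_true.min())

print(nrmse(b, b + 0.001 * beta1))
```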
For the F2 signal, the NRMSE is equal to 4.66143e-11 using the MLWNN, whereas it is 1.2576e-9 using a CWNN with the Beta3 mother wavelet. Finally, for the F3 signal we have an NRMSE of 3.84606e-8 for the MLWNN against 1.9468e-7 as the best value for a CWNN. 2 Dimensional data interpolation For every method described previously in 1D, the two-dimensional case is analogous, obtained by adding one variable in the equations. Mathematical formulation The mathematical formulation of 2D data interpolation can be presented in a way analogous to the 1D case [Yaou94] (we suppose that we want to rebuild an equally-sided surface): let E be the set of known points. The set E represents the constraints of the problem. With this formulation, the function f(x, y) passes inevitably through the points of the set E. In practice, the constraints can be noisy; in this case, the signal that we want to rebuild does not necessarily pass through the points of E, and the interpolation becomes a problem of approximation. Method using wavelet networks The formulations for the 2D case follow from the 1D case; since the interpolating basis functions (wavelets) are separable, this will always be the case in this survey. Table 3 gives the normalized root mean square error of test for the surfaces S1(x, y), S2(x, y), S3(x, y) and S4(x, y) using a CWNN. Table 3 presents the NRMSE of reconstruction of the four considered surfaces, using a classical wavelet network constructed with 12 wavelets in the hidden layer and based on the Beta, Mexican hat, Slog1 and Polywog1 wavelets [Belli05]. This table shows that the number of samples considered, as well as their disposition, is important for the reconstruction. For the same number of samples, it is preferable to use uniform rather than non-uniform sampling; the more samples there are, the better the quality of reconstruction. Example: approximation of 2D data using MLWNN [Bellill07-b] The same surfaces are used under the same conditions, but using a MLWNN with a library composed of two mother wavelets (Beta1 and Beta3). The experimental results are given in Table 4. Tables 3 and 4 confirm that the number of samples considered, as well as their disposition, is important for the reconstruction: for the same number of samples, it is preferable to use uniform rather than non-uniform sampling, and the more samples there are, the better the quality of reconstruction. Comparing Table 3 and Table 4, we can say that the performances obtained in terms of NRMSE using the MLWNN algorithm are often much better than those obtained with the CWNN. This shows that the proposed procedure effectively brings a better approximation capacity using the parametrizable Beta wavelets. Table 4 gives the normalized root mean square error of test for the surfaces S1(x, y), S2(x, y), S3(x, y) and S4(x, y) using a MLWNN. 3 Dimensional data interpolation The 3-dimensional case is analogous to the 1D and 2D cases; the reconstruction of sampled data using wavelet networks is deduced from them. Mathematical formulation The mathematical formulation of 3D data interpolation can be presented in a way analogous to the 1D case (we suppose that we want to rebuild an equally-sided volume): let E = {(xk, yk, zk, qk) | k = 1,…,K} be the set of known points; we want to recover N×N×N samples of f(x, y, z) such that f(xk, yk, zk) = qk for k = 1,…,K.
The set E represents the constraints of the problem. With this formulation, the function f(x, y, z) passes inevitably through the points of the set E. In practice, the constraints can be noisy; in this case, the signal that we want to rebuild does not necessarily pass through the points of E, and the interpolation becomes a problem of approximation. The problem of extrapolation is to recover the values of the function f(x, y, z) for x, y and z not belonging to the interpolation domain. The reconstruction of a volume from samples is an ill-posed problem, because an infinity of volumes pass through a given set of points. For this reason, supplementary constraints must be taken into consideration to get a unique solution. Method using wavelet networks The formulations for the 3D case follow in the same way. Example: approximation of 3D data using MLWNN We used the GavabDB 3D face database for automatic facial recognition experiments and other possible facial applications like pose correction or registration of 3D facial models. The GavabDB database contains 427 three-dimensional meshes of the facial surface. These meshes correspond to 61 different individuals (45 male and 16 female), and 9 three-dimensional images are provided for each person. All of the database individuals are Caucasian and their age is between 18 and 40 years old. Each image is a mesh of connected 3D points of the facial surface without texture information for the points. The database provides systematic variations in the pose and the facial expressions of the individuals. In particular, there are 2 frontal views and 4 images with small rotations and without facial expressions, and 3 frontal images that present different facial expressions. The following experiment is performed on the GavabDB 3D face database, and its purpose is to evaluate the MLWNN that we employ against the CWNN in terms of 3D face reconstruction. For face reconstruction quality measurement we adopt the commonly used NMSE (the corresponding equation is not reproduced here). Conclusion In this chapter, we described a new training algorithm for a multi-library wavelet network. For the evaluation we needed a selection procedure, a cost function and a minimization algorithm. We showed that, to achieve good training, it is necessary to unite good ingredients. Indeed, a good minimization algorithm finds a minimum quickly, but this minimum is not necessarily satisfactory. The use of a selection algorithm is fundamental: a good choice of regressors guarantees a more regular shape of the cost function, the global minima correspond well to the "true" values of the parameters, and the multiplication of local minima is avoided. The cost function then presents fewer local minima, and the evaluation algorithms find the global minimum more easily.
For the validation of this algorithm we presented a comparison between the CWNN and MLWNN algorithms in the domain of 1D, 2D and 3D function approximation. Many examples permitted comparing the approximation capacity of MLWNN and CWNN. We deduce from these examples that: • The choice of the reconstruction method essentially depends on the type of data being treated; • The quality of reconstruction depends greatly on the number of samples used and on their localization. We also defined a new Beta wavelet family; one can see that it is superior to the classical ones in terms of approximation, and we demonstrated in [BELLIL07] that it has the universal approximation capacity. As future work we propose a hybrid algorithm, based on the MLWNN, a genetic algorithm and the GCV (Generalized Cross-Validation) procedure to fix the optimal number of wavelets in the hidden layer of the network, in order to model and synthesize PID controllers for nonlinear dynamic systems. Fig. 1. Graphic representation of the new wavelet network architecture. Fig. 2. Selection of the pertinent vector. Fig. 3. Regressor optimization. Fig. 4. Orthogonal projection on the optimal regressor. Fig. 5. Infinity of curves passing by a set of points. Fig. 6. Different shapes of Beta wavelets (for p = q and n < p, the n-th derivatives of the 1D Beta function are wavelets [Amar06]; Beta_n denotes the n-th derivative of the Beta function). Table 5. Evaluation in terms of NMSE of 3D face reconstruction using MLWNN and CWNN.
2016-01-15T18:33:56.328Z
2008-10-01T00:00:00.000
{ "year": 2008, "sha1": "0547c0a08ddf40a47ad64775f0d0b6d6f612ea91", "oa_license": "CCBYNCSA", "oa_url": "https://www.intechopen.com/citation-pdf-url/4663", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "4cbe7eec611c156b0df07c67a2bab54af34138e9", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
261061130
pes2o/s2orc
v3-fos-license
Traumatic diaphragmatic hernia: delayed presentation with tension viscerothorax – lessons to learn Diaphragmatic rupture is a serious complication of thoracoabdominal trauma. The condition may be missed initially. We describe the clinical course of a patient who sustained blunt abdominal trauma in a car accident. His diaphragmatic injury passed unnoticed, to present two years later with left tension viscerothorax, a rarely reported and hardly recognised entity. Nasogastric tube insertion aborted the emergency situation and the hernia was repaired successfully in a semielective setting. 1 One of the most severe complications of diaphragmatic rupture is tension viscerothorax or gastrothorax. 2 Case history A 30-year-old man, a known asthmatic on irregular medication, presented with chest tightness, shortness of breath, epigastric pain and excessive retching. Two years earlier he had been involved in a car accident. Radiological evaluation at that time, including computed tomography (CT), was reported as normal. On present examination, the patient was unable to lie flat, and looked anxious and distressed. His pulse was 125 beats/min, blood pressure 110/70mmHg, respiration rate 25 breaths/min and temperature 37.5ºC. His oxygen saturation was 85% on room air and there was a tinge of cyanosis. Abdominal examination showed mild epigastric tenderness, and chest examination showed right tracheal deviation and decreased chest movement with audible bowel sounds on the left side. His blood picture and biochemistry results were within the normal range. A chest x-ray showed a collapsed left lung, and herniation of the stomach and bowel in the left chest, with a marked mediastinal shift, a picture of tension viscerothorax (Fig 1). A nasogastric tube was inserted with difficulty, and this was followed by the passage of air and a little bilious fluid. As a result, the distress was relieved and the vital signs almost normalised. Supplemental oxygen was given using a nasal cannula and saturation was maintained. Owing to waiting for an operating room slot, and because the condition had stabilised, surgery was performed on the second day following admission. Repair was carried out via a left anterolateral thoracotomy, through the seventh intercostal space. The herniated viscera were reduced into the abdomen and the diaphragmatic tear was repaired with interrupted polypropylene sutures reinforced with polypropylene mesh. The postoperative recovery was smooth and the chest x-ray was satisfactory (Fig 4). The patient was discharged home in good condition and remained so at his outpatient visit. Discussion Victims of road traffic accidents may sustain diaphragmatic rupture when there is a sudden increase in intra-abdominal pressure caused by the impact. The injury is seen more frequently on the left side. 1 Owing to the continuous motion of the diaphragm, which hinders healing, and aided by the negative intrathoracic pressure, the tear enlarges and more abdominal viscera protrude into the thorax, where they may become obstructed or strangulated.
1 This explains why the injury initially passed unnoticed in our patient, who presented later with tension viscerothorax. 4,5 It may therefore be advisable to obtain a follow-up contrast gastrointestinal series or, preferably, a CT scan a few months later for patients who have sustained thoracoabdominal trauma when no diaphragmatic injury is found initially. Obviously, in patients with severe torso trauma, there should be careful scrutiny to exclude a concomitant diaphragmatic rupture. To this end, helical CT has been claimed to attain high sensitivity and specificity. 1 When doubt exists and in the absence of a frank indication for laparotomy, thoracoscopy or laparoscopy may be used to visualise occult diaphragmatic injury, with possible endoscopic repair or conversion to open surgery. 6,7 With laparoscopy, tension pneumothorax may develop owing to a gas leak through the diaphragmatic rent. 8 At its climax, a diaphragmatic hernia may present with tension viscerothorax, a grave condition that may end with cardiac arrest or bowel gangrene. 9,10 The clinical and radiological similarity of this condition to tension pneumothorax may create a diagnostic dilemma. 11 An imprudently inserted chest drain (rather than a nasogastric or orogastric tube to deflate the intrathoracic stomach) 2,9 will certainly add more complications, as spillage of the visceral contents into the thorax would be inevitable. In the case presented here, the presence of bowel sounds in the chest facilitated the diagnosis before x-ray confirmation. Immediate decompression of tension pneumothorax, without radiological verification to avoid loss of valuable time, has been a teaching principle. In such conditions, a history of remote thoracoabdominal trauma should direct attention to the possibility of a tension viscerothorax. In the presence of reasonable doubt and if time permits, a chest x-ray will allow distinction between the two conditions. Although nasogastric or orogastric tube insertion, when successful, decompresses the dilated stomach and restores oxygen saturation, 2 its insertion may be challenging, with repeated attempts causing further deterioration or cardiac arrest at times. 9 This difficulty is caused by kinking of the stomach at the diaphragmatic defect. For this reason, the most experienced person in the treating team should attempt its insertion. Additionally, an experienced endoscopist, if available, may attempt tube insertion using a gastroscope. However, as a last resort, percutaneous needle insertion into the stomach may decompress it without spillage. 12 Our patient presented with many of the features of tension pneumothorax. The presence of audible bowel sounds in the thorax, in addition to the history of old trauma, enabled the correct diagnosis to be reached, which was confirmed with a chest x-ray. A nasogastric tube was inserted just past the oesophagogastric junction and resulted in partial deflation of the stomach. With the aid of supplemental oxygen through a nasal cannula, saturation was maintained. The condition was thus stabilised and the patient was kept comfortable until the time of surgery.
Conclusions Despite advances in diagnostic radiology, traumatic diaphragmatic hernia continues to defy early detection in a subset of patients. To avoid this, a high index of suspicion should be maintained while evaluating trauma victims. Follow-up radiology a few months after the injury may recognise those who escaped early detection and, consequently, facilitates timely repair. Tension viscerothorax, which bears many of the features of tension pneumothorax, is a complication of delayed diagnosis. If successful, initial decompression of the stomach through a nasogastric or orogastric tube will abort the emergency situation, to be followed by a definitive repair of the diaphragmatic defect.
Figure 1: Collapsed left lung with herniation of the stomach and bowel in the left hemithorax with marked mediastinal shift to the right.
Figure 2: Tension viscerothorax with deviation of the mediastinum to the right side.
Figure 3: The nasogastric tube lying just below the cardia (white circle).
Figure 4: Postoperative chest x-ray showing expansion of the previously collapsed left lung.
Reference: Thal ER, Friese RS. Traumatic rupture of the diaphragm. In: Fischer JE, Bland KI, eds. Mastery of Surgery, 5th edn. Philadelphia: Lippincott Williams & Wilkins; 2006.
2018-04-03T01:41:05.350Z
2014-01-01T00:00:00.000
{ "year": 2013, "sha1": "d73531bd2d1dd6f82138873c3b5f60ebf8e5bacb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Grobid", "pdf_hash": "d73531bd2d1dd6f82138873c3b5f60ebf8e5bacb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
6991034
pes2o/s2orc
v3-fos-license
Platelet‑rich Fibrin: A Paradigm in Periodontal Therapy – A Systematic Review Periodontal tissue regeneration has always been a challenge for periodontists owing to its structural complexity, although with tissue engineering as a growing multidisciplinary field this aim has partially been fulfilled. In recent years, platelet-rich fibrin (PRF) has gained wide attention for its utilization as a biocompatible regenerative material not only in dental but also in medical fields. The following systematic review has gathered all the currently available in vitro, animal, and clinical studies, utilizing the PubMed electronic database from January 2006 to August 2016, highlighting PRF for soft and hard tissue regeneration and/or wound healing. Although the results are encouraging, they require further validation from clinical studies to justify the potential role of PRF in periodontal regeneration, so that this relatively inexpensive autologous biomaterial can be utilized at a wider scale. Introduction The primary objective of day-to-day ongoing research is to optimize healing, and one of the biggest challenges researchers face is the development of a regenerative biomaterial to regulate inflammation and accelerate wound healing. [1] Healing is a complex process that involves organization of cells, biochemical triggers, and extracellular matrix synthesis for repair of the tissue. [2] The role of platelets in hemostasis and wound healing is well established, but the exact mechanism of healing in depth is still unclear. [3] The role of platelets in regeneration was proven as far back as the 1970s, [4] owing to the fact that they are a reservoir of growth factors that are responsible for neovascularization, collagen synthesis, cell division, cell differentiation, induction, and migration of other cells to the injured site. [5] After periodontal surgery, wound healing occurs through a complex interaction between gingival fibroblasts, periodontal ligament cells, osteoblasts, and epithelial cells. Damage to blood vessels results in fibrin formation followed by platelet aggregation and the elaboration of growth factors in the tissues. The crucial role of platelets in inflammation and wound healing is due to the presence of several growth factors and cytokines. [7] Furthermore, they contain fibrin, fibronectin, and vitronectin that provide connective tissue, a matrix, and create an efficient network for cell migration. [1] This has led to the idea of using platelets as therapeutic tools to improve tissue repair, particularly in wound healing. Search Strategy for the Identification of Studies The PubMed database of the US National Library of Medicine was utilized as the electronic database, and a literature search was accomplished on articles using a combination of various MeSH and free-text words, "Platelet rich fibrin or PRF and Periodontal therapy," "Platelet rich fibrin or PRF and clinical applications," "Platelet rich fibrin or PRF and Periodontology," from January 2006 to August 2016. A total of 49 scientific papers (14 in vitro, 2 animal, and 33 clinical studies) meeting the criteria were scrutinized. There was no restriction on the language and publication status imposed on the articles. Further additional studies were sought by searching the reference lists of identified trials and reviews. Classification of Platelet-Rich Concentrates Following the debates about the various components of these platelet-rich concentrate preparations, a first classification was proposed by Dohan Ehrenfest et al., 2009, [8] which is now widely accepted.
The classification is simple and is based on the presence or absence of leukocytes and the density of the fibrin architecture in platelet concentrates. Depending on the difference in these parameters, it can be divided into the following four main types, i.e., pure platelet-rich plasma, pure platelet-rich fibrin (PRF), leukocyte- and platelet-rich plasma, and leukocyte- and platelet-rich fibrin, which are described in Figure 1. Proposed Mechanism of Action The properties of platelet concentrates depend on the technique used; Choukroun's PRF is based on a mechanical concentration process. [9,10] PRF is a condensation of suspended growth factors within platelets [Figure 2]. [11][12][13][14] These growth factors are considered tissue-regenerative boosters and are implicated in wound healing. Based on the growth factors elaborated from PRF, its clinical usage can be optimized. [15,16] Application of Platelet-Rich Fibrin in Clinical Periodontology A convincing healing bioregenerative material, PRF shows compelling data in various in vitro and clinical studies. It can be utilized in various procedures such as management of intrabony defects, gingival recession, furcation defects, extraction socket preservation, and accelerated healing of wounds. The following are some of the important studies highlighting its regenerative potential in the field of Periodontology [Table 1]. Discussion The regeneration of the lost periodontal structures is the ultimate aim of periodontal therapy, to restore the health, function, and esthetics of the periodontium. From a periodontal point of view, the experimental and in vitro studies emphasizing the role of PRF in periodontal regeneration and periodontal wound healing are important and are hereby discussed. The breakthrough in vitro study that introduced PRF in the medical field was conducted by Choukroun et al. It highlighted improved neovascularization and wound closing with accelerated tissue remodeling in the absence of infectious events. [16] PRF used either in combination with bone grafts (bovine porous bone mineral, nanocrystalline hydroxyapatite, and demineralized freeze-dried bone allograft [DFDBA]) or pharmacologic agents such as metformin gel was found to be more effective in terms of improvements in clinical parameters and radiographic defect depth reduction compared to when bone grafts or metformin were used alone. [17][18][19][20][24] Furthermore, the clinical and radiographic results of PRF used alone were comparable to DFDBA for periodontal regeneration. [19]
Table 1: Studies highlighting the regenerative potential of PRF in Periodontology (abbreviations are expanded at the end of the table).
PRF in intrabony defects:
1. Agarwal et al. [17], January 2016, RCT: PRF + DFDBA more effective than DFDBA with saline.
2. Pradeep et al. [18], June 2015, RCT: PRF + 1% MF group showed better results in clinical parameters and radiographic defect depth reduction compared to MF, PRF, or OFD alone.
3. Shah et al. [19], January 2015, RCT: PRF showed comparable results to DFDBA in terms of clinical parameters.
4. Elgendy and Abo Shady [20], January 2015, RCT: PRF + NcHA more effective clinically and radiographically compared to NcHA.
5. Gupta et al. [21], July 2014, RCT: Emdogain superior to PRF in terms of percentage defect resolution.
6. Panda et al. [22], July 2016, SRM: Together with OFD, PRF can be utilized as a sole regenerative material.
7. Pradeep et al. [23], December 2012, RCT: Either PRF or PRP with OFD demonstrated similar probing depth reduction, clinical attachment gain, and radiographic bone fill; PRF is less time consuming and relatively less technique sensitive.
8. Lekovic et al. [24], August 2012, RCT: PRF group resulted in improvement in clinical parameters, while the PRF + BPBM group augmented the PRF effects in pocket depth reduction, clinical attachment gain, and defect fill.
9. Sharma and Pradeep [25], December 2011, RCT: PRF + OFD group demonstrated greater probing depth reduction, clinical attachment gain, and bone fill in comparison to the OFD alone group.
PRF in recession defects:
1. Eren et al. [26], August 2016, RCT: Root coverage with CAF + PRF resulted in a significant increase in GCF TIMP-1 levels and a decrease in GCF MMP-8 and IL-1β levels as compared to the CAF + CTG group.
2. Femminella et al. [27], February 2016, RCT: A PRF-enriched palatal bandage not only accelerated wound healing at the graft harvesting site but also reduced the patient's morbidity.
3. Moraschini and Barboza Edos [28], November 2016, SRM: PRF showed no improvement in terms of root coverage, keratinized mucosa width, or clinical attachment level of Miller Class I and II gingival recessions compared to other treatment modalities such as the CTG group.
4. Keceli et al. [29], November 2015, RCT: Addition of PRF to the CAF + CTG group added no further value except increasing tissue thickness.
5. Doğan et al. [30], September 2015, RCT: Gingival recession defects treated with concentrated growth factor showed enhanced keratinized gingival width and gingival thickness.
6. Aras et al. [31], August 2015, In vivo: Denuded root surfaces after orthodontic treatment, when treated with CAF + PRF, showed satisfactory occlusal and periodontal results.
7. Gupta et al. [32], April 2015, RCT: In case of Miller Class I and II recessions, combining CAF with PRF provided no added advantage in terms of recession coverage.
8. Thamaraiselvan et al. [33], January 2015, RCT: In case of Miller Class I and II recessions, combining CAF with PRF provided no added advantage in terms of recession coverage except for an increase in gingival tissue thickness.
9. Tunalı et al. [34], January 2015, RCT: In comparison to the CTG group, the leukocyte-PRF group showed better results in terms of root coverage, indicating that it can be an alternative graft material for management of multiple adjacent recessions greater than 3 mm in size.
10. Shetty et al. [35], January 2014, RCT: Amniotic membrane can be successfully used as an autologous alternative to PRF in reducing the need for a second surgical site.
11. Agarwal et al. [36], January 2013, RCT: Double lateral sliding bridge flap + PRF showed the advantage of a single-step procedure that resulted in complete root coverage and an increased zone of keratinized gingiva.
12. Padma et al. [37], September 2013, RCT: For Miller Class I and II recessions, addition of PRF with CAF provides superior root coverage and the added benefits of gain in clinical attachment level and width of keratinized gingiva.
13. Jankovic et al. [38], April 2012, RCT: A laterally positioned pedicle flap revised technique, along with an autologous suspension of growth factors and PRF for managing Miller Class II recessions, showed stable 80% root coverage after 6 months.
14. Jankovic et al. [39], August 2010, Comparative study: PRF and CTG showed no difference except for greater gain in keratinized tissue width in the CTG group, whereas enhanced wound healing was seen in the PRF group.
15. Aleksić et al. [40], January 2010, RCT: No clinical advantage of PRF compared to enamel matrix derivative in covering gingival recession with the CAF procedure.
16. Del Corso et al. [41], November 2009, In vivo: Reduced postoperative discomfort and enhanced tissue healing were the advantages of using PRF.
17. Aroca et al. [42], February 2009, Controlled clinical trial: Modified CAF + PRF resulted in inferior root coverage results but an added gain in gingival tissue thickness compared to conventional therapy.
PRF in furcation defects:
1. Pradeep et al. [43], October 2016, RCT: Combining rosuvastatin, PRF, and porous hydroxyapatite shows synergistic effects as a regenerative material.
2. Bajaj et al. [44], October 2013, RCT: PRF and PRP were both effective, with uneventful healing of sites.
3. Sharma and Pradeep [45], October 2011, RCT: The use of autologous PRF showed significant improvement, implying its regenerative role.
PRF and in vitro studies:
1. Kawase et al. [46], May 2015, In vivo: Advocated the use of the heat compression technique in preparing PRF for guided tissue regeneration procedures, since it reduces the rate of biodegradation of the PRF membrane without affecting its biocompatibility.
2. Fan et al. [47], February 2013, In vivo: PRF has a positive biological effect on human gingival fibroblasts and hence can be utilized in tissue engineering when combined with seed-cell human gingival fibroblasts.
3. Clipet et al. [48], February 2012, In vivo: Showed that soluble growth factors can potentially stimulate tissue healing and bone regeneration.
4. Gassling et al. [49], May 2010, In vivo: PRF was found to be superior to collagen membrane (Bio-Gide) as a scaffold for human periosteal cell proliferation.
5. Dohan Ehrenfest et al. [50], September 2009, In vivo: PRF cocultured with leukocytes (called chaperone leukocytes) shows a double, contradictory effect of proliferation/differentiation observed on osteoblasts.
6. Choukroun et al. [16], March 2006, In vivo: Highlighted accelerated tissue cicatrization owing to the development of neovascularization, fast wound closing, tissue remodeling, and the absence of infectious events.
PRF in soft tissue healing:
1. Del Fabbro et al. [51], Winter 2014, SRM: Suggests a positive role of platelet concentrates on bone formation in postextraction sockets.
2. Jeong et al. [52], September 2014, Animal study: Sinus lifts done simultaneously with dental implants are neither predictable nor reproducible when PRF is used as the sole grafting material.
3. Hatakeyama et al. [53], February.
4. Hauser et al. [54], June 2013, RCT: Socket preservation by PRF results in predictable results.
5. Gürbüzer et al. [55], May 2010, RCT: PRF might not lead to enhanced bone healing in impacted mandibular third molar extraction sockets 4 weeks after surgery.
Abbreviations: RCT = randomized controlled trial; SRM = systematic review and meta-analysis; DFDBA = demineralized freeze-dried bone allograft; MF = metformin; OFD = open flap debridement; NcHA = nanocrystalline hydroxyapatite; BPBM = bovine porous bone mineral; CAF = coronally advanced flap; CTG = connective tissue graft; TIMP-1 = tissue inhibitor of matrix metalloproteinases-1; MMP-8 = matrix metalloproteinase-8; IL-1β = interleukin 1β; PRF = platelet-rich fibrin; PRP = platelet-rich plasma; GCF = gingival crevicular fluid.
The efficacy of PRF as compared to Emdogain, however, was found to be inferior in terms of defect resolution. [21] Studies have shown similar probing depth reduction, clinical attachment level gain, and bone fill at sites treated with PRP or PRF with open flap debridement; however, because PRF is less technique sensitive, it may be considered a better treatment option than PRP. [23] PRF is a reservoir of soluble growth factors and cytokines (transforming growth factor beta-1, insulin-like growth factors 1 and 2, platelet-derived growth factor, the cytokine vascular endothelial growth factor, and interleukins 1, 4, and 6) that not only help in tissue regeneration but also accelerate wound healing. Studies have shown that PRF, when used with a coronally advanced flap for recession coverage, decreases matrix metalloproteinase-8 (MMP-8) and interleukin beta levels but increases tissue inhibitor of MMP-1 levels at 10 days, thereby promoting periodontal wound healing in the earlier phase of the process. [26,27] A systematic meta-analysis by Moraschini and Barboza Edos [28] and clinical studies by Keceli et al. [29] and Gupta et al.
[32] have highlighted the inconsistent results of PRF in covering Miller Class I and Class II gingival recessions, with no improvement in terms of root coverage, keratinized mucosa width, or clinical attachment level, though it was shown to have increased the gingival thickness. Further, Padma et al. [37], in a randomized controlled trial, showed PRF to be a predictable treatment for isolated Miller Class I and II recession defects when used with a coronally advanced flap; it provided superior root coverage with the added benefit of a gain in clinical attachment level and width of keratinized gingiva at 6 months postoperatively. On comparing PRF with connective tissue graft (CTG) in gingival recession procedures, it was found that there was a greater gain in keratinized tissue width in the CTG group but better wound healing in the PRF group. [39] Similar to the management of infrabony defects, the use of PRF in furcation defects, when combined with bone grafts (hydroxyapatite) and rosuvastatin, has shown better results, emphasizing its role in periodontal regeneration. Various in vitro studies have shown a positive biological effect on human gingival fibroblasts, which can have a potential role in the management of gingival recession and in periodontal tissue engineering. [47] It is well established that PRF contains soluble growth factors that stimulate not only tissue healing but also bone regeneration. [48] For guided tissue regeneration procedures, PRF has proved to be a superior scaffold compared to collagen membrane when used for in vitro cultivation of periosteal cells. [49] PRF has also shown remarkable positive healing effects when used for the preservation of extraction sockets and in sinus lift procedures during simultaneous dental implantation (Jeong et al., 2014). [52] The studies show outstanding results with PRF in regenerating periodontal osseous defects and preserving the healing extraction socket, although there were conflicting data when PRF was used for root coverage in gingival recession defects. Conclusion Studies have confirmed that PRF is a therapeutic regenerative biomaterial with immense potential that has widespread clinical applications in medical as well as dental perspectives. The use of PRF alone or in combination with other biomaterials (such as bone grafts, soft tissue grafts, and pharmacologic agents) provided safe and promising results in the form of improvements in clinical and radiographic parameters in the management of periodontal osseous defects and hard tissue preservation of the extraction socket. Although in denuded root coverage procedures in cases of gingival recession PRF showed some contradictory findings and the results were not that favorable, it still provided an added advantage in terms of increment in gingival tissue width and thickness (gingival biotype). Tissue biotype is an important factor because it dictates the way a tissue will respond to inflammation, trauma, and surgical insult. Hence, PRF does result in a thick gingival biotype, which shows greater dimensional stability during remodeling and enhances collateral blood supply to the underlying osseous structure, as compared to a thin biotype, which may compromise it.
Although the potential of this inexpensive, autologous biomaterial is encouraging, its preparation, and its storage after preparation, remain weak points that need attention. The time interval between preparation (the speed of handling) and ultimate usage is highly crucial for its structural integrity and leukocyte viability. Hence, these limitations should be focused on and worked upon by researchers. Further validation is needed in the form of long-term randomized controlled studies with larger sample sizes to affirm the benefits and identify the hidden potential of PRF as a biomaterial in the field of clinical periodontology. Financial Support and Sponsorship: Nil. Conflicts of Interest: There are no conflicts of interest.
2018-04-03T03:34:27.378Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "6a9836093bbabc6d8ce45a5eb8d5e4310aea2759", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "db826d01ac86cc50ee81f99a518d9e7586a5ef87", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244119353
pes2o/s2orc
v3-fos-license
Species Home-Making in Ecosystems: Toward Place-Based Ecological Metrics of Belonging Globalization has undeniably impacted the Earth's ecosystems, but it has also influenced how we think about natural systems. Three fourths of the world's forests are now altered by human activity, which challenges our concepts of native ecosystems. The dichotomies of pristine vs. disturbed, as well as our view of native and non-native species, have blurred, allowing us to acknowledge new paradigms about how humans and nature interact. We now understand that the use of militaristic language to define the perceived role of a plant species is holding us back from the fact that novel systems (new combinations of all species) can often provide valuable ecosystem services (i.e., water, carbon, nutrients, cultural, and recreation) for creatures (including humans). In reality, ecosystems exist in a gradient from native to intensely managed - and "non-nativeness" is not always a sign of a species having negative effects. In fact, there are many contemporary examples of non-native species providing critical habitat for endangered species or preventing erosion in human-disturbed watersheds. For example, of the 8,000–10,000 non-native species introduced to Hawai'i, less than 10% of these are self-sustaining and 90 of those pose a danger to native biota and are considered invasive. In this paper, we explore the native/non-native binary, the impacts of globalization and the political language of invasion through the lens of conservation biology and sociology with a tropical island perspective. This lens gives us the opportunity to offer a place-based approach toward the use of empirical observation of novel species interactions that may help in evaluating management strategies that support biodiversity and ecosystem services. Finally, we offer a first attempt at conceptualizing a site-specific approach to develop "metrics of belonging" within an ecosystem. INTRODUCTION Decades of restricting humans from natural areas have sometimes led to failed attempts, socially and economically, to protect and restore our planet's biodiversity. The conservation and protection of nature without humans was our collective response to honoring the forces of nature - and within that effort was a paradigm that native species are inherently good, and non-native species must be removed to protect the integrity of a system. If at all possible and feasible, the promotion and preservation of naturally evolved ecosystems is the gold standard that we should strive to achieve. However, in many parts of the world we cannot uncouple the fact that humans and natural systems are linked - and that pristine landscapes are often in fact a mirage. Certainly this "coupling" is often unsustainable, and we as humans need to revise our assumptions and expectations of what and how we can extract from nature, so that nature has sufficient space and time to face disturbance. But if we acknowledge that not all non-native species are harmful or especially impactful in ecosystems - can we also reevaluate our attitudes toward native and non-native species and place more importance and emphasis on harmful invasions, rather than on the mere distinction between native and non-native? In other words, the native/non-native binary assumes that native is "good" and non-native is "bad" and has the effect of uncritically assigning the moral status of species based on a one-dimensional logic of origins.
This binary treatment can be seen as justifying the deployment of full-scale eradication programs of all non-native species. The authors of this essay are based in Hawai'i and hail from disciplines in philosophy, conservation biology, and natural resource management. In this island environment, there is a sharp focus on the native/non-native binary because >55% of the Hawaiian flora is non-native (Brock and Daehler, 2020). Unlike some continental landscapes where the focus is on one or a few particularly harmful non-native species, in Hawai'i every ecosystem has multiple invaders that often interact with each other (D'Antonio et al., 2017). Perhaps because we live in a socioecological island system that is at one end of a continuum, our viewpoints may differ from those in continental landscapes. Here, we provide examples from a tropical island ecosystem where rates of change occur much faster than in continental systems, and offer new perspectives that might unsettle long-standing assumptions about native and non-native species. In this paper, we also discuss the history of militaristic language use in invasive species biology and how that language influenced our attitudes toward conservation. We consider the dichotomies that have biased our understanding of nature and the influence of globalization on the functioning of ecosystems. We propose that the incorporation of place-based empirical observation of novel species interactions is one consideration that can help in evaluating management strategies that support biodiversity and ecosystem services. The place-based framework asks if conservation-based decisions should move away from a universal norm that judges species based on origins or immigration status, and suggests that non-native species be evaluated on the degree of damage they impose on ecosystems or how well they "play" with others. THE POLITICAL LANGUAGE OF INVASION "Names are the way we humans build relationships, not only with each other but with the living world" - Kimmerer (2013). In today's world of diversity, equity, and inclusion, some of the language of ecology and conservation, particularly around the processes of biological invasion, can feel dated. Terms in the invasion biology literature commonly describe organisms as alien, exotic, invasive, and enemies; management strategies are described using verbs such as control, combat, and attack. These terms elicit images of a world with distinct boundaries where species and systems are siloed, restricted, evaluated, and/or rejected (Figure 1). Modern scientific objectivity is threatened by the valuations of a social context, yet science is deeply saturated in the social logics of language in order to render intelligible its empirical truths. Hence, it is no surprise that the clarity of how to articulate unwanted and alien nature that motivated early writings in invasion biology is deeply influenced by the logics of the social world. This move to understand how language operates in representing nature reveals that language rarely captures nature accurately. How we classify nature often represents more of our human values of nature rather than nature itself. Yet, nature betrays our secure values and "surprises" us, providing motivation to reassess our understandings. The language of conservation shifts as our understanding of nature shifts. One important term to reckon with is native nature.
For many Indigenous peoples, terms such as native in modern scientific discourse disclose a familiarity, a connection to nature (Salmón, 2000). More specifically, the Anishinaabe people view nature through a person-centered ontology, understanding non-native species as subjects that have migrated to new lands. Rather than a western perspective that treats non-native species as inherently invasive, this indigenous ecological perspective views non-native species as potential members of an ecological community, once the contributions a non-native species brings to the ecological community have been properly observed and understood (Reo and Ogden, 2018). However, in modern scientific discourse the term native is specific. It refers to origins of species, a biological natalism. Charles Warren (2021) points out that implicit in the discourse surrounding biological nativism is the assumption of a form of nativist purism that emerges in social and cultural contexts, which inevitably imports sentiments of racism and ecological fascism. Hence, how we think about a species' origin will influence how we value immigrant species.
FIGURE 1 | A mural by street artist Banksy, located in Clacton-on-Sea in the Essex district of eastern England, that was removed by the local council due to complaints of being offensive (https://www.bbc.com/news/uk-england-essex-29446232). We highlight it here to demonstrate that concepts of belonging cross-cut species boundaries.
One potential problem with this logic is that there can be a huge difference between a species that is non-native/non-invasive and one that is unwanted/invasive, based on the degree of negative impacts the species has on a given ecosystem. Another potential problem is that native is placed in static terms (tied fixedly to origins) rather than within a dynamic and migratory understanding of an ecosystem with changing stressors and environmental conditions, one that is bound to encounter migrant species. THINKING BEYOND THE NATIVE/NON-NATIVE BINARY TOWARD A CONTINUUM "Being naturalized to place means to live as if this is the land that feeds you, as if these are the streams from which you drink, that build your body and fill your spirit" - Kimmerer (2013). How we talk about nature informs how we think about nature; thus, part of the problem of conceptualizing non-native species is the paucity of non-militaristic language at our disposal to think about unwanted nature. Charles Elton, in his career-defining work, The Ecology of Invasions by Animals and Plants (Elton, 1958), utilized militaristic language to describe the threat of invasive species, characterizing their rapid migration as "ecological explosions." Wilson (1997), along with other leading American conservationists, argued for a "national program to combat invasions." The use of militaristic language in the ecology and conservation biology literature was recently quantified: word counts of militaristic language were greater in articles on invasive species than on other topics, and were also greater in basic science journals than in applied science journals (Janovsky and Larson, 2019). Although these word choices may have been made unwittingly (i.e., alerting readers to newly found problems), to some they overly express a nativist language of militarism, inciting greater protection of nation-state borders through the preservation of ecosystems to resist biological invaders (see Figure 1).
It appears that the native/non-native binary has the unfortunate consequence of eliding the descriptive term "non-native" with more prescriptive or normative terms such as "alien," which elicits xenophobic value judgments (Warren, 2021). Furthermore, the ethic of killing implied in the use of militaristic language contradicts the ethic of care in conservation management (Warren, 2021). Several authors have suggested recommendations to remedy word choices to better reflect the harm a species does or its ability to spread (Byrne and Hart, 2009; Janovsky and Larson, 2019). Scholars have challenged the ethical implications of the native/non-native binary, charging that the binary is impractical to apply to conservation management policies (Warren, 2007). Further, the native/non-native binary can be viewed as ethically supporting colonial logics of exclusion and dispossession - ideas that had been used to undermine undocumented migrants and Indigenous peoples (Sinclair and Pringle, 2017). Moreover, the native/non-native binary does not fit well within the context of many contemporary restoration examples in which non-native species often provide critical habitat for endangered species or prevent erosion in human-disturbed watersheds (Ewel and Putz, 2004; Schlaepfer et al., 2011). Given the ethical and practical problems of the native/non-native binary in conservation ecology, it has been suggested that the binary has functioned as an unrealistic myth and ought to be reconsidered in terms of guiding conservation management policies (Warren, 2021). Most of the public is not necessarily concerned with ecological authenticity (Warren, 2007), but rather has developed new relationships within a continuum of species. These relationships with plants and animals (native or non-native) typically revolve around a species' utility (food or function), a cultural link, or beauty and awe (Selge et al., 2011; Kueffer and Kull, 2017; Vilà and Hulme, 2017). The length of time that non-native species are in an ecosystem also influences social and ecological impacts. For example, non-native bird or frog calls may become beloved, and plants or animals may become symbols representing a place to which they are not considered native (such as the coconut tree is to Hawai'i). Plants brought by early Hawaiians on voyaging canoes, known as canoe plants, are non-native plants, but have multi-faceted cultural significance in Hawai'i, are widely valued for their practical utility, and have names in the Indigenous language; representations of these plants convey a sense of place and are often the subject of contemporary art and fashion. Hawai'i has become linked to these plants through its human history. This linkage is not necessarily good or bad - it should be evaluated in a place-based manner. We argue that non-judgmental observations of novel species interactions should elucidate when a system is supportive of biodiversity or ecosystem services. Non-native species could be evaluated on the degree of damage they impose or how well they "play" with others, rather than on where they are from or how they got there. In a global analysis of 1,551 individual cases that addressed the impact of a non-native plant species, it was concluded that impact is strongly dependent on context, and that there was no singular measure (Pyšek et al., 2012).
The fact that non-native species' effects are place-based, dependent on species' characteristics, species interactions, environmental conditions, and the resident community, suggests that decisions about non-native species by the conservation community ought to move away from ahistorical and delocalized methodologies and shift toward evaluative standards that could be inclusive of place-based values and needs. THE IMPACTS OF GLOBALIZATION ON NATIVE SYSTEMS The ecological impacts of globalization on natural systems are far-reaching and well documented (Vitousek et al., 1996; Young et al., 2006; Meyerson and Mooney, 2007; Hulme, 2009; Morse et al., 2014; Ricciardi et al., 2017; Závorka et al., 2018; Tromboni et al., 2021). The movement of species, both intentionally and accidentally, has spawned decades of research on the ecological and economic impacts of non-native species (Pimentel et al., 2001; Pyšek et al., 2012; Vilà and Hulme, 2017). Furthermore, the interaction of species movement with anthropogenic disturbances and stressors such as climate change and land conversion has exponentially elevated this issue to the point that no ecosystem is exempt from vulnerability to invasion (Didham et al., 2005; Brook et al., 2008; Crowl et al., 2008; Lugo, 2020). It is now clear that many ecosystems are largely governed by novel systems and interactions (Van Kleunen et al., 2015), and that ecological integrity is a continuum from high-functioning ecosystems to low-functioning systems, relative to disturbance and invasion. In this continuum, high-functioning novel ecosystems can exist, but the vast majority of our Earth's ecosystems (ranging from native to novel) lie somewhere in the middle of the continuum (Vitousek et al., 1997; Sanderson et al., 2002; Watson et al., 2016). In light of this reality, there is a renewed interest in better understanding novel ecosystems and their potential positive or negative contributions to ecosystem integrity, ecosystem services, and resilience (Ricciardi et al., 2013; Kuebbing and Nuñez, 2015; Sapsford et al., 2020). Examples from Hawai'i demonstrate how native species can benefit from non-native species interactions. An endangered sphinx moth (Manduca blackburni), dependent on an endangered tree in the Solanaceae family as a host for the caterpillar stage, now relies on a non-native Solanaceae tree species (Nicotiana glauca) for this service (Mitchell et al., 2005). Similarly, populations of the endangered hawk known as the 'io (Buteo solitarius) have shifted their foraging strategies to include non-native food sources (rodents, non-native birds, etc.) (Griffin et al., 1998). Further, in some habitats, pollination and dispersal of many native species are now largely performed by non-native animals (Foster and Robinson, 2007; Aslan et al., 2014). Arguably, it is unclear if these analog species serve the role as well as their native counterparts, but a partial service is clearly better than none (Rodriguez, 2006; Schlaepfer et al., 2011). It is also unclear how cascading impacts (i.e., the consequences of non-native species on nontarget and tangentially related species) will influence ecosystem functioning. Rodriguez (2006) argues that facilitative interactions between invasive and native species, or non-native and native species, can have both positive and negative cascading effects across trophic levels, leading to restructured communities and ultimately evolutionary changes.
Inserting non-native species deliberately or haphazardly into a system can be considered dangerous - and a sign of giving up. For clarity, we are not discounting the most precious benefits we receive from a healthy native ecosystem. Preservation and conservation of the Earth's biodiversity and associated ecosystems must be of the utmost priority. Preservation of these natural areas is as important now as ever. In fact, there are many successful examples of passive restoration where removal of threats to a system leads to a functioning native system. But striving to return a highly disturbed environment to an all-native historic ecosystem is, in many areas, an unproductive and unsustainable use of time and resources (see Cordell et al., 2016 for a case study). It is also becoming increasingly clear that removal of many novel interactions will not benefit ecosystem integrity and could lead to ecosystem harm or extinction (Zavaleta et al., 2001; Prior et al., 2018). Examples of this include reduced populations of rare endemic snails in the Azores following the removal of non-native vegetation (Van Riel et al., 2000). Corbin and D'Antonio (2012) elucidate belowground legacy effects of non-native species, where nutrient dynamics and mycorrhizal associations have been altered over time and do not readily recover following removal of these species. Restoration outcomes often result in successful regeneration of new assemblages of non-native plant species. These examples illustrate that novelty in ecological systems is a current reality, particularly in urban environments (Aronson et al., 2015). Finally, we fully understand that many non-native species have the potential to become invasive due to changing dynamics and lag times. Weed risk assessments and barrier zones are effective but not foolproof tools to reduce the likelihood of future invasion (Coutts et al., 2018).
FIGURE 2 | A new conceptual model for determining the viability of a non-native species in a given area. The criteria presented here are not absolutes but can be modified depending on circumstances. The end result would be a score analogous to a weed risk assessment.
A REVISED CONCEPTUAL FRAMEWORK From our reference point, we propose a first attempt at a revised conceptual framework to evaluate all species, rooted in weed risk assessments but expanded for local conditions and values. Weed risk assessments were developed as a tool to evaluate the likelihood of non-native species becoming problematic (Pheloung et al., 1999). They employ quantitative scoring on biological characteristics of a species that are summed, with thresholds set to gauge the overall risk of a species becoming invasive (Williams and Newfield, 2002). The predictive value of these tools has been tested (Daehler et al., 2004; Gordon et al., 2008; Gassó et al., 2010) and their limitations have been well articulated (Hulme, 2012). Our proposition is that the concept of weed risk assessments could be reimagined as metrics of belonging (Figure 2). Rather than simply scoring on biological characteristics (e.g., dispersal mode), sociocultural components could be added, such as economic and cultural values. Importantly, the qualities scored (i.e., the questions asked in the assessment) should be site-specific, so that these species assessments are locally based. A non-native species may be deemed harmful to the environment in one part of its range yet fulfill core cultural roles in another area.
Like a weed risk assessment, the characteristics of a species would be summed, but sociocultural characteristics could be weighted differently, depending on local values. Figure 2 suggests some of the qualities by which a species could be judged, but these are suggestions, not absolutes. The metrics of belonging concept is still a work in progress that, with further development, could become a decision support tool. Consensus would need to be developed on the framework, with the understanding that the evaluative component is flexible, pragmatic, and value-laden. Thus, by its very nature, the metric of belonging is place-based, context-dependent, and subjective. Decisions could be made with input from community members working in the landscape, in a way that is participatory. However, there are risks to accepting a species as belonging in an ecosystem, and thus any management decision needs to acknowledge actions to potentially mitigate those risks. A recent example that could test this framework is the controversy surrounding the release of a biocontrol of strawberry guava (Psidium cattleianum) in Hawai'i (Warner and Kinslow, 2013). The plant does enormous harm to the environment, which is indisputable (Patel, 2012), but it provides some modest income to local residents (e.g., jams, back scratchers, and furniture), has been used for hula implements in place of native trees such as 'ōhi'a, and has a name in the Hawaiian language. While this highly invasive species would not be chosen as belonging in the Hawaiian wet forests, a metric could explicitly acknowledge that this plant provides gifts to cottage industries, consumers, and cultural practitioners, thereby making room for evaluations of species based on reciprocity in ecosystems. Perhaps spelling it out in this way would allow for clear messaging that harms outweigh benefits, and might thereby reduce community strife. In reality, humans rely on ecosystems now more than ever. We conclude that forced distinctions of language and binaries impede our ability to move forward and focus on the promotion of resilient forest landscapes. Rather than argue about semantics and whether species are good or bad, we need to focus on understanding the socioecological factors that influence both degradation and successional trajectories of future ecosystems, and on how and when interventions can help. We promote the idea that nature, and strategies to restore natural systems, lie on a continuum that requires place-based empirical observation of how novel species interact with native species. However, this framework does not discount the need to protect native biodiversity, nor abandon effective management actions to support and promote native systems. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. FUNDING The concepts in this paper grew out of past research, which included funding from NSF DEB-1754844, NSF REU-1757875, the Strategic Environmental Research and Development Program (Project RC-2117), and the Hawai'i Army National Guard.
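Returning to the metrics-of-belonging proposal above: a minimal, hypothetical Python sketch of the weighted-sum scoring it describes. All criteria names, weights, and the example species traits are illustrative placeholders of our own, not values proposed by the paper; in practice they would be set through the participatory, place-based process the authors describe.

```python
# Hypothetical sketch of a place-based "metric of belonging" score.
# Criteria and weights are invented placeholders to be set locally.
ECOLOGICAL = {"displaces_natives": -5, "provides_habitat": +2, "prevents_erosion": +2}
SOCIOCULTURAL = {"cultural_significance": +2, "economic_value": +1}

def belonging_score(traits, sociocultural_weight=1.0):
    """Sum ecological and sociocultural criteria; the sociocultural part
    can be weighted differently depending on local community values."""
    eco = sum(w for k, w in ECOLOGICAL.items() if traits.get(k))
    soc = sum(w for k, w in SOCIOCULTURAL.items() if traits.get(k))
    return eco + sociocultural_weight * soc

# Toy example loosely inspired by the strawberry guava discussion:
traits = {"displaces_natives": True, "cultural_significance": True, "economic_value": True}
print(belonging_score(traits))  # -2: cultural gifts acknowledged, but harms outweigh benefits
```

The point of the sketch is only that sociocultural contributions enter the ledger explicitly instead of being argued over informally; the threshold for "belonging" remains a local, value-laden decision.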
2021-11-16T14:20:09.481Z
2021-11-16T00:00:00.000
{ "year": 2021, "sha1": "42fb49e62f0b3ace6f343626939becee3ed34354", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fevo.2021.726571/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "42fb49e62f0b3ace6f343626939becee3ed34354", "s2fieldsofstudy": [ "Environmental Science", "Sociology" ], "extfieldsofstudy": [] }
2761963
pes2o/s2orc
v3-fos-license
Cheminformatics-Based Drug Design Approach for Identification of Inhibitors Targeting the Characteristic Residues of MMP-13 Hemopexin Domain Background MMP-13, a zinc-dependent protease which catalyses the cleavage of type II collagen, is expressed in osteoarthritis (OA) and rheumatoid arthritis (RA) patients, but not in normal adult tissues. Therefore, the protease has been intensively studied as a target for the inhibition of progression of OA and RA. Recent reports suggest that selective inhibition of MMP-13 may be achieved by targeting the hemopexin (Hpx) domain of the protease, which is critical for substrate specificity. In this study, we applied a cheminformatics-based drug design approach for the identification and characterization of inhibitors targeting the amino acid residues characteristic to the Hpx domain of MMP-13; these inhibitors may potentially be employed in the treatment of OA and RA. Methodology/Principal Findings Sequence-based mutual information analysis revealed five characteristic (completely conserved and unique), putative functional residues of the Hpx domain of MMP-13 (these residues hereafter are referred to as HCR-13pf). Binding of a ligand to as many of the HCR-13pf as possible is postulated to result in an increased selective inhibition of the Hpx domain of MMP-13. Through the in silico structure-based high-throughput virtual screening (HTVS) method of Glide, against a large public library of 16908 molecules from Maybridge, PubChem and Binding, we identified 25 ligands that interact with at least one of the HCR-13pf. Assessment of cross-reactivity of the 25 ligands with MMP-1 and MMP-8, members of the same collagenase family as MMP-13, returned seven lead molecules that did not bind to any one of the putative functional residues of the Hpx domain of MMP-1 or any of the catalytic active site residues of MMP-1 and -8, suggesting that the ligands are not likely to interact with the functional or catalytic residues of other MMPs. Further, in silico analysis of physicochemical and pharmacokinetic parameters based on Lipinski's rule of five and ADMET (absorption, distribution, metabolism, excretion and toxicity), respectively, suggested potential utility of the compounds as drug leads. Conclusions/Significance We have identified seven distinct drug-like molecules binding to the HCR-13pf of MMP-13 with no observable cross-reactivity to MMP-1 and MMP-8. These molecules are potential selective inhibitors of MMP-13 that can be experimentally validated, and their backbone structural scaffolds could serve as building blocks in designing drug-like molecules for OA, RA and other inflammatory disorders. The systematic cheminformatics-based drug design approach applied herein can be used for rational searches of other public/commercial combinatorial libraries for more potent molecules capable of selectively inhibiting the collagenolytic activity of MMP-13. Introduction MMP-13 (Collagenase 3) is a zinc-dependent protease which catalyses the cleavage of type II collagen, the main structural component of articular cartilage [1]. It is capable of cleaving the peptide bond at amino acid positions 775-776 in all three strands of the mature triple-helical type II collagen molecules [2]. MMP-13 is expressed in the articular cartilage and joints of osteoarthritis (OA) and rheumatoid arthritis (RA) patients, respectively, but not in normal adult tissues [3,4]. Preclinical data implicate human MMP-13 as the direct cause of irreversible cartilage damage in arthritic conditions [4,5,6,7].
This is supported by the findings that i) overexpression of MMP-13 induces OA in transgenic mice, ii) its mRNA expression codistributes with type II collagenase activity in osteoarthritic cartilage, and iii) an inhibitor of MMP-13 has been shown to disrupt the degradation of explanted human osteoarthritic cartilage. In arthritic syndromes, the expression of MMP-13 is elevated in response to inflammatory signals from leukocytes and other immune cells, in particular interleukin 1 (IL-1) and tumour necrosis factor alpha (TNF-α) [3]. The increased levels of MMP-13 result in an imbalance in its regulation by tissue inhibitors of metalloproteinases (TIMPs), thus likely contributing to the diseased state [8]. As a result, the MMP-13 protease has been a target for the inhibition of the progression of OA and RA. Early broad-spectrum MMP inhibitors directed towards the zinc region of the catalytic domain (inhibitors exploiting the hydroxamate function as a zinc-binding group) have been ineffective because of their dose-limiting toxicity in the form of musculoskeletal syndrome (MSS), characterised by joint stiffness and inflammation [9]. Conversely, specific inhibitors targeting the non-zinc region of the catalytic domain have been shown to effectively reduce the cartilage damage [4]. Recent studies have, therefore, focused on the search for selective inhibitors of MMP-13 [9,10,11]. The Hpx domain of the protease [12,13,14], which is critical for substrate specificity, represents an alternative target for the search of such inhibitors. All MMPs in general have a similar domain architecture, namely an N-terminal signal sequence to target for secretion, a propeptide domain to maintain latency for cell signalling, a catalytic domain containing the catalytic zinc-binding motif, and a linker region that links the catalytic domain with the C-terminal four-bladed propeller Hpx domain [15]. The catalytic domains of these MMPs are unable to cleave the triple-helical collagens without the Hpx domain [16]. Further, the removal of the Hpx domain from MMP-1, -8 and -13, which belong to the collagenase family, has been shown to result in a loss of collagenolytic activity [13]. Thus, the Hpx domain in the C-terminal region maintains the specificity of collagenase family MMPs by affecting the substrate binding [2]. In this study, we applied a cheminformatics-based drug design approach to i) define the putative characteristic functional residues of the Hpx domain of MMP-13, ii) identify and characterize ligands binding to these residues and iii) assess the selectivity of these ligands by testing their cross-reactivity to other collagenase family members, MMP-1 and -8. Such screened and selected potential specific inhibitors can then be tested by molecular experiments to validate their specificity to MMP-13 and their application as drug leads. Materials and Methods Sequence-based analysis to identify putative characteristic functional residues of the Hpx domain of MMP-13 The identity of characteristic residues specific to the Hpx domain of MMP-13 has not been reported previously [13]. We conducted sequence-based analyses to identify these amino acid residues by performing a multiple sequence alignment and using the AVANA tool (http://sourceforge.net/projects/avana/) to compare the mutual information between subsets of the alignment for the location of the characteristic sites [17].
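The mutual-information idea can be sketched as follows (a simplified illustration of our own, not AVANA's actual implementation): for each alignment column, compute the mutual information between the residue observed and the subset label (MMP-13 vs. other MMPs); a column that is completely conserved within MMP-13 and never shows that residue in the other MMPs attains the maximum value of 1 bit for a binary grouping.

```python
import math
from collections import Counter

def column_mutual_information(column, labels):
    """Mutual information (bits) between residue identity in one alignment
    column and group membership (e.g. MMP-13 vs. all other MMPs)."""
    n = len(column)
    p_res = Counter(column)                  # residue counts -> P(residue)
    p_lab = Counter(labels)                  # group counts   -> P(group)
    p_joint = Counter(zip(column, labels))   # joint counts   -> P(residue, group)
    mi = 0.0
    for (res, lab), c in p_joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((p_res[res] / n) * (p_lab[lab] / n)))
    return mi

# Toy column: 'K' fully conserved in 4 MMP-13 sequences, absent elsewhere.
col = list("KKKKRRQR")
lab = ["MMP13"] * 4 + ["other"] * 4
print(column_mutual_information(col, lab))  # 1.0 -> characteristic site
```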
The sequences of all reported human MMP proteins were retrieved by performing PSI-BLAST [18] search against the nonredundant (nr) NCBI Entrez protein database using the MMP-13 query sequence obtained from the Protein Data Bank [19] (PDB ID:1PEX). A total of 50 MMP sequences were obtained from the BLAST search (Table S1 and Table S2). These sequences were then aligned using Muscle v3.6 [20] and the resulting alignment was manually inspected and corrected for misalignments using BioEdit [21]. The regions of the alignment representing the propeptide domain, catalytic domain and the linker region were deleted, leaving only the Hpx domain. The alignment of the MMP Hpx domain sequences was then submitted to AVANA to identify residues that are completely conserved and characteristic to MMP-13 (i.e. characteristic residues are defined as those with 100% amino acid identity and mutual information value of 1). AVANA has a built-in functionality to identify conserved, characteristic sites between subsets of sequences in an alignment using entropy and mutual information theories [17]. Herein, the two subsets for our alignment in AVANA were i) 8 MMP-13 sequences and ii) all other MMPs (42 of them). Having identified the Hpx characteristic residues (abbreviated as HCR for brevity) of MMP-13 (i.e. HCR-13), those that matched the putative functional residues of Hpx [15] were identified (abbreviated as HCR-13 pf ). Two main caveats herein include the small sample size and the sampling bias for the MMP sequences reported in the public database. However, the data used in this study was the most representative and comprehensive available in the public database to date (May 2009). Further, the characteristic residue list can be refined with the availability of more sequence data in the future. Virtual screening We next aimed to identify and characterize ligands that interact with the HCR-13 pf . The in silico structure-based high-throughput virtual screening (HTVS) method of Glide, version 5.5 (Schrödinger, LLC, New York, 2009) [22], was used to identify potential ligand molecules that interact with at least one of the HCR-13 pf residues on the 3D structure of MMP-13 (PDB ID: 1PEX). The binding of ligands to these residues is postulated to render selectivity to the inhibition of the proteolytic activity of the enzyme MMP-13. A total of 16908 molecules derived from public libraries namely Maybridge (14400; www.maybridge.com), PubChem [23] (2438; obtained from Shanghai Institute of Organic Chemistry) and Binding (70; www.bindingdb.org), were selected for virtual screening against 1PEX. Before performing HTVS, hydrogen atoms and charges were added to the crystal structure of 1PEX and then the complex was submitted to a series of restrained, partial minimizations using the optimized potentials for liquid simulations all-atom (OPLS-AA) force field [24]. The 3D structure was processed by use of the 'Protein Preparation module' with the 'preparation and refinement' option before docking. The grid-enclosing box was centred to all HCR-13 residues in 1PEX, so as to enclose the residues within 3 Å from their centroid. A scaling factor of 1.0 was set to van der Waals (VDW) radii for these residue atoms, with the partial atomic charge less than 0.25. The ligand molecules collected from the databases were prepared using 'LigPrep' module and were subsequently subjected to Glide 'Ligand docking' protocol with HTVS mode. 
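The study itself used AVANA for this step; the sketch below is an illustrative Python re-implementation of the selection rule described above (100% amino acid identity within the MMP-13 subset, uniqueness relative to all other MMPs, and a mutual information of 1 between group membership and residue identity). The toy alignment, sequence names and group labels are hypothetical stand-ins, not the study's data.

```python
# Illustrative re-implementation of the characteristic-residue rule described
# above; the real analysis used the AVANA tool on 50 MMP Hpx-domain sequences.
from collections import Counter
from math import log2

def mutual_information(labels, residues):
    """MI (in bits) between group labels and amino acids at one alignment column."""
    n = len(labels)
    joint = Counter(zip(labels, residues))
    p_l, p_r = Counter(labels), Counter(residues)
    return sum((c / n) * log2((c / n) / ((p_l[l] / n) * (p_r[r] / n)))
               for (l, r), c in joint.items())

def characteristic_columns(alignment, target_group):
    """Columns fully conserved in the target subset and absent from all others."""
    length = len(next(iter(alignment.values()))["seq"])
    hits = []
    for col in range(length):
        target = {v["seq"][col] for v in alignment.values() if v["group"] == target_group}
        others = {v["seq"][col] for v in alignment.values() if v["group"] != target_group}
        if len(target) == 1 and not (target & others) and "-" not in target:
            hits.append((col, target.pop()))
    return hits

# Hypothetical toy alignment of a few Hpx-domain columns (not real MMP sequences).
aln = {
    "MMP13_a": {"group": "MMP13", "seq": "KRRKA"},
    "MMP13_b": {"group": "MMP13", "seq": "KRRKA"},
    "MMP1_a":  {"group": "other", "seq": "QRGEA"},
    "MMP8_a":  {"group": "other", "seq": "QSGEA"},
}
for col, aa in characteristic_columns(aln, "MMP13"):
    labels = [v["group"] for v in aln.values()]
    residues = [v["seq"][col] for v in aln.values()]
    print(f"column {col}: {aa}, MI = {mutual_information(labels, residues):.2f}")
```

On this toy input, columns 0, 2 and 3 are flagged and each yields an MI of 1.00 bit, mirroring the paper's definition of a characteristic residue.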
Glide extra precision docking for the screened ligands
All the ligands selected from the screening step were then subjected to Glide docking with extra precision (XP) to identify residues involved in hydrogen bond interactions with 1PEX. Glide XP mode determines all reasonable conformations for each low-energy conformer in the designated binding site. In the process, the torsional degrees of freedom of each ligand are relaxed, though the protein conformation is fixed. During the docking process, the Glide scoring function (G-score) was used to select the best conformation for each ligand. Final G-scores were selected based on the conformation at which the identified ligands formed hydrogen bonds to at least one of the HCR-13 pf with optimal binding affinity. The docking procedures were performed on a Dell RHEL 5.0 workstation. The ligands were then assessed for cross-reactive binding to MMP-1 and -8, using Glide XP; these MMPs were analysed because they also contribute to collagenolytic activity and contain an Hpx domain, as does MMP-13. Higher-resolution 3D structures for MMP-1 (1SU3, with catalytic and Hpx domains) and MMP-8 (1BZS, containing only the catalytic domain; no structure with the Hpx domain is available), obtained from the PDB, were used for the docking. The binding analysis on these structures was focused on the known active site residues of the catalytic domain of MMP-1 [25] and -8 [26] and the reported putative functional residues of the Hpx domain of MMP-1 (285-295; Asp-Ala-Ile-Thr-Thr-Ile-Arg-Gly-Glu-Val-Met) [13]. It is noted that, when aligned, the positions of the reported putative functional residues of the Hpx domain of MMP-1 do not correspond to those reported for MMP-13. This may be because of the selectivity of these two MMPs for different substrates, such as type I collagen for MMP-1 and type II for MMP-13 [27].
Assessment of drug-like properties of selected optimized ligands
The selected optimized lead molecules from the cross-reactivity assay were studied for their drug-like properties based on Lipinski's rule of five [28], by use of the ADME-Tox application at the Mobyle portal (http://mobyle.rpbs.univ-paris-diderot.fr). The percentage of their human oral absorption was also predicted to determine the toxicity levels, by use of QikProp version 3.2, Schrödinger, LLC, New York, NY, 2009 [29].
Results and Discussion
In this study, we identified 34 characteristic residues for the Hpx domain of MMP-13 (HCR-13) that were completely conserved and unique to the analyzed sequences of this domain (Figure 1). Five (Lys318, Arg344, Arg346, Lys363 and Lys372) of these were part of the 11 putative functional residues of Hpx [15] (these five are referred to as HCR-13 pf). Binding of a ligand to as many as possible of these HCR-13 pf, and possibly the remaining HCR-13, is postulated to result in increased selective inhibition of the Hpx domain of MMP-13. Through HTVS, we identified 25 ligands that interact with at least one of the HCR-13 pf. The ligands were screened from a large library of 16908 molecules obtained from the public databases Maybridge, PubChem and Binding; all of the identified 25 ligands were from Maybridge. Docking analysis using the more precise XP mode of Glide revealed that the 25 ligands formed hydrogen bonds with 1-3 residues of HCR-13, of which 1-2 were HCR-13 pf. In addition, hydrogen bonds were also formed by 1-2 non-HCR-13 putative functional residues and 1 non-HCR-13 non-putative functional residue (Table S3).
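As a rough illustration of the rule-of-five screen described in the Methods, the sketch below uses RDKit as an open-source stand-in for the Mobyle ADME-Tox application and QikProp actually used in the study; the SMILES string (aspirin) is a placeholder, not one of the lead compounds.

```python
# A minimal Lipinski rule-of-five screen using RDKit (an open-source stand-in
# for the tools used in the study). The example molecule is aspirin, chosen
# only because it is a small carboxylic-acid-containing ligand.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    checks = {
        "MW < 500": Descriptors.MolWt(mol) < 500,
        "logP < 5": Descriptors.MolLogP(mol) < 5,
        "H-bond donors <= 5": Lipinski.NumHDonors(mol) <= 5,
        "H-bond acceptors <= 10": Lipinski.NumHAcceptors(mol) <= 10,
    }
    return checks, all(checks.values())

checks, passes = rule_of_five("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as placeholder
for rule, ok in checks.items():
    print(f"{rule}: {'pass' if ok else 'fail'}")
print("drug-like by rule of five:", passes)
```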
Assessment of cross-reactivity of the 25 ligands with MMP-1 (containing both catalytic and Hpx domains) and MMP-8 (only the catalytic domain), members of the collagenase family like MMP-13, returned seven lead molecules that did not bind to any one of the putative functional residues of the Hpx domain of MMP-1 or any of the catalytic active site residues of MMP-1 and -8. Also, the closest distance between the putative functional residues of the Hpx domain (MMP-1) or the catalytic active site residues (MMP-1 and MMP-8) and the lead molecules was more than 10 Å (data not shown), suggesting that the ligands are not likely to interact with the functional or catalytic residues. The docking results of the final seven lead molecules to 1PEX are given in Table 1. The structural scaffold of the lead molecules contains a carboxylic acid functional group, mainly responsible for the hydrogen bond(s) formed with the HCR-13 pf. The binding conformations of the lead molecules, with the hydrogen bond interactions to the Hpx domain of MMP-13, are given in Figure 3. The short hydrogen bond distances, ranging from ~1.5 to ~2.4 Å, and the favourable binding G-scores (−9.22 to −7.55 kcal/mol) (Table 1) suggest strong enzyme-ligand interactions. These carboxylic acid containing lead molecules were found to exhibit hydrophilic contacts with 1PEX, mostly with the polar side chains of amino acids Arg344 and Arg346 of HCR-13 pf. They also exhibited polar interactions with other functionally important amino acid residues that are not part of HCR-13, namely Arg300 and Lys347. In accordance with Lipinski's rule of five, the Mobyle portal was used to evaluate the drug-likeness of the lead molecules by assessing their physicochemical properties. Their molecular weights were < 500 daltons, with ≤ 5 hydrogen bond donors, ≤ 10 hydrogen bond acceptors and a log P < 5 (Table S4); these properties are well within the acceptable range of the Lipinski rule for drug-like molecules. These compounds were further evaluated for their drug-like behaviour through analysis of the pharmacokinetic parameters required for absorption, distribution, metabolism, excretion and toxicity (ADMET) by use of QikProp. For the seven lead compounds, the partition coefficient (QPlogPo/w) and water solubility (QPlogS), critical for estimation of absorption and distribution of drugs within the body, ranged between ~−0.1 and ~2.3, and between ~−4 and ~−0.05, respectively; cell permeability (QPPCaco), a key factor governing drug metabolism and its access to biological membranes, ranged from ~26 to ~276, while the bioavailability and toxicity measures were from ~3.4 to ~0.4. Overall, the percentage human oral absorption for the compounds ranged from ~46% to ~79%. All these pharmacokinetic parameters are within the acceptable range defined for human use (see Table 2 footnote), thereby indicating their potential as drug-like molecules. As of May 2010, the number of MMP sequences in the NCBI Entrez protein public database had almost doubled since our last data collection (May 2009). The May 2010 data contained a total of 94 MMP sequences, an increase of 44 since May 2009. Analysis of the 94 sequences revealed that the number of HCR-13 residues (completely conserved and unique to MMP-13) was reduced markedly from 34 to only 10 (Gln309, Ala312, Lys318, His334, His337, Arg344, Asn352, Lys372, Ser378, and Glu373), whereas the HCR-13 pf were reduced from 5 to 3 (Lys318, Arg344, and Lys372). This was expected because of our small initial sample size.
Nonetheless, there was no change in the HCR-13 pf residues bound by our seven lead molecules, except for two (compounds 1 and 6). The putative functional residue Arg346 that interacts with both these compounds is no longer classified as an HCR-13, but the compounds still bind to one other HCR-13 pf residue (Table 1).
Conclusion
The present work describes the identity of the putative functional residues characteristic to the Hpx domain of MMP-13, and the identification of seven lead drug-like molecules binding to the HCR-13 pf, with no observable cross-reactivity to MMP-1 and MMP-8. These molecules are potential selective inhibitors of MMP-13 that need to be experimentally validated, while the systematic cheminformatics-based drug design approach applied herein can be used for a rational search of other public/commercial combinatorial libraries for more potent molecules, capable of selectively inhibiting the collagenolytic activity of MMP-13. Further, the backbone structural scaffold of these seven lead compounds could serve as building blocks in designing drug-like molecules for the treatment of OA, RA and other inflammatory disorders.
Figure 3 (caption): The proposed binding modes of the lead molecules are shown in ball-and-stick display and non-carbon atoms are coloured by atom type. Critical residues for binding are shown as lines coloured by atom type. Hydrogen bonds are shown as dotted yellow lines with the distance between donor and acceptor atoms indicated. Atom type colour code: red for oxygen, blue for nitrogen, grey for carbon and yellow for sulphur. The HCR-13 pf residues that interact with the lead molecule are indicated by the arrow. The Maybridge database IDs of the lead molecules are as follows: compound 1-3764; compound 2-764; compound 3-13196; compound 4-3705; compound 5-632; compound 6-7789; and compound 7-1598. doi:10.1371/journal.pone.0012494.g003
Author Contributions: Conceived and designed the experiments: RK AMK LA. Performed the experiments: RK AMK B. Analyzed the data: RK AMK B AG LA. Contributed reagents/materials/analysis tools: YSC LA. Wrote the paper: RK AMK B LA. Contributed to the concept for this project, followed through this study and assisted in preparing the manuscript: LA RK YSC. Assisted in writing the paper and referencing: RK AG LA.
2014-10-01T00:00:00.000Z
2010-08-31T00:00:00.000
{ "year": 2010, "sha1": "0815773832a6c7fc406a43030cf37a598e0c56d6", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0012494&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a323e65d9ff4720d8199c3c6ec750bb23133caa9", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
150198178
pes2o/s2orc
v3-fos-license
Workplace flourishing: Measurement, antecedents and outcomes
Introduction
The continuous growth and manifestation of employee attrition, especially within the highly skilled talent pool, is becoming increasingly problematic. When organisations establish policies that retain productive personnel, remove unproductive personnel and select the most suitable new candidates, they should have a favourable composition of qualified and talented employees (Adnot, Dee, Katz, & Wyckoff, 2017). Thus, two factors are emphasised: employee retention and employee performance. To address this persisting issue, researchers need to establish different factors that impact employee retention and employee performance, factors in addition to well-established disease-driven aspects such as stress, burnout and depression (e.g. Steinhardt, Smith Jaggars, Faulk, & Gloria, 2011). Although the preceding aspects are undeniably significant in their own right, they tend to ignore or fail in furthering our understanding of the role flourishing plays in employee well-being (Youssef-Morgan & Luthans, 2014).
The concept of flourishing in life (Keyes, 2002) has positioned itself as the most prominent multidimensional well-being model (Keyes, 2002; Seligman, 2011). Building on the study of Keyes (2002), Rothmann (2013) studied the multidimensionality of flourishing, comprising emotional well-being, psychological well-being and social well-being in work and organisational contexts. Workplace flourishing (WF) has been defined as an employee's perception that he or she is feeling and functioning well in the workplace (Rautenbach, 2015).
Using the Flourishing-at-Work Scale (FAWS; Rautenbach, 2015), Janse van Rensburg, Rothmann, and Diedericks (2017) found that person-environment fit (PEF) relates positively with flourishing and negatively with intention to leave (ITL) via flourishing. However, the effects of WF (in addition to PEF) on other organisational outcomes have not been studied. Moreover, the social well-being scale of the FAWS as developed by Rautenbach (2015) and used by Janse van Rensburg et al. (2017) included only one item per social well-being dimension. Therefore, it is vital to investigate additional factors associated with WF to strengthen the FAWS's psychometric properties.
The study aim was to examine relationships among PEF, WF, ITL, in-role performance and organisational citizenship behaviour (OCB).
Flourishing at work
Emotional well-being incorporates three employee judgements: job satisfaction, positive emotions and negative emotions. Job satisfaction reflects the amount of congruence between employees' perceptions and standards (Weiss & Cropanzano, 1996). Positive affect is key to an individual's capacity to flourish. It refers to pleasant reactions at work (e.g. joy, interest and gratitude), while negative affect encompasses unpleasant reactions (e.g. sadness, boredom and anxiety). Although both job satisfaction and affect tap into employee emotions, the focus of the constructs differs slightly, making both constructs valuable flourishing components. Job satisfaction reveals employees' perceptions that their wants are attended to, while affect relates to employees' perceptions that their needs are attended to (Rojas & Veenhoven, 2013).
Psychological well-being at work includes the dimensions (autonomy, personal growth, mastery, meaning, purpose and positive relations) in Ryff and Singer's (1998) model, as well as engagement (Seligman, 2011). Three dimensions in the model of Ryff and Singer (1998) relate to three psychological needs (Deci & Ryan, 1985), namely autonomy (i.e. autonomy satisfaction - experiencing independence and choice in executing work-related tasks), mastery (i.e. competence satisfaction - feeling effective in work environment interaction) and positive relations (i.e. relatedness satisfaction - experiencing a sense of connectedness to others in the workplace). Meaning reflects the perceived significance of employees' work experiences, while purpose refers to possessing a sense of the preferred outcomes connected to one's work-related behaviour (Barrick, Mount, & Li, 2013). Learning (similar to personal growth in the model of Ryff & Singer, 1998) is defined as the acquisition and application of knowledge and expertise to one's job (Spreitzer, Lam, & Fritz, 2010). Finally, work engagement comprises three components, namely a physical component (involvement and the exhibition of vitality), a cognitive component (absorption and involvement) and an affective component (connectedness to one's job and dedication) (Kahn & Heaphy, 2014).
Social well-being, based on the mental health continuum (MHC; Keyes, 1998, 2005), includes five dimensions: social acceptance (the acceptance of the diversity of colleagues), actualisation (the belief in one's organisation, team and colleagues' potential), coherence (the belief that one's organisation and social relations at work are both meaningful and comprehensible), contribution (the belief that one's daily work tasks add value to one's team, department and organisation) and integration (the belief that one experiences a sense of communal connectedness and belongingness). The social element of work plays a pivotal part in flourishing (Grant, 2008), as employees are embedded within social organisational structures, facing endless social tasks and challenges (Keyes, 1998).
Person-environment fit, workplace flourishing, intention to leave and performance
Fishbein and Ajzen (1975) contend that beliefs precede attitudes, intentions and ultimately behaviours. Meta-analytic studies (Hoffman & Woehr, 2006; Kristof-Brown, Zimmerman, & Johnson, 2005; Oh et al., 2014; Verquer, Beehr, & Wagner, 2003) suggest that person-organisation and person-job fit generally have a strong association with attitudes (e.g. job satisfaction), while having a weaker association with intentions (e.g. ITL) and behaviours (e.g. in-role, extra-role). Therefore, although PEF strongly relates to important job attitudes, intervening variables may be required to amplify its association with intention and behaviour. A feasible intervening variable in the relationship between PEF and the outcomes contained in this study (ITL, in-role performance and organisational citizenship behaviour [OCB]) could be WF. WF contains various components that directly link to PEF and the study outcomes, as well as numerous components that could mediate fit-outcome relationships.
Cable and DeRue (2002) adopted a three-factor person-environment fit conceptualisation (person-organisation, needs-supplies and demands-abilities). Person-organisation fit reflects the degree of perceived similarity in terms of values between employees and organisations (Cable & DeRue, 2002). Employees feel attached to the organisation's broader mission when they believe that their values match those of their organisation and colleagues. Employees form separate cognitions regarding organisational and job-related fit. Needs-supplies fit reflects the perceived similarity between job rewards (what the organisation offers) and employee needs (Cable & DeRue, 2002). When employees perceive a match between what they expect from their job and what they receive, job satisfaction materialises (Dawis & Lofquist, 1984; Locke, 1976). Demands-abilities fit reflects the perceived similarity between job demands and an employee's inhabited knowledge, skills and capabilities (Cable & DeRue, 2002).
Traditionally, the fit literature posits that if congruence is reached between the employee and the work environment, positive outcomes ensue. The theory of work adjustment (TWA; Dawis & Lofquist, 1984) and job embeddedness theory (Mitchell, Holtom, Lee, Sablynski, & Erez, 2001) suggest that when congruence is achieved between employees and their work environment, and when employees establish meaningful relationships in the workplace, they will experience increased satisfaction, which will subsequently affect their turnover decisions. For instance, job satisfaction (an emotional well-being element) plays a central part in various employee turnover models (Hom, Lee, Shaw, & Hausknecht, 2017), suggesting that dissatisfied employees will start to explore other possibilities through a range of evaluation processes. Similarly, the attraction-selection-attrition (ASA) model (Schneider, 1987) suggests that employees are attracted to and remain within organisations with which they share similar preferences, as this allows them to achieve their goals. Regarding social well-being, the social identity theory (Tajfel & Turner, 1986) proposes that employees who experience fit with their organisation's values become part of a psychological group, defined as the 'collection of people who share the same social identification or define themselves in terms of the same social category membership' (Turner, 1984, p. 530).
Therefore, when employees experience a favourable balance between their personal (values, abilities and needs) and environmental characteristics (values, demands and supplies), it lays the foundation for a positive work environment. Within this environment, employees may experience a sense of acceptance, enjoyment, integration, meaningfulness and relatedness, among others (Janse van Rensburg et al., 2017). Subsequently, employees should experience little to no urge to leave their work setting (Janse van Rensburg et al., 2017). Intention to leave is defined as an employee's cognisant and intentional frame of mind to part ways with his or her organisation (Tett & Meyer, 1993).
Apart from ITL, in-role performance and OCB have also been associated with WF (Redelinghuys, Rothmann, & Botha, 2018). In-role performance refers to the undertakings an employee is expected to fulfil as stipulated in his or her formal job requirements (Borman & Motowidlo, 1997; Williams & Anderson, 1991). In contrast, Lambert (2006) defines OCB as employee behaviour that contributes beyond what is expected in the basic job requirements. Four dimensions constitute OCB: helping, loyalty, advocacy, and functional participation and obedience (Coyle-Shapiro, 2002; Van Dyne, Graham, & Dienesch, 1994). Helping reflects the extent to which employees offer assistance to others. Loyalty relates to the identification with or loyalty towards the organisation, which involves cooperation and serving the organisational interests. Advocacy refers to behaviour aimed at others in the organisation, which includes the maintenance of high standards, the challenging of others and the proposition of change. Functional participation assumes a more personal stance, while simultaneously contributing to organisational efficiency (Coyle-Shapiro, 2002; Van Dyne et al., 1994).
Although the association between WF, in-role performance and OCB has been studied before (Redelinghuys et al., 2018), the indirect effect of PEF on performance (in-role and extra-role) via WF is yet to be investigated. Social exchange theory (Blau, 1964) may offer a valuable framework for these relationships. The latter theory posits that employees and organisations enter into social exchanges with one another when they perceive the other party to be a worthy contributor to the relationship. Therefore, when one party (the organisation) positively impacts another (the employee), the latter should return the favour to honour their part of the exchange. Thus, when organisations positively impact their employees by providing an environment that sufficiently attends to their needs, demands and the things they value, and subsequently increase their probability of experiencing positive work-related well-being, employees should upwardly adjust their performance and helping behaviours to express their gratitude.
The following hypotheses stemmed from the discussion:
H1: Person-environment fit positively associates with WF.
H2: WF negatively associates with ITL.
H3: WF positively associates with in-role performance.
H4: WF positively associates with OCB.
H5: Person-environment fit indirectly affects ITL via WF.
H6: Person-environment fit indirectly affects in-role performance via WF.
H7: Person-environment fit indirectly affects OCB via WF.
Participants were educators in a teaching role in the Sedibeng East and West districts in Gauteng. The teaching profession is a good framework for studies on flourishing, as research that assumes a disease-driven or dysfunctional behaviour stance (e.g. O'Brennan, Pas, & Bradshaw, 2017) heavily outweighs a health promotion or positive functioning stance (e.g. Li, Wang, Gao, & You, 2017). Therefore, more research is needed regarding positive employee behaviours, which include aspects such as strengths, optimal functioning and flourishing (Youssef-Morgan & Luthans, 2014). This is especially important when considering the demands teachers face (e.g. student ill-discipline, fellow educator absenteeism, overinvolvement or lack of parental involvement) and the ever-increasing list of role-players (e.g. department of education, school-governing body, management, parents and learners) they need to satisfy. Approximately 800 surveys were circulated, while 258 were completed adequately (32% response rate). Table 1 provides the sample characteristics.
Measuring instruments
The Flourishing-at-Work Scale (FAWS; Rautenbach, 2015) measured WF. It comprises 46 items recorded on a 6-point scale ranging from 1 (never) to 6 (every day). Participants were required to respond to questions concerning the regularity with which they experienced particular symptoms at work during the preceding month. The FAWS encompasses three dimensions: emotional well-being, psychological well-being and social well-being. Emotional well-being comprises three dimensions (three items per dimension): positive affect, negative affect and job satisfaction. A sample item for this dimension includes: 'How often did you feel grateful?'. Psychological well-being comprises six dimensions: autonomy satisfaction (three items), competence satisfaction (three items), relatedness satisfaction (three items), learning (two items), meaningful work (three items) and engagement (seven items). A sample item for this dimension includes: 'How often did you become enthusiastic about your job?'. Social well-being comprises five dimensions (three items per dimension): social acceptance, actualisation, coherence, contribution and integration. A sample item for this dimension includes: 'How often did you feel included at your school?'. Rautenbach (2015) confirmed the FAWS's three-factor structure, with rho coefficients ranging from 0.77 to 0.95.
The Perceived Fit Scale (PFS; Cable & DeRue, 2002) measured PEF. It contains nine items recorded on a 7-point scale, which ranges from 1 (strongly disagree) to 7 (strongly agree). The PFS comprises three dimensions (three items each): person-organisation fit: 'My personal values match my school's values and culture'; needs-supplies fit: 'The attributes that I look for in a job are fulfilled very well by my present job'; and demands-abilities fit: 'My abilities and training are a good fit with the requirements of my job'. Redelinghuys and Botha (2016) confirmed the PFS's three-factor structure, with rho coefficients ranging from 0.85 to 0.88.
The Turnover Intention Scale (Sjöberg & Sverke, 2000) measured ITL. It contains three items recorded on a 5-point scale, which ranges from 1 (strongly disagree) to 5 (strongly agree). Covering a solitary dimension, a sample item includes: 'I feel that I could leave this job'. Janse van Rensburg et al. (2017) yielded a rho coefficient of 0.71.
The In-role Behaviour Scale (Williams & Anderson, 1991) measured in-role performance. It entails seven items scored on a 7-point scale ranging from 1 (strongly disagree) to 7 (strongly agree). Covering a solitary dimension, a sample item includes: 'I adequately complete assigned duties'. Participants were required to rate their own performance, as external evaluation was prohibited. Redelinghuys et al. (2018) yielded a rho coefficient of 0.73.
The Organisational Citizenship Behaviour Scale (OCBS; Rothmann, 2010) measured OCB. It entails six items recorded on a 7-point scale, which ranges from 1 (strongly disagree) to 7 (strongly agree). The OCBS comprises two dimensions (three items each): assistance to co-workers: 'I assist others with their duties'; and assistance to the organisation: 'I defend the school when other employees criticise it'. Diedericks and Rothmann (2014) confirmed the OCBS's two-factor structure, with adequate reliability coefficients (> 0.70).
Research procedure
Once ethical permissions had been obtained from the necessary authorities, the researchers communicated with the secondary school principals in the selected districts. The researchers arranged dates and times with probable research participants at their respective schools to discuss the study purpose and to obtain informed consent. Paper questionnaires, with English as the instructional language, were distributed to consenting participants, granting them a 2-week period to complete the questionnaires. Arrangements were made for participants to securely return their questionnaires.
Statistical analysis
Mplus 7.41 (Muthén & Muthén, 1998-2016) was applied to analyse the data. The weighted least-squares with mean and variance adjustment (WLSMV) estimator was utilised as it does not assume normally distributed variables, providing the most suitable selection for categorical data modelling. To evaluate the reliability of the measuring battery, rho coefficients (Raykov, 2009) were utilised. The practical significance of results was determined by effect sizes (Cohen, 1988). The confidence interval (CI) level was set at a value of 95% (p ≤ 0.05) for statistical significance. A measurement model was specified and tested against numerous goodness-of-fit indices. Descriptive statistics were computed with SPSS 23 (IBM Corp, 2016).
Four opposing measurement models were specified and tested to make model comparison possible, as suggested by Wang and Wang (2012). The best fitting model (Model 1) was respecified as a structural model (Model 6) and compared to opposing structural models. The chi-square statistic, root mean square error of approximation (RMSEA), Tucker-Lewis index (TLI), comparative fit index (CFI) and the weighted root mean square residual (WRMR) were utilised. Comparative fit index and TLI values of ≥ 0.90 were considered satisfactory. Root mean square error of approximation values of < 0.08 indicated close model fit.
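For readers unfamiliar with the rho coefficient (Raykov, 2009) reported for each scale above, the following minimal sketch computes a composite reliability from standardised factor loadings; the three loadings are hypothetical illustrations, not estimates from this study.

```python
# A minimal sketch of the composite (rho) reliability coefficient, computed
# from standardised factor loadings of a congeneric scale.
def raykov_rho(loadings):
    """rho = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)  # assumes standardised items
    return s ** 2 / (s ** 2 + error_var)

# Hypothetical loadings for a three-item scale; prints 0.82.
print(round(raykov_rho([0.78, 0.82, 0.74]), 2))
```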
Discriminant validity of the constructs was assessed using a method proposed by Farrell (2010). To establish discriminant validity, three values are important. Firstly, the correlations between constructs (see the values below the diagonal in Table 3), as these are used to compute squared correlations. Secondly, the average variance extracted (AVE) (see the values on the diagonal in Table 3), which is calculated for each construct by adding the R-square values of each construct item and then dividing the sum by the number of items the construct has. For example, person-organisation fit has three items; therefore, its AVE is calculated as follows: (0.838 + 0.966 + 0.838)/3 = 0.88. Lastly, the squared correlation values (see the values above the diagonal in Table 3). Indirect effects were assessed in Mplus 7.41. Bootstrapping with 10 000 samples was applied to construct two-sided bias-corrected 95% CIs (Hayes, 2018). Lower and upper CIs were conveyed.
Ethical consideration
Authorisation for the study was acquired from the Gauteng Department of Education, the Sedibeng East and West District offices, as well as ethical clearance from the North-West University's Ethics Committee.
Measurement model testing
Confirmatory factor analyses were carried out with the scales through Mplus 7.41 (Muthén & Muthén, 1998-2016). A hypothesised measurement model (Model 1) was specified and tested against opposing models (Models 2-5) to establish which model fitted the data best. Table 2 presents the goodness-of-fit statistics for the five competing measurement models.
Descriptive statistics, reliabilities, correlation coefficients and discriminant validity
Table 3 reports the descriptive statistics, reliabilities, correlation coefficients and discriminant validity of the constructs. The reliabilities of the measuring instruments were acceptable, ranging from 0.75 to 0.94 (Nunnally & Bernstein, 1994). All the PEF dimensions were practically and statistically significantly related to the three emotional well-being dimensions, six psychological well-being dimensions and five social well-being dimensions, ranging from medium to large effects. Person-environment fit dimensions were practically and statistically significantly related to job satisfaction (emotional well-being), autonomy satisfaction (psychological well-being) and social actualisation (social well-being) with a large effect.
Most of the flourishing dimensions were practically and statistically significantly related to ITL (large effects), except for negative affect (0.46), competence (−0.43), meaning (−0.48) and learning (−0.48). Most flourishing dimensions were practically and statistically significantly related to in-role performance with a medium effect, except for negative affect (−0.29) and competence (0.28). Most flourishing dimensions were practically and statistically significantly related to OCB (to co-workers) with a medium effect, as well as OCB (to the organisation), ranging from medium to large effects.
Discriminant validity is supported. Table 3 shows that the AVE per construct (on the diagonal) was greater than the squared correlation values (above the diagonal). The chi-square values for WLSMV cannot be used for chi-square difference testing (Satorra & Bentler, 2010); therefore, the Difftest function in Mplus was utilised. Table 4 illustrates the difference testing of the opposing structural models and designates Model 6b as the best fitting opposing model.
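The Farrell (2010) check described above can be made concrete with a short sketch. The three R-square values for person-organisation fit are taken from the worked example in the text; the second construct's R-square values and the inter-construct correlation of 0.60 are hypothetical placeholders.

```python
# Discriminant validity per Farrell (2010): each construct's AVE must exceed
# its squared correlation with every other construct.
def ave(item_r_squares):
    """AVE = mean of the item R-square values for one construct."""
    return sum(item_r_squares) / len(item_r_squares)

ave_po_fit = ave([0.838, 0.966, 0.838])  # = 0.88, as in the worked example
ave_other = ave([0.70, 0.75, 0.65])      # hypothetical second construct
r = 0.60                                 # hypothetical inter-construct correlation

ok = ave_po_fit > r ** 2 and ave_other > r ** 2
print(f"AVE(PO fit) = {ave_po_fit:.2f}, AVE(other) = {ave_other:.2f}, "
      f"squared r = {r ** 2:.2f}, discriminant validity supported: {ok}")
```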
Figure 1 illustrates the standardised path coefficients found with PEF as the independent variable and WF, ITL, in-role performance and OCB as dependent variables, and also WF as an independent variable with ITL, in-role performance and OCB as dependent variables.
For the model portion predicting WF, PEF's path coefficient (b = 0.82; p ≤ 0.01) was statistically significant and displayed the anticipated sign. Therefore, Hypothesis 1 is accepted.
For the model portion predicting ITL, the path coefficients of PEF (b = −0.60; p ≤ 0.01) and WF (b = −0.22; p ≤ 0.05) were statistically significant and displayed the anticipated sign. Therefore, Hypothesis 2 is accepted.
For the model portion predicting in-role performance, WF's path coefficient (b = 0.34; p ≤ 0.01) was statistically significant and displayed the anticipated sign. Therefore, Hypothesis 3 is accepted.
For the model portion predicting OCB, WF's path coefficient (b = 0.54; p ≤ 0.01) was statistically significant and displayed the anticipated sign. Therefore, Hypothesis 4 is accepted.
Testing indirect effects
To establish whether PEF indirectly affected ITL, in-role performance and OCB, the authors used Hayes's (2018) guidelines (see Table 5).
Discussion
The study aim was to examine relationships among PEF, WF, ITL, in-role performance and OCB. When employees fit, feel well and function well both psychologically and socially, positive outcomes ensue (i.e. lower ITL, higher in-role performance and OCB).
Results confirmed WF's three-factor structure, endorsing its construct validity beyond the fast-moving consumer goods industry (Rautenbach, 2015) and tertiary education sector (Janse van Rensburg et al., 2017). Flourishing at work consisted of emotional well-being (job satisfaction, positive affect and low negative affect), psychological well-being (autonomy satisfaction, competence satisfaction, relatedness satisfaction, meaning, engagement and learning) and social well-being (social contribution, integration, actualisation, acceptance and coherence). Similar to previous studies (Janse van Rensburg et al., 2017; Rautenbach, 2015), all the FAWS dimensions yielded acceptable reliability coefficients, varying from 0.75 to 0.92. Discriminant validity was also supported, as the AVE per construct was greater than the associated squared correlation values.
The results indicated that PEF positively associated with WF. Therefore, when employees perceive high similarity between their own values and the values of their organisation, between the compensation they receive and the work they deliver, and between their job demands and their capabilities, they should experience elevated emotional well-being, psychological well-being and social well-being levels at work. The results are consistent with the TWA (Dawis & Lofquist, 1984), the ASA theory (Schneider, 1987) and other PEF theories, indicating that person-environment congruence equates to positive outcomes. The results also support the notion of cognitive appraisal theories of affect (Roseman, Spindel, & Jose, 1990; Scherer, 1999) that cognitive circumstantial evaluations yield affective responses, and the social identity theory (Tajfel & Turner, 1986), which proposes that workers who experience fit with their organisation's values become part of a 'psychological group'. This also concurs with prior findings (Janse van Rensburg et al., 2017).
WF negatively associated with ITL. Numerous theories suggest that WF elements relate to ITL.
Lee and Mitchell's (1994) unfolding model of voluntary turnover, as well as Mobley's (1977) turnover model, indicate that dissatisfied employees will start to explore other possibilities through a range of evaluation processes. Consistent with previous findings (Diedericks & Rothmann, 2014; Janse van Rensburg et al., 2017; Rothmann, 2013), employees will be less inclined to consider vacating their job when they flourish in the workplace.
WF positively associated with in-role performance and OCB. Numerous theories can explain the flourishing-performance relationship. The happy and/or productive worker thesis suggests that happy (predominantly measured by job satisfaction) employees are productive employees. Quantitative and qualitative reviews of the job satisfaction-job performance relationship have also shown that job satisfaction positively associates with job performance (Judge et al., 2001). Studies have also shown that psychological well-being predicts job performance (Cropanzano & Wright, 2001). Furthermore, the social exchange theory (Blau, 1964) proposes that when one party (the organisation) positively impacts another (the employee), the latter should return the favour to honour their part of the exchange. Therefore, employees who perceive that their needs, demands and the things they value are sufficiently attended to are more likely to experience a positive work environment, which should propel them towards better performance (in-role and extra-role).
Results showed a direct association between PEF and ITL, suggesting that PEF reduces participants' intent to leave, regardless of their flourishing levels. The ASA model (Schneider, 1987) suggests that employees who experience work environment incongruence should be more inclined to leave. Similarly, WF significantly associated with ITL, regardless of employee PEF levels. Thus, PEF and WF, both in their own right and independently, significantly associated with individuals' thoughts of leaving their organisation. As individuals experience fit with their school and job aspects, and experience links within the organisation (e.g. social integration), they have much more to sacrifice when leaving the school, resulting in lower ITL. This is inconsistent with previous findings (Janse van Rensburg et al., 2017). A possible explanation for this inconsistency could be ascribed to increased model complexity.
With regard to the indirect effect of PEF on in-role performance and OCB via WF, respectively, results confirmed this effect. Hence, PEF's association with in-role performance and OCB is an indirect one, suggesting that PEF increases participant performance (in-role and extra-role) as long as participant flourishing levels remain sufficiently high. Therefore, PEF should first elevate employee flourishing levels to subsequently increase in-role performance and OCB. Although the preceding associations have not been studied before, they seem consistent with Fishbein and Ajzen's (1975) framework, which contends that beliefs precede attitudes, intentions and ultimately behaviours.
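As a complement to the mediation results discussed above, the following is a minimal sketch of a bias-corrected bootstrap test for an indirect effect in the spirit of Hayes (2018), which the study ran in Mplus with 10 000 resamples. The data here are simulated; the coefficients and sample size merely echo the reported values and are not the study's data.

```python
# Bias-corrected bootstrap CI for an indirect effect a*b (X -> M -> Y),
# on simulated data whose parameters loosely echo the reported paths.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 258                                   # matches the study's sample size
pef = rng.normal(size=n)
wf = 0.8 * pef + rng.normal(size=n)       # a path (illustrative)
itl = -0.2 * wf - 0.6 * pef + rng.normal(size=n)  # b path and direct path

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]            # slope of M on X
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # slope of Y on M, given X
    return a * b

est = indirect(pef, wf, itl)
boot = np.empty(10_000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)           # resample cases with replacement
    boot[i] = indirect(pef[idx], wf[idx], itl[idx])

# Bias correction: shift the percentile endpoints by the bootstrap bias z0.
z0 = norm.ppf((boot < est).mean())
lo, hi = norm.cdf(2 * z0 + norm.ppf([0.025, 0.975]))
ci = np.quantile(boot, [lo, hi])
print(f"indirect effect = {est:.3f}, 95% BC CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

A CI excluding zero would support the corresponding indirect-effect hypothesis (H5-H7).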
Conclusion
Limitations
Several study limitations are prominent. Firstly, the cross-sectional research approach impedes the assessment of causality among the variables under scrutiny. Secondly, the study did not assess interpersonal PEF aspects such as person-group and person-supervisor fit. Thirdly, because of certain restrictions (e.g. occupation and geographical location), the generalisation of findings to other settings should proceed with caution. Lastly, a specific modelling strategy was used to assess the constructs; another strategy (e.g. bifactor modelling) may possibly yield different results.
Recommendations
In practice, the probability of employees experiencing different levels (poor, average and good) of fit and well-being is highly plausible. Therefore, it is unrealistic to expect organisations to have a universal blueprint for collectively addressing the fit and well-being of their workforce, because of the diversity of employee needs, beliefs, perceptions and attitudes. Although organisations should have generic fit and well-being initiatives embedded within their strategic framework (e.g. change management, organisational development, training and development, recruitment and selection), they should continually update and modify these strategies to ensure a healthy balance is maintained between individual (values, abilities and needs) and environmental (values, demands and supplies) characteristics. This will lay the foundation for a favourable work environment, an environment where employees can experience a sense of acceptance, enjoyment, integration, meaningfulness and relatedness (among others). When such an environment is institutionalised, talent retention and performance should follow.
Future studies should aim to explore additional outcomes and antecedents related to WF. Research should also aim to assess the causality between the constructs, as none of the constructs is static in nature. A bifactor modelling strategy could provide more clarity on the psychometric properties of the FAWS (Rautenbach, 2015) and PFS (Cable & DeRue, 2002) in larger samples.
FIGURE 1: The structural model - standardised solution with standard errors in parentheses.
TABLE 3: Descriptive statistics, reliabilities, correlation coefficients and discriminant validity. Note: All correlations are statistically significant (p < 0.01); correlations are presented below the diagonal, average variance extracted estimates on the diagonal and squared correlations above the diagonal. OCB, organisational citizenship behaviour; SD, standard deviation; M, mean.
TABLE 4: Difference testing for competing structural models.
TABLE 5: Indirect effect of person-environment fit on intention to leave, in-role performance and organisational citizenship behaviour via workplace flourishing.
2019-05-08T03:46:45.756Z
2019-01-09T00:00:00.000
{ "year": 2019, "sha1": "ddd518f8eb21481a525bcbc0790205deea8b50b7", "oa_license": "CCBY", "oa_url": "https://sajip.co.za/index.php/sajip/article/download/1549/2332", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "ddd518f8eb21481a525bcbc0790205deea8b50b7", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Psychology" ] }
256832187
pes2o/s2orc
v3-fos-license
The effectiveness of atrial fibrillation special clinic on oral anticoagulant use for high risk atrial fibrillation patients managed in the community Background Service gaps exist in oral anticoagulant (OAC) use among patients with atrial fibrillation (AF) in primary care. The purpose of this study was to explore the clinical effectiveness of a community dwelling Atrial Fibrillation Special Clinic (AFSC) run by primary care physicians by evaluating its impact on OAC use and the control of modifiable cardiovascular disease (CVD) risk factors in high risk AF patients. Method Quasi-experimental study was conducted in AFSC run by public primary care physicians in Hong Kong. Study subjects were high risk AF patients with CHA2DS2-VASc scores ≥ 2, who had been followed up (FU) at AFSC for at least one year from 01 August, 2019 to 31 October, 2020. OAC usage and modifiable CVD risk factor control were compared before and after one year of FU at AFSC. Drug-related adverse events, emergency attendance or hospitalisation episodes, survival and mortality rates after one year FU at AFSC were also reviewed. Results Among the 299 high risk AF patients included in the study, significant increase in OAC use was observed from 58.5% at baseline to 82.6% after one year FU in AFSC (P < 0.001). Concerning CVD risk factor control, the average diastolic blood pressure level was significantly reduced (P = 0.009) and the satisfactory blood pressure control rate in non-diabetic patients was markedly improved after one year FU (P = 0.049). However, the average HbA1c and LDL-c levels remained static. The annual incidence rate of ischaemic stroke/systemic embolism was 0.4%, intra-cranial haemorrhage was 0.4%, major bleeding episode was 3.2% and all-cause mortality was 4.3%, all of which were comparable to reports in the literature. Conclusion AFSC is effective in enhancing OAC use and maintaining optimal modifiable CVD risk factor control among high risk AF patients managed in primary care setting, and therefore may reduce AF-associated morbidity and mortality in the long run. Introduction Atrial fibrillation (AF) is a common type of arrhythmia encountered in primary care and is a cause of significant morbidity and mortality [1,2]. Globally, 33.5 million patients had AF in 2010 and AF affects approximately 1% of the population in Hong Kong. With the aging of the population, the number of new AF cases was estimated to be 4.7 million per year [3], with greater prevalence in elderly individuals and in patients with comorbidities [4,5]. Patients with AF have five-fold increased risk of stroke compared with non-AF patients [6], and the use of oral anticoagulation (OAC) significantly reduced the risk of stroke in AF patients [7]. Therefore, OACs are an integral part of AF management to prevent the thromboembolic events. Strict control of cardiovascular disease (CVD) risk factors is also an essential part of AF management. For example, studies have shown that early detection and optimal control of modifiable CVD risk factors such as hypertension (HT), diabetes mellitus (DM), obesity, congestive heart failure (CHF), myocardial infarction, valvular heart disease, smoking and alcohol consumption etc. could all effectively prevent the progression of AF and reduce AF related morbidity and mortality [8][9][10][11][12]. Despite all this evidence, service gaps exist in AF management, particularly in the persistently low utilization rate of OACs among AF patients [13][14][15]. For example, a study in U.S. 
showed that only 11-78.8% of indicated AF patients were put on OACs [16], while a study in China found that a total of 35.6% of indicated AF patients had received OACs and only 11.1% of them were using Novel OACs (NOACs) [17]. Similarly, a local study conducted in hospital setting revealed that only 23% of high risk AF patients had received OACs [18]. At this moment, there is no information on OAC use among AF cases managed in primary care setting and their CVD risk factor control. To address all these service gaps, the AF Special Clinic (AFSC) was established in the Department of Family Medicine and General Outpatient Clinics (Dept. of FM and GOPCs) of Kowloon Central Cluster of Hospital Authority of Hong Kong (HAHK) in June 2019. The aim of setting up this clinic is to provide holistic and comprehensive management to AF patients in the community. This study tried to explore the clinical effectiveness of AFSC by evaluating its impact on OAC use and the control of CVD risk factors among high risk AF patents managed by primary care physicians. We believe that AFSC would help enhance OAC utilization and improve CVD risk factor control, and hence reducing AF related mortality in the long run. Study design A quasi-experimental, pre-and post-test study design was used to compare the outcome parameters. [19]. According to the AF management guidelines from the European Society of Cardiology [20], the CHA 2 DS 2 -VASc score should be calculated for all AF cases to stratify their stroke risk. If the score ≥ 2, the patient is considered as a 'high risk' AF patient and OAC is recommended. If the score is 0 in males or 1 in females, the CVD risk is low and therefore no OAC therapy is recommended. In males whose score is = 1, OACs may be considered, and people's values and preferences should be considered [21]. Study subjects All high risk AF patients coded by the International Classification of Primary Care 2 nd version (ICPC-2)-code of "K78" (atrial fibrillation), whose CHA 2 DS 2 -VASc score was ≥ 2, had been followed up (FU) for at least one year at 5 AFSCs of HAHK from 01 August, 2019 to 31 October, 2020. AF patients were excluded if they had contraindications to NOAC therapy including known hypersensitivity, clinically significant active bleeding, significant inherited or acquired bleeding disorder, hepatic disease associated with coagulopathy, significant risk of major bleeding (such as current or recent gastrointestinal ulceration, presence of malignant neoplasms at high risk of bleeding, recent brain or spinal injury/surgery, recent intracranial haemorrhage), severe renal impairment (calculated creatine clearance < 30 mL/min for dabigatran and < 15 mL/ min for apixaban), pregnancy and breastfeeding. Patients who defaulted FU at AFSC, had incomplete data, transferred to be cared for by other specialists or were certified dead during the study period were excluded from the final data analysis. Management at AFSC The attending doctors at AFSC were experienced Family Medicine (FM) specialists who had received training on AF management via standardized educational talk. Patient epidemiological characteristics such as age, gender, smoking status, drinking status, comorbidities including HT, DM and CHF, past history of ischaemic heart disease (IHD), stroke/transient ischaemic attack (TIA) or intra-cranial haemorrhage (ICH) and type of AF (non-valvular, which confirmed by physical examination and previous echocardiography result) were reviewed. 
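A minimal sketch of the CHA2DS2-VASc scoring rule described above, using the standard component weights referenced in the ESC guideline; the function and argument names are illustrative, not from any clinical system.

```python
# CHA2DS2-VASc calculator implementing the standard scoring components.
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia_te, vascular_disease):
    score = 0
    score += 1 if chf else 0                 # C:  congestive heart failure
    score += 1 if hypertension else 0        # H:  hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age bands
    score += 1 if diabetes else 0            # D:  diabetes mellitus
    score += 2 if stroke_tia_te else 0       # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0    # V:  vascular disease
    score += 1 if female else 0              # Sc: sex category (female)
    return score

# Example: a 72-year-old woman with hypertension scores 3, i.e. a 'high risk'
# patient (score >= 2) for whom OAC is recommended.
print(cha2ds2_vasc(72, True, False, True, False, False, False))
```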
The CHA 2 DS 2 -VASc score and HAS-BLED score (Hypertension, Abnormal renal and liver function, Stroke, Bleeding tendency, Labile INRs, Elderly, Drugs or Alcohol), which predicts bleeding risk were calculated and documented. Baseline blood tests including complete blood picture, clotting profile, serum creatinine, alanine transaminase, glucose, HbA1c and lipid profile were checked. The estimated glomerular filtration rate (eGFR) was calculated by using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation [22]. With the introduction of NOACs to the Drug Formulary of GOPCs in HAHK in July 2019, AF patients whose CHA 2 DS 2 -VASc score was ≥ 5 could obtain NOACs for free in the HA Pharmacy. For those whose CHA 2 DS 2 -VASc score was between 2-4, the patients had to purchase the NOAC as a self-financed item (SFI) from community pharmacy. Two types of NOACs were available in AFSC in KCC, Dabigatran and Apixaban. Patients could also choose other NOACs as SFIs, such as rivaroxaban or endoxaban. If AF patients were found to have moderate to severe mitral valve stenosis or had undergone valvular replacement therapy, they were referred to a specialist setting for warfarin treatment. The potential risks and benefits of anticoagulation therapy were thoroughly discussed with the patients by the attending FM doctor. Updated international guidelines and appropriate local therapeutic instructions were also available on our department website. At each FU visit at AFSC, patient medication adherence and adverse effects were assessed. Blood test results, accident and emergency department (AED) admission or hospitalizations were also documented. Data collection The list of patients fulfilling the inclusion criteria was retrieved from the Clinical Data Analysis and Reporting System (CDARS) of HA. Patient age, gender, smoking status and alcohol status were retrieved from the Clinical Management System (CMS) of HA. Their clinic blood pressure (BP) level on the first AFSC attendance and after one year FU were collected. The biochemical parameters including HbA1c and LDL-c levels before AFSC recruitment and after one year FU at AFSC were compared. Their AED attendance, hospitalization records and mortality data during the study period were also retrieved from the CMS. Outcome measures The primary outcomes include the following: 1) Total number of patients who agreed to NOAC treatment after recruitment in the AFSC, and 2) Modifiable CVD risk factor control, in terms of BP, HbA1c and LDL-c levels at baseline and after one year FU. 
• For HT patients without DM, BP < 140/90 mmHg was defined as satisfactory control • For HT patients with DM, BP < 130/80 mmHg was defined as satisfactory control • For DM patients, HbA1c < 7% was defined as satisfactory glycaemic control • For patients without history of CVD, LDL-c < 2.6 mmol/L was defined as satisfactory lipid control • For patients with history of CVD, LDL-c < 1.8 mmol/L was defined as satisfactory lipid control The secondary outcomes after one year FU include the following: 1) Drug-related adverse events 2) Major bleeding and non-major bleeding episodes 3) Stroke or systemic embolism events 4) AED attendance or hospitalisation episodes 5) Survival and mortality rates Major bleeding episodes (MBEs) were defined per the International Society on Thrombosis and Hemostasis (ISTH) criteria as one of the following [23]: fatal bleeding, and/or symptomatic bleeding in a critical area or organ, such as intracranial, intraspinal, intraocular, retroperitoneal, intra-articular or pericardial, or intramuscular with compartment syndrome, and/or clinically overt bleeding with a decrease in the haemoglobin level of ≥ 2 g/dl or transfusion of ≥ 2 units of packed red cells. Any reported bleeding episode that did not meet the criteria for major bleeding was defined as a non-major bleeding episode (NMBE). The project terminated when the AF patient had completed one year FU at AFSC or developed serious adverse effects related to intervention with supportive evidence. Sample size calculation Based on the local study of AF prevalence and NOAC utilization [18,24] as well as the level of significance (α = 0.05), the power of the test (β = 0.2 power of the test 80%) and the effect size (d = 0.5), the minimum sample size is 283. To allow room for case exclusion and assume a 15% dropout rate, 325 people were recruited. Statistical analysis All data were entered and analyzed using computer software (Windows version 23.0; SPSS Inc, Chicago [IL], US). Patient characteristics were described using proportions for categorical variables and means with standard deviations for continuous variables. Baseline characteristics are presented as percentages for categorical variables and mean ± standard deviation (SD) for continuous variables. The Chi-square test was used for univariate comparisons of categorical variables between groups. Student's t test was used for continuous variables. All statistical tests were two sided, and a P value of less than 0.05 was considered statistically significant. Ethical approval The study was approved by the Research Ethics Committee of Kowloon Central Cluster of Hospital Authority of Hong Kong, and the approval number was KC/ KE-19-0143/ER-3. Results In total, 325 high risk AF patients had attended AFSC during the study period, among which 194 patients had already taken NOAC whereas 131 patients had not. After thorough discussion with the attending FM specialist doctor in AFSC, 72 patients who did not take NOAC before agreed to start NOAC, whereas only 59 patients still declined it. Among the NOAC group, a total of 19 patients were excluded after a one-year FU, with 6 FU in the Specialist Out-patient Clinic, 2 defaulted FU and 11 patients died. In the non-NOAC group, 7 patients were excluded, with 2 defaulted FU, 3 cases with incomplete data and 2 patients died. 
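As a minimal sketch of the pre/post comparison specified above, the snippet below applies the chi-square test named in the Statistical analysis section. The counts are back-calculated from the proportions reported in the Results (58.5% and 82.6% of the 299 analysed patients); since the two time points involve the same patients, a McNemar test would be a defensible alternative, but the chi-square test is what the paper specifies.

```python
# Chi-square comparison of NOAC use at baseline versus one-year follow-up.
from scipy.stats import chi2_contingency

n = 299
on_baseline, on_followup = 175, 247  # 58.5% and 82.6% of 299, respectively
table = [[on_baseline, n - on_baseline],
         [on_followup, n - on_followup]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```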
Results In total, 325 high-risk AF patients attended the AFSC during the study period, among whom 194 had already been taking a NOAC whereas 131 had not. After thorough discussion with the attending FM specialist doctor at the AFSC, 72 patients who had not previously taken a NOAC agreed to start one, whereas 59 patients still declined. In the NOAC group, a total of 19 patients were excluded after one year of FU: 6 transferred FU to the Specialist Out-patient Clinic, 2 defaulted FU and 11 died. In the non-NOAC group, 7 patients were excluded: 2 defaulted FU, 3 had incomplete data and 2 died. Among the 11 patients who died in the NOAC group, 1 died of ICH at 6 months after initiation of NOAC (incidence rate 0.4%) and 1, who had already been taking a NOAC prior to attending the AFSC, died of ischaemic stroke (incidence rate 0.4%). The causes of the other 9 deaths were not NOAC related and included pneumonia, IHD and cancer. The one-year all-cause mortality rate in the NOAC group was 4.3%. The 2 patients who died in the non-NOAC group both died of pneumonia, giving a one-year all-cause mortality rate of 3.7%, which was not significantly different from that of the NOAC group (P = 0.85). After case exclusion, a total of 299 cases, including 247 patients on NOAC and 52 patients who declined NOAC, were included in the final data analysis. The flowchart of case recruitment for this study is summarized in Fig. 1. Primary outcomes The proportion of AF patients on NOACs showed a statistically significant increase from 58.5% at baseline to 82.6% after attending the AFSC (P < 0.001), as shown in Table 2. Among them, 105 (35.1%) patients were prescribed dabigatran, 139 (46.5%) were on apixaban, and 3 (1%) were on rivaroxaban as an SFI. Table 3 summarizes modifiable CVD risk factor control in patients on NOACs at baseline and after one year of FU. Among the 236 patients with HT, the average systolic BP (SBP) was 128.1 (± 13.3) mmHg and the average diastolic BP (DBP) was 71.0 (± 11.5) mmHg. After one year of FU, SBP remained static at 126.9 (± 10.9) mmHg (P = 0.30), while DBP decreased significantly to 68.3 (± 10.6) mmHg (P = 0.009). For hypertensive AF patients without DM, 81.7% (n = 89) achieved satisfactory BP control, and the rate increased further to 90.8% (n = 99) after one year of FU (P = 0.049). In hypertensive patients with DM, the BP control rate remained static after one year of FU (P = 0.52). Among the 130 AF patients with comorbid DM, the average HbA1c level (6.68% versus 6.65%) and the satisfactory glycaemic control rate remained static from baseline to one year of FU (P = 0.71 and P = 0.27, respectively). The average LDL-c level at baseline and after one year of FU was also comparable (1.70 mmol/L versus 1.62 mmol/L, P = 0.08), and subgroup analysis showed that the LDL-c control rate remained static in both the groups with and without a history of CVD (P = 0.05 and P = 0.72, respectively). Secondary outcomes Upon completion of the 12-month FU, a total of 12 bleeding episodes were observed, of which 8 were MBEs, at a rate of 3.2%/year, and 4 (1.6%/year) were NMBEs. We also observed a total of 65 AED attendance/hospitalisation events, an incidence rate of 26.3%. Causes of admission included pneumonia, CHF, IHD, atypical chest pain, syncope, falls with or without fracture, and cancer. Two patients complained of non-specific general discomfort, tiredness and muscle discomfort after taking NOACs and consequently declined further NOAC use. There were no serious adverse effects observed. Discussion In our study, there was a significant increase in NOAC utilization after the AF cases were enrolled for care in the AFSC. After one year of FU at the AFSC, 82.6% of AF patients had been put on a NOAC, a rate significantly higher than those reported in the literature. Indeed, there are many barriers to initiating OAC treatment among AF patients. For example, overestimation of the bleeding risk, and concerns associated with advanced age, such as fall risk, are well-known obstacles [25]. Furthermore, the lack of reversal agents may also affect patients' decisions to use NOACs [26].
The reasons contributing to the satisfactory utilization rate of NOACs in our study were multi-factorial. First, most of the AF cases referred to the AFSC were in high-risk or very-high-risk groups, and they were therefore more willing to try a NOAC after discussion with the doctor. Second, the availability of NOACs, including apixaban and dabigatran, at the AFSC of HAHK since March 2019, together with the HAHK policy that AF patients whose CHA2DS2-VASc score is ≥ 5 can be provided with NOACs for free, has helped ease the financial burden on many high-risk AF patients, many of whom would otherwise have had to purchase the NOACs as SFIs. Based on these positive results, we would like to propose to the Hong Kong government that free NOACs be provided for all high-risk AF patients whose CHA2DS2-VASc score is ≥ 2, although balancing the use of public resources and costs is also important. Third, the attending doctors at the AFSC are experienced FM specialists who are skilled in AF management. They provided a comprehensive assessment of AF patients' background characteristics and comorbidities, and gave thorough explanation of, and education about, NOAC use. In recent years, growing evidence has supported that an integrated multidisciplinary approach, combining treatment with management of modifiable CVD risk factors and underlying conditions, could slow progression and improve the outcomes of AF [27]. Greater reductions in BP and better glycaemic control and lipid profiles were associated with decreased AF frequency, duration and symptoms [12]. The AFSC aimed to provide comprehensive care, with treatment and tailored advice and education on risk factor management, to AF patients by targeting their underlying medical conditions. Our study showed a reduction in average DBP and more non-DM hypertensive patients with satisfactory BP control after FU at the AFSC. Although HbA1c and LDL-c levels showed no significant change after one year of FU, their satisfactory control rates remained consistently high from baseline to one year of FU. Therefore, the AFSC could help AF patients maintain optimal CVD risk factor control, which may subsequently prevent the development of AF-related complications. The safety and efficacy of NOACs in the general population have been well demonstrated by various clinical trials in recent years. For example, a retrospective observational study found that both apixaban and dabigatran had lower incidences of ischaemic stroke (1.3-1.4%) and MBE (3.6%) than warfarin [28]. Our study showed comparable results, with an annual MBE incidence of 3.2%. The lower incidence of ischaemic stroke (0.4%) in our study might be due to the strict and satisfactory CVD risk factor control among AF patients managed at the AFSC. Concerning the mortality rate, our study showed that the all-cause mortality rate with NOAC use after one year was 4.3%, which is consistent with findings from the UK, where a large cohort study showed an all-cause mortality rate of 4% [29]. Therefore, the use of NOACs in the AFSC appeared safe and effective, with stroke risk, bleeding risk and mortality rate comparable to findings in the literature. This is the first study to assess OAC use and CVD risk factor control among high-risk AF patients managed by primary care physicians. It provides important background information on OAC use in the public primary care setting and helps to identify service gaps and direct future service enhancement strategies.
In addition, all parameters, including BP, HbA1c and LDL-c levels, were based on objective assessment data retrieved from the CMS; thus, recall bias and data entry bias were minimized. That said, this study has several limitations. First, as it was performed in the public general out-patient clinics of a single HA cluster, selection bias might exist, and the results may not be applicable to the private sector or to secondary care settings. In addition, most of the AF cases assessed at the AFSC had a higher CHA2DS2-VASc score of ≥ 5 (90.3%) because of the HAHK Drug Formulary revamp, which might have further confounded the findings of the study. The much higher NOAC utilization rate achieved at the AFSC may not be reproducible in settings where most AF patients have a lower CHA2DS2-VASc score of 2-4. Second, owing to the intrinsic limitations of the quasi-experimental design without a control group, no causal relationship could be established. Third, the one-year FU duration may not be long enough to assess the long-term effects of NOAC use among AF patients. In this regard, a randomized controlled design with a control group and a longer FU period (more than one year) would help evaluate the efficacy of the AFSC more comprehensively. Furthermore, a study of the underlying obstacles to OAC prescription, and subgroup analyses of the safety and effectiveness of NOACs, may help physicians make more sensible clinical decisions. Conclusion The AFSC is effective in enhancing OAC use and maintaining optimal modifiable CVD risk factor control among high-risk AF patients managed in the primary care setting. With a much higher rate of OAC use and better CVD risk factor control, it is postulated that AF-associated morbidity and mortality will be reduced in the long run.
Trends of lymphoma incidence in US veterans with rheumatoid arthritis, 2002–2017 Objective Past epidemiological studies have consistently demonstrated a link between rheumatoid arthritis (RA) and the incidence of lymphoma, and it has been posited that high systemic inflammatory activity is a major risk determinant of lymphomagenesis. Given advances in the therapeutic armamentarium for RA management in recent years, the resulting lower level of disease activity could have led to a decline in lymphoma incidence in patients with RA. This study examined recent trends in lymphoma incidence in US veterans with RA. Methods Patients with RA were identified in the Veterans Affairs (VA) Corporate Data Warehouse. Lymphoma incidence was identified through the end of 2018 from the VA Central Cancer Registry and compared among patients diagnosed during 2003–2005, 2006–2008, 2009–2011 and 2012–2014. Results Among persons diagnosed with RA during 2003–2005, the incidence of lymphoma in the next 6 years was 2.0 per 1000 person-years. There was a steady decline in lymphoma incidence during the corresponding 6 years following diagnosis in the subsequent three cohorts, with a rate of 1.5 per 1000 person-years in the 2012–2014 cohort (incidence relative to that in the 2003–2005 cohort = 0.79 (95% CI 0.58 to 1.1)). There was no similar decline in lymphoma incidence in VA patients diagnosed with osteoarthritis. Conclusion We observed a decline in lymphoma incidence in recent years among American veterans with RA. Further studies are needed to evaluate the specific factors driving this decline. INTRODUCTION Collectively, lymphoid neoplasms are the fourth most common cancer and the sixth leading cause of cancer death in the USA. Epidemiological studies over the past decades have consistently demonstrated a link between rheumatoid arthritis (RA) and lymphomas, the association being strong for both non-Hodgkin lymphoma (NHL) and Hodgkin lymphoma (HL). 1 2 Mellemkjaer et al reported a relative risk (RR) of 1.7 (95% CI 1.5 to 2) for all lymphatic and haematopoietic cancers in patients with RA, an RR of 2.4 for NHL and an RR of 3.4 for HL. 3 There has been ongoing concern regarding whether the elevated risk stems from the use of immunosuppressive therapies, particularly biologics like tumour necrosis factor inhibitors (TNFi). 4 5 Recent observational studies with varied settings and designs have not found the risk of lymphoma to be increased by the use of TNFi agents. 5 6 In the largest case-control study reported to date, with 378 Swedish patients with RA with lymphoma and 378 controls, Baecklund et al 7 concluded that 'high inflammatory activity, rather than its treatment, is a major risk determinant' for lymphoma. Patients with RA with high cumulative disease activity had nearly a 60-fold increased risk of lymphoma compared with patients with low disease activity. The management of RA has dramatically improved over the years since the introduction of the first TNFi agent in November 1998 and the subsequent approval of other potent biologic or conventional synthetic disease-modifying antirheumatic drugs (bDMARDs, csDMARDs). 8 What does this study add? ► To date, there has been limited evaluation of the trends in lymphoma incidence in patients with contemporary rheumatoid arthritis (RA). ► Our study observed a decline in lymphoma incidence in recent years among patients with RA, but not among patients with osteoarthritis. How might this impact on clinical practice? ► Further studies looking at specific factors associated with the declining lymphoma incidence rates in patients with RA are needed. In 2010, an international expert consensus panel published treatment recommendations
for RA 9 that emphasised a treat-to-target (T2T) strategy of individualising and escalating treatment to achieve the lowest disease activity or remission in patients with RA. Studies from the Dutch Rheumatoid Arthritis Monitoring Remission Induction Cohort showed that implementation of the T2T strategy in daily clinical practice for very early RA led to a high frequency of remission that was sustained in the majority of subjects. 10 Early diagnosis and treatment with csDMARDs (eg, methotrexate) and subsequently with other bDMARDs (eg, inhibitors of tumour necrosis factor or interleukin 6) improve patient outcomes and prevent RA-related disability. 10 11 Meanwhile, the treatment of osteoarthritis (OA) has not evolved in recent years. Taking into consideration the improvement in RA treatment options and the evolution of RA treatment strategies, we hypothesised that incidence rates of lymphoma in patients with RA have declined over more recent years, but not in patients with OA. With respect to lymphoma incidence, patients with OA are likely to represent the general population because of the minimally inflammatory nature of this condition for which care is typically sought. METHODS This study used data gathered during routine care to identify a cohort of patients with a diagnosis of RA and a comparison group of patients with OA who received care through the nationwide Veterans Health Administration (VHA) healthcare network. Identification of patients and all analyses were performed through the Veterans Affairs (VA) Informatics and Computing Infrastructure (VINCI), an integrated infrastructure system for VHA's electronic medical records. Patients: We identified patients with RA diagnosed between 1 January 2003 and 31 December 2017 using the VA Corporate Data Warehouse (CDW) on VINCI based on the following inclusion criteria: (1) adults >18 years of age with two or more RA diagnostic codes (International Classification of Diseases (ICD)-9 (714.XX) or ICD-10 (M05.XX, M06.XX)) at least 6 months apart during 2002-2017, with at least one visit in a rheumatology clinic; (2) no history of other autoimmune diseases associated with lymphoma (eg, Sjögren's syndrome, inflammatory bowel disease, celiac disease), based on diagnoses during the 12 months prior to RA diagnosis; and (3) no history of lymphoma diagnosis within 6 months after the first diagnosis of RA in the VA health system. Patients with OA: Patients with OA were identified based on criteria similar to those for the RA cohort: adults >18 years of age with a diagnosis coded as ICD-9 (715.XX) or ICD-10 (M15-M19) at least twice within 1 year; at least one VA encounter 12 or more months prior to OA diagnosis; and no history of autoimmune diseases or lymphoma in the prior 12 months. For each patient with RA, up to two patients with OA were selected, with frequency matching based on initial year of RA or OA diagnosis (categorised as 2003-2005, 2006-2008, 2009-2011, 2012-2014), age (<55, 55-64, 65-74 and ≥75 years old), sex, race (non-Hispanic white, non-Hispanic black, other non-white, missing) and number of VA primary care clinic visits during the 12 months prior to initial RA or OA diagnosis (categorised as 1, 2-4, 5-7, 8+); a minimal sketch of this matching step follows below. For each patient, the 'index date' was defined as the date of the initial RA or OA diagnosis.
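As an illustration of this frequency-matching step, the sketch below samples up to two OA controls per RA patient within each matching stratum using pandas. It is a hypothetical reconstruction: the column names and the sampling routine are ours, not the actual VINCI code.

```python
# Illustrative frequency matching of up to two OA controls per RA patient
# on diagnosis-year cohort, age band, sex, race and prior primary-care visits.
# Column names are hypothetical; this is not the actual VINCI pipeline.
import pandas as pd

STRATA = ["dx_cohort", "age_band", "sex", "race", "visit_band"]

def frequency_match(ra: pd.DataFrame, oa: pd.DataFrame, ratio: int = 2,
                    seed: int = 0) -> pd.DataFrame:
    """Sample OA controls so each stratum holds up to `ratio` x the RA count."""
    targets = ra.groupby(STRATA).size() * ratio
    sampled = []
    for key, pool in oa.groupby(STRATA):
        n = min(int(targets.get(key, 0)), len(pool))  # never exceed the pool size
        if n > 0:
            sampled.append(pool.sample(n=n, random_state=seed))
    return pd.concat(sampled, ignore_index=True)
```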
Outcomes: Our primary outcome was lymphoma incidence (ICD-9 codes 200.x-202.x and ICD-10 C81-85.x) 12 through 31 December 2018. The date of the first lymphoma diagnosis was required to be at least 6 months after the date of initial RA or OA diagnosis. Lymphomas are a heterogeneous group of diseases that can be classified into two broad subtypes: HL and NHL. Due to small numbers of cases in each subtype of NHL or HL, we chose to restrict our evaluation to these two major types. The VINCI Cancer Module derived from the VA Central Cancer Registry (VACCR), a cancer registry that contains information on newly diagnosed cancers at the VA from 1995 onwards, was used for cancer identification. 13 The VACCR has served as the gold standard of cancer ascertainment for the last decade, 14 with cancer registrars at the VA manually abstracting case data in conformance with the standards set by the North American Association of Central Cancer Registries (NAACCR). 13 Statistical analysis: Since time since RA diagnosis predicts the risk of malignant lymphoma, 15 we analysed time trends in the incidence of lymphoma in the two groups of patients using a follow-up period limited to 6 years after the index dates. In a sensitivity analysis, we evaluated the trends with maximum follow-up truncated to 3 years, given that in the 6-year analysis some members of the most recent year-of-diagnosis cohort (2012-2014) did not have the full 6 years of follow-up. Proportional hazards regression was used, in which the dependent variable was the number of days between the index date and the date of first lymphoma diagnosis (for patients with lymphoma), or the number of days between the index date and the first censoring event (for patients without lymphoma); a minimal sketch of this setup appears at the end of this section. Censoring events included development of lymphoma (if applicable), death (from the VA Vital Status file) or end of the follow-up period (31 December 2018). For analysis of lymphoma subtypes, we also used the Pearson product-moment correlation to measure the strength of association between lymphoma incidence per 1000 patient-years and categorised time intervals, as well as linear regression to fit the trend in lymphoma incidence over time. RESULTS We identified 43 776 VA patients with RA meeting our eligibility criteria and 79 772 eligible, matched patients with OA. Patient characteristics for these two cohorts were similar, as expected from the matching protocol (table 1). Twenty-five per cent of the patients in each group were aged <55 years and 61% were between 55 and 74 years. Ninety per cent of patients with OA were male and 76% were non-Hispanic white. Among patients with RA, 88% were men and 77% were non-Hispanic white. In our primary analysis, which limited follow-up to 6 years since diagnosis for each cohort, there were 417 lymphomas in the OA group and 347 in the RA cohort (table 2), with mean follow-up durations of 4.5 and 4.7 years, respectively. Among patients with RA, there was a steady decline in lymphoma incidence during the period of study: the incidence among patients diagnosed with RA during 2012-2014 was 1.5 per 1000 person-years, versus 2.0 per 1000 person-years among those diagnosed during 2003-2005 (relative incidence 0.79, 95% CI 0.58 to 1.1).
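The proportional hazards setup described under 'Statistical analysis' above can be sketched with the lifelines package. This is a hedged illustration under assumed column names (days, lymphoma, and cohort dummies), not the authors' code.

```python
# Illustrative sketch of the time-to-lymphoma model (not the authors' code).
import pandas as pd
from lifelines import CoxPHFitter

MAX_FU_DAYS = 6 * 365.25  # primary analysis truncated follow-up at 6 years

def fit_cohort_trend(df: pd.DataFrame) -> CoxPHFitter:
    """Cox model of lymphoma hazard by year-of-diagnosis cohort.

    Expects hypothetical columns: 'days' (index date to first lymphoma or
    censoring event), 'lymphoma' (1 = event, 0 = censored at death or
    31 December 2018) and dummy indicators for the later cohorts.
    """
    d = df.copy()
    late = d["days"] > MAX_FU_DAYS
    d.loc[late, "lymphoma"] = 0      # events beyond 6 years count as censored
    d.loc[late, "days"] = MAX_FU_DAYS
    cph = CoxPHFitter()
    cph.fit(d[["days", "lymphoma", "cohort_2006_08", "cohort_2009_11", "cohort_2012_14"]],
            duration_col="days", event_col="lymphoma")
    return cph  # hazard ratios are relative to the 2003-2005 reference cohort
```

In this framing, exp(coef) for the 2012-2014 dummy would play the role of the relative incidence of 0.79 reported above.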
DISCUSSION During the first decade and a half of the 21st century, VA patients with RA who were more recently diagnosed experienced a lower subsequent incidence of lymphoma than those diagnosed in earlier years. During this same time period, there was no corresponding decline in lymphoma incidence in patients with OA. The declining rate appeared to be largely driven by a decreasing incidence of NHL; the numbers for other lymphoma subtypes (eg, HL, follicular lymphoma) were too small to draw meaningful conclusions about trends in their incidence. In Swedish patients with RA diagnosed between 2004 and 2012, the risk of lymphoma was not lower than in patients with RA diagnosed in 1997-2003. 16 There are several factors that can potentially explain the differences from our results. Of importance, our study extends the ascertainment of lymphoma incidence through 2017, whereas Hellgren et al studied patients with RA only up to 2012. The two cohorts of patients with RA were also different demographically, as our VA cohort was predominantly male with some racial diversity, compared with patients in the Swedish Rheumatology Quality register (approximately 70% female and nearly all white). Given that systemic inflammation has been postulated to be a strong risk factor for lymphomagenesis in RA, 7 17 we speculate that the declining lymphoma incidence in RA might be related to better disease control through early and intensive treatment in recent years. Ng et al evaluated trends in the use of DMARDs in patients with RA in the VA medical system and found that use of methotrexate as the first DMARD increased from 39.9% in 1999-2001 to 57.2% in 2008-2009 (p<0.001). 18 They also showed that patients with RA diagnosed in 2008-2009 had a 74% higher chance of an earlier start on biologics than those diagnosed in 1999-2001, and that the time interval between RA diagnosis and treatment with DMARDs and biological agents decreased over time (median: 51 days in 1999-2001 to 28 days in 2006-2007). In a more recent evaluation, Walsh et al found that the percentage of RA veterans receiving DMARD treatment (non-biologic or biologic) increased between 2007 and 2015 (from 50.4% (95% CI 47.5 to 53.2) to 68.6% (95% CI 65.6 to 71.4)). 19 Also, the possible contribution of direct lymphoma suppression (as opposed to indirect suppression via reduced inflammatory biology) with use of an anti-CD20 monoclonal antibody such as rituximab could be another explanation that requires further exploration. There have been several studies evaluating the association between immunosuppressive medications used for RA treatment and lymphoma incidence. Baecklund et al observed a reduction in lymphoma incidence among persons who were given oral steroids (OR 0.6) and intra-articular steroids (OR 0.4 (95% CI 0.2 to 0.6)), after adjustment for disease activity and DMARD use. 7 They further observed that a total duration of oral steroid treatment of <2 years was not associated with lymphoma risk (OR 0.87, 95% CI 0.51 to 1.5), whereas total treatment of >2 years was associated with a lower lymphoma risk (OR 0.43, 95% CI 0.26 to 0.72). 20 Several studies have not found an increased risk of NHL in methotrexate-treated patients with RA, 21 22 whereas Mariette et al found an increased risk of HL, but not NHL, in a prospective 3-year study of methotrexate-treated French patients with RA compared with the general population. 23 Even though TNFi have been a topic of scrutiny regarding lymphoma, most of the evidence from multiple large, robust registry-based studies from different countries, including Sweden, the USA and the UK, suggests that the risk of lymphoma is not increased in TNFi users compared with patients with RA on csDMARDs.
6 16 24 Mercer et al recently reported results from a large collaborative effort of multiple European registries, including >120 000 patients with RA, and found no evidence of any modification of the distribution of lymphoma subtypes in patients with RA treated with TNFi compared with bio-naïve patients. 25 Data regarding some of the newer bDMARDs and targeted synthetic DMARDs (tsDMARDs), like baricitinib, are currently limited. There remains a need for studies evaluating the safety of multiple bDMARDs and, especially, the tsDMARDs. The strengths of this study include the use of the large national VA database, the use of the VACCR for identifying lymphoma cases, and longitudinal follow-up. Having a large cohort of patients with RA allowed us to examine trends in a relatively rare outcome like lymphoma. The VACCR strives to maintain the NAACCR standards, whereby cancer registrars at VA medical facilities across the country abstract case data. If a VA patient's cancer diagnosis is made outside the VA, the VACCR will capture those cases as well if the patient subsequently receives care within the VA. 13 One limitation of the study is the use of administrative code-based algorithms for the diagnosis of RA and OA, which introduces the possibility of misclassification bias. However, the algorithms we used have been shown to have high sensitivity and positive predictive value for RA. 26 The VACCR ascertains approximately 90% of all VA cancers. We assume this missingness to have been similar throughout the period of the study, so it is unlikely to affect our results to any appreciable degree. Another limitation is that the most contemporary cohort (2012-2014) in the study did not have follow-up for the full 6 years. Given that the incidence of lymphoma rises with increasing time since RA diagnosis, this will lead to some confounding and will tend to exaggerate a reduction in incidence in that cohort. In addition, the need to restrict follow-up for lymphoma to the first 6 years following diagnosis precludes any conclusion regarding incidence in later years. Another consideration is the fact that many veterans enrolled in the VA also receive healthcare in other systems. Schwab et al have shown that most US veterans with RA who access VA care use the VA as their primary source of arthritis care, and only 6% of dual-care users in their study had non-VA haematologists/oncologists. 27 We did not adjust for disease activity or disease-modifying antirheumatic drug use in these analyses. Finally, the relatively small number of women in our study population did not allow for a separate analysis to be performed in them and limits the generalisability of our results. To conclude, we observed a trend towards reduced incidence of lymphoma among US veterans diagnosed with RA in recent years relative to those diagnosed in earlier years. Further studies looking at specific factors associated with the declining lymphoma incidence rates in patients with RA are needed.
Clinical Characteristics Associated With Aspiration or Penetration in Children With Swallowing Problems Objective To evaluate the demographic characteristics of children with suspected dysphagia who underwent a videofluoroscopic swallowing study (VFSS) and to identify factors related to penetration or aspiration. Methods Medical records of 352 children (197 boys, 155 girls) with suspected dysphagia who were referred for VFSS were reviewed retrospectively. Clinical characteristics and VFSS findings were analyzed using univariate and multivariate analyses. Results Almost half of the subjects (n=175, 49%) were under 24 months of age, with 62 subjects (18%) born prematurely. The most common condition associated with suspected dysphagia was central nervous system (CNS) disease. Seizure was the most common CNS disorder in children 6 months old or younger, while brain tumor was the most common one in school-age children. Aspiration symptoms or signs were the major cause of referral for VFSS in children, except for infants 6 months old or younger, in whom half of the subjects showed poor oral intake. Penetration or aspiration was observed in 206 of 352 children (59%). Subjects under two years of age who were born prematurely at less than 34 weeks of gestation were significantly (p=0.026) more likely to show penetration or aspiration. Subjects with congenital disorders with swallow-related anatomical abnormalities had a higher percentage of penetration or aspiration, with marginal statistical significance (p=0.074). Multivariate logistic regression analysis revealed that age under 24 months and an unclear etiology for dysphagia were factors associated with penetration or aspiration. Conclusion Subjects with dysphagia under 24 months of age with a preterm history and an unclear etiology for dysphagia may require VFSS. The most common condition associated with dysphagia in children was CNS disease. INTRODUCTION The prevalence of feeding problems has been estimated at 33% to 80% in children with developmental disorders [1,2]. Feeding and swallowing disorders during childhood are increasing. They typically occur in conjunction with multiple and complex medical and developmental conditions [3,4]. Approximately 37% to 40% of children assessed for feeding and swallowing disorders were born prematurely at less than 37 weeks of gestation [5,6]. Increased survival rates of children with histories of prematurity, low birth weight, and complex medical conditions might explain the recent increase in pediatric dysphagia [7]. Swallowing is a dynamic process. The videofluoroscopic swallowing study (VFSS) is generally considered a reliable and safe method to evaluate dysphagia in the pediatric population [8]. Nonetheless, the procedure exposes children to radiation. In addition, it provides only a brief sample of swallowing performance and can be an unpleasant and frightening experience for some children. Increased use of VFSS in the pediatric population has led to queries regarding its overuse in evaluating swallowing difficulties [9]. The benefits of the test should be weighed against its risks, particularly in infants. Therefore, before considering a VFSS referral, a review of medical, developmental, and feeding history should be conducted to comprehensively evaluate children with dysphagia. At present, a standard clinical assessment tool for pediatric dysphagia has yet to be validated, and indicators have not been established to determine when a VFSS is necessary.
Consequently, evaluating the severity of dysphagia is more challenging in children than in adults. Moreover, there is little understanding of the characteristics of children with severe dysphagia, especially those with etiologies other than neurologic disorders. The purpose of this study was to evaluate the demographic data of children with suspected dysphagia referred for VFSS, including age, prematurity, underlying medical conditions, reason for referral, and diet status at the time of the study, and to identify clinical factors related to penetration or aspiration confirmed by VFSS. Subjects The study included 352 children under 19 years of age who were referred for VFSS from January 2006 to December 2011. Their developmental and medical histories in the medical charts were reviewed in detail. Subjects included 197 boys and 155 girls with a mean age of 49.3 months (standard deviation, 56.8 months). For subjects who underwent more than one VFSS, data from the first VFSS were used. Subjects were divided into five groups based on normal developmental feeding behavior. Generally, infants under 6 months are breastfed or bottle-fed. Progression to a transitional diet (weaning food) is completed between 7 months and 2 years. At the age of 6 years, coordination of mastication matures [10]. Children begin elementary school when they are 7 years old, and those between 13 and 18 years of age are classified as adolescents. We used adjusted age during the first 24 months of life in preterm infants of less than 37 weeks' gestational age [11]; a short sketch of this correction appears at the end of this subsection. For example, a child born at 33 weeks' gestational age whose chronological age was 12 months was assigned an adjusted age of 11 months. For children older than 25 months, age was described in years. Assumptions about feeding behavior and swallowing safety were based on the age and overall developmental status of the subject. Videofluoroscopic swallowing study VFSS was performed using a Hitachi Medix 3000 table unit fluoroscope (Hitachi Medical Corp., Tokyo, Japan). Frame-by-frame images were acquired as digital imaging files using a computer-based image processing system equipped with a digital frame grabber board (Pegasus HD/SD Board; Grass Valley Inc., Honorine, France). A clinician experienced in feeding and swallowing disorders conducted each study. Subjects were placed in an upright sitting position during the studies. For those who were unable to maintain a sitting posture, a reclining position was adopted. VFSS was performed using the protocol initially described by Logemann [12], with modifications. Parents prepared bottles or weaning food, if indicated. All bottled milk or weaning diet was mixed with liquid barium immediately prior to VFSS. For infants, bottled milk mixed with liquid barium was presented initially. For infants older than 6 months, curd-type yogurt or prepared weaning food was presented subsequently. For children older than 1 year, 2 and 5 mL of diluted barium (35% wt/vol), pudding, curd-type yogurt, rice porridge, and steamed rice in a spoon were used. Food was given to the children by a physiatrist or family caregiver. Images were analyzed by a clinician under the supervision of a senior clinician who had at least 2 years' experience with VFSS.
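The age correction for preterm infants can be written out as a small helper. Note that this is our reconstruction from the worked example in the text (33 weeks' gestation and 12 months chronological age giving 11 months adjusted), which implies a correction relative to 37 weeks; the function name is ours.

```python
# Hypothetical reconstruction of the adjusted-age rule; correction relative to
# 37 weeks is inferred from the worked example in the text.
def adjusted_age_months(chronological_months: float, ga_weeks: float) -> float:
    """Adjusted age for preterm infants (<37 weeks) during the first 24 months."""
    if ga_weeks >= 37 or chronological_months > 24:
        return chronological_months               # no correction applied
    weeks_early = 37 - ga_weeks
    return chronological_months - weeks_early / 4.0  # ~4 weeks per month

print(adjusted_age_months(12, 33))  # 11.0, matching the example in the text
```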
Common VFSS findings in pediatric patients include delayed pharyngeal swallow and the presence of supraglottic penetration, nasopharyngeal reflux, and aspiration [13]. Penetration and aspiration, major findings applicable across pediatric age groups, were chosen as the primary outcome among the pharyngeal phase abnormalities. Penetration was defined as the passage of contrast material into the larynx but not through the vocal cords. Aspiration was defined as the passage of contrast material below the true vocal cords [14]. Statistical analysis Frequency analysis and descriptive statistics were used to summarize demographic data. The chi-square test and univariate analysis were used to identify relationships between demographic factors and penetration or aspiration. Multivariate logistic regression analysis was used to identify independent factors related to penetration or aspiration. All statistical analyses were performed using SPSS ver. 20.0 (IBM SPSS Inc., Armonk, NY, USA). A p-value of less than 0.05 was considered statistically significant. Demographic characteristics About half of the subjects (n=175, 49%) who underwent VFSS were younger than 24 months. Aspiration symptoms or signs (174/352, 49.4%) were the major cause of referral for VFSS in most age groups, except for those younger than 6 months. Almost half of the subjects younger than 6 months (35/72, 48.6%) had poor oral intake. In the groups aged over 7 years, swallowing difficulties such as odynophagia and the sensation of food stuck in the throat accounted for a certain percentage of referrals (17% and 13%, respectively) (Fig. 1). At the time of the study, 160 subjects (46%) were dependent solely on non-oral feeding methods, including nasogastric and oroesophageal tubes, gastrostomy and jejunostomy tubes, and total parenteral nutrition (Table 1). Among subjects who were on regular diets at the time of the study, aspiration symptoms (60%) were the most common cause of referral for VFSS. Sixty-two patients (18%) were born prematurely at less than 37 weeks' gestational age, including 33 patients (53%) who were born at less than 34 weeks of gestation. The most prevalent underlying medical condition of dysphagia was central nervous system (CNS) disease (53%). The proportion of CNS disease increased from 29% to 81% as the age of the subjects increased. In subjects younger than 6 months, congenital disorders with swallow-related anatomical abnormalities and neuromuscular diseases accounted for a certain percentage (15.3%) of underlying conditions. In the age group between 7 and 24 months, congenital disease without swallow-related anatomical abnormalities was the second leading cause of dysphagia (23%) (Fig. 2). Brain tumor was found in 51 of 185 subjects (28%) with CNS disorders. This percentage showed a tendency to increase from 10% to 49% as the subjects reached school age. Distinctively, seizures (43%) were the first major cause of dysphagia in subjects younger than 6 months (Fig. 3). The subsets of underlying medical conditions predisposing to dysphagia are summarized in Table 2. Clinical characteristics related to penetration or aspiration A total of 206 of 352 patients (58.5%) experienced penetration or aspiration during VFSS (Table 3). Three factors were significantly (p<0.05) associated with penetration or aspiration in the chi-square test.
Age groups under 24 months were more likely to show penetration or aspiration (p<0.001). Subjects born prematurely were more likely to show penetration or aspiration (p=0.019). In subgroup analysis of the subjects born prematurely, children born at less than 34 weeks of gestational age were significantly more likely to experience penetration or aspiration (p=0.026). Across the various underlying medical conditions, patients with congenital disorders and swallow-related anatomical abnormalities had a higher percentage of penetration or aspiration, with marginal statistical significance (p=0.074) (Table 4). Multiple logistic regression analysis showed that age younger than 24 months and miscellaneous underlying medical conditions were independent factors associated with penetration or aspiration findings in VFSS (Table 5). DISCUSSION The objective of the present study was to identify the demographic characteristics and clinical factors related to penetration or aspiration in children with dysphagia who underwent VFSS. About 50% of the children in this study were less than 24 months of age, which corresponds well with the results of an earlier study by Rommel et al. [6]. The high proportion of children younger than 24 months may reflect that feeding is essential for survival in these age groups. In this study, penetration or aspiration was observed in more than 70% of subjects with dysphagia who were younger than 24 months of age, similar to the penetration or aspiration rate of 74.8% in infants with swallowing difficulty reported by Uhm et al. [15]. However, Lee et al. [16] and Mercado-Deane et al. [17] reported aspiration rates in infants of 17% and 52%, respectively, which were lower than the rate in the present study. The discrepancy could be due to differences in study populations: this study included subjects from a tertiary hospital with a large number of critical patients with serious underlying medical conditions. In addition, premature groups were more likely to show penetration or aspiration in the univariate analysis, especially those born earlier than 34 weeks' gestational age. This fits well with the fact that coordination of sucking, swallowing, and breathing is established at 34 weeks of gestation [18]. However, prematurity was not significantly associated with aspiration in the multivariate analysis. Of the 62 patients with a history of prematurity, 44 (71%) were younger than 2 years, 25 (40%) had CNS disease, 22 (35%) had miscellaneous diseases, 12 (19%) had congenital diseases, and 3 (5%) had neuromuscular diseases. Ninety-two percent of premature patients underwent VFSS at a corrected age of 4 months or older, indicating that most patients had had dysphagia for a considerable time. There is a high probability that aspiration in premature patients was influenced by other factors, including underlying medical conditions and age. The results of this study demonstrate that CNS disease, especially brain tumor, is the most common underlying medical condition in children with dysphagia. Lefton-Greif [13] and Love et al. [19] reported that cerebral palsy was the most frequent neurologic condition associated with dysphagia in children. Rommel et al. [6] reported that gastrointestinal problems were the most frequent medical diagnosis of children presenting with feeding problems in a tertiary care center. The difference between our results and those of others could be explained by the high proportion of critically ill patients in our hospital.
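A minimal sketch of the statistical pipeline described in the Methods (chi-square screening of individual factors, followed by multivariate logistic regression) is shown below. The study itself used SPSS 20.0; the column names here are illustrative, not from the study's dataset.

```python
# Illustrative pipeline; the study used SPSS 20.0, and these column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

def univariate_p(df: pd.DataFrame, factor: str, outcome: str = "pen_asp") -> float:
    """Chi-square p-value for a categorical factor versus penetration/aspiration."""
    chi2, p, dof, _ = chi2_contingency(pd.crosstab(df[factor], df[outcome]))
    return p

def multivariate_logit(df: pd.DataFrame, factors: list, outcome: str = "pen_asp"):
    """Multivariate logistic regression for independent factors."""
    X = sm.add_constant(pd.get_dummies(df[factors], drop_first=True).astype(float))
    return sm.Logit(df[outcome].astype(float), X).fit(disp=0)

# e.g. multivariate_logit(vfss, ["age_under_24m", "preterm_34w", "dx_group"]).summary()
```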
In multiple logistic regression analysis, patients with miscellaneous underlying medical conditions were more likely to show penetration or aspiration in VFSS. Miscellaneous conditions included various heart diseases (n=9, 3%), gastroesophageal reflux disease (GERD) (n=16, 5%), post-intensive care conditions (n=3, 1%), and unknown causes. Infants with congenital heart disease are at risk of potential dysphagia secondary to postoperative vocal cord dysfunction [20]. Patients with GERD often show aspiration pneumonia, masquerading as oropharyngeal dysphagia. GERD can also occur in conjunction with laryngeal dysfunction or swallowing incoordination [17]. Patients with prolonged intubation are at increased risk of dysphagia, as mostly reported in adults [21]. An endotracheal tube can cause remodeling of the palate due to the pressure of the tube on the hard and soft palate [22]. In 16 subjects, the exact cause of dysphagia could not be identified despite several tests. This group needed continuous consultation and periodic follow-up of changes in dysphagia; patients with dysphagia of unknown cause need further research. For all age groups except infants younger than 6 months, aspiration symptoms or signs were the major cause of referral for VFSS. In patients younger than 6 months, poor oral intake was the primary cause of referral. This finding correlates well with the results of Newman et al. [5], who reported that infants were less likely to show aspiration symptoms such as cough because of their anatomical structures. However, our study showed higher penetration or aspiration rates during VFSS in younger age groups. This suggests that younger children without aspiration symptoms or signs can have silent penetration or aspiration, which can be revealed by VFSS. The existence of silent penetration or aspiration demonstrates the usefulness of VFSS in children with underlying medical conditions and ambiguous symptoms of dysphagia [8]. However, VFSS findings should be interpreted and integrated with other information, including feeding history, underlying medical conditions, and nutritional status. A physical examination covering posture and position during feeding, respiratory pattern, alertness, response to sensory stimulation, self-regulation ability, oral structure, and growth should also be conducted. In addition, symptoms such as vomiting, coughing, choking, drooling, and wet voice, as well as feeding observation, must be included. Children are very sensitive to the taste, texture, and temperature of food [23]; therefore, using the subject's usual meal for VFSS can change the result of the study, especially in those with poor oral intake. There were several limitations to this study. First, it was a retrospective study. Second, it was a single-center study, so referral bias might be an inherent consequence. In addition, this study included a high proportion of preterm infants with various underlying medical conditions. In conclusion, the results of this study showed that penetration or aspiration was more often observed (in approximately 70%) in children younger than 24 months of age. The penetration or aspiration rate was higher in those born prematurely at less than 34 weeks of gestation. Underlying medical conditions causing dysphagia differed by age group. This study revealed that younger children born prematurely are at higher risk of developing feeding disorders. Therefore, performing VFSS is beneficial for these children.
The findings from this study will improve our understanding of the usefulness of VFSS in children with dysphagia of unknown etiology, particularly those younger than 24 months with a history of preterm birth.
Knockout of Insulin-Like Growth Factor-1 Receptor Impairs Distal Lung Morphogenesis Background Insulin-like growth factors (IGF-I and -II) are pleiotropic regulators of somatic growth and development in vertebrate species. Endocrine and paracrine effects of both hormones are mediated by a common IGF type 1 receptor (IGF-1R). Lethal respiratory failure in neonatal IGF-1R knockout mice suggested a particular role for this receptor in pulmonary development, and we therefore investigated the consequences of IGF-1R inactivation in lung tissue. Methods and Findings We first generated compound heterozygous mutant mice harboring a hypomorphic (Igf1rneo) and a null (Igf1r−) allele. These IGF-1Rneo/− mice express only 22% of normal IGF-1R levels and are viable. In adult IGF-1Rneo/− mice, we assessed lung morphology and respiratory physiology and found normal histomorphometric characteristics and a normal breathing response to hypercapnia. We then generated homozygous IGF-1R knockout mutants (IGF-1R−/−) and analyzed their lung development during late gestation using histomorphometric and immunohistochemical methods. IGF-1R−/− embryos displayed severe lung hypoplasia and markedly underdeveloped diaphragms, leading to lethal neonatal respiratory distress. Importantly, IGF-1R−/− lungs from late gestation embryos were four times smaller than control lungs and showed markedly thickened intersaccular mesenchyme, indicating strongly delayed lung maturation. Cell proliferation and apoptosis were significantly increased in IGF-1R−/− lung tissue as compared with IGF-1R+/+ controls. Immunohistochemistry using pro-SP-C, NKX2-1, CD31 and vWF as markers revealed a delay in cell differentiation and arrest in the canalicular stage of prenatal respiratory organ development in IGF-1R−/− mutant mice. Conclusions/Significance We found that low levels of IGF-1R were sufficient to ensure normal lung development in mice. In contrast, complete absence of IGF-1R significantly delayed end-gestational lung maturation. Results indicate that IGF-1R plays essential roles in cell proliferation and timing of cell differentiation during fetal lung development. Introduction Insulin-like growth factors (IGF-I and -II) control tissue homeostasis by regulating essential cell functions including proliferation, differentiation and survival, through their cognate tyrosine kinase receptor IGF-1R. IGF-II also interacts with a second receptor (M6P-R, or IGF-2R) that reduces IGF-II signaling through lysosomal degradation. During pre- and postnatal development and in the adult, IGF ligand and receptor expression are tightly regulated in a cell type-specific and spatiotemporal manner. Targeted mutation of IGF genes in the mouse showed that IGF signaling is relevant for development, homeostasis and repair of lung tissue [1-6]. Mutant mice completely lacking IGF-1R (IGF-1R−/−) reach only 45% of normal birth size, are unable to expand their lungs and die shortly after birth [1]. Similarly, mice lacking IGF-I are strongly growth retarded and show high postnatal mortality due to hypoplastic lungs marked by increased cellularity and collapsed alveoli [1,2]. Prenatal lungs from IGF-I−/− mice display abnormal cell proliferation, as well as altered alveolar epithelium and capillary differentiation [3]. IGF-II knockout mice, which show a less pronounced fetal growth deficiency, develop thickened pulmonary alveolar septa and altered alveolar organization [4].
IGF-1R mRNA expression is highest around embryonic day 18 (E18), and ex vivo stimulation of lung development by IGF-I and -II shows that IGF signaling induces alveolar and vascular maturation in the late stages of fetal lung development [5]. Finally, IGF-1R signaling is also involved in vascularization and angiogenesis of human fetal lungs [7]. Recently, several heterozygous IGF-1R mutations have been identified in humans presenting with various degrees of intrauterine and postnatal growth retardation [8-12]. One patient with a deletion of the distal long arm of chromosome 15, which includes the IGF-1R gene, was reported with lung hypoplasia [13]. These data are consistent with IGF-1R being an essential mediator of respiratory organ development. Although IGF-1R knockout mice die from respiratory distress at birth, no study has so far focused on lung development in IGF-1R mutant mice. Here, we studied the role of IGF signaling in lung development and respiratory physiology using two different IGF-1R mutant models. First, we used IGF-1R knock-down mice (IGF-1Rneo/−) [14,15] that express only 22% of wild-type IGF-1R levels in lung tissue, and that we found resistant to hyperoxia in a previous study [6]. In the present study, we showed that young adult IGF-1Rneo/− mice present with normal lung morphology and breathing. However, when we used mice with complete IGF-1R knockout (IGF-1R−/−), embryos showed conspicuous retardation of lung development, marked by increased cell proliferation and apoptosis. Young Adult IGF-1Rneo/− Mice Show Normal Lung Morphometry and Normal Lung Ventilation In compound heterozygous IGF-1Rneo/− mice, IGF-1R expression is substantially decreased in all tissues. Using an in vitro receptor ligand binding assay, we showed previously that IGF-1Rneo/− mice have four times fewer IGF binding sites than control mice in brain tissue [15]. Here we showed by western blot that in lung tissue, IGF-1R levels were diminished to 22% of control values (Fig. 1A, B). Nevertheless, comparing lung histology from 5-week-old IGF-1Rneo/− mice with control IGF-1R+/+ littermates, we found that the lung architecture of IGF-1Rneo/− mice was not distinct from controls with respect to alveolar airspace, boundary length density and alveolar wall thickness (Fig. 1C-F). To assess respiratory function in these mutants, we recorded ventilation in conscious IGF-1Rneo/− and IGF-1R+/+ mice in ambient air and in response to hypercapnia. Baseline minute ventilation in ambient air was similar between groups (Fig. 1G). We then challenged the mice with 6% and 8% CO2, which markedly increased minute ventilation, tidal volume and breathing rate in both groups (Fig. 1H, I; P<0.005). However, the ventilatory responses to hypercapnia did not differ between IGF-1Rneo/− and IGF-1R+/+ mice. This suggested that low levels of IGF-1R in IGF-1Rneo/− mice are sufficient to ensure development of normal lungs and respiratory physiology. Analysis of lung morphology showed normal anatomical organization of lobes in IGF-1R−/− mutants (Fig. 3A-F), but higher magnification revealed a densification of the lung parenchyma in IGF-1R−/− embryos (Fig. 3G-L). Histology of lung tissue showed conspicuous differences between IGF-1R−/− and IGF-1R+/+ embryos, which were most prominent in the distal ducts (Fig. 3M-X). In IGF-1R−/− embryos, the intersaccular mesenchyme was markedly thicker and the number of acinar buds lower as compared with controls, at E17.5 (Fig. 3Q-T) and at E19.5 (Fig. 3U-X). These characteristics were still similar in both genotypes at E14.5 (Fig. 3M-P). Together, this indicated that in the complete absence of IGF-1R, the normal physiological process of mesenchymal thinning was impaired, and suggested that lung development was delayed during the canalicular stage.
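Histomorphometric measures of the kind used throughout this study (airspace fraction and mean wall thickness, as in the IGF-1Rneo/− comparison above and the E17.5 morphometry that follows) can be estimated from segmented sections with simple line-intercept counting. The sketch below is a hypothetical implementation on a binary tissue mask, not the authors' morphometry software.

```python
# Hypothetical morphometry on a binary section mask (True = tissue, False = airspace);
# not the authors' software. Wall thickness via line intercepts along image rows.
import numpy as np

def airspace_fraction(tissue_mask: np.ndarray) -> float:
    """Fraction of the parenchymal field occupied by airspace."""
    return 1.0 - float(tissue_mask.mean())

def mean_wall_thickness(tissue_mask: np.ndarray, um_per_px: float) -> float:
    """Mean length of contiguous tissue runs crossed by horizontal test lines."""
    runs = []
    for row in tissue_mask:
        padded = np.concatenate(([0], row.astype(np.int8), [0]))
        edges = np.flatnonzero(np.diff(padded))   # alternating run starts/ends
        starts, ends = edges[::2], edges[1::2]
        runs.extend(ends - starts)
    return float(np.mean(runs)) * um_per_px if runs else 0.0
```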
Indeed, morphometry of lung tissue at E17.5 revealed that in lungs from IGF-1R−/− embryos the saccular airspace was noticeably smaller (Fig. 4A) and the saccular wall thickness significantly increased (Fig. 4B), in accordance with the above histological observations. Using the epithelial marker NKX2-1 (TTF-1), a transcription factor involved in lung organogenesis and epithelial differentiation, we determined the relative size of epithelial and mesenchymal compartments of lung parenchyma. This revealed that at E17.5, knockout lungs contained significantly less epithelium and less alveolar space than controls (epithelium: 50.1 ± 6.8 versus 67.3 ± 2.6%, P<0.05; alveolar space: 6.5 ± 2.1 versus 13.1 ± 1.3%, P<0.05; n = 3 and 5). In contrast, E17.5 knockout lung tissue harbored twice the amount of mesenchymal tissue compared with control lungs (43.4 ± 9.0 versus 19.6 ± 3.8%; n = 3 and 5, P<0.05). To find out whether incomplete development of the lung, and possibly also neonatal death, can be rescued by prolonging the gestational period beyond full term, we treated pregnant females with progesterone and recovered E21.5 embryos by caesarean section. However, we found no evidence for catch-up growth of the IGF-1R−/− lungs during this period of extended gestation. Instead, lung development regressed when gestation was prolonged, in sharp contrast to heart and kidney, which continued to increase organ weight (Table 2). Consistently, we found no histological evidence for lung maturation between E19.5 and E21.5 in IGF-1R−/− embryos (Fig. 4C-F), and none of the E21.5 IGF-1R−/− embryos was able to breathe. Lung Hypoplasia in IGF-1R−/− Embryos is Marked by Increased Cell Proliferation and Delayed Differentiation To further investigate the role of IGF-1R during saccular stages of development, we assessed the histoanatomy of embryonic lung tissue. At E17.5, cell density revealed by DAPI staining was similar in IGF-1R−/− and IGF-1R+/+ embryos (Fig. 4G-I). Next, we assessed cell proliferation in IGF-1R−/− lungs using anti-phospho-histone H3 IHC, and found the number of proliferating cells significantly increased in IGF-1R−/− lungs at E17.5 (1887 ± 119 versus 1269 ± 69, n = 3 and 5 per group, P<0.01; Fig. 4J-L). In addition, the percentage of cleaved caspase-3-positive lung cells at E17.5 was significantly higher in IGF-1R−/− embryos (59.7 ± 5.6 versus 41.0 ± 5.5, n = 5 and 7 per group, P<0.05; Fig. 4M-O). Similar results were obtained using Ki67 and TUNEL staining (not shown). Proliferation and cell death concerned epithelial, vascular endothelial and mesenchymal cells (Fig. 4P-AA), but it was not clear whether IGF-1R inactivation affected all compartments to the same extent. However, since most of the proliferating cells superpose with mesenchyme, it can be deduced that cell turnover is increased also among mesenchymal cells.
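Assuming the proliferation counts above are reported as mean ± SEM, the group comparison can be reproduced directly from the summary statistics. This is a hedged reconstruction; the original analysis pipeline is not described in this excerpt.

```python
# Two-sample t-test from summary statistics for the phospho-histone H3 counts,
# assuming the reported 1887 +/- 119 and 1269 +/- 69 are mean +/- SEM.
from math import sqrt
from scipy.stats import ttest_ind_from_stats

def sem_to_sd(sem: float, n: int) -> float:
    return sem * sqrt(n)  # SD = SEM * sqrt(n)

t, p = ttest_ind_from_stats(mean1=1887, std1=sem_to_sd(119, 3), nobs1=3,
                            mean2=1269, std2=sem_to_sd(69, 5), nobs2=5)
print(f"t = {t:.2f}, P = {p:.4f}")  # P < 0.01, consistent with the text
```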
To investigate whether the delayed development in IGF-1R−/− embryos was due to alterations in cell differentiation, we performed IHC on lung tissue from E17.5 and E19.5 IGF-1R−/− and control embryos using cell type-specific markers of differentiation. We first assessed microvascular organization and capillary complexity at E17.5 using an antibody against CD31. We observed that the density of endothelial cells was significantly diminished in IGF-1R−/− lung parenchyma compared with controls (Fig. 5A-C). [Figure 2 caption: Growth retardation in IGF-1R−/− embryos affects lung more than other tissues. Values represent organ weight relative to body weight (mean ± SEM), normalized to the stage-specific mean of the control (IGF-1R+/+) group. Organ/body weight ratio was calculated from data in Table 1. *P<0.05; **P<0.01; ***P<0.001, compared with normalized IGF-1R+/+ data of the same developmental stage; Student's t-test; ND, not determined.] Mutant tissue also showed a less developed microvascular network than controls, as revealed by the diminished number of capillary junctions (Fig. 5D-F). We then used an antibody against von Willebrand factor to monitor the development of blood vessels. At E17.5, smaller blood vessels were still scarce in both genotypes, while large blood vessels were strongly labeled in both knockouts and controls (Fig. 5G, H). However, at E19.5, smaller blood vessels were clearly visible in controls, while they were just starting to appear in IGF-1R−/− lung tissue (arrows in Fig. 5I, J). We then evaluated epithelial expression of NKX2-1. At E17.5, we observed in both genotypes a distal-to-proximal difference in NKX2-1 expression between cuboidal cells of the acinar bud (distal) and columnar epithelial cells of the acinar tubule (proximal), with NKX2-1 being less abundant in the acinar tubule cells (Fig. 5K-M). This positive bud-to-tubule ratio was similar in IGF-1R−/− and control mice. NKX2-1 IHC analysis demonstrated that epithelial cell organization changed profoundly in the transition between E17.5 and E19.5, in both genotypes (Fig. 5N-Q). Importantly, IGF-1R−/− lungs displayed at E19.5 a distribution of NKX2-1-positive cuboidal epithelial cells that is typical of normal lung tissue at E17.5. Finally, to monitor type 2 cell differentiation, we analyzed IHC of the surfactant protein pro-SP-C. Again, IGF-1R−/− tissue exhibited at E19.5 a differentiation pattern that normal (IGF-1R+/+) lungs already showed at E17.5 (Fig. 5R-Y). Taken together, analysis of cell markers demonstrated that major differentiation processes in the developing lung parenchyma of IGF-1R−/− mice were significantly delayed and in fact arrested in the canalicular stage. Collectively, our data suggest a key role for IGF-1R in regulating proliferation, apoptosis and timing of differentiation in developing lung tissue. Increased proliferation with increased apoptosis and incomplete epithelial differentiation are hallmarks of late canalicular stages, and the results presented here clearly illustrate the substantial delay in lung maturation prevailing in the late stages of IGF-1R knockout embryogenesis. Development of the Diaphragm is Markedly Affected in IGF-1R−/− Embryos With respect to possible extra-pulmonary causes of delayed lung development, we noticed that the diaphragm of IGF-1R−/− embryos, although intact across the abdominal cavity and with no evidence of hernia, was significantly thinner than that of IGF-1R+/+ embryos, as shown in transverse sections of the trunk (42.5 ± 0.9 versus 72.1 ± 2.4 μm, P<0.002; Fig. 6A, B). At the same time, rib diameter was relatively less affected in IGF-1R−/− embryos (122 ± 6 versus 174 ± 13 μm, P<0.01; Fig. 6C), such that the diaphragm/rib ratio was significantly smaller in IGF-1R−/− embryos as compared with IGF-1R+/+ (0.35 ± 0.01 versus 0.42 ± 0.01, P<0.002; Fig. 6D).
This suggested an over-proportional reduction in IGF-1R−/− diaphragm muscle mass.

Discussion

The aim of this study was to examine the consequences of IGF-1R inactivation on lung development. We first investigated the effects of a substantial reduction of receptor levels in the lungs of IGF-1R neo/− mice on respiratory physiology and morphology. Although IGF-1R neo/− mice displayed significant growth retardation at birth and thereafter, we observed no evidence for respiratory distress. Likewise, histomorphology at embryonic stages (data not shown) and at adult stages was not different between IGF-1R neo/− and IGF-1R+/+ mice. Moreover, the normal ventilatory response to hypercapnia observed in IGF-1R neo/− mice was indicative of intact respiratory physiology, and we conclude that as little as 22% of wild-type IGF-1R protein levels is sufficient to ensure normal lung development and function. These findings are consistent with observations in humans, where heterozygous mutation of IGF-1R is associated with delayed growth and sometimes with retarded mental development, but rarely with altered respiratory function [13]. Liu et al. [1] and Holzenberger et al. [14] reported a generalized organ hypoplasia in IGF-1R−/− embryos. Here, we focused on lung development and observed pulmonary hypoplasia in IGF-1R−/− embryos as early as E14.5, a phenotype that is in line with the pulmonary hypoplasia observed in IGF-I and IGF-II gene knockout mice. IGF-I and IGF-II both act through IGF-1R, and deletion of the IGF-I or IGF-II genes results in severe embryonic growth retardation, affecting all organs and tissues [1,16,17]. These mice suffer from lung and muscle hypoplasia, which explains their respiratory distress and high mortality [1,4,16]. Here we showed that IGF-1R knockout affected the lungs in particular, suggesting that the IGF system plays an exceptionally strong role in the development of the fetal lung. The main consequence of complete IGF-1R inactivation for mouse lung development is failure of progression from canalicular to saccular structures and increased proliferation in perinatal fetuses (stages E17.5 to E19.5). Our data showed that IGF-1R−/− lungs at E17.5 are extremely hypoplastic and retained in the pseudoglandular stage. Eventually, by E19.5, the lungs of the IGF-1R−/− mice had moved through the canalicular stage and were transitioning into the early saccular stage, but were still severely hypoplastic. Importantly, thinning of the alveolar septa is necessary for subsequent perinatal maturation and the development of efficient gas exchange in the lungs. In fact, compared with the control mice at E19.5, IGF-1R−/− lungs exhibited thickened primary alveolar septa, which may be the principal cause of neonatal respiratory failure. Immunostaining of pro-SP-C, NKX2-1, CD31 and von Willebrand factor demonstrated that major differentiation processes were delayed in the developing lung parenchyma of IGF-1R−/− mice. Meanwhile, our study did not reveal any significant difference between genotypes in the number of NKX2-1-positive proximal and distal cuboidal lung epithelial cells, nor in the distal-to-proximal ratio of NKX2-1 expression that may have suggested an altered pattern of alveolar maturation. Paradoxically, the progressive lung hypoplasia observed in end-gestation IGF-1R−/− mice was associated with increased cell proliferation. In normal lungs, the mitotic rate drops and differentiation prevails before birth.
We reasoned that the sustained high proliferation rates observed in the IGF-1R−/− mice could result from retarded pulmonary differentiation. Moreover, loss of IGF-1R signaling must cause either increased cell death or shifted cell fate choices that could explain the lack of airway expansion. The delayed pattern of cell differentiation together with increased apoptosis indicates that both processes are indeed altered.

[Figure 3. Lung development in late gestation IGF-1R−/− mice. A-L, Lungs prepared from IGF-1R+/+ and IGF-1R−/− embryos at developmental stages E14.5, E17.5 and E19.5. A-F, Ventral view of whole lungs. G-L, Rim of lung lobe. Abbreviations: AL, apical lobe; AzL, azygous lobe; CL, cardiac lobe; DL, diaphragmatic lobes; LL, left lobe. M-X, Lung histology of IGF-1R+/+ versus IGF-1R−/− embryos. H&E stained lung sections at developmental stages E14.5 (M-P), E17.5 (Q-T) and E19.5 (U-X), showing that saccular walls are thicker and acinar buds smaller in IGF-1R−/− embryos as compared with controls of the same stage. Note that the histomorphological appearance is similar when comparing E19.5 IGF-1R−/− (V, X) with two days younger E17.5 IGF-1R+/+ lungs (Q, S). doi:10.1371/journal.pone.0048071.g003]

The generalized mitotic activity observed in prenatal IGF-1R−/− mice may contribute to diminished airway space and alveolar collapse, which is similar to the findings in IGF-I-deficient lungs [3]. The lung hypoplasia observed in IGF-1R−/− mice at E17.5 was accompanied by an increase in cell death that can also be explained by the fact that the structural remodeling of IGF-1R−/− lungs is not complete, so that mutant lungs might be retained in earlier stages of lung development. As we showed here, these earlier developmental stages are marked by a dominant mesenchymal compartment, fewer epithelial structures and smaller alveolar spaces. Since many of the proliferating cells reside in the mesenchymal compartment, part of the observed increase in cell turnover in mutants can be explained by developmental differences in tissue composition. Alternatively, anti-apoptotic effects of IGF-1R signaling have been demonstrated in immature lungs, where more than 20% of interstitial fibroblasts undergo apoptosis after the period of bulk alveolarization, resulting in a substantial reduction in interstitial volume [18]. IGF-1R mRNA expression and protein levels were both down-regulated in lipid-filled interstitial fibroblasts (LIF) after alveolar formation on postnatal days 16 to 18, suggesting a role for IGF-1R in lung fibroblast survival [18]. Thus, lack of IGF signaling could explain the increased apoptosis and reduced lung size in mutants. On a more speculative note, one could imagine that the impaired differentiation process resulting from the loss of IGF-1R may lead to reduced fluid secretion into the developing airways, or interfere with continuous drainage of airway fluid into the amniotic space. Clearly, more data on pathophysiology and gene expression during late developmental stages are needed in these mutants. We and others reported previously that intercostal muscles are drastically underdeveloped in IGF-1R−/− mice [1,15]. Severe hypoplasia in thoracic muscles may be responsible for a decrease in fetal breathing movements and thereby cause secondary lung hypoplasia. In fact, several clinical case reports indicate that infants without fetal breathing movements in utero suffer from newborn pulmonary hypoplasia, and rarely survive the neonatal period [19,20].
Similarly, knockout of myogenin, a major regulator of skeletal muscle differentiation, leads to marked defects in skeletal muscle development, in particular retarded fiber development, but also abnormal lung morphology. As myogenin is only expressed in skeletal muscle, but not in lung tissue, it was postulated that lung hypoplasia in that model is secondary to abnormal development of skeletal muscle [21]. These observations suggest that the diaphragm hypoplasia observed in IGF-1R−/− mice may account for a significant part of the pulmonary hypoplasia phenotype and the delayed saccular development. A combination of direct effects (due to lack of IGF-1R in lung tissue) and indirect effects (secondary to marked muscle hypoplasia) could explain the exceptionally strong developmental delay observed in IGF-1R−/− lungs. In conclusion, we found that lung development progresses to completion in knock-down IGF-1R neo/− mutants, demonstrating that partial IGF-1R inactivation is well tolerated. Complete IGF-1R inactivation, in contrast, produced a severe delay in lung maturation in utero, leading to lung hypoplasia and entailing neonatal death. Delayed lung development in end-gestational IGF-1R−/− embryos was characterized by high cell proliferation and incomplete differentiation, which are the hallmarks of the canalicular stage of lung development.

Mice

All experiments were conducted according to the European Communities Council Directive (86/609/EEC) for the care and use of animals for experimental procedures and complied with the regulations of the Comité d'Ethique pour l'Expérimentation Animale 'Charles Darwin', registered at the Comité National de Réflexion Ethique sur l'Expérimentation Animale (Ile-de-France, Paris, no. 5). All experiments were supervised by MH (agreement no. 75-444 to MH, specifically approved by the Direction des Services Vétérinaires, Paris, France). All efforts were made to minimize suffering.

Macroscopic Analysis of Lungs and Preparation of Tissues

The uterus of pregnant females was removed under deep anesthesia and embryos were prepared, eliminating all extra-embryonic tissues. Embryos were immediately weighed on a fine balance, sacrificed, and skin biopsies taken for genotyping. Heart, liver, kidneys and lungs were removed and weighed. Lungs were observed using a Leica MZ125 with SPOT v3.2.0 digital camera (4x to 100x magnification) and fixed in 4% paraformaldehyde, dehydrated and embedded in paraffin.

Morphometry

Adult mice. Lung morphometry was performed in IGF-1R neo/− and IGF-1R+/+ mice as described [24-26]. Mice were anesthetized, the chest opened after exsanguination, the trachea exposed and the lungs perfused in situ with 10% neutral-buffered formalin. Under a constant pressure of 500 Pa the trachea was ligated, and the lungs were excised and placed in fixative for 5 d prior to paraffin embedding. Serial 4 µm sections of both lungs were stained with H&E. Sections with the maximum cross-section of parenchyma were selected for morphometry using digitized image analysis [27]. Micrographs were captured using a Laborlux D Leitz (Leica SA, Rueil Malmaison, France) with a monochrome Sony video camera, and analyzed with Biocom (Biocom Imaging Division, Les Ulis, France). Digital images were filtered, binarized, and segmented into background and structures of interest. The total length of the tissue-air interface, representing the alveolar boundary length, was measured in 12 different 5×10^5 µm^2 non-overlapping parenchymal fields.
We selected areas devoid of large conductive airways, arteries, or veins from 4 different tissue sections. Measurements evaluated the alveolar boundary length density expressed per unit of parenchymal surface. Mean alveolar airspace was determined from the sum of the lumen divided by the number of identified alveoli. Thickness of the alveolar walls was determined from >10 linear measurements per field.

Embryos. Thickness of the diaphragm and diameter of the tenth rib were measured at E19.5 in frontal sections of the chest in 4 individuals per genotype. Saccular wall thickness and airspace were measured in E17.5 embryos (4 individuals per group) using on average 20 measurements per mouse. To determine the proportion of epithelial, mesenchymal and vascular cell compartments in lung parenchyma (expressed as relative surface area), representative 7 µm sections stained for NKX2-1 or CD31 were analyzed using ImageJ software (version 1.45, NIH, Bethesda, MD). Phospho-histone H3 and cleaved caspase-3-positive cells were counted in three representative microscope views from three different tissue sections per individual. The most apical and basal regions, and the area close to the lung hilus, were not considered. All cell counts were performed using 400x magnification. For CD31, the surface area occupied by stained cells was determined using ImageJ software. For DAPI cell counts, micrographs were analyzed using ImageJ algorithms for segmentation. For NKX2-1 morphometry, we compared immunostaining in cuboidal epithelial cells of acinar buds (distal ducts) with staining in columnar epithelial cells of acinar tubules (proximal ducts) using sections from 5 IGF-1R−/− and 6 IGF-1R+/+ mice. All slides were processed in one experiment as a single batch. Micrographs were taken with a 40x objective using a Leica DM5000B microscope equipped with a DFC300FX CCD camera. Care was taken not to saturate the signal. To avoid potential differences in NKX2-1 expression between apical and basal lung influencing the results, we chose regions that were neither apical nor basal. Since the branching generation of bronchioles influences NKX2-1 expression, we considered only the recent acinar tubules and buds in the outer zone of the lung. We excluded structures located adjacent to the lung surface, at the rim of the tissue sections, where specific IHC signal could overlay with staining artifacts. For each mouse, we analyzed a minimum of 10 structures, consisting of an acinar tubule and adjacent bud, from three different tissue sections. In each pair of acinar bud and tubule identified, NKX2-1 signal intensity was quantified from at least 10 epithelial cells per structure using ImageJ software. Results were averaged and compared between bud and tubule cells. From these data we computed the average ratio for each individual.
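For readers who want to reproduce this style of quantification outside Biocom or ImageJ, the following is a minimal sketch in Python of two of the morphometric quantities defined above, assuming binarized masks (tissue = 1, air = 0) have already been produced by segmentation. The function names, the toy mask, and the pixel scale are illustrative and not taken from the study's actual analysis.

import numpy as np
from scipy import ndimage

def alveolar_boundary_length_density(mask, pixel_um=1.0):
    # Count pixel edges where tissue (1) borders air (0), horizontally
    # and vertically, to approximate the tissue-air interface length.
    horiz = np.sum(mask[:, 1:] != mask[:, :-1])
    vert = np.sum(mask[1:, :] != mask[:-1, :])
    boundary_length = (horiz + vert) * pixel_um
    # Express per unit of parenchymal surface, as in the text.
    return boundary_length / (mask.size * pixel_um ** 2)

def mean_alveolar_airspace(mask, pixel_um=1.0):
    # Sum of the lumen (air) area divided by the number of identified
    # alveoli, here approximated as connected air regions.
    _, n_alveoli = ndimage.label(mask == 0)
    lumen_area = np.sum(mask == 0) * pixel_um ** 2
    return lumen_area / max(n_alveoli, 1)

# Toy usage on a synthetic 100x100 field (placeholder data only).
rng = np.random.default_rng(0)
mask = (rng.random((100, 100)) < 0.4).astype(int)
print(alveolar_boundary_length_density(mask, pixel_um=1.7))
print(mean_alveolar_airspace(mask, pixel_um=1.7))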
Impact of distance determinations on Galactic structure. II. Old tracers

Here we review a number of recent results that use old tracers to understand the build-up of the Galaxy. Details that lead directly to using these old tracers to measure distances are discussed. We concentrate on the following: (1) the structure and evolution of the Galactic bulge and inner Galaxy constrained from the dynamics of individual stars residing therein; (2) the spatial structure of the old Galactic bulge through photometric observations of RR Lyrae-type stars; (3) the three-dimensional structure, stellar density, mass, chemical composition, and age of the Milky Way bulge as traced by its old stellar populations; (4) an overview of RR Lyrae stars known in the ultra-faint dwarfs and their relation to the Galactic halo; and (5) different approaches for estimating absolute and relative cluster ages.

Red clump (RC) stars are bright and easy to identify, and they have been used extensively, especially in the study of the Galactic bulge. It should be noted, though, that according to the prescriptions of Salaris & Girardi (2002), when the age-metallicity relation and star formation history of a given composite stellar system are unknown, the distance modulus derived using the RC can have an error of up to ∼0.3 mag. Girardi (2016) provides an overview of the advantages and caveats of using RC stars for distance determination. RRLs, on the other hand, are unequivocally old (≥ 10 Gyr). They are low-mass (∼0.6-0.8 M⊙) horizontal branch (HB) stars that experience radial pulsations because of periodic variations of the atmosphere opacity in partially ionized regions (H, He). This causes cyclic variations of luminosity and effective temperature, with periods ranging typically from ∼0.3 to ∼1 days. Their typical mean luminosity in the V-band is in the range M_V ∼ 0.5-1 mag, making them moderately bright objects, while effective temperatures range from ∼7200 K to ∼5600 K, which translates into typical mean colors between (B−V) ∼ 0.2 mag and (B−V) ∼ 0.4 mag. Historically, RRLs have been used as standard candles since their mean luminosity in the V-band is almost constant, with some dependency on their metallicity and evolutionary status (e.g. Bono et al., 2003; Clementini et al., 2003). Moreover, they are moderately bright (∼40 L⊙), easy to detect from their light variations, and practically ubiquitous. Also, the occurrence of the so-called Oosterhoff dichotomy has for decades allowed the old component of the Galaxy to be disentangled into two distinct groups, putting strong constraints on Galaxy formation mechanisms (e.g. Fiorentino et al., 2015, 2016; Martínez-Vázquez et al., 2016a). Their role as standard candles and stellar population tracers has been widely investigated also from the theoretical point of view, on the basis of extensive and detailed nonlinear convective pulsation models (see e.g. Marconi et al., 2015, and references therein). RRLs are much less numerous and also more time-consuming to identify than the RC. Both the RC and the RRLs have their own observational and theoretical advantages and disadvantages; together, however, they have proven to be invaluable tracers of the old population for the study of the Milky Way galaxy. In §1, Andrea Kunder discusses how RRLs and RC giants have shaped our view of the kinematics of the inner Galaxy. Similarly, Pawel Pietrukowicz focuses on using these stars to understand the spatial structure of the inner Galaxy in §2.
The chemical composition and age of the inner Galaxy are then described in §3 by Elena Valenti, again utilizing RRLs and RC giants. In §4 and §5, RRLs and RCs are also discussed within the context of the Milky Way satellites and globular clusters, to further place the build-up of the Milky Way into context. Dwarf spheroidal (dSph) satellites of the Milky Way, and the low-brightness tail of the dSph, the ultra-faint dwarf (UFD) galaxies, are old, metal-poor, gas-poor and dark matter-dominated systems. Comparing their RRL populations with that of the MW allows a more complete picture of galaxy formation to emerge, as shown in §4 by Massimo Dall'Ora, Giuseppe Bono, Giuliana Fiorentino, Marcella Marconi, Clara E. Martinez-Vazquez, Matteo Monelli, Ilaria Musella and Vincenzo Ripepi. Similarly, globular clusters harbor some of the oldest stars in our Galaxy, and their distances, distribution throughout the Galaxy, and ages have long been used as pillars to understand the early Galaxy. In §5, new observations that have improved the accuracy and precision of stellar populations within globular clusters, as well as better stellar models, have advanced our ability to use these old tracers to understand the early formation of the components of the Galaxy. This section is contributed by Giuseppe Bono, Vittorio Braga, Giuliana Fiorentino, Massimo Dall'Ora, Ivan Ferraro, Giacinto Iannicola, Matteo Monelli, Maurizio Salaris and Peter B. Stetson.

1 Kinematics of the Galactic bulge

The internal kinematics of the bulge using a statistical sample of stars was first analyzed by the Bulge Radial Velocity Assay (BRAVA) survey (Kunder et al., 2012). BRAVA targeted M giants toward the Galactic bulge in a grid covering three strips of latitude, at b = −4°, −6°, −8°, spanning −10° < l < 10°. From a total of ∼10,000 stars, they showed that the bulge is in cylindrical rotation. Kinematic models allow at most only ∼10% of the original model disk mass to be in the form of a "classical" spheroid formed by dissipational collapse. Subsequent kinematic bulge surveys, probing closer to the plane and/or different stellar populations, have confirmed this result. However, a more complicated kinematic view of the bulge than was first able to be disentangled from the original BRAVA results has emerged, which is reviewed here by Andrea Kunder. The Abundances and Radial velocity Galactic Origins Survey (ARGOS; Freeman et al., 2013) probed ∼17,400 red-clump giants in the bulge (fainter stars than probed by BRAVA, but having temperatures more favorable for the determination of metallicities). Probing the CaT at R ∼ 11,500, the ARGOS stars could be separated into metallicity sub-samples, which Ness et al. (2013a) believe represent different populations in the bulge. Sample "A" consists of stars with [Fe/H] ∼ +0.15 dex, which are proposed to belong to a relatively thin and centrally concentrated part of the boxy/peanut bulge. Sample "B" consists of stars with [Fe/H] ∼ −0.25 dex, belonging to a thicker boxy/peanut bulge. Compared to "A", this sample is hotter and less compact. Sample "C" consists of stars with [Fe/H] ∼ −0.7 dex and kinematically differs from components A and B in that it does not appear to have a latitude-independent velocity dispersion and rotation. Sample "D" is the most metal-poor, with [Fe/H] ∼ −1.0 dex. It is the least understood, due to the paucity of ARGOS stars with such metallicities.
For example, there are only two stars with [Fe/H] ∼ −1.0 dex in the ARGOS field at (l, b) = (−20°, −5°), so the velocity dispersion provided by Ness et al. (2013a) for this field is not well constrained. In all sub-samples cylindrical rotation was seen, although the most metal-poor red clump giants (sample D) rotated more slowly than their metal-rich counterparts (Ness et al., 2013a). But as this signature was seen at latitudes 10 degrees from the plane, they attributed the slower rotation to contamination from the halo and metal-weak thick disk populations creeping into the bulge. The GIRAFFE Inner Bulge Survey (GIBS; Zoccali et al., 2014) is targeting red-clump giants closer to the plane than both BRAVA and ARGOS. Most of the fields are at a resolution of R = 6500, but a handful of fields were observed at R = 22,500. Metallicities for their ∼5000 surveyed stars (Gonzalez et al., 2015; Zoccali et al., 2017) were derived, as well as elemental abundances for ∼400 red clump giants. They confirmed cylindrical rotation also at latitude b = −2°, and found that throughout most of the bulge, a narrow metal-rich ([Fe/H] = +0.26) population of stars and a broader, more metal-poor ([Fe/H] = −0.31) component appear to exist. Both components rotate cylindrically, although the metal-poor stars are kinematically hotter and less bar-like. Lastly, the APOGEE survey has probed ∼19,000 red giant stars at positive longitudes close to the plane (Zasowski et al., 2016). The high resolution (R = 22,500) makes it feasible to obtain elemental abundances, and the near-infrared wavelength regime (λ = 1.51-1.70 µm) allows the plane of the bulge to be probed, where dust and reddening are severe but minimized at longer wavelengths. They find that the transition from cylindrical to non-cylindrical rotation occurs gradually, and most notably at higher latitudes. At a longitude of l ∼ 7° the signature of cylindrical rotation fades, which is expected, as this longitude is near the end of the boxy bulge. Despite their large chemo-dynamical sample, they are not able to find distinct and separable bulge populations, although their measures of skewness are consistent with different evolutionary histories of metal-rich ([Fe/H] = +0.26) and metal-poor ([Fe/H] ∼ −0.31) bulge populations. All of these large surveys have shown that the bulge consists of a massive bar rotating as a solid body; the internal kinematics of these stars are consistent with at least 90% of the inner Galaxy being part of a pseudobulge and lacking a pressure-supported, classical-like bulge. N-body barred galaxy models (boxy/peanut bulge models) can explain the global kinematics, so our bulge appears to have formed from secular evolution of a massive early disk. However, some finely detailed behavior of stars remains unexplained. For example, it is not clear how the kinematically cooler bulge stars (which are more metal-rich) fit together with the kinematically hotter (more metal-poor) bulge stars. Also, some bulge locations have shown no evidence for a metal-rich and a metal-poor population, despite being at the same latitude as other fields which do clearly separate chemically (e.g., Zoccali et al., 2017). Figure 1 shows the distributions of the targets in the surveys mentioned above. Most surveys have focused on the Southern bulge, where the crowding is not as extreme. The APOGEE survey, in contrast, using a telescope in the North, probes more of the Northern bulge, and has not yet been able to reach negative longitudes.
With the exception of the ARGOS survey, all data have been publicly released, improving the quality and value of these surveys and providing the wider scientific community with the ability to use the data productively for further research.

Targeted Kinematic Studies

Notable recent high-resolution studies (R ∼ 20,000-30,000) of bulge field stars have been meticulously obtained by Johnson et al. (2012, 2013a, 2013b, 2014). In these papers, along with radial velocities, numerous individual elemental abundances (for some stars 27 elements ranging from oxygen to erbium) are derived for a sample of ∼500 bulge giants. Therefore, not only can the kinematics, [Fe/H] and [α/Fe] ratios of bulge stars be compared to those in the thin and thick disks, but the light odd-Z and Fe-peak (and also neutron-capture) elements are also touched on, which provide additional discriminatory power between models and other stellar populations. These more detailed and targeted observations indicate that at [Fe/H] > −0.5, the bulge exhibits a different chemical composition than the local thick disk, in that the bulge [α/Fe] ratios remain enhanced to a slightly higher [Fe/H] than the thick disk, and the Fe-peak elements Co, Ni, and Cu appear enhanced compared to the disk. Further, these studies point to a bulge that formed rapidly (<1-3 Gyr), because of the enhanced [α/Fe] abundances coupled with the low [La/Eu] ratios of the bulge stars. This confirms the very fast chemical enrichment in the bulge put forth by the very first detailed abundance studies of red giants in the Milky Way bulge (e.g., McWilliam & Rich, 1994; Zoccali et al., 2006; Fulbright et al., 2007). Babusiaux et al. (2010, 2014) compared the velocities of metal-rich and metal-poor bulge stars and found that higher-metallicity stars in the bulge show larger vertex deviations of the velocity ellipsoid than more metal-poor stars. They also found that metal-rich stars show an increase in their velocity dispersion with decreasing latitude (moving closer to the Galactic plane), while metal-poor stars show no changes in their velocity dispersion profiles. They concluded that the more metal-rich stars are consistent with a barred population and the metal-poor stars with a spheroidal component. However, other high-resolution studies of bulge stars have not confirmed such trends and instead find a consistent decrease in velocity dispersion with increasing [Fe/H] (e.g., Johnson et al., 2014; Uttenthaler et al., 2012; Ness et al., 2013a). Perhaps the greatest limitation in finding possible differences between a metal-rich and a metal-poor population in the bulge is the difficulty of finding metal-poor stars in the bulge. For example, within the ARGOS survey (Ness et al., 2013a), 0.1% of the stars identified as lying in the bulge have [Fe/H] < −2.0 dex. The first metal-poor stars found close to the Galactic center were presented by Schultheis et al. (2015), who find 10 stars with [M/H] ∼ −1.0 dex within ∼200 pc of the Galactic center. García Pérez et al. (2013) used infrared spectroscopy of 2400 bulge stars to uncover five new metal-poor stars with −2.1 < [Fe/H] < −1.6, and, using optical photometry to first select metal-poor candidates, Schlaufman & Casey (2014) uncovered three stars in the direction of the bulge with −3.0 < [Fe/H] < −2.7.
The Extremely Metal-poor BuLge stars with AAOmega (EMBLA) survey, dedicated to the search for metal-poor stars in the bulge, has uncovered ∼40 metal-poor stars (Howes et al., 2014, 2015, 2016), including a handful with [Fe/H] < −3.0. Five stars in the very metal-poor regime, at −2.7 < [Fe/H] < −2.0, are presented in Koch et al. (2016), who find that the metal-poor stars are a broad mix, and that no single, homogeneous "metal-poor bulge" can yet be established. Figure 2 shows the kinematics of these metal-poor stars compared to "normal" bulge giants from the BRAVA survey. Though the number statistics are still small, their velocity dispersion suggests either that the metal-poor stars in the bulge have different kinematics than the more metal-rich stars, or that the metal-poor stars discovered are a halo population. Lacking a statistical sample of metal-poor stars in the bulge, understanding their kinematics and placing them in context within the Galaxy is nontrivial. Especially since the oldest and most metal-poor stars (which may trace the dark matter) are thought to be found in the center of the Galaxy, in the bulge but not sharing its kinematics and abundance patterns (Tumlinson, 2010), the metal-poor "bulge" stars could provide a big piece of the puzzle in understanding the formation and subsequent evolution of the Galactic bulge.

[Figure 2. Kinematics of the metal-poor bulge stars compared to the BRAVA giants (Kunder et al. 2012). The individual metal-poor star measurements are given in the top panel, and the large red circles indicate the mean Galactocentric velocity (top) and velocity dispersion (bottom). The metal-poor stars have kinematics suggesting they are different from the bulge giants, although the sample size is small (∼50). It has been put forward that these metal-poor stars are actually halo interlopers (e.g., Howes et al. 2014, Kunder et al. 2015).]

Perhaps the most easily identifiable old, metal-poor bulge population are those horizontal branch stars that pulsate as RR Lyrae stars. Their progenitors formed long ago (∼10 Gyr), so the RRLs we see today tell us about conditions when the halo of the Galaxy was being formed (e.g., Lee, 1992). The bulge RRLs were shown to be on average ∼1 dex more metal-poor than the majority of bulge stars residing in the bar (Walker & Terndrup, 1991), although some of the bulge RRLs do appear to have metallicities that overlap in abundance with the bar population. The ongoing Bulge Radial Velocity Assay for RR Lyrae stars, the BRAVA-RR survey (Kunder et al., 2016), aims to collect spectroscopic information for RRLs located toward the inner Galaxy. Their sample of RRLs is selected from the Optical Gravitational Lensing Experiment (OGLE), so the periods, amplitudes and magnitudes are already precisely known. Multi-epoch spectroscopy (typically 3 epochs per star) is used to obtain center-of-mass radial velocities with uncertainties of ∼5-10 km s^-1.

[Fig. 3. The velocity dispersion profile (bottom) and rotation curve (top) for the ∼1000 RR Lyrae stars observed in the bulge compared to those of the BRAVA giants at the b = −4°, −6°, and −8° strips (Kunder et al. 2012, Kunder et al. 2016). The bulge model showing these observations are consistent with a bulge formed from the disk is represented by the dashed lines (Shen et al. 2010). The RRLs have kinematics clearly distinct from the bulge giants, and are a non-rotating population in the inner Galaxy.]
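Because an RRL's measured velocity swings by tens of km/s over a pulsation cycle, a systemic (center-of-mass) velocity has to be recovered from sparse epochs. Below is a minimal sketch of the idea, using a crude sinusoid as a stand-in for the empirical radial-velocity templates that surveys like BRAVA-RR actually fit; the amplitude, phase convention, and epoch values are illustrative assumptions only.

import numpy as np

def center_of_mass_rv(phases, rvs, rv_amplitude):
    # Subtract the pulsation component predicted by a template at each
    # pulsation phase, then average the residual systemic velocities.
    # A sinusoid is a placeholder for a real RRab velocity template.
    phases = np.asarray(phases, dtype=float)
    pulsation = 0.5 * rv_amplitude * np.sin(2.0 * np.pi * phases)
    return float(np.mean(np.asarray(rvs, dtype=float) - pulsation))

# Three hypothetical epochs of one star (phase, observed RV in km/s)
print(center_of_mass_rv([0.10, 0.45, 0.80], [-62.0, -35.0, -11.0], 60.0))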
From radial velocities of about ∼1000 RRLs surveyed by BRAVA-RR in four 2-degree fields covering approximately Galactic latitudes and longitudes of −6° < b < −3° and −4° < l < 4°, it is evident that these old and metal-poor stars are kinematically distinct from the more metal-rich red giants in the BRAVA, GIBS, ARGOS and APOGEE surveys. The RRLs show null rotation and hot (high-velocity-dispersion) kinematics. In the ARGOS survey one also observes a slowly rotating metal-poor population, but these stars are believed to be contamination from disk and halo stars (as this is only seen at high Galactic latitude). In contrast, the RRLs are at low Galactic latitudes (|b| < 7°) and have more certain distance estimates, and the larger number statistics of the RRLs make this result quantifiable. The RR Lyrae stars trace an older, more spheroidal component in the inner Galaxy. The mass of this 'old' bulge is estimated to be ∼1% of the total central mass, broadly consistent with current bulge formation models, which predict that no more than ∼5% of the central mass can be in a merger-generated bulge (Shen et al., 2010; Ness et al., 2013a; Di Matteo et al., 2015). It may be that the RRL stars toward the bulge are actually an inner halo-bulge sample, as originally speculated in the early 1990s (e.g., Minniti, 1994) and as at least one RRL orbit toward the Galactic bulge seems to indicate (Kunder et al., 2015). Prompted by the results from the RRLs, Pérez-Villegas et al. (2017) carried out N-body simulations for the Milky Way to investigate the kinematic and structural properties of the old metal-poor stellar halo in the barred inner region of the Galaxy. They showed that the RR Lyrae population in the Galactic bulge may be the inward extension of the Galactic metal-poor stellar halo, and that especially the radial velocities of RRLs at the outer Galactic longitudes constrain a bulge/halo scenario. Unfortunately, the RRLs investigated in Kunder et al. (2016) are confined to the innermost 500 pc. This is where a slow-rotating component has the smallest velocity difference compared to the metal-rich bulge giants (∼25 km/s), and hence where population contamination from, e.g., the halo or thick disk could more easily mask the effects of rotation. Observations of RRLs at further longitudes in the bulge would allow us to distinguish between a bulge and a halo.

Future: Gaia

Gaia has begun collecting six-dimensional space coordinates for more than 1 billion stars in the Milky Way. The bulge is a difficult target for Gaia, due to the crowding and extinction, but Gaia will still impact bulge kinematics significantly. The Radial Velocity Spectrometer (RVS), which is the spectroscopic instrument for all objects down to G ∼ 16 mag, can cope with a crowding limit of 35,000 stars deg^-2 (Reylé et al., 2005). In denser areas, only the brightest stars are observed and the completeness limit will be brighter than 16 mag. Therefore, we can expect the brighter giants to be surveyed throughout the bulge with the RVS, but most of the red clump stars and RRLs will lack Gaia radial velocities. Nevertheless, the astrometric instrument has been designed to cope with object densities up to 750,000 stars per square degree and down to G ∼ 20 mag. Therefore, for a large area of the bulge, at least some of the horizontal branch will be reachable for useful proper motions, although no useful (5σ) parallaxes are expected for these stars. At end of mission, Gaia will have ∼54 Gaia transits covering the bulge (Clementini et al., 2016).
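The rotation and dispersion profiles discussed above are built from Galactocentric rather than heliocentric velocities. A minimal sketch of one commonly used conversion follows, assuming a flat rotation of 220 km/s and a solar peculiar motion of 16.5 km/s toward (l, b) = (53°, 25°); the exact constants vary between studies, so treat these values as illustrative.

import numpy as np

def galactocentric_rv(hrv, l_deg, b_deg, v_rot=220.0):
    # Remove the projection of the LSR rotation and of an assumed solar
    # peculiar motion from the heliocentric radial velocity (km/s).
    l, b = np.radians(l_deg), np.radians(b_deg)
    apex_l, apex_b = np.radians(53.0), np.radians(25.0)
    solar = 16.5 * (np.sin(b) * np.sin(apex_b)
                    + np.cos(b) * np.cos(apex_b) * np.cos(l - apex_l))
    return hrv + v_rot * np.sin(l) * np.cos(b) + solar

# e.g. a star at (l, b) = (3, -4) deg with heliocentric RV of -50 km/s
print(galactocentric_rv(-50.0, 3.0, -4.0))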
Figure 4 shows the proper motions of the giants in the direction of the bulge surveyed in APOGEE DR13, post- and pre-Gaia DR1. Before Gaia DR1 was available, the proper motions were not of a quality that allowed one to easily distinguish between field and bulge stellar populations. Color-magnitude diagrams can help in differentiating bulge stars from field stars, since bulge stars tend to be redder, but the large and variable extinction and the sheer number of field stars along the line of sight toward the bulge make color cuts not always reliable.

[Figure 4. Top: The color-magnitude diagram of ∼2500 APOGEE giants that have UCAC5 proper motions with uncertainties smaller than 2 mas/yr. Bottom: The UCAC5 (left) and UCAC4 (right) proper motions of the APOGEE giants with proper motion uncertainties smaller than 2 mas/yr.]

Already with Gaia DR1, a sharper kinematic view of the bulge than previously feasible is possible. With the precise positions of stars measured in Gaia DR1, significant improvements in the astrometric solutions were obtained, leading to the release of UCAC5. Proper motions now exist for over 107 million stars, with typical accuracies of 1 to 2 mas/yr (R = 11 to 15 mag) and about 5 mas/yr at 16th mag. With Gaia, we are approaching the possibility of using proper motion information to separate the high degree of disk contamination from the bulge.

2 Spatial structure of the RR Lyrae star population toward the Galactic bulge

RR Lyrae-type variable stars can be found everywhere in the Milky Way, but they are particularly numerous in the Galactic bulge. Historically, van Gent (1932, 1933) was the first who noticed that RRLs observed close to the central regions of the Milky Way concentrate toward the Galaxy center. More than a decade later, Baade (1946) found, in a relatively unobscured area today called "Baade's Window" in his honor, a strong predominance of RRLs, indicating the presence of Population II stars in the central area of the Milky Way. He assumed that the center of this population coincides with the Galactic center and assessed the distance to the center of the Galaxy using RRLs, obtaining a distance of 8.7 kpc (Baade, 1951). Until the early 1990s, about one thousand RRLs inhomogeneously distributed over the Galactic bulge were known. Following the advent of massive photometric surveys, particularly those focused on searches for microlensing events, the number of new RRLs toward the bulge has increased. 215 such objects were discovered during the first phase of the OGLE project (Udalski et al., 1992), conducted on the 1.0-m Swope telescope at Las Campanas Observatory, Chile, in the years 1992-1995. About 1800 RR Lyrae pulsators were detected by the MACHO microlensing survey (Alcock et al., 1995), which used the 1.27-m Great Melbourne Telescope at the Mount Stromlo Observatory, Australia, starting in 1992. Examination of the mean magnitudes and colors of the new pulsators found that the bulk of the population is not barred. Only stars located in the inner fields closer to the Galactic center (l < 4°, b > −4°) seem to follow the barred distribution observed for intermediate-age red clump giants. Minniti et al. (1998) used this sample to show that between about 0.3 kpc and 3 kpc from the Galactic center, the spatial density distribution of RRLs can be represented by a power law with an index of −3.0.
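As an illustration of what such a power-law index means in practice, the sketch below fits n(R) ∝ R^α to binned star counts by linear regression in log-log space; the bin centers and counts are synthetic placeholders generated around an α = −3 law, not survey data.

import numpy as np

# Synthetic bin centers (kpc) and counts around an alpha = -3 profile.
R = np.array([0.4, 0.7, 1.1, 1.7, 2.5])
rng = np.random.default_rng(1)
n = 1.0e3 * R ** -3.0 * (1.0 + 0.05 * rng.standard_normal(R.size))

# Linear fit in log-log space: log10(n) = alpha * log10(R) + const.
alpha, const = np.polyfit(np.log10(R), np.log10(n), 1)
print(f"power-law index alpha = {alpha:.2f}")  # recovers roughly -3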
Analysis of the data from the second phase of the OGLE project (OGLE-II), conducted on the dedicated 1.3-m Warsaw Telescope at Las Campanas Observatory in the years 1997-2000, brought a much larger set of 2713 RRLs (Mizerski, 2003). Based on the sample of 1888 fundamental-mode RRLs from OGLE-II, Collinge et al. (2006) robustly detected the signature of a barred structure in the old population within the inner ±3° of Galactic longitude. Later, about 3000 fundamental-mode RRLs from the MACHO database were used to investigate the metallicity distribution of these stars and the interstellar extinction toward the Galactic bulge; a search for evidence of the Galactic bar in these data found only a marginal signature of a bar at Galactic latitudes |b| < 3.5°. The absence of a strong bar in the RR Lyrae population clearly indicated that they represent a different population than the metal-rich bulge. However, the shape of this population was far from being fully known. More data, covering preferably the whole bulge area, were needed. In 2001, the OGLE project started its third phase with a new mosaic eight-CCD camera attached to the Warsaw Telescope. One of the OGLE-III results was the release of a collection of 16,836 RRLs found in an area of 69 deg^2, mostly south of the Galactic equator (Soszyński et al., 2011). The collection was composed of 11,756 fundamental-mode (RRab) stars, 4989 overtone pulsators (RRc), and 91 double-mode (RRd) stars. OGLE provided time-series photometry in two standard filters: V and I. This sample was promptly analyzed by Pietrukowicz et al. (2012), who demonstrated that the bulge RRLs form a metal-uniform population, slightly elongated in its inner part. The authors found that the photometrically derived metallicity distribution for RRab stars is sharply peaked at [Fe/H] = −1.02 ± 0.18 dex with a dispersion of 0.25 dex, on the Jurcsik (1995) metallicity scale. This result agreed very well with earlier estimates, since the Jurcsik (1995) scale is shifted roughly by +0.24 dex with respect to the Zinn & West (1984) scale. Pietrukowicz et al. (2012) also estimated the distance to the Milky Way center based on the bulge RRLs to be R0 = 8.54 ± 0.42 kpc. Here, the theoretical period-luminosity-metallicity (PLZ) relations in the V and I bands published by Catelan et al. (2004) were used; the zero points of these relations were calibrated to the data obtained for the well-studied representative globular cluster M3 (Catelan, 2004). In their analysis, Pietrukowicz et al. (2012) made a simple assumption of a linear relation between the I-band extinction A_I and the reddening E(V−I). At that time it was the only reasonable way to deredden the mean magnitudes of the RRLs. They showed that, for RRab as well as RRc stars, in the inner regions (|l| < 3°, |b| < 4°) the old population indeed tends to follow the barred distribution of the bulge red clump giants. A year later, Dékány et al. (2013) combined optical and near-infrared data for the OGLE-III bulge RRLs to study the RRL spatial distribution. The authors used mean I-band magnitudes from OGLE and Ks-band magnitudes from the near-infrared VISTA Variables in the Vía Láctea (VVV) survey (Minniti et al., 2010). VVV was one of the ESO (European Southern Observatory) public surveys carried out on the 4.1-m Visible and Infrared Survey Telescope for Astronomy (VISTA) in the years 2010-2015.
Observations were taken in the ZYJHKs filters and included the Milky Way bulge and an adjacent section of the Galactic plane, covering a total area of about 562 deg^2 (Saito et al., 2012). The monitoring campaign was conducted only in the Ks band. The approach applied by Dékány et al. (2013) is expected to bring more precise results than ones based on optical data alone. That is because PLZ relations have a decreasing metallicity dependence toward longer wavelengths, and measurements in near-infrared wavebands are much less sensitive to interstellar reddening than optical ones. Dékány et al. (2013) concluded that the population of RRLs does not trace a strong bar, but has a more spheroidal, centrally concentrated distribution with only a mild elongation in its very center, at an angle i = 12.5° ± 0.5° with respect to the line of sight from the Sun to the Galactic center. The fourth phase of the OGLE project (OGLE-IV), which was launched in 2010, covers practically the whole bulge in the V and I passbands. With the installation of a 32-CCD mosaic camera with a total field of view of 1.4 deg^2, OGLE became a truly wide-field variability survey. The OGLE-IV collection of RRLs toward the Galactic bulge was released by Soszyński et al. (2014). This collection contains data on 38,257 variables detected over 182 deg^2: 27,258 RRab, 10,825 RRc, and 174 RRd stars. The survey also includes the central part of the Sagittarius Dwarf Spheroidal Galaxy with the globular cluster M54 in its core. Analysis of this set of data was also undertaken by the OGLE team and presented in Pietrukowicz et al. (2015). For practical reasons the analysis was based only on RRab-type pulsators, and the part closest to the Galactic plane (|b| < 3°) was avoided. RRab stars are more numerous. On average, they are intrinsically brighter in the I-band and have higher amplitudes than RRc stars. Extremely important is that RRab variables, with their characteristic saw-tooth-shaped light curves, are harder to overlook than RRc stars with their nearly sinusoidal light curves. This makes searches for this type of variable more likely to yield better completeness ratios. Another very practical property of RRab stars is that, based on the pulsation period and the shape of the light curve, one can assess the metallicity of the star (Jurcsik, 1995; Jurcsik & Kovács, 1996). Pietrukowicz et al. (2015) found again that the spatial distribution of the inner bulge RRLs traces closely the barred structure formed by the intermediate-age red clump giants. According to the most recent models of the Galactic bar, it is close to being a prolate ellipsoid. Based on OGLE-III data, Cao et al. (2013) found the following axis ratios and inclination of the major axis: 1.0:0.43:0.40, i = 29.4°. A similar result was obtained by Wegg & Gerhard (2013) using VVV red clump data: 1.0:0.63:0.26, i = 27° ± 2°. This time in their analysis, Pietrukowicz et al. (2015) dereddened the mean I-band magnitudes of the RRLs using a new relation derived by Nataf et al. (2013). The relation was based on optical measurements from OGLE-III and near-infrared measurements from 2MASS and VVV for bulge red clump giants. After this correction, the obtained distance distribution to the bulge RRLs turned out to be smoother in comparison with the previously used simple linear relation. They found the maximum of the distribution, or the distance to the Galactic center, at R0 = 8.27 ± 0.01(stat) ± 0.40(sys) kpc, which is in very good agreement with estimates from other measuring methods.
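A compressed sketch of this distance pipeline (photometric metallicity from the period and light-curve shape, dereddening of the mean I magnitude, then a PLZ relation) is given below. The Jurcsik & Kovács (1996) metallicity calibration and the Catelan et al. (2004) I-band PLZ coefficients are quoted from memory and should be checked against the original papers; the [Fe/H]-to-log Z conversion, the extinction coefficient R_I, and the input values are illustrative assumptions.

import numpy as np

def feh_from_lightcurve(period, phi31):
    # Photometric [Fe/H] for an RRab star from its period (days) and the
    # Fourier phase combination phi31 of the V-band light curve.
    return -5.038 - 5.394 * period + 1.345 * phi31

def rrl_distance_kpc(mean_I, ev_i, period, phi31, R_I=1.2):
    feh = feh_from_lightcurve(period, phi31)
    logZ = feh - 1.765                        # scaled-solar conversion (assumed)
    M_I = 0.471 - 1.132 * np.log10(period) + 0.205 * logZ
    I0 = mean_I - R_I * ev_i                  # dereddened mean I magnitude
    return 10.0 ** ((I0 - M_I - 10.0) / 5.0)  # from mu = 5 log10(d/pc) - 5

# A hypothetical bulge RRab star
print(rrl_distance_kpc(mean_I=15.3, ev_i=0.8, period=0.55, phi31=5.1))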
Pietrukowicz et al. (2015) showed that the spatial distribution of the bulge RRLs has the shape of a triaxial ellipsoid with proportions 1:0.49:0.39, with the major axis located in the Galactic plane and inclined at an angle i = 20° ± 3° to the Sun-(Galactic center) line of sight (see Figure 5). The differences between the Dékány et al. (2013) results and those from OGLE-IV are the following: (1) Dékány et al. (2013) used a smaller sample of about 7700 RRab stars from the previous phase of the OGLE survey (OGLE-III), while the OGLE-IV collection (Soszyński et al., 2014) contains nearly 27,300 pulsators of this type. (2) The Dékány et al. (2013) results are based on a lower number of collected Ks-band measurements per light curve (about 40 by 2013), which could affect the average infrared brightness of the variables and could slightly smear the observed structure. However, the amplitude of variation in the Ks-band is a factor of ∼3 smaller than that in the I- or V-band. (3) Because the Dékány et al. (2013) analysis also uses infrared magnitudes, the dereddening process for the RRLs differs from that adopted in Pietrukowicz et al. (2015). Resolution of the discrepancy between the OGLE-IV RRLs presented in Pietrukowicz et al. (2015) and the VVV RRLs presented in Dékány et al. (2013) is ongoing. The obtained sharp ellipsoidal shape does not depend on the real final distance to the studied objects. The true inclination angle as well as the axis ratios may be slightly different than reported. These values may also change with galactocentric distance. This will be known once almost all RRLs (or at least the RRab-type variables) are detected from the inner bulge to the outer Galactic halo.

[Figure 5 (Pietrukowicz et al., 2015). Upper panel: Constant surface density lines in the sky are well represented by ellipses with a mean flattening f = 0.33 ± 0.03. Middle panel: The maxima of the density distributions along four selected lines of sight clearly get closer to us with increasing Galactic longitude. This strongly indicates the presence of a tilted axis in the plane. Lower panel: In the projection onto the Galactic plane, points of the same density level form inclined ellipses. Conclusion: the old bulge population has the shape of a triaxial ellipsoid with the major axis inclined to us at an angle i = 20° ± 3°.]

Unfortunately, searches for RRLs in obscured Galactic plane regions are very difficult. That is because the near-infrared light curves of these pulsating stars often have a symmetric, nearly sinusoidal shape. If the number of data points per light curve is small and the time coverage too short, RRLs can be easily confused with other variables, particularly with contact eclipsing binaries and spotted variables. However, the first detection of RRLs in the vicinity of the Galactic center, in the so-called nuclear bulge, has been made (Dong et al., 2017). A clear result from the analysis of the OGLE-IV bulge RRab variables is that their spatial density distribution in the galactocentric distance range from about 0.2 kpc to 2.8 kpc can be described by a single power law with an index of −2.96 ± 0.03. Pietrukowicz et al. (2015) were also not able to see an X-shaped structure in the RRLs as is observed in the case of the bulge red clump giants (Nataf et al., 2010). This is expected, as it was found that only metal-rich bulge populations have this feature (Ness et al., 2012). Another discovery by Pietrukowicz et al.
(2015) is that RRab stars form two (or even more) very close sequences in the period-amplitude (or Bailey) diagram. This is interpreted as the presence of multiple old populations, likely the result of mergers in the early history of the Milky Way. So far, there are no hints that the two major old populations have different structures. They seem to be well mixed together. Lee & Jang (2016) suggest that the observed period shift between the sequences can be explained by a small difference in the helium abundance. Recently, Pérez-Villegas et al. (2017) used N-body simulations to investigate the structural and kinematic properties of the old population in the barred inner region of the Galaxy. They showed that the RR Lyrae population in the bulge is consistent with being the inward extension of the Galactic metal-poor stellar halo, as suggested by Minniti et al. (1998) and Pietrukowicz (2016). They followed the evolution of the metal-poor population through the formation and evolution of the more massive bar and boxy/peanut bulge and found the density distribution to change from oblate to triaxial. They found that at the final time of the simulations (after 5 Gyr), the axis ratios of the triaxial old stellar halo in its inner part reached b/a ∼ 0.6 and c/a ∼ 0.5. The ratios increase with the distance from the center to roughly 1.0 and 0.7, respectively, at a distance of 5 kpc. These results are very consistent with the observations of the bulge RRLs from OGLE-IV and also with spectroscopic observations of old stars in the Milky Way halo by the SDSS survey. According to the latter studies, the Galactic halo has an oblate shape (Jurić et al., 2008; Carollo et al., 2008; Kinman et al., 2012). Ongoing photometric surveys, such as Gaia, VVV eXtended (VVVX) and OGLE-IV with extended coverage of the Galactic bulge and disk, will continue to complete the picture of the old bulge population drawn by RRLs.

3 The Galactic bulge: 3D structure, chemical composition, and age traced by its old stellar population

The formation and evolution of galaxies is still a heavily debated question of modern astrophysics. As one of the major stellar components of the Milky Way, the bulge provides critical and unique insights into the formation and evolution of the Galaxy, as well as of external galaxies. Indeed, with the current observational facilities, it is the only bulge in which we are able to resolve stars down to the old main sequence turnoff, hence allowing accurate studies of the stellar populations in almost every evolutionary stage. The physical, kinematic and chemical properties of the stellar populations in the bulge allow us to discriminate among various theoretical models for the formation and evolution of bulges at large, setting tight constraints on the role that different processes (i.e. dynamical instabilities, hierarchical mergers, gravitational collapse) may have played. However, this advantage comes with the need to cover a large area of the sky (∼500 deg^2). Therefore, to understand the global properties of the bulge one should look for reliable tracers of distance and age that can be easily observed in any region of the sky in the bulge direction. In this framework, Red Clump stars and RRLs play a crucial role, and indeed over the decades studies of these stars have been essential to build our current knowledge of the Galactic bulge.
The present section, put together by Elena Valenti, is not intended to be a complete review of the Galactic bulge properties, but an overview of the bulge structure, chemical composition, and age based on the observational results on RC stars and RRLs. The age-metallicity relation and star formation history of the RCs in the bulge are not well known, so an error of up to ∼0.3 mag can be introduced when trying to use these stars to derive a distance. At the distance of the bulge this translates to ∼1 kpc. However, considering that overall the bulge is metal-rich and old (≳10 Gyr), one could use the corresponding population-effect correction and therefore reduce the error on the distance. RRLs represent, on the other hand, much more accurate distance candles. Unlike the RC stars, they unambiguously trace the oldest stellar population of any given complex system (see §2 above by P. Pietrukowicz). In §3.1, a global view of the three-dimensional structure of the bulge is presented, addressing the observational evidence that leads to the determination of the bar properties, the stellar density and the mass. The chemical composition and age of the bulge stellar populations are reviewed in §3.2 and §3.3, respectively. Finally, §3.4 summarizes the main stellar properties that, combined, define our current knowledge of the Milky Way bulge, and highlights the pieces of evidence that are still lacking in order to improve and possibly complete the global picture. Note that occasionally up-to-date versions of relevant figures obtained using state-of-the-art observational data are presented. Today it is well known that the Milky Way is a barred galaxy. Although the first observational evidence of the presence of a bar in the innermost region of the Galaxy was presented by Blitz & Spergel (1991), using the stellar density profile at 2.4 µm of Matsumoto et al. (1982), its existence was hypothesized nearly 20 years earlier. Indeed, to explain the departures from circular motions seen in the HI line profile at 21 cm, de Vaucouleurs (1964) suggested for the first time that the Milky Way could host a bar in its inner regions. Since then, over the decades, many different tracers have been used to confirm the presence of the bar and to constrain its properties: e.g. gas kinematics (Binney et al., 1991), stellar surface profiles (Weiland et al., 1994; Dwek et al., 1995; Binney et al., 1997), microlensing experiments (Udalski et al., 2000; Alcock et al., 2000), and OH/IR and SiO maser kinematics (Habing et al., 2006). Still, the strongest observational evidence for the presence of the bar comes from the use of RC stars as standard candles to deproject the stellar density distribution in the Galaxy's inner region. Using the color-magnitude diagram (CMD) derived from the OGLE (Udalski et al., 1992) photometry in Baade's Window (l = −1°, b = −3.9°) and in two additional fields at (±5°, −3.5°), it was found that the mean RC magnitude at positive longitudes was brighter than that observed at negative longitudes. Under the assumption that there is no continuous metallicity and age gradient along the longitude, the observed change in mean RC magnitude across the fields was interpreted in terms of distance: stars at positive longitudes are brighter, hence closer, than those at negative longitudes. Using a triaxial model for the bulge, a bar pivot angle of Θ = 45° was derived.
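To make the geometry explicit, the toy sketch below converts dereddened mean RC magnitudes in two fields at l = ±5° into line-of-sight distances and a bar viewing angle. The assumed RC absolute magnitude, the input magnitudes, and the helper names are illustrative; real analyses fit full triaxial density models rather than two sight lines.

import numpy as np

def rc_distance_kpc(mean_rc_mag, M_RC=-0.25):
    # Distance from a dereddened mean RC magnitude, for an assumed
    # RC absolute magnitude M_RC (population corrections ignored).
    return 10.0 ** ((mean_rc_mag - M_RC - 10.0) / 5.0)

# Hypothetical dereddened mean RC magnitudes at l = +5 and l = -5 deg.
d_plus, d_minus = rc_distance_kpc(14.28), rc_distance_kpc(14.62)

# Place both fields in the Galactic plane: x along the Sun-center line.
l = np.radians(5.0)
x_p, y_p = d_plus * np.cos(l), d_plus * np.sin(l)
x_m, y_m = d_minus * np.cos(l), -d_minus * np.sin(l)

# Angle between the near/far-side axis and the line of sight.
theta = np.degrees(np.arctan2(abs(y_m - y_p), x_m - x_p))
print(f"bar viewing angle ~ {theta:.0f} deg")

With these placeholder magnitudes the angle comes out near the pioneering Θ = 45° value quoted above; in practice the result is sensitive to the adopted M_RC and to extinction corrections.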
Following this pioneering work, many studies (see Stanek et al., 1997; Bissantz & Gerhard, 2002; Babusiaux & Gilmore, 2005; Benjamin et al., 2005; Nishiyama et al., 2005; Rattenbury et al., 2007; Lopez-Corredoira et al., 2007; Cabrera-Lavers et al., 2008; Cao et al., 2013; Wegg & Gerhard, 2013) have used RC stars to constrain triaxial bar models. Table 1 lists the axis scale lengths and orientation angles of the bar as derived by Rattenbury et al. (2007), Cao et al. (2013), and Wegg & Gerhard (2013), which, among all similar studies, are those considering the largest bulge area, therefore possibly providing the most accurate estimates of the bar's physical properties. Owing to the RC star distribution across the bulge, we now know that the bar has a boxy/peanut/X-shape structure in its outer regions: a typical characteristic of all barred galaxies when seen edge-on (see Laurikainen et al., 2014; Laurikainen & Salo, 2017, for a recent review). McWilliam & Zoccali (2010) were the first who noticed that the distribution of the RC stars in some fields in the outer regions (|b| > 5°) along the bulge minor axis (l = 0°) was bimodal, suggesting the presence of a double RC. The observed split in the RC mean magnitude was then confirmed by Nataf et al. (2010) using OGLE photometry. The authors explained the split in the RC as the signature of two southern arms of an X-shape structure crossing the line of sight. An alternative explanation is that the double RC phenomenon is a manifestation of the multiple populations observed in globular clusters (GCs) in the metal-rich regime (Lee et al., 2015; Joo et al., 2017). Shortly after, the 2MASS-based 3D map of the RC distribution over a bulge area of 170 deg^2 by Saito et al. (2011) confirmed the presence of the X-shape and showed that the two RC over-densities were only visible in the outer bulge (i.e. b < −5° and b > 5°) along the deprojected minor axis (|l| ≤ 5°, see also Figure 6). Thanks to the superior quality, in terms of photometric depth and spatial resolution, of the near-IR VISTA Variables in the Vía Láctea survey (VVV; Minniti et al., 2010; Saito et al., 2012), Wegg & Gerhard (2013) modelled the observed RC distribution across the whole bulge area (∼300 deg^2), thus providing the first complete map of its X-shape structure. Although not specifically obtained through the study of RC stars, it is worth mentioning in this context the latest work by Ness & Lang (2016) based on WISE images, in which the X-shape nature of the Milky Way bulge revealed itself unquestionably (see their Figure 2). There is now a general consensus that the majority of the observed Milky Way bulge structure is a natural consequence of the evolution of the bar. The bar heats the disk in the vertical direction, giving rise to the typical boxy/peanut shape. Dynamical instabilities cause bending and buckling of the elongating stellar orbits within the bar, resulting in an X-shape when seen edge-on (Raha et al., 1991; Merritt & Sellwood, 1994; Patsis et al., 2002; Athanassoula, 2005; Bureau & Athanassoula, 2005; Debattista et al., 2006). However, the possible presence of a metal-poor spheroid embedded in the boxy/peanut bulge seems to be suggested by a number of fairly recent studies investigating the correlation between the chemical and kinematic properties of RC stars, as well as the spatial distribution of other stellar tracers, such as RRL, Mira and Type II Cepheid (T2C) variables.
While the reader is referred to §1 above for an overview of the bulge kinematics, it is worth mentioning here that metal-poor ([Fe/H] ≲ 0) stars in Baade's Window show a negligible vertex deviation (l_v ∼ 0°), consistent with a spheroid. Conversely, metal-rich ([Fe/H] ≳ 0) stars exhibit a significant vertex deviation (l_v ∼ 40°), indicative of the elongated motions typical of galactic bars (Babusiaux et al., 2010). In addition, based on the spectroscopic data provided by the ARGOS survey (i.e. ∼14,000 RC stars; Freeman et al., 2013), Ness et al. (2012) demonstrated that only the distribution of metal-rich stars shows the split in the RC, a univocal signature of the X-shape of the bar, whereas metal-poor stars show only a single RC peak. The scenario in which the metal-poor bulge stars, and therefore possibly the oldest population, do not trace the bar structure is also supported by the observed distribution of Mira (Catchpole et al., 2016), RRL (Dékány et al., 2013), and T2C (Bhardwaj et al., 2017) variables. In particular, by using a combination of near-IR and optical data from VVV and OGLE-III, Bhardwaj et al. (2017) found that their sample of bulge T2Cs shows a centrally concentrated spatial distribution, similar to that of the metal-poor RRLs from OGLE-IV and VVV. However, Mira stars have also been found to be consistent with belonging to a boxy bulge (López-Corredoira, 2017). It should be noted that the spatial distribution of RRLs in the bulge is still somewhat debated, given that Pietrukowicz et al. (2015), by using OGLE-IV data, did confirm the presence of a bar in their spatial distribution, although the extension, pivot angle and ellipticity of the structure traced by the RRLs are significantly smaller than those traced by RC stars (see §2 above).

The stellar density map

More than two decades ago, Weiland et al. (1994) presented the first low angular resolution map at 1.25, 2.2, 3.5 and 4.9 µm of the whole Milky Way bulge, based on the COBE/DIRBE data. After correction for extinction and subtraction of an empirical model for the Galactic disk, the derived surface brightness profile of the bulge was used to study its global morphology and structure. However, more detailed investigations of the innermost region of the bulge (i.e. |b| ≤ 5°, |l| ≤ 10°) have become possible only recently, thanks to the VVV survey. Valenti et al. (2016) presented the first stellar density profile of the bulge (see Figure 7) reaching latitude b = 0°. Specifically, by counting RC stars within the CMD obtained from accurate PSF-fitting photometry of VVV data, previously corrected for extinction by using the reddening map of Gonzalez et al. (2012), they derived a new stellar density map that made it possible to investigate the morphology of the innermost regions with unprecedented accuracy (a schematic version of this counting procedure is sketched below). As seen from Figure 7, the vertical extent of the isodensity contours is larger at l > 0°. This is an expected consequence of a bar whose closest side points towards positive longitudes. The high stellar density peak in the innermost region (i.e. |l| ≤ 1° and |b| ≤ 1°) spatially matches the σ-peak found by GIBS, the kinematic survey of RC stars (Zoccali et al., 2014), and by Valenti et al. (2018, in prep). The stellar density maximum is found in the region |l| ≤ 1° and |b| ≤ 0.5°, and is slightly asymmetric with respect to the bulge minor axis. The observed overall elongation of the density contours towards negative longitudes is, nevertheless, found to be progressively less pronounced when moving closer to the Galactic plane.
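The bookkeeping behind such RC-based density maps can be summarized in a few lines; the sketch below uses a mock catalog and a hypothetical RC magnitude box, so it only illustrates the counting step, not the actual Valenti et al. (2016) pipeline.

```python
import numpy as np

# Mock catalog: Galactic coordinates and dereddened Ks magnitudes
rng = np.random.default_rng(0)
n = 200_000
l = rng.uniform(-10.0, 10.0, n)    # deg
b = rng.uniform(-9.5, 9.5, n)      # deg
ks0 = rng.normal(13.0, 0.8, n)     # mag (toy luminosity function)

# Select candidate RC stars in a magnitude box around the clump
rc = (ks0 > 12.5) & (ks0 < 13.5)

# Count RC stars in (l, b) bins to build a surface density map
l_bins = np.arange(-10.0, 10.5, 0.5)
b_bins = np.arange(-9.5, 10.0, 0.5)
density, _, _ = np.histogram2d(l[rc], b[rc], bins=(l_bins, b_bins))
print(density.shape)  # (40, 38) map, ready for isodensity contours
```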
As shown in Figure 8, the deprojected density maps indeed become more and more spherically concentrated when RC stars at lower latitudes are considered, suggesting the presence of a quasi-axisymmetric structure in the innermost region. This evidence supports the claim by Gerhard & Martinez-Valpuesta (2012) that the variation in the RC slope at b = ±1° and |l| ≤ 10°, observed by Nishiyama et al. (2005, by using OGLE data) and by Gonzalez et al. (2011, based on VVV photometry) and interpreted by these authors as evidence for the presence of a nuclear bar in the inner bulge, is instead caused by a variation of the stellar density distribution along the line of sight. By using a combination of UKIDSS, VVV, 2MASS and GLIMPSE data, Wegg et al. (2015) presented the largest (i.e. ∼1900 deg²) density map of the Milky Way bulge and long bar based on RC stars obtained so far (see Figure 9). A particularly interesting result of this study is that the orientation angle of the long bar, constrained by the best-fit model to the observed density map, is consistent with that of the triaxial bulge (i.e. the main bar, 28°-33°). In other words, unlike several previous studies suggesting the presence of a long bar tilted by ∼45° with respect to the Sun-Galactic centre line (i.e. in addition to the main bar; Benjamin et al., 2005; López-Corredoira et al., 2007; Cabrera-Lavers et al., 2007; Vallenari et al., 2008; Churchwell et al., 2015; Amôres et al., 2013), the long bar, with a semimajor axis of ∼4.6 kpc as modelled by Wegg et al. (2015), appears to be the natural extension of the bulge main bar to higher longitudes. This result nicely fits the scenario proposed by Martinez-Valpuesta & Gerhard (2011), in which the main bar and the long bar are two parts of the same structure. According to this, the boxy/peanut-shaped bulge would then simply be the central vertical extension of a longer and flatter single bar.

The mass

One of the fundamental questions of Galactic astronomy is the determination of the mass distribution in the Milky Way, because the mass of a given system is likely the key element driving its evolution (see Courteau et al., 2014, for a detailed review). In this context, over the past three decades many studies have addressed this specific question, deriving the dynamical mass of the Galactic bulge either by matching the Galactic rotation curve inside ∼1 kpc, or by measuring the kinematics (i.e. velocity and velocity dispersion) of a variety of different tracers (stellar and gaseous). By using an observed luminosity profile (generally in the K band), one can then derive the M/L ratio, which ultimately leads to the mass of the bulge (Sellwood & Sanders, 1988). Historically, the M/L ratio derived from fitting the rotation curve has often been found to be ∼2-3, while the M/L ratio derived from stellar kinematics is ∼1. This discrepancy has often been explained with the argument that the mass derived from the rotation curve is overestimated because of the presence of large non-circular motions that distort the rotation curve (Sofue, 1990; Yoshino & Ichikawa, 2008). Indeed, the accuracy of the measurement of the rotation curve strongly depends on the accuracy of the distance to the Galactic centre and of the solar circular velocity. In addition, the rotation curves derived from Hα and, in general, from other gas tracers (i.e. HI, HII) are often influenced by non-circular components (i.e. inflow, outflow, streaming motions) rather than ordered (regular) circular motion. This inevitably led to very different results for the mass of the bulge.
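The final step of the M/L route to the mass is simple arithmetic. The sketch below only shows how the kinematic versus rotation-curve M/L values quoted above propagate into the mass; the K-band luminosity is a hypothetical value chosen for illustration.

```python
# Hypothetical K-band luminosity of the bulge, in solar units (illustration only)
L_K = 1.0e10

# M/L values quoted in the text: ~1 from stellar kinematics, ~2-3 from rotation curves
for label, ml in [("stellar kinematics", 1.0),
                  ("rotation curve, low", 2.0),
                  ("rotation curve, high", 3.0)]:
    print(f"M/L = {ml:.0f} ({label}): M_bulge ~ {ml * L_K:.1e} Msun")
```

The factor of 2-3 spread between the two diagnostics maps one-to-one onto the spread in the derived masses, which is precisely the tension described above.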
Chemin et al. (2015) recently reviewed the uncertainties and biases affecting the determination of the Milky Way rotation curve and the consequent effects on the derived mass distribution. Table 2 lists a number of studies, together with the adopted observables/diagnostics, that over the years have tackled the problem of deriving the bulge mass. Although the reader should refrain from considering Table 2 a complete compilation, it is evident that the large spread in the listed values makes the mass of the bulge still poorly constrained. Most estimates cluster around 1.5 × 10^10 M_⊙; however, a few authors found values as large as 3 × 10^10 M_⊙ (Sellwood & Sanders, 1988) or as small as 0.6 × 10^10 M_⊙ (Robin et al., 2012). In this context, a special mention is deserved by the two most recent works of Portail et al. (2015) and Valenti et al. (2016), which, although following different methodologies, both use the distribution of RC stars as derived from VVV photometry. Portail et al. (2015) used made-to-measure dynamical models of the bulge, with different dark matter haloes, to match the stellar kinematics from BRAVA (Kunder et al., 2012) and the 3D surface brightness profile derived by Wegg & Gerhard (2013). Their best-fit model is consistent with the bulge having a dynamical mass of 1.8 ± 0.07 × 10^10 M_⊙, with a dark matter content that varies with the adopted IMF. When the observed IMF of Zoccali et al. (2000) is considered, about 0.7 × 10^10 M_⊙ (i.e. 40%) of dark matter is required in the bulge region. In addition, they estimated that the total stellar mass involved in the peanut shape accounts for ∼20% of the total stellar bulge mass. On the other hand, by scaling the observed VVV RC stellar density map (see Figure 7) with the observed bulge luminosity function from Zoccali et al. (2000) and Zoccali et al. (2003), Valenti et al. (2016) provided the first empirical, hence model-independent, estimate of the bulge stellar mass. From the observed stellar mass profile shown in Figure 10, the authors estimated that the mass in stars and remnants of the Milky Way bulge in the region |b| < 9.5° and |l| < 10° is 2.0 ± 0.3 × 10^10 M_⊙. These two latest estimates are compatible within the quoted errors, and they might be even closer considering that the empirical estimate of Valenti et al. (2016) refers to a larger volume that is not limited along the line of sight.

The chemical composition

Because the chemical content of any given stellar system retains crucial information for unveiling its origin, formation and evolution (McWilliam, 2016), after the pioneering works of Frogel et al. (1984) and Rich (1988) several studies over the decades have focussed on the determination of the metallicity and abundance distributions of bulge stars in order to understand how the bulge formed. What follows is not meant to be a comprehensive compilation of all such studies, which would deserve an entire review of its own, but rather a summary of our current knowledge of the chemical composition of the bulge based on the latest results from RC stars and RRLs.

The metallicity distribution

As emphasized by Matteucci et al. (1999) and Ferreras et al. (2003), the peak and shape of the metallicity distribution function (MDF) provide important constraints on the IMF and star formation efficiency, as well as on the possible gas infall timescale. However, until less than a decade ago, accurate MDFs based on high-resolution spectroscopy were available only for a handful of sparse fields, mainly located along the bulge minor axis (see e.g.
McWilliam & Rich, 1994; Fulbright et al., 2007; Rich et al., 2007; Hill et al., 2011; Rich et al., 2012, and references therein). The derived MDFs were consistent across the various studies, which all agreed in finding the bulge population to be on average metal-rich, although spanning a fairly broad metallicity range (e.g. −1.5 ≲ [Fe/H] ≲ +0.5). Our comprehension of the MDF of the bulge has improved tremendously thanks to three spectroscopic surveys, namely ARGOS (Freeman et al., 2013), GIBS (Zoccali et al., 2014), and Gaia-ESO (Rojas-Arriagada et al., 2014), which together have provided spectra for more than 20,000 RC stars across most of the inner and outer bulge regions (see Table 3 for further details).

[Table 3: For each spectroscopic survey, the total number of stars, the total number of targeted fields and the region within the bulge covered by the observations are given; e.g. ARGOS: ∼14,000 RC stars in 27 fields.]

The MDFs derived by these surveys confirmed previous results, although extending them over a much larger area. The mean bulge population across all fields is metal-rich, with a small fraction of stars with [Fe/H] > +0.5 dex and [Fe/H] < −1.5 dex. Only in the outermost fields (b ≤ −7°, |l| > 10°) observed by ARGOS does the MDF reach metallicities as low as ∼ −2.5 dex. In addition, a mild vertical gradient is found when considering the mean metallicity of each field, such that the metallicity increases moving inwards along the bulge minor axis, confirming what was suggested previously by Minniti (1995) and Zoccali et al. (2008). However, thanks to statistically robust target samples, a detailed study of the MDF shape has been possible for the first time, revealing the presence of multiple components. The observed overall metallicity gradient is therefore explained as a consequence of the presence of two (see Zoccali et al., 2017; Rojas-Arriagada et al., 2014) or more (see Ness et al., 2013a) components with different mean metallicities. As evident from Figure 11, the variations of the relative contribution of these components across the fields (i.e. the metal-rich component becoming progressively less prominent towards the outer regions) mimic the observed gradient. However, Zoccali et al. (2017) also found that at latitudes smaller than |b| = 3° the metal-poor component becomes important again (see the first two top panels of Figure 11): its relative fraction increases close to the plane. To further investigate the spatial distribution of the two components, they mapped their distribution by coupling the relative fractions derived by GIBS with the bulge stellar density from Valenti et al. (2016). The result, shown in Figure 12, demonstrates that the metal-poor component has a spheroid-like spatial distribution, versus the boxy distribution of the metal-rich component. In addition, the metal-poor component shows a steeper radial density gradient. Although, as mentioned before, the bulge kinematics is the subject of another paper in this volume (see §1 above by A. Kunder), here I will only briefly mention that the two components were also found to have different kinematics. Indeed, as already found by the BRAVA (Kunder et al., 2012) and ARGOS (Ness et al., 2013a) surveys, in the outer bulge (|b| > 4°) the metal-poor component has a higher radial velocity dispersion than the metal-rich one, at all longitudes. However, Zoccali et al. (2017) showed that this behavior is reversed in the inner bulge.
Specifically, the velocity dispersion of the metal-poor stars at b = −3.5° and −2° becomes similar to that of the metal-rich counterpart, and becomes progressively smaller at b = −1°. While the chemical abundances of RC stars in the bulge are among the topics that have received most attention in recent years, the number of studies addressing the chemical content of the oldest bulge population, such as the RRLs, is still very limited, perhaps mostly owing to the observational challenges that spectroscopic observations of RRLs face. As of today, there are no high- or medium-resolution spectroscopic measurements of a sizeable sample of RRLs in the bulge. In the K band, RRLs are in general about 0.5 mag fainter than RC stars; hence their brightness makes them suitable targets for high-resolution spectroscopy only with 4 m-class telescopes or larger, depending on the bulge region. In addition, because they are much less numerous than RC stars, and thus more sparsely distributed, RRLs are not even suitable targets for the vast majority of current multiplexing spectrograph facilities. An additional complication is the fact that the metallicity derived from line equivalent width measurements strongly depends upon the pulsation phase at which the star is observed. This necessarily requires a good knowledge of the variables. All of these factors make their observation very demanding in telescope time. As of today, the only spectroscopic study of a sizeable sample of bulge RRLs has been presented by Walker & Terndrup (1991), who derived the MDF of 59 RRLs in Baade's Window. The individual stellar metallicities were derived through the ∆S method (i.e. at low resolution; see Suntzeff et al., 1991, for a detailed description of the ∆S method), and their distribution was found to cluster around [Fe/H] = −1 dex. Although the MDF is relatively broad, spanning a range of about 1 dex (−1.7 ≲ [Fe/H] ≲ −0.5), its very sharp peak accounts for ≈80% of the entire sample. Based on the derived MDF, the authors concluded that the RRLs are produced by the metal-poor tail of the K giant distribution (see also Figure 11). Recently, Pietrukowicz et al. (2015) provided a photometric MDF based on more than 27,000 RRLs from the OGLE-IV catalogs, located in the bulge region at |l| ≲ 10° and b ≳ −8°. The photometric MDF is much broader than the spectroscopic one, as it spans mostly the range −2.5 ≲ [Fe/H] ≲ +0.5, although the peak is found at the same metallicity, [Fe/H] = −1 dex. The authors showed that there is no correlation between the distance and the shape of the MDF; however, they find a very mild, but statistically significant, radial metallicity gradient (i.e. the fraction of the metal-rich population increases towards the centre). Based on the analysis of the Bailey diagram (i.e. the period-amplitude diagram), the authors argue for the existence of two different populations of RRLs with likely different metallicities, similar to the bulge RC counterparts. However, it should be mentioned that, unlike what is observed in the MDF of RC stars, these two populations of RRLs with different metallicities probably do not change in relative fraction, given that the global MDF conserves its shape throughout the total covered bulge area. Moreover, because of the lack of spectroscopic measurements of RRLs in the metal-rich regime, [Fe/H] > −0.5 dex (see Walker & Terndrup, 1991), one should refrain from drawing any firm conclusion from the available RRL MDF.
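The decomposition of the MDF into a metal-poor and a metal-rich component, of the kind discussed above, can be sketched with a simple two-Gaussian mixture fit. The mock [Fe/H] values and component parameters below are illustrative choices, not survey data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Mock MDF: a metal-poor and a metal-rich component (illustrative parameters)
rng = np.random.default_rng(1)
feh = np.concatenate([rng.normal(-0.4, 0.25, 600),   # "metal-poor" stars
                      rng.normal(+0.3, 0.20, 900)])  # "metal-rich" stars

gmm = GaussianMixture(n_components=2, random_state=0).fit(feh.reshape(-1, 1))
for mean, weight in sorted(zip(gmm.means_.ravel(), gmm.weights_)):
    print(f"[Fe/H] = {mean:+.2f} dex, relative fraction = {weight:.2f}")
```

Repeating such a fit field by field is what reveals the variation of the relative fractions of the two components across the bulge.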
The α-element abundances

The detailed study of the chemical abundances and abundance patterns of bulge stars provides a unique tool to understand the chemical enrichment of the bulge, and therefore to set tight constraints on its formation scenario. The elemental abundance distributions, and the abundance ratios of certain critical elements such as the Fe-peak, CNO, and α-elements (i.e. those synthesized from α particles, such as O, Ne, Mg, Si, Ti, Ca and S), are particularly suitable for this purpose. Indeed, these elements are synthesized in stars of different masses, hence released into the interstellar medium on different timescales. Because most of the chemical information on RC stars comes from the analysis of the α-elements, what follows is a summary of the picture built upon those measurements. It is not meant to be a comprehensive review of the global chemical composition of the bulge, for which the reader is instead encouraged to refer to McWilliam (2016). As mentioned above, a particularly useful abundance ratio is [α/Fe]. Owing to the time delay in the production of the bulk of Fe and iron-peak elements (mostly due to SNe Ia, see Nomoto et al., 1984) relative to the α-elements (due to SNe II, see Woosley & Weaver, 1995), the [α/Fe] abundance ratio can be efficiently used as a cosmic clock (see e.g. McWilliam, 1997; Wyse, 2000, and references therein). For this reason, many studies in the past addressed this question, providing [α/Fe] ratios for relatively small samples of K and M giants in a few bulge regions (see McWilliam & Rich, 1994; Rich et al., 2005; Cunha & Smith, 2006; Fulbright et al., 2007; Lecureur et al., 2007; Rich et al., 2007; Meléndez et al., 2008; Alves-Brito et al., 2010; Hill et al., 2011; Rich et al., 2012; Bensby et al., 2013; Johnson et al., 2014; Bensby et al., 2017, and references therein). As was the case for the MDF, the advent of the recent spectroscopic surveys ARGOS and GIBS (Gonzalez et al., 2015) provided α-element abundances for thousands of RC stars over a large area, making it possible to study the [α/Fe] trends as a function of position in the bulge. All previous and very recent studies agree in finding the bulge to be α-enhanced with respect to the solar value, thus suggesting a fast bulge formation scenario. As shown in Figure 13, the α-element abundances of bulge stars with [Fe/H] < −0.3 are enhanced over iron by ∼0.3 dex, whereas metal-rich stars show a decrease in [α/Fe], reaching 0 for metallicities above the solar value. However, as discussed in the literature, the direct translation of this trend into absolute timescales is not easy, because the SNe Ia delay time can depend on different production channels. This is the reason why a relative approach, through the comparison of the [α/Fe] trends observed in different Galactic components, turns out to be more reliable. From the comparison between bulge RC stars and giants in the thin and thick disk (see Figure 13, right panels), the α-element enhancement of the bulge with respect to the thin disk is evident across most of the metallicity regime. At solar metallicity, the bulge and thin disk are both α-poor. On the other hand, the thick disk giants are found to be as α-enhanced as the bulge, although they never reach the high-metallicity tail of the bulge stars. A possible interpretation of these relative trends is that the metal-poor bulge population experienced a fast formation scenario similar to that of the thick disk, whereas the metal-rich bulge population underwent a more extended (i.e.
longer) star formation, on a timescale similar to that of the thin disk.

The age

An accurate dating of the bulge stellar component allows one to gauge at which lookback time (i.e., at which redshift) one should look for possible analogs of the Milky Way, when their bulge formation processes were about to start, well on their way, or even already concluded. Indeed, with an age of ∼10 Gyr or older, it is at z ≳ 2 that such analogs should be searched for, or at lower redshift if a significant fraction of the stellar component is found to be several Gyr younger (see Valenti et al., 2013, for a detailed discussion). However, dating bulge stars is a very complicated task, challenged by the stellar crowding, the patchy and highly variable extinction, the uncertainties in the distance modulus, the distance spread due to the spatial depth of the bulge/bar along the line of sight, the metallicity dispersion and, finally, the contamination by foreground disk stars. The combined contribution of all these factors prevents an accurate location, in terms of magnitude and color, of the main sequence turnoff (MSTO) of the bulge population, so far among the most reliable age diagnostics (see Renzini & Fusi Pecci, 1988, and §5 by G. Bono). Historically, the earliest age constraint, by van den Bergh & Herbst (1974) in the Plaut field along the bulge minor axis at b = −8° (∼1 kpc), indicated a globular cluster (GC) like age. Terndrup (1988) fit the photometry of other bulge fields at a range of latitudes with GC isochrones of varying metallicity but, lacking a secure distance for the bulge, derived only a weak age constraint (11-14 Gyr). Ortolani et al. (1995) solved the problem of contamination and distance uncertainties by comparing the bulge population with the clusters NGC 6528 and NGC 6553. Forcing the bulge field and cluster luminosity functions to match at the HB clump luminosity level, it was possible to show for the first time that the relative ages of the bulge and metal-rich cluster populations could not differ by more than 5%. Feltzing & Gilmore (2000) used HST-based photometry of Baade's Window and of another low-extinction field known as Sgr-I (i.e. at l = 1.25° and b = −2.65°) to argue that, while the density of bulge MSTO stars increases for fields closer to the centre, the foreground population does not change. They concluded that the bulk of the bulge population must therefore be old. The case for an old bulge has been further strengthened by later and more accurate photometric studies of different bulge fields, which tackled the problem of contamination by foreground disk stars either kinematically, by using proper motions, or statistically, by considering control disk fields. Table 4 lists the location of each observed field together with the adopted decontamination approach. As in all previous studies, Valenti et al. (2013) found that the bulk stellar population at the edges of the Milky Way bar is over ∼10 Gyr old (see Figure 14), with no obvious evidence of a younger population. This age is indistinguishable from the one reported for more central bulge fields, a few degrees from the Galactic centre or lying along the bulge minor axis. From the analysis of the MSTO in a kinematically decontaminated HST-based CMD, Clarkson et al. (2011) concluded that, once the blue straggler population is taken into account, a significantly younger (≲5 Gyr) population in the bulge can amount to at most 3.4%.
However, there is a clear discrepancy between the ages inferred from the location of the MSTO in the observed CMDs and those derived by the microlensing project of Bensby and collaborators, which estimates the age of individual stars from their effective temperature and gravity (i.e. from isochrones in the T_eff, log g plane) as obtained from high-resolution spectra. Indeed, based on a sample of 90 F and G dwarf, turnoff and subgiant stars in the bulge (i.e. |l| ≲ 6° and −6° < b < 1°) observed during microlensing events, Bensby et al. (2017) found that about 35% of the metal-rich stars ([Fe/H] > 0) are younger than 8 Gyr, whereas the vast majority of the metal-poor stars ([Fe/H] ≲ −0.5) are 10 Gyr or older. In addition, from the derived age-metallicity and age-α-element distributions, the authors concluded that the bulge must have experienced several significant star formation episodes, about 3, 6, 8 and 12 Gyr ago. As discussed by Valenti et al. (2013), each of the two approaches has its own pros and cons. The microlensing approach depends more heavily on model atmospheres, which may introduce systematics especially in the metal-rich regime, and it deals with small-number statistics. At the same time, it has the advantage that the metallicity of each individual star is very well constrained. Conversely, by dealing with a statistically significant number of stars, the traditional CMD method should in principle be able to reveal the presence of young populations. However, in this case the metallicities of individual stars are unknown; therefore one does not know whether, for instance, some of the stars above the MSTO of the Z = 0.060 isochrone (see Figure 14) are old and of lower metallicity, or whether they are metal-rich stars younger than 10 Gyr. The effect of the age-metallicity degeneracy, specifically in terms of the color spread of the MSTO in the observed CMDs, has been used by Haywood et al. (2016) to argue in favor of the scenario suggested by the microlensing results. In particular, Haywood et al. (2016) compared the MSTO color spread observed in the CMD of Clarkson et al. (2011) with that of synthetic CMDs, obtained under two scenarios corresponding to different age-metallicity relations (AMR). In scenario I, a simulated CMD was obtained by using the AMR presented by Bensby et al. (2013) (i.e. based on a total sample of 59 microlensed dwarfs), whereas for scenario II an AMR extending from [Fe/H] = −1.35 dex at 13.5 Gyr to [Fe/H] = +0.5 dex at 10 Gyr was adopted. When taking into account distance, reddening and metallicity effects, Haywood et al. (2016) showed that the MSTO color spread of a purely old stellar population would be wider than observed, which in turn appears to be consistent with the simulation obtained from scenario I. Unfortunately, what the Haywood et al. (2016) paper does not address is the fact that the simulation using the AMR of Bensby et al. (2013) produced a CMD that not only has a smaller MSTO color spread, like the observed one, but also shows a remarkable number of stars just above (i.e. brighter than) the MSTO, which are not matched by the observations (see their Figure 8). In this respect, the comparison between observations and simulations presented by Zoccali et al.
(2003) to infer the age of the bulge population would seem more appropriate, because the synthetic CMD was obtained by using the observed luminosity function, and therefore the comparison was performed so as to match not only the location and color spread of the MSTO, but also the number of stars at the MSTO level.

Summary and conclusions

Owing to the systematic and detailed study of RC star properties performed in the last decade by using wide-area photometric surveys, we have finally reached a good and fairly complete comprehension of the 3D structure of the Milky Way bulge. The bulge, referred to here as the region within the inner ∼3 kpc, is a bar with an orientation of ∼27° with respect to the Sun-Galactic centre line of sight, and whose near side points into the first Galactic quadrant. The bar has a boxy/peanut/X-shape structure in its outer regions, a morphology characteristic of bulges formed out of the natural evolution of disk galaxies as a consequence of disk dynamical instabilities and vertical buckling of the bar. The observed split in the mean RC magnitude distribution in the outer regions is interpreted by the dynamical models as a signature of the bar growth. In the innermost region (|l|, |b| < 2°), rather than a nuclear bar, there seems to be an axisymmetric high stellar density peak, which may instead be responsible for the observed change in the bar pivot angle. In addition, RC stars trace a thinner and longer structure with a semimajor axis of ∼4.6 kpc, known as the long bar, which according to the latest studies appears to be the natural extension of the bulge main bar to higher longitudes. The bulge is the most massive stellar component of the Galaxy, with a mass (M_B = 1.8-2.0 × 10^10 M_⊙) close to 1/5 of the total stellar mass of the Milky Way, and about ten times larger than the mass of the halo. The recent spectroscopic surveys of RC stars (ARGOS, GIBS, Gaia-ESO), together with the ongoing survey targeting K and M giants (i.e. APOGEE-North), have provided a comprehensive and detailed view of the chemical content of the stellar population over an area corresponding to more than 80% of the entire bulge. The emerging picture is that the bulge MDF as traced by the RC is much more complex than previously thought, and that the bulge hosts two populations with different mean metallicities (i.e. metal-poor and metal-rich), spatial distributions and kinematics. The metal-poor population, as traced by RC stars, RRLs and T2Cs, is more spherically concentrated, whereas the metal-rich RC component traces the boxy/peanut bar. The observed properties (i.e. spatial distribution and kinematics) of this metal-poor, possibly older, population do not necessarily imply the presence of a classical bulge (i.e. a merger-driven structure dominated by gravitational collapse) embedded in the boxy bulge. Indeed, the recent N-body simulations of Debattista et al. (2017) account for the presence of a spherically concentrated metal-poor population, as well as for the other observed trends in density, kinematics and chemistry, without invoking a composite bulge scenario (i.e. the coexistence of two structures, one merger-driven and one boxy-shaped, formed out of disk and bar evolution). According to Debattista et al. (2017), the observed properties of the Milky Way bulge stellar populations are consistent with a bulge formed from a continuum of disk stellar populations kinematically separated by the bar. Based on accurate abundance analyses of RC stars, the bulge shows the α-element enhancement typical of a fast formation process.
In particular, a possible interpretation of the observed relative trends of α-elements in the bulge, thin and thick disk is that the metal-poor bulge population experienced a fast formation scenario similar to that of the thick disk, whereas the metal-rich bulge population underwent a more extended (i.e. longer) star formation, on a timescale similar to that of the thin disk. The innermost and still poorly explored regions (i.e. |b| ≲ 1°) will soon be probed by new IR surveys planned for the near future (i.e. APOGEE-South and the Multi-Object Optical and Near-infrared Spectrograph at the VLT, MOONS), allowing us, for the first time, to complete the puzzle with a clear understanding of the chemical properties of the bulge as a whole with unprecedented accuracy. There is no doubt that the central regions of the Milky Way host an old stellar population. The strongest evidence is the presence of a prominent population of RRLs and T2Cs found by OGLE and VVV (Dékány et al., 2013; Pietrukowicz et al., 2015; Gran et al., 2016; Bhardwaj et al., 2017), which are by far the largest photometric campaigns for variable stars. Furthermore, an old age is also guaranteed by the existence of a bulge GC system (see e.g. Valenti et al., 2010; Bica et al., 2016, and references therein). However, what still remains to be firmly assessed is the contribution of intermediate-age and young (i.e. ≲5 Gyr) stars to the global bulge stellar population. The AMR proposed by Bensby et al. (2017) should either be confirmed on a statistically much more robust sample, or by using a methodology for the reconstruction of the star formation history more sophisticated than the approaches adopted so far. In particular, the comparison between observations and simulations should be performed by using as many features of the CMDs as possible (i.e. Gallart et al., 2005). In the coming years, the exquisite astrometry provided by the next Gaia data releases will most probably allow us to further refine the global picture of the bulge structure. Even though a large fraction of the bulge RC population is out of Gaia's reach because of crowding and high extinction, the information derived from RC stars in low-reddening regions can be used to obtain a very accurate distance map of the outer bulge regions. This can then be used as the reference frame upon which, through a differential analysis with the most obscured regions, the entire bulge distance and structure maps can be built. Finally, further efforts should be devoted to characterizing the chemical content of the RRLs and T2Cs, which among all tracers are those univocally and purely representing the oldest stars in the bulge. Indeed, accurate MDFs and elemental abundances from high- or medium-resolution spectroscopy for these types of stars are still missing, or largely insufficient. The future LSST project will provide positions, magnitudes and colors for thousands of variable stars spanning a variety of ages. The spectroscopic follow-up of a sizeable sample of variables would literally open new frontiers of our knowledge by allowing, for the first time, an accurate study of the metallicity trends as a function of stellar age. If such an analysis were also extended outside the bulge regions, we could be in a position to understand the interactions among different Galactic structures, obtaining for instance a clear view of the transition between disk and bulge.
RR Lyrae variables in the Ultra-Faint satellites of the Milky Way

In the Λ-Cold Dark Matter (Λ-CDM) scenario, large galaxies are the result of the assembly of smaller, cold-dark-matter-dominated fragments (e.g. Diemand et al., 2007; Lunnan et al., 2012). The baryonic component of these fragments may eventually collapse, forming small galaxies. This idea is appealing when applied to the MW, since it echoes the early scenario envisioned by Searle & Zinn (1978), in which the outer halo of the MW may have formed by a continuous infall of protogalactic fragments onto the Galaxy, for some time after the collapse of its central part was completed. Indeed, the first attempts to link Λ-CDM cosmology with the Galactic environment foresaw the assembly of the Galactic halo starting from a number of satellites, and producing a number of fragments and streams, which are actually observed (e.g. McConnachie, 2012; Grillmair & Carlin, 2016). For decades, the survivors of such a process have been identified with the dwarf spheroidal (dSph) satellites of the MW, since they are old, metal-poor, gas-poor and dark-matter-dominated systems. However, it was soon realized that the observed number of dSphs was one or two orders of magnitude smaller than that expected from theory. This mismatch, dubbed the "missing satellites problem" (Klypin et al., 1999; Moore et al., 1999), has for several years been a major problem in the comparison between theory and observations. A second problem, pointed out in the last few years, is that the circular velocities of the known dSphs are too low when compared to the values expected from their simulated substructures. In other words, the predicted densities of the massive subhaloes are too high to host any of the bright dSphs. This mismatch, called the "too big to fail problem" (Boylan-Kolchin et al., 2012), has heavy implications, since it means either that: i) massive dark subhaloes exist as predicted, but they host faint (L < 10^5 L_⊙) satellites; or ii) massive dark subhaloes do not exist as predicted, for instance because they are less concentrated than predicted. As a matter of fact, in the last ten years a considerable number of new and faint MW satellites has been discovered (e.g. Belokurov et al., 2007; McConnachie, 2012), most of them on the basis of SDSS data and, more recently, thanks to the ongoing large surveys conducted with OMEGACAM@VST, DECAM@CTIO and Pan-STARRS (e.g. Koposov et al., 2015; Laevens et al., 2015). These systems, called the ultra-faint dwarfs (UFDs), have integrated luminosities similar to or even lower than those of the Galactic globular clusters, and are apparently dark matter dominated (see McConnachie, 2012). The large number of systems currently available (dSphs + UFDs) has allowed a statistically significant analysis of their spatial distribution, leading to the discovery that they actually populate a relatively thin ring, perpendicular to the MW plane, and possibly rotationally supported (Pawlowski & Kroupa, 2013). Moreover, several of the recently discovered candidate MW satellites also seem to be clustered around the Magellanic Clouds, hinting that they may have fallen in as a group (e.g. Sales et al., 2015), in line with theoretical predictions (Wetzel et al., 2016). Similar aligned structures, showing a kinematic coherence, have been discovered around the Andromeda galaxy and, outside the Local Group, around NGC 5557 (Duc et al., 2014).
Similar structures, but without a clear kinematic coherence, have been reported in the literature around NGC 1097, NGC 4216 and NGC 4631 (Pawlowski & Kroupa, 2014, and references therein), and possibly around the M81 and Cen A groups (Müller et al., 2016; Müller et al., 2018). The MW structure, dubbed the Vast POlar Structure (VPOS), opens up a wide range of cosmological questions, since at the present time it is not clear whether it is made of primordial (dark-matter-dominated) systems or of tidal (dark-matter-free) galaxies. Interestingly, when the halo Galactic globular clusters are grouped into young halo (YH) and old halo (OH) clusters on the basis of the variation of their HB morphology at constant [Fe/H], which is a rough proxy for the cluster age, they also show a division by kinematics and spatial distribution (e.g. Zinn, 1993; Lee et al., 2007). In particular, YH clusters span a wide range in age (∼5 Gyr; Dotter et al., 2011) and are characterized by hotter kinematics than the OH clusters. These occurrences suggest that YH clusters may be debris from accretion events. Finally, the discovery that YH clusters are part of the VPOS (Pawlowski & Kroupa, 2013; Zinn et al., 2014) strengthens the debris hypothesis. Moreover, it also suggests that a fraction of the accreted halo may have originated in a number of moderately massive satellites that formed GCs, similar to Sagittarius, Fornax, or even the Magellanic Clouds (but see Fiorentino et al., 2016, for new insights on the contribution of Fornax-like systems).

The role of the RR Lyrae stars

A fraction of the problem can be settled by carefully comparing the photometric and spectroscopic properties of the stellar populations of the MW halo and of its companions. Moreover, since their pulsational properties, such as periods and amplitudes, are a function of their structural and evolutionary parameters, a detailed comparison of the pulsational properties of the RR Lyrae stars can add valuable information. In particular, the ensemble pulsational properties of the RR Lyrae stars can give important hints. Indeed, it is well known that cluster and field Galactic RR Lyrae stars are affected by the so-called Oosterhoff (Oo) dichotomy, whereby the fundamental-mode variables in the Oo I group show mean periods of ⟨P_ab⟩ ∼ 0.55 days, while in the Oo II group they have ⟨P_ab⟩ ∼ 0.65 days. In fact, the bright MW companions have ⟨P_ab⟩ ∼ 0.6 days and are generally classified as Oo-intermediate, which is difficult to reconcile with the dichotomy of the Galactic halo. On the other hand, the RRLs hosted in the UFDs suggest an Oo II classification (Dall'Ora et al., 2012), consistent with an older population of the Galactic halo, possibly produced by an early dissipative collapse or merging (e.g. Miceli et al., 2008).

The ultra-faint dwarfs

UFD galaxies are, at first glance, the low-luminosity tail of the dSphs. From this point of view, there is no structural difference between the "classical" dSphs and the low-luminosity UFDs. However, a careful comparison of the central surface brightness as a function of the total luminosity shows a "knee" around M_V ≈ −8 mag (see McConnachie, 2012, Figure 7). The galaxies brighter than M_V ≈ −8 mag follow a linear trend, with the brightest galaxies having a higher central brightness, while galaxies fainter than this limit follow a horizontal distribution, with a constant central brightness regardless of the total luminosity.
In this work, we will therefore consider as UFDs all the galaxies that follow such a horizontal distribution. Stated in a different way, UFDs are characterized by low luminosities and low projected densities. This means that it is difficult to recognize them as stellar overdensities in the field, and the problem becomes even more severe when one wants to detect their possible tidal tails. For these reasons, RRLs become a powerful tool to study the stellar populations of the UFDs and their spatial extent. Indeed, as suggested by Baker & Willman (2015), RRLs could be the only method to unveil very faint satellites, with M_V ≳ −3.5, especially at low Galactic latitudes, where both extinction and field contamination can be important. All the UFDs searched for variability so far show at least one RRL. This is not surprising, since they are composed of (at least) old, metal-poor stellar populations, which are known to produce RRLs. The small statistics should not be misleading, since if one normalizes the observed number of RRLs by the integrated luminosity (i.e. a proxy for the baryonic mass), the fraction of RRLs is even higher than that observed in the bright dSphs. Indeed, adopting the specific frequency as parametrized by Mackey & Gilmore (2003),

S_RR = N_RR × 10^{0.4(7.5 + M_V)}    (1)

one finds that UFDs tend to have higher specific frequencies than dSphs, as shown in Figure 3 of Baker & Willman (2015), here reproduced by kind permission. However, as suggested by Baker & Willman (2015), this could be due to incompleteness effects, since the census of the RRLs in the bright dSphs is still not complete. Table 5 collects all the positional and pulsational parameters of the RRLs discovered in the UFDs so far. For each variable, we list the position, the period, the mean magnitudes and the luminosity amplitudes in the BVI bands (when available). In some cases, we merged the information on the same variable coming from different studies. We discuss these cases in the individual notes. Here, we point out that the B-band photometry for the RRLs in Bootes I comes from Siegel (2006), and the VI-band photometry from Dall'Ora et al. (2006). We also explicitly note that, when a star was listed in both studies, we adopt the coordinates listed in Dall'Ora et al. (2006). This table does not include the RRLs hosted in CVn I, whose structural properties suggest a classification as a "classical" dSph rather than as a UFD, nor those hosted in Leo T, since it contains gas and a young stellar population, and in this sense is not a typical old, gas-poor UFD. For these two galaxies, we refer the reader to the specific papers: CVn I, Kuehn et al. (2008); Leo T, Clementini et al. (2012). Vivas et al. (2016) give practically the same period; however, here we present the mean magnitude and pulsational amplitude proposed by Vivas et al. (2016). V5 Boo I: this star was classified as an RRc-type star by Siegel (2006), with a period of P = 0.3863158 days, and as an RRab star by Dall'Ora et al. (2006), with P = 0.6506 days. We used the Siegel (2006) period estimate to phase the data of this variable available in our database, but unfortunately we were not able to achieve a satisfactorily phased light curve. Therefore, we keep the Dall'Ora et al. (2006) estimate. It is worth noting that this variable is quite peculiar, since in the Dall'Ora et al. (2006) photometry it appears redder and brighter than the HB, and it could be blended with a companion.
V12 Boo I: this RRL was classified as an RRab by Siegel (2006), with a period of P = 0.6797488 days, and as a double-mode pulsator (RRd) by Dall'Ora et al. (2006), with periods of P_1 = 0.3948 days and P_0 = 0.5296 days. Since the light curve shown by Dall'Ora et al. (2006) convincingly shows a typical double-mode behavior (see their Fig. 2), here we keep their classification.

Our RRL sample

V1 Hyd II: the photometry of this variable was presented by Vivas et al. (2016) in the gri system. Here, for consistency, we present its pulsational properties in the BVi bands, where the gr magnitudes were transformed into B, V magnitudes following Jester et al. (2005).

Discussion

A glance at the data listed in Table 5 shows that, except for a small number of galaxies (namely UMa I, Boo I and Her), UFDs host a very small number of RRLs. Of course, with such small statistics, while a distance estimate can still be reliable (especially in the presence of a color-magnitude diagram, to check the robustness of the measured mean magnitudes and colors), some caveats must be recognized when using RRLs as population tracers. Indeed, since the Oo type is an ensemble feature, it should be declared only when a substantial number of fundamental-mode RRLs is available, in order to properly place them on an amplitude-period diagram (known as the Bailey diagram). Nevertheless, a comparison with the classic Oo I and Oo II lines in the Bailey diagram can give interesting insights.

The Oosterhoff classification and the Galactic halo

In Figure 16, we show the positions on the Bailey diagram of the listed RRLs. For reference, we plot the loci of the Oo I and Oo II clusters, according to Zorotovic et al. (2010). At first glance, the positions of almost all the RRLs of the UFDs are compatible with an Oo II classification. The only apparent exception is UMa I, which was classified as Oo-intermediate by Garofalo et al. (2013). However, when we compare the positions of its variables with those of the Galactic globular clusters M3 (left panel) and M15 (right panel; data made available by Clement et al., 2001), we favor an Oo II classification for this system as well. Indeed, from the left panel it appears that the distribution of the UMa I RRc variables is in good agreement with those of the other UFDs and with that of the Oo II cluster M15. Also, the mean period of the fundamental pulsators of UMa I is ⟨P_ab⟩ = 0.628 ± 0.063 days, which is in agreement with the values of other Oo II systems, such as M15 (⟨P_ab⟩ = 0.643 ± 0.063 days), M92 (⟨P_ab⟩ = 0.631 ± 0.048 days) and M68 (⟨P_ab⟩ = 0.627 ± 0.062 days), where the listed values are computed on the basis of the compilation published by Clement et al. (2001). However, it should also be noted that, when discarding the variable V4, which is significantly brighter than the others, the mean period of the fundamental pulsators drops to ⟨P_ab⟩ = 0.599 ± 0.032 days, as discussed in Garofalo et al. (2013). This suggests that, in general, when dealing with systems hosting a small number of RRLs, a correct Oo classification is a risky business, and one should consider not only the positions of the RRab stars with respect to the mean Oo I and Oo II loci, but also the whole plane, with the actual distributions of the RRLs belonging to some reference Oo I and Oo II clusters. Taken at face value, the Oo II classification could suggest a major contribution of UFD-like objects in assembling the Galactic halo. However, as pointed out by Fiorentino et al.
(2015), a detailed comparison of the pulsational properties of the RRLs of the Galactic halo and of the dSph + UFD sample shows that the latter lack the so-called high-amplitude, short-period (HASP) variables. Fiorentino et al. (2015) argue that the HASP region is populated only when RRLs are more metal-rich than [Fe/H] = −1.5 dex. Thus, present-day dSph- and UFD-like objects seem to have played a minor role, if any, in assembling the Galactic halo.

A homogeneous distance scale

In Table 6, we propose a homogeneous distance scale for the MW UFDs, obtained by using the same M_V-[Fe/H] relation and the same reddening calibration from Schlafly & Finkbeiner (2011) for all systems. In particular, for the RRL luminosity we adopt an absolute magnitude of M_V = 0.54 ± 0.03 mag at [Fe/H] = −1.5 dex (based on an LMC distance modulus of 18.52 ± 0.09 mag from Clementini et al., 2003), with a slope of ∆M_V/∆[Fe/H] = 0.214 ± 0.047 mag dex⁻¹ (Clementini et al., 2003). We make no attempt to homogenize the metallicity scale: all the values have been taken from the collection listed in McConnachie (2012), except for Boo III (Carlin et al., 2009) and Hyd II (Kirby et al., 2015). The uncertainties on the distances are split into an intrinsic error (the standard deviation of the mean when at least two RRLs are available, or the typical photometric error when only one variable is present), plus a contribution due to the uncertainty in the M_V-[Fe/H] relation. As a matter of fact, there is also another source of uncertainty, namely the internal metallicity spread observed in several UFDs, but it is difficult to handle. The spread can be of the order of ∼0.6 dex (UMa II; Kirby et al., 2008), which means an additional uncertainty of up to ∼0.15 mag. In computing the distances of the individual galaxies, we dropped the variables V1 in Coma, V4 in UMa I and V5 in Boo I, since they are significantly brighter (∼0.2 mag) than the others and/or than the HB, and may be evolved variables not representative of the zero-age HB level. The current radial distributions of both UFDs and classical dwarf spheroidals (see the Monelli contribution in this book) seem to be quite similar. However, the uncertainties affecting the individual distances of UFDs are still too large. Individual distances based on the use of optical and/or near-infrared PL relations will be crucial to further constrain the similarities in the radial distribution of gas-poor stellar systems.

On the absolute and relative ages of globular clusters

The early estimates of the ages of globular clusters date back more than half a century, to the pioneering papers of Alan Sandage (Sandage, 1958) and Halton Arp (Arp, 1962) on the observational side, and of Fred Hoyle (Hoyle, 1959) and Martin Schwarzschild (Schwarzschild, 1970) on the theoretical side. The reader interested in a more detailed discussion concerning the dawn of cluster age determination is referred to the seminal presentations and discussions of the 1957 Vatican Conference (O'Connell, 1958). Particularly enlightening was the empirical evidence brought forward by Walter Baade (Baade, 1958) concerning the age difference among the different stellar populations belonging to the Galactic components (Halo, Disk, Bulge).

Absolute cluster age estimates

In the following we will focus on the most reliable classical methods used to estimate absolute ages of Galactic Globular Clusters (GGCs).
The first two rely on deep photometry of individual stars in a GGC, down to the Main Sequence Turn Off (MSTO) and to the white dwarf cooling sequence features, respectively. The third is observationally based on the detection of radioactive heavy elements in individual stellar spectra, in order to use direct cosmochronometry. We will highlight the strengths and weaknesses of their application.

The Main Sequence Turn Off

The MSTO of a cluster is identified as the bluest point along its Main Sequence. This is, to date, the most important clock for dating both open and globular stellar systems. The key advantages of this diagnostic are the following:
i) The anti-correlation between cluster age and the brightness of MSTO stars is linear over a broad range of stellar ages (see e.g., Di Cecco et al., 2015; Valle et al., 2013).
ii) Stars in this evolutionary phase are burning hydrogen in the core. This means that they evolve on a long nuclear time scale, and therefore the number of stars per unit magnitude tracing this evolutionary phase is quite large compared with evolved phases.
iii) Accurate apparent optical magnitudes of MSTO stars in GGCs are within the capability of 2-4 m class telescopes equipped with CCD detectors and can be easily measured.
The main cons of the MSTO are the following:
i) The MSTO is prone to uncertainties in the cluster distance and in the cluster reddening. An uncertainty of 10% in the error budget (reddening plus true distance modulus) of the MSTO implies an uncertainty of about 1 Gyr in cluster age. The problem becomes even more severe if we are dealing with stellar systems affected either by large or by differential reddening.
ii) The identification of the MSTO is not always trivial. Over a broad range of stellar ages and chemical compositions, stars across the MSTO attain similar colors and magnitudes in optical bands. In some traditional broad-band color-magnitude systems the distribution of MSTO stars is nearly vertical (e.g., Salaris & Cassisi, 2005). Fig. 17 shows the optical (UBVRI) CMDs of the Galactic globular M4 (Stetson et al., 2014; Braga et al., 2015). The shape of the MSTO changes from "cuspy" in the U−I, U CMD (top left panel) to "almost vertical" in the V−I, I CMD (bottom right panel). Fortunately, the variation in the color gradient of the region across the MSTO becomes more evident in the NIR and in the MIR regime. Data plotted in the top panel of Fig. 18 show that cluster stars display not only a well-defined bending in the region across the MSTO, but also a sharp change in the slope of the lower main sequence across the Main Sequence Knee (MSK, see §5.4). A glance at the data plotted in the bottom panels shows that the bending across the MSTO is also present in NIR/MIR CMDs. However, the photometric accuracy of the MIR bands does not allow us to clearly identify the MSK. The empirical scenario concerning the shape of both the MSTO and the MSK becomes even more interesting when dealing with optical/NIR/MIR CMDs, since they bring several advantages: a) they display a well-defined bending across the MSTO and a sharp change in the MS slope across the MSK; b) the broad range in central wavelengths among optical and NIR/MIR bands also means a strong sensitivity to the effective temperature. The consequence is that optical/NIR/MIR CMDs display tight stellar sequences not only along the MS, but also in advanced evolutionary phases (RGB, HB, AGB).
iii) Optical CMDs are affected by field star contamination in the region across the MSTO.
This problem is less severe in optical/NIR CMDs, since the MSTO stars attain colors that are systematically bluer than those of typical field stars. The bottom left panel of Fig. 19 shows that field stars are typically redder (V−K = 2.8-3.5) than MSTO stars (V−K ≲ 2.6). However, field star contamination severely affects the age and structural parameters of nearby stellar systems. Accurate and deep optical CMDs based on images collected with ACS on board HST are less affected by the contamination of field stars: the pointing is typically located across the center of the cluster, and in these regions cluster stars outnumber field stars. This rule of thumb does not apply to clusters either located in or projected onto crowded Galactic regions, such as the Bulge and the Galactic thin disk (Zoccali et al., 2001a; Ferraro et al., 2009; Lagioia et al., 2014), or nearby dwarf galaxies (Kalirai et al., 2012). A significant step forward in dealing with this problem was provided by HST photometry. The superb image quality of HST optical images has provided the opportunity to measure proper motions using images collected over a time interval of the order of ten years. This approach has made it possible not only to separate field and cluster stars, but also to clearly identify stellar populations belonging to Sagittarius (Anderson, 2002; King et al., 2005; Massari et al., 2013; Milone et al., 2014). The great news in this context is that similar results can also be obtained using NIR images, collected as a second epoch with the AO systems available at 8-10 m class telescopes (e.g., NGC 6681, Massari et al., 2016b). The results mentioned above are based on images that only cover a few arcminutes around the center of each cluster. The separation between cluster and field stars is much more challenging away from the cluster centers, since field stars outnumber cluster stars in these regions. The reason why we are interested in tracing cluster stars in the external cluster regions is twofold: a) there is mounting empirical evidence that stellar populations change (in chemical composition and age) as a function of radial distance (e.g., 47 Tuc, Kalirai et al., 2012; Omega Cen, Calamida et al., 2017); b) the estimate of the structural parameters depends on the star counts in the outskirts of the clusters. These are the reasons why new photometric approaches for the separation between field and cluster stars are required. Such an approach requires photometric catalogs based on at least three photometric bands. To accomplish this goal we have used either multi-dimensional ridge lines, as in dealing with the griz SDSS photometry of the metal-rich globular M71 (Di Cecco et al., 2015), or the difference in the spectral energy distribution, as in dealing with the ugri photometry of the complex globular ω Cen (Calamida et al., 2017). The latter approach was developed to deal with stellar systems characterized by multiple stellar populations (see e.g., Martínez-Vázquez et al., 2016a,b). Initially, the ridge lines of the different sub-populations in ω Cen along the cluster evolutionary sequences (MS, SGB, RGB) were estimated. They are based on a 3D CMD (magnitude, color index, star counts), and the ridge lines trace the peaks of the stellar distribution. The horizontal branch (HB) stars are typically neglected, since they are either bluer than the field stars (hot and extreme HB) or can be easily distinguished from them (RR Lyrae stars, red HB).
These ridge lines were estimated in an annulus, neglecting stars located in the innermost and in the outermost cluster regions, and using several cuts in radial distance and in photometric accuracy. Once the ridge lines were estimated, we performed a linear interpolation among them and generated a continuous multi-dimensional surface. Subsequently, the separation between field and cluster stars was performed in two steps: a) We estimated the total standard deviation between the positions of individual stars and the reference surface; b) We estimated the distance in magnitude and in color between the individual stars and the reference surface. The approach discussed above relies on a conservative assumption, i.e., we prefer to possibly lose some of the candidate cluster stars rather than include possible candidate field stars. Note that this assumption is fully supported by the fact that the age diagnostics we use for estimating cluster ages depend on the shape of the MSTO and/or of the MSK. Stellar completeness mainly affects cluster ages based on cluster luminosity functions (Zoccali et al., 1998).

Fig. 20 Left - Optical CMD in B-R,R for stars covering a sky area of 35×35 arcmin squared around M4. Middle - Same as the left panel, but for candidate cluster stars. Right - Same as the left panel, but for candidate field stars.

Fig. 20 shows the separation between field and cluster stars for the Galactic globular M4. This cluster is an acid test for selection criteria based on photometry, since it is projected onto the Galactic Bulge and it is also affected by differential reddening. To overcome some of these problems we decided to follow the same approach we adopted for ω Cen, but the initial ridge lines were derived from candidate field stars located outside the tidal radius of the cluster (Ferraro et al. 2018, in preparation). Data plotted in the left panel display the stars located across the sky region covered by M4 in the B-R,R CMD. Field stars can be easily identified both along the MS and the RGB. The middle panel of the same figure shows the CMD for candidate cluster stars. The "cleaning" appears quite good across the cluster sequences. Note that the sequence running parallel to the cluster MS is due either to binaries or to photometric blends. The plausibility of the criteria adopted to separate field and cluster stars is further supported by the CMD of candidate field stars plotted in the right panel. Once again there is evidence that a minor fraction of candidate field stars were misidentified, but the main peak of field stars, even across the MSTO, was properly identified. To further support the photometric criteria adopted to separate field and cluster stars, Fig. 21 shows a color-color-magnitude diagram of the selected candidate cluster stars. It is worth mentioning the smoothness of the cluster sequences when moving from the RGB to the MSTO and to the MSK.

The error budget

The global error budget of absolute ages of globular clusters based on the MSTO includes theoretical, empirical and intrinsic uncertainties. Theoretical uncertainties - The precision of the clock adopted to date stellar systems depends on the precision of the input physics adopted to construct evolutionary models and, in turn, cluster isochrones. The main sources of uncertainties can be split into micro-physics (nuclear reactions, opacity, equation of state, astrophysical screening factors) and macro-physics (mixing length, mass loss, atomic diffusion, rotation, radiative levitation).
The uncertainties affecting the nuclear reactions, such as those of the pp chain and the CNO cycle, have already been discussed by degl'Innocenti et al. (2004) and Valle et al. (2013). They suggest that the resulting uncertainties on age range from 3% for the 1H(p, e+ νe)2H nuclear reaction to roughly 10% for the 14N(p, γ)15O reaction. The same authors also suggest an uncertainty of the order of 5% for the radiative opacities adopted for constructing MS and HB models. Moreover, they suggest a similar uncertainty (5%) for the conductive opacities affecting the energy transport in the electron-degenerate isothermal helium cores typical of RGB structures and, in turn, the HB luminosity level (e.g., Marta et al., 2008; Chaboyer et al., 2017). Finally, to transform theory into the observational plane, we also need predictions based on stellar atmosphere models (Gustafsson et al., 2008). We have to take into account uncertainties affecting bolometric corrections and color-temperature transformations. The impact that the quoted ingredients have on cluster isochrones has been discussed in detail in the literature (e.g., Pietrinferni et al., 2004; Cassisi et al., 2008; Pietrinferni et al., 2009; Sbordone et al., 2011; VandenBerg et al., 2013; Salaris, 2016). As a whole, the typical theoretical uncertainty in the adopted MSTO is of the order of 10%. In this context it is worth stressing that theoretical uncertainties mainly affect the zero point of the absolute age determinations. Relative age determinations are minimally affected by these uncertainties (see e.g., Dotter et al., 2011; VandenBerg et al., 2013; Chaboyer et al., 2017). It is also worth mentioning that the helium-to-metal enrichment ratio (ΔY/ΔZ), a fundamental ingredient in evolutionary prescriptions, is poorly known. It is used to derive the current stellar helium content through the relation Y = Y_P + (ΔY/ΔZ) × Z, where Y_P is the primordial helium content. However, empirical estimates range from ΔY/ΔZ = 3±2 (Pagel & Portinari, 1998) to ΔY/ΔZ = 5.3±1.4 (Gennaro et al., 2010). Moreover, and even more importantly, there are no solid reasons why this relation should be linear over the entire metallicity range, nor why the current local estimate should be universal (Peimbert et al., 2010).

Empirical errors - The main uncertainties in the absolute age estimates of globulars come from the cluster distances. Indeed, an error of 10% in the true distance modulus (Δµ0 ∼ 0.1 mag) implies an uncertainty of ∼1 Gyr in the absolute age. The error budget becomes even more severe once we also account for uncertainties in reddening corrections. An uncertainty of 2% in color excess implies an uncertainty of 6% in visual magnitude. The impact of this limitation becomes even more stringent if the cluster is also affected by differential reddening. In this context, it is worth mentioning that the more metal-rich globular clusters are mainly distributed across the Galactic Bulge, i.e., a region of the Galactic spheroid affected by large and by differential reddening (Stetson et al., 2014). The ongoing effort to quantify the systematics affecting the measurement of the MSTO in a large fraction of GGCs also faces the problem of the absolute photometric calibration.
In handling this thorny problem, two independent approaches have provided the opportunity to limit and/or to overcome the uncertainties associated with the zero-points of different photometric systems: i) Cluster photometry collected either with HST/WFPC2 or HST/ACS has provided the unique opportunity to obtain accurate and deep photometry in a single, well-calibrated photometric system for a sizable sample of GGCs (∼60 out of 160, e.g., Zoccali et al., 2001b; Sarajedini et al., 2007; Marín-Franch et al., 2009; VandenBerg et al., 2013); ii) The major effort to provide local standard stars in GGCs has also significantly improved the accuracy of the photometry and saved a significant amount of telescope time (Stetson, 2000). The age diagnostic adopted to estimate the cluster age also depends on the iron content. The massive use of multi-object fiber spectrographs paved the way for the definition of a firm metallicity scale including a significant fraction of GGCs (e.g., Carretta et al., 2009). This translates into a systematic decrease in the uncertainties on the iron and α-element abundance scales.

Intrinsic uncertainties - More than forty years ago, spectroscopic investigations brought forward a significant star-to-star variation in C and in N among cluster stars (e.g., Osborn, 1971). This evidence was complemented by discoveries of variations in Na, Al, and O (e.g., Cohen, 1978; Pilachowski et al., 1983; Leep et al., 1986) and by anti-correlations in CN-CH (e.g., Kraft, 1994) as well as in O-Na and Mg-Al (e.g., Suntzeff et al., 1991; Gratton et al., 2012). The light-element abundance variations were further strengthened by the discovery of multiple stellar populations in the more massive clusters (Bedin et al., 2004; Piotto et al., 2005, 2007). However, detailed investigations of the different stellar populations indicate a difference in age that is, in canonical GGCs (i.e., excluding the most massive globulars), on average smaller than 1 Gyr (Ventura et al., 2001; Cassisi et al., 2008). The intrinsic uncertainty therefore does not seem to be the main source in the error budget of GGC absolute ages.

The cluster distance scale

Several approaches have been suggested in the literature to overcome some of the thorny problems affecting the estimate of the absolute age of GGCs. The time scale within which we can significantly reduce the theoretical uncertainties can hardly be predicted. On the other hand, the empirical uncertainties are strongly correlated with technological developments either in the detectors or in the observing facilities or in both. The uncertainties affecting individual cluster distances would significantly benefit from the development of a homogeneous cluster distance scale. Although Galactic globulars are at the cross-roads of major theoretical and empirical efforts, we still lack a distance scale based on the same diagnostic and on homogeneous measurements. The problem we are facing is mainly caused by the intrinsic limitations of the diagnostics used to determine the distance. The difficulties of these methods can be summarized briefly. i) The tip of the Red Giant Branch (TRGB) can only be applied to two clusters (ω Cen, 47 Tuc), due to the limited stellar (Poisson) statistics when approaching the tip of the RGB. ii) Main sequence fitting is hampered by the fact that the number of field dwarfs with accurate trigonometric parallaxes is small and covers a limited range in iron abundance (Chaboyer et al., 2017; VandenBerg et al., 2014a).
iii) Another possible approach to constrain cluster distances is to fit the white dwarf cooling sequence. However, this method can only be applied to a few clusters (Zoccali et al., 2001b; Richer et al., 2013; Bono et al., 2013), and we are still facing some discrepancies between the distances based on this diagnostic and other distance indicators. Furthermore, the number of nearby White Dwarfs (WDs) for which accurate trigonometric parallaxes are available is limited. iv) Cluster distances based on the kinematic approach also appear to be affected by systematics. This diagnostic has the potential to be a powerful geometrical method, since it is only based on the ratio between the standard deviation of the proper motions and the standard deviation of the radial velocities. There is mounting evidence that the current kinematic distances are slightly larger than those based on other primary distance indicators. v) During the last few years, accurate distances have been provided for a few globulars using eclipsing binary stars. This is also a very promising approach, since it is based on a geometrical method (Thompson et al., 2001). The main limitation of this distance indicator, as well as of methods iii) and iv), is that they are observationally challenging. This means that they have only been applied to a very limited number of clusters. vi) The Horizontal Branch (HB) luminosity level is one of the most popular distance indicators for GGCs. The typical anchor for the HB luminosity is the M_V(RR)-metallicity relation at the mean color of the RR Lyrae instability strip. This color region was selected because the HB is quite flat there in CMDs based on the V-band magnitude. This distance indicator might be affected by two possible sources of systematic errors: a) The HB morphology - Metallicity is the most important parameter affecting the HB morphology; indeed, the HB is mainly blue in the metal-poor regime and becomes mainly red in the metal-rich regime. The parameter used to trace the change in HB morphology is τ_HB = (B−R)/(B+V+R), i.e., the number ratio of stars either bluer (B) or redder (R) than the variable (V) RR Lyrae stars. Globulars approaching the two extrema (±1) typically lack RR Lyrae stars. This means that it is quite difficult, from an empirical point of view, to anchor the HB luminosity level. The impact of this limitation becomes even more stringent for globulars more metal-rich than 47 Tuc, since their HBs typically display only a stub of red HB stars. There are two exceptions, NGC 6441 and NGC 6388, which are metal-rich but display a well populated blue and red HB, plus a good sample of RR Lyrae stars (Pritzl et al., 2003); b) The predicted HB luminosity level - The current evolutionary prescriptions concerning the zero-age HB (ZAHB) luminosity are typically 0.10-0.15 mag brighter than the observed ones. New interior conductive opacities (Cassisi et al., 2007) contribute to alleviating the problem, but the discrepancy is still present. vii) Near-infrared and mid-infrared observations of cluster RR Lyrae stars appear to be very promising for providing a homogeneous cluster distance scale. RR Lyrae stars obey well-defined PL relations in these bands. These relations are very narrow (e.g., Braga et al., 2015; Marconi et al., 2015; Neeley et al., 2017) and are not affected by off-ZAHB evolution, thus reducing the systematics discussed above. NIR photometry also has the advantage of being less prone to uncertainties introduced by large and/or differential reddening corrections.
This is a typical problem among the more metal-rich Bulge globulars. Indeed, the uncertainties affecting NIR/MIR bands are on average one order of magnitude smaller than those in the optical bands. However, the number of globulars hosting a sizable sample of RR Lyrae stars is roughly half of the entire Galactic sample. Finally, we still lack a detailed knowledge of how the reddening law changes when moving across the Bulge, along the Bar and into the nuclear Bulge (e.g., Indebetouw et al., 2005; Nishiyama et al., 2006, 2008, 2009; Nataf et al., 2013; Schultheis et al., 2014).

The white dwarf cooling sequence

The white dwarf cooling sequence depends on different physics than the previously discussed methods, and as such should be an excellent diagnostic to constrain possible systematics in absolute cluster age estimates based on the MSTO (Salaris et al., 2010). This means that deep and accurate photometry of nearby globulars can provide crucial constraints on the plausibility of the physical assumptions adopted to construct either main sequence or cooling sequence models. The key advantage of using cluster WD cooling sequences is that they display a well defined blue turn-off (WDBTO) in deep and accurate I,V-I CMDs. Theory and observations indicate that this change in the slope of the WD cooling sequence is due to the interplay between an opacity mechanism called Collision-Induced Absorption (CIA), mainly from molecular hydrogen, and/or the cooling time of the more massive WDs (Brocato et al., 1999; Hansen, 2004; Richer et al., 2006; Moehler et al., 2008; Salaris et al., 2010; O'Malley, 2013). Recently, Bono et al. (2013) performed a detailed theoretical investigation of cluster WD cooling sequences and found new diagnostics along the WD cooling sequences and in NIR Luminosity Functions (LFs). The interplay between CIA and the cooling time of progressively more massive WDs causes a red turn-off along the WD cooling sequences (WDRTO). This feature is strongly correlated with the cluster age; indeed, the faint peak in the K-band increases by 0.2-0.25 mag/Gyr in the range 10-14 Gyr. Moreover, they also suggested using the difference in magnitude between the MSTO and the WDRTO, since this age diagnostic is independent of distance and reddening. These predictions appear very promising in view of the unprecedented opportunity offered by JWST in the NIR/MIR regimes. This encouraging prospect also applies to ground-based extremely large telescopes equipped with state-of-the-art multi-conjugated adaptive optics systems.

Thorium Cosmochronometry

Elements beyond the iron group are overwhelmingly created in neutron-capture reactions on heavy-element seed nuclei. Their syntheses are mostly either slow (β-decay timescales fast compared to neutron-capture timescales; called the s-process) or rapid (neutron-capture timescales much faster than β-decay ones; the r-process). The very heavy radioactive elements thorium (Z = 90) and uranium (Z = 92) can be created only via the r-process. The heaviest stable element is bismuth (Z = 83, with its sole natural isotope 209Bi). All isotopes of elements with Z = 84−89 have very short half-lives, and therefore these elements cannot be created in the s-process. Th and U decay on astrophysically interesting timescales: the half-lives are 1.4 × 10^10 yr for 232Th (its only naturally occurring isotope), 7.0 × 10^8 yr for 235U, and 4.5 × 10^9 yr for 238U.
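For reference, the half-lives quoted above translate into decay constants via λ = ln 2 / t_1/2, and a pair of co-produced radioactive chronometers then yields an age directly from the decline of their abundance ratio. The short Python sketch below encodes the standard two-chronometer relation (U/Th)_t = (U/Th)_0 exp[−(λ_U − λ_Th) t]; the only inputs are the rounded half-lives quoted above.

import numpy as np

LAM_TH232 = np.log(2) / 1.4e10  # 232Th decay constant (1/yr)
LAM_U238 = np.log(2) / 4.5e9    # 238U decay constant (1/yr)

def uth_age_gyr(log_uth_initial, log_uth_observed):
    # Age from the decay of the U/Th number-density ratio:
    # (U/Th)_t = (U/Th)_0 * exp(-(LAM_U238 - LAM_TH232) * t).
    delta_dex = log_uth_initial - log_uth_observed
    return np.log(10) * delta_dex / (LAM_U238 - LAM_TH232) / 1e9

# Sensitivity of the chronometer: roughly 22 Gyr of age per dex of
# decline in log(U/Th) with these rounded half-lives (about 21.8 Gyr
# with more precise half-life values).
print(uth_age_gyr(0.0, -1.0))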
Therefore, derived abundances of Th and U with respect to stable r-process-dominated neutron-capture elements in low-metallicity halo stars have the potential to be translated into Galactic age estimates. Attempts to use stellar Th abundances as chronometers began with Butcher (1987). That study included only disk stars with metallicities [Fe/H] ≥ −0.8. As is the case for most Th abundance studies, Butcher (1987) analyzed just the 4019.2 Å Th II transition. In high-metallicity stars that line is at best a weak, blended absorption feature in a crowded spectral region. Additionally, Nd was chosen as the stable comparison neutron-capture element in the Butcher (1987) study. Unfortunately, Nd in the solar system (and probably in most disk stars) has mostly an s-process origin, which accounts for 58% of its abundance; the r-process fraction is only 42% (e.g., Smolec (2005)). With these limitations, Butcher (1987) argued that the Th/Nd ratios were roughly constant in the sample of disk stars, irrespective of assumed stellar age. Analyses of low-metallicity halo-population r-process-rich stars have yielded more easily interpreted results. The first such star, HD 115444, initially identified by Griffin et al. (1982), has [Fe/H] ∼ −2.9 and [Eu/Fe] ∼ +0.9 (e.g., Westin et al. (2000); Sneden et al. (2009)). Then CS 22892-052, a red giant from the Beers et al. (1992) low-resolution Galactic halo survey, was serendipitously discovered to be strongly r-process enriched (Sneden et al., 1994). The relatively depressed Th compared to Eu was taken to be a sign of radioactive decay from an assumed initial production ratio of [Th/Eu] ≡ 0.0. Initial application of theoretical models suggested an ancient age for the neutron-capture elements but with a large uncertainty: 11.5 ≤ t ≤ 18.8 Gyr (Cowan et al., 1997). Detailed study of another r-process-rich star, CS 31082-001 (Cayrel et al., 2001; Hill et al., 2002), yielded the first detection of U in a low-metallicity star (Hill et al., 2002). But the individual abundance ratios [Th/Eu] and [U/Eu] in this star turned out to be too large to be sensibly interpreted as straightforward radioactive decay. Instead, it was necessary to postulate an "actinide boost" with initial production ratios [Th/Eu] > 0 and [U/Eu] > 0. Fortunately, the ratio between the neighboring elements Th and U should have a well-understood production value and, for example, Schatz et al. (2002) derived an age from the U-Th abundance ratio of 15.5 ± 3.2 Gyr, consistent with but not constraining the age of the Galaxy determined from other indicators. The problems and prospects of U and Th abundances are well discussed in Hill et al. (2002) and Schatz et al. (2002) and will not be repeated here. The actinide boost problem effectively forces attention on the detection of both Th and U for meaningful radioactive cosmochronometry. But the problem is the rarity of U detections even in low-metallicity stars with extreme r-process enhancements (e.g., [Eu/Fe] > +1). Only a single U II transition, at 3539.5 Å, has been detected to date, and it is at most a very weak bump among a clump of lines dominated by two strong Fe I lines as well as weaker Nd II and CN lines. This is shown, for example, in Fig. 10 of Hill et al. (2002) and in Fig. 2 of Frebel et al. (2007). If r-process production ratios [Th/Eu] cannot be predicted with certainty, given the large Periodic Table stretch between elements 63 and 90, can another element closer to Th serve as the stable comparison element? Kratz et al.
(2007) suggested that Hf (Z = 72) might be a good candidate, as their computations showed that Hf is made in the r-process at neutron densities similar to those of Th (see their Fig. 3). We tested this idea by considering the Eu, Hf, Th, and U abundances reported for these kinds of stars in the literature. In panel (a) of Fig. 23 we show that, for the radioactive elements U and Th, the "absolute" number density ratios log(U/Th) are essentially constant in the high-resolution studies published to date, and Hill et al. (2016) report the discovery of a fifth r-rich star that has a nearly identical value of this ratio. In panel (b) we show a similarly tight correlation between the stable rare-earth elements Hf and Eu. Star-to-star scatter increases markedly in the ratios log(Th/Eu) and log(Th/Hf). Observations are clearly telling us that any actinide boost of the heaviest r-process elements sets in beyond Z = 72. Abundance data also exist for the 3rd r-process-peak elements Os, Ir, Pt (Z = 76−78) and for Pb (Z = 82); see the individual papers cited above (e.g., Cowan et al., 2005; Plez et al., 2004). But these elements: (i) have detectable transitions only from their neutral species, whereas all other neutron-capture elements with Z ≥ 56 are represented only by ionized transitions, greatly increasing the derived abundance uncertainties in the comparison; (ii) have all of their detectable transitions in the near-UV or vacuum-UV. Their abundances do correlate with those of other very heavy neutron-capture elements, but do not add effective cosmochronometry information at present. Moreover, predicted production ratios in the r-process can have significant uncertainties, as very little experimental data exist on nuclei far from the valley of β stability. The influence of these uncertainties has been discussed in several papers published after the Th and U detections were announced (e.g., Goriely & Clerbaux, 1999). A good summary of the nuclear issues is in Niu et al. (2009), who considered uncertainties in astrophysical r-process conditions, nuclear mass models, and β-decay rates. They suggest that "the influence from nuclear mass uncertainties on Th/U chronometer can approach 2 Gyr"; see their Fig. 5, which shows the derived ages of HE 1523-0901 (Frebel et al., 2007) and CS 31082-001 (Cayrel et al., 2001; Hill et al., 2002) under variations of all of the quantities that can influence the conversion of derived observational U/Th abundance ratios into final age estimates. Their computed mean ages are 11.8 ± 3.7 Gyr for HE 1523-0901 and 13.5 ± 2.9 Gyr for CS 31082-001, consistent with current age estimates of the Galaxy, albeit with substantial error bars. Cosmochronometry from the radioactive elements U and Th is promising, but not yet precise enough to set serious age constraints. Since uncertainties abound in all phases of this exercise, one suspects that a prerequisite for further progress is simply more detections of U in very low-metallicity r-process-rich stars, which should naturally have strong Th transitions. One looks forward to discoveries of more such stars in field-star surveys in the near future. This is a promising avenue for future developments, since age-dating methods based on the MSTO and/or on the MSK can only be applied to ensembles of stars. The key advantage of the cosmochronometric method is that it can be applied to individual cluster and field stars and that it is independent of both distance and reddening.
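To make the sensitivity of the chronometer concrete, the sketch below evaluates the decay relation given earlier for placeholder abundances; the production and observed ratios are purely illustrative (of the order of what is reported for r-process-rich stars) and are not values taken from any specific study.

import numpy as np

lam_th = np.log(2) / 1.4e10  # 232Th decay constant (1/yr)
lam_u = np.log(2) / 4.5e9    # 238U decay constant (1/yr)

def uth_age_gyr(log_uth_0, log_uth_obs):
    return np.log(10) * (log_uth_0 - log_uth_obs) / (lam_u - lam_th) / 1e9

# Illustrative inputs: a production ratio log(U/Th)_0 = -0.10 and an
# observed ratio log(U/Th) = -0.74.
print(uth_age_gyr(-0.10, -0.74))  # ~14 Gyr for these inputs

# A +/-0.1 dex error in either ratio shifts the age by ~2.2 Gyr, of the
# same order as the nuclear-physics uncertainty quoted by Niu et al. (2009).
print(uth_age_gyr(-0.10, -0.84) - uth_age_gyr(-0.10, -0.74))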
The reader interested in a more detailed discussion of the age-dating of field stars is referred to Salaris (2016).

Relative cluster age estimates

Relative cluster ages are used to address different astrophysical problems; e.g., they play a crucial role in investigating the early formation of the Galactic components dominated by old stellar populations (Halo, Bulge). In particular, the spread in relative ages of Halo and Bulge stars is tightly correlated with the timescale of the collapse of the protogalactic cloud. Classical results based on accurate and homogeneous photometry of GGCs suggested that a significant fraction of Galactic globulars are, indeed, coeval (e.g., Buonanno et al., 1998; Rosenberg et al., 1999). There are reasons, mainly kinematic, to believe that the ones that do not follow this trend are clusters that have been accreted. Cluster ages based on a relative diagnostic (vertical, horizontal) are less prone to systematic uncertainties. The key idea of these methods is to estimate the difference in magnitude between the MSTO and a brighter point that is either independent of, or mildly dependent on, the absolute age of the cluster. This means that they are independent of uncertainties on cluster distance and reddening. In dealing with large homogeneous photometric data sets, the relative ages are also less affected by uncertainties on the photometric zero-point. There are pros on the theoretical side as well, since the clock is used, over the entire metallicity range, in a relative and not in an absolute sense. The relative cluster ages of GGCs are actively debated in the recent literature, owing to their impact on the early formation and evolution of the Galactic halo. The new results are based on deep and accurate photometry collected with ACS at HST. On one hand, Marín-Franch and collaborators (Marín-Franch et al., 2009) found that a sample of 64 GCs, covering a broad range in metallicity, in Galactocentric distance and in kinematic properties, shows a bi-modal distribution. The first group is coeval within the errors, i.e., its mean age is ∼12.8 Gyr with a small dispersion (5%). The second group is more metal-poor and follows an age-metallicity relation. This group might be associated with dwarf galaxies accreted into the Halo, as supported by their kinematic properties. The top panel of Fig. 24 summarizes the results obtained by Marín-Franch et al. (2009). On the other hand, based on the independent ages estimated by VandenBerg et al. (2013) and Leaman et al. (2013), it was found that GGCs display a clear dichotomy, since at fixed cluster age there are two groups of GCs separated by roughly 0.4 dex in metallicity (see bottom panel of Fig. 24). These two groups seem to follow two different, well defined age-metallicity relations. Leaman et al. (2013) also suggested that the metal-poor group was accreted, while the metal-rich clusters are the genuine globulars formed in situ. In this context it is worth mentioning that the globulars and the optical photometry adopted by Leaman et al. (2013) significantly overlap with the sample adopted by Marín-Franch et al. (2009).

Fig. 24 Top: Relative age estimates of Galactic globular clusters according to Marín-Franch et al. (2009); these have been scaled to 12.8 Gyr, i.e., the mean age obtained assuming the isochrones of Dotter et al. (2007). Clusters more metal-poor than [Fe/H] = −1.3 are plotted as blue stars, the more metal-rich ones as red stars. The blue and the red lines display the age-metallicity relations estimated by VandenBerg et al. (2013) and by Leaman et al. (2013). The metallicities of the GCs are on the Carretta et al. (2009) metallicity scale. The grey area shows the ±1σ uncertainty in the age estimate. Bottom: Same as the top, but for the absolute age estimates derived by VandenBerg et al. (2013) and used in Leaman et al. (2013). This sample includes the GCs observed by Marín-Franch et al. (2009) and plotted in the top panel. However, we have plotted only the GGCs in common between the two studies.

We note here that the data plotted in Fig. 24 are only those in common between the two studies, and a glance at the data indicates a significant difference in their age determinations. In particular, Fig. 24 shows several interesting features worthy of discussion. i) The absolute age estimates provided by VandenBerg et al. (2013) (VB) are, at fixed metal content, more precise than the relative age estimates provided by Marín-Franch et al. (2009) (MF). The dispersion in age in the former sample is ∼17% smaller than in the latter over the entire metallicity range. ii) The metal-poor globulars seem to show an age-metallicity relation independently of the approach adopted to estimate the cluster ages. The relation drawn through the VB ages (blue solid line) is also a reasonable eye-fit to the MF age estimates. The quoted samples also show evidence of a flattening of the quoted age-metallicity relation when moving from metal-poor ([Fe/H] ∼ −2) to even more metal-poor GCs. iii) The more metal-rich GCs display a different trend in the two sets of age estimates. The more metal-rich GCs in the MF sample are coeval within the errors, while in the VB estimates they show a slope that is quite similar to the slope of the more metal-poor GCs, but shifted by 0.4 dex in metallicity. The above findings clearly indicate that the main difference between the MF and the VB analyses is in the age estimates of the metal-rich GCs. Note that, to overcome possible deceptive uncertainties on the cluster iron abundances, we adopted the homogeneous metallicity scale provided by Carretta et al. (2009). The current uncertainties on cluster iron abundances are on average smaller than 0.1 dex (Gratton et al., 2004). To unveil possible hidden systematics, either in age-dating globulars or in their metallicity scale, the use of NIR diagnostics appears very promising. This applies not only to NIR photometry, to use the MSK (see § 5.4), but also to high-resolution NIR spectroscopy, to use different sets of iron and α-element lines.

The empirical routes for relative age estimates

The two most popular approaches adopted to estimate relative ages are the vertical and the horizontal photometric methods. The key advantage of these approaches is that they are independent of the uncertainties affecting the distance modulus and the cluster reddening. i) Vertical Method - The vertical method relies on the difference in magnitude between the HB luminosity level and the MSTO. The HB luminosity level is typically chosen across the RR Lyrae instability strip. The former anchor is assumed to be independent of age, while the latter is age dependent. In applying this method there are a number of caveats. The main one is that it is mainly based on visual magnitudes, since at shorter (U,B) and at longer (R,I) wavelengths the HB shows a well defined negative/positive slope when moving from red to blue HB stars.
This means that the anchor to the middle of the RR Lyrae instability strip might be affected by systematic errors, due to the quoted problems with the HB morphology. Moreover, this approach is also hampered by the lack either of RR Lyrae stars or of accurate and homogeneous multi-band photometry for cluster RR Lyrae stars. It is worth mentioning that the cluster age is considered to be one of the main culprits in the variation of the HB morphology (the second parameter problem) when moving from more metal-poor to more metal-rich GCs. If the cluster age is confirmed to be the second parameter, then ages based on the vertical method would be affected by systematic errors. In particular, a decrease in the age of the progenitor causes a decrease in the HB luminosity level, therefore a smaller vertical parameter and, in turn, systematically younger ages. This effect becomes more relevant for stellar structures at the transition between forming or not forming an electron-degenerate helium core, i.e., for the young tail in the age distribution of GCs (see Fig. 8). Note that a systematic decrease in age typically means a redder HB morphology, suggesting once again that more metal-rich clusters are more prone to possible systematic age uncertainties. ii) Horizontal Method - The horizontal method relies on the difference in color between the MSTO and the base of the red giant branch. The empirical estimate of the latter reference point is not trivial in the V,I bands. Therefore, it was suggested to use the difference in color between the MSTO and the color of the RGB at a luminosity level 2.5 magnitudes brighter than the MSTO (Buonanno et al., 1998; Rosenberg et al., 1999). This approach has the same advantages as the vertical method, since it is independent of uncertainties on the cluster distance and reddening. Moreover, the reference point along the RGB is less affected by age effects, and the RGB morphology is well defined in GGCs. However, the use of the color (horizontal method) instead of the magnitude (vertical method), together with the dependence of the predicted effective temperatures at the MSTO, and in particular along the RGB, on the adopted mixing-length parameter, is a source of further concern. The color-temperature relations predicted by atmosphere models are also less accurate than the bolometric corrections. Note that this limitation becomes more severe when moving from the metal-poor to the metal-rich regime (Tognelli et al., 2015). Moreover, it is worth mentioning that the ratio between the duration of the sub-giant phase (thick hydrogen-shell burning) and the MSTO age is not constant when moving from metal-poor to metal-rich stellar structures (Salaris & Weiss, 1997). Furthermore, the slope of the RGB depends on the metal content; therefore, the difference in color with respect to the MSTO steadily decreases when moving from more metal-poor to more metal-rich globulars.

The Main Sequence Knee

Empirical and theoretical evidence indicates that the MSK, the bending of the lower main sequence introduced above, is mainly caused by the collision-induced absorption of H2 at NIR wavelengths (Saumon et al., 1994) in the atmospheres of cool dwarfs. The shape of the bending in NIR and optical-NIR CMDs depends on the metal content, but the magnitude of the knee seems to be independent of cluster age and of metallicity (K-band magnitude of ∼3.5 mag). This means that the difference in magnitude between the MSTO and the MSK should be a robust diagnostic to constrain the absolute cluster age.
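An operational version of this diagnostic can be written in a few lines. In the sketch below, the MSTO is taken as the bluest point of the fiducial line and the MSK as the point of maximum curvature below the turn-off; both definitions are reasonable but illustrative choices, and real analyses may adopt different estimators.

import numpy as np

def msto_msk_delta(ridge_mag, ridge_color):
    # ridge_mag must be sorted (bright to faint) and densely sampled.
    i_to = np.argmin(ridge_color)  # bluest point of the fiducial = MSTO
    # Curvature proxy: second derivative of color with respect to magnitude.
    d2 = np.gradient(np.gradient(ridge_color, ridge_mag), ridge_mag)
    faint = ridge_mag > ridge_mag[i_to]  # search only below the MSTO
    i_knee = np.where(faint)[0][np.argmax(np.abs(d2[faint]))]
    # Distance- and reddening-free age diagnostic.
    return ridge_mag[i_knee] - ridge_mag[i_to]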
The new diagnostic shares the same positive features as the classical vertical and horizontal methods. It is independent of uncertainties in cluster distance and reddening and minimally affected by uncertainties in the photometric zero-point. The key advantages of the MSK when compared with the classical vertical and horizontal methods are the following: i) The MSK is a faint but well defined feature - The reference magnitude for the MSK is fainter than the MSTO, and this implies larger photometric uncertainties for individual stars than for stars belonging to brighter features (RGB, HB). However, the number of faint, low-mass stars (≤ 0.5 M⊙) is intrinsically larger than that of more massive and brighter stars. Furthermore, these stars are in their slow central hydrogen-burning phase, so they are more populous than stars in subsequent evolved phases. As a result, old, intermediate-age and young stellar systems always display a well sampled faint MS, whereas: a) the HB is less populous, because it is populated by stars in their central helium-burning phase; moreover, this feature shows up only in old stellar systems and depends on a number of parameters, including the metallicity; b) the RGB is typical of stellar systems that are either old or of intermediate age, and again the difference in color with respect to the MSTO depends on the metallicity. The main consequence is that the MSK can be easily identified in all stellar systems, provided that the photometric data sets are pushed to very faint magnitudes, while the detection and morphology of the HB and the RGB strongly depend on the evolutionary properties of the underlying stellar population. ii) The MSK is less affected by theoretical uncertainties - The MSK relies only on the physics of the hydrogen-burning phase, while the vertical and horizontal methods are affected by uncertainties in extra-mixing (RGB, HB), in micro-physics (electron degeneracy, conductive opacities, the 3α and 12C(α, γ)16O nuclear reaction rates) and in the treatment of the transition from the core helium flash to core helium burning (ZAHB; Sweigart et al., 2004). Furthermore, the MSK, together with the vertical method, is independent of the theoretical uncertainties plaguing the color-temperature transformations required by the horizontal method. The same outcome applies to the dependence on the adopted mixing-length parameter. According to theory, MS stellar structures with masses of M ∼ 0.4-0.5 M⊙ are minimally affected by uncertainties in the treatment of convection, since they are almost entirely convective and the convective transport is nearly adiabatic (Saumon & Marley, 2008). iii) The MSK has a lower global error budget - Recent empirical evidence indicates that the MSK provides absolute cluster ages that are a factor of two more precise than those from the classical MSTO method (Bono et al., 2010; Sarajedini et al., 2009a; Di Cecco et al., 2015; Monelli et al., 2015; Massari et al., 2016a; Correnti et al., 2016). Moreover, preliminary findings suggest that the correlation between the MSK and the cluster age is linear over the age range from old open clusters (a few Gyr) to GGCs. We still lack a detailed theoretical investigation to constrain the dependence of the MSK on the chemical composition (metals, helium) when moving from optical to NIR and optical/NIR CMDs. The main drawback of the MSK is that the identification of the knee requires accurate and deep photometry in crowded stellar fields.
However, the advent of HST and of modern Adaptive Optics (AO) assisted NIR instrumentation has overcome this problem. The MSK is a feature that can be detected and used in both optical and NIR bands, whereas the classical vertical and horizontal methods are robust age diagnostics only in optical CMDs. Accurate and deep NIR CMDs show that HB stars are far from being horizontal: they become systematically fainter when moving from cool to hot and extreme HB stars (positive slope; Del Principe et al., 2006; Coppola, 2011; Milone et al., 2013; Stetson et al., 2014). The same problem shows up in CMDs based on near-UV and far-UV bands (negative slope; Ferraro et al., 2012). This means that the identification of the HB luminosity level needs a further anchor in color along the HB, which is difficult to identify uniquely. On the other hand, the difference in color between the MSTO and the RGB, required by the horizontal method, is hampered by the fact that the MSTO and the RGB have almost the same color in NIR bands. Indeed, this color difference steadily decreases (Coppola, 2011; Stetson et al., 2014) when moving from optical to NIR bands. NIR photometry is going to be exploited even more in the near future, when sophisticated AO will allow us to reach the diffraction limit of ground-based extremely large telescopes (Diolaiti et al., 2016), and from space with JWST. The use of new observables also offers the opportunity to constrain possible systematics in the evolutionary diagnostics currently adopted.

The absolute age of M15

To provide a new and independent absolute age estimate of the GC M15 (NGC 7078), we used the First Light Adaptive Optics (FLAO) system operating at the Large Binocular Telescope (LBT). We recall here that M15 is located at ∼10 kpc and is affected by moderate extinction (E(B−V) = 0.08; Harris, 1996, updated 2010 version); thus, with current AO-assisted 8-10 m class telescopes, we do expect to detect the MSK. Interestingly, M15 is supposed to be one of the oldest and most metal-poor ([Fe/H] ∼ −2.4) GCs of the Milky Way Halo. FLAO data were taken with the NIR high-resolution imager PISCES (pixel scale = 0.0193"/pix) in the J and Ks bands. Due to the highly structured and asymmetric PSF shape, the data reduction was successfully performed with the ROMAFOT suite of programs (see details in Fiorentino et al., 2014; Monelli et al., 2015). A natural guide star (NGS) of R = 12.9 mag was used to close the AO loop on a field located at ∼3' from the cluster center. This field was sufficiently uncrowded to allow us to reach a very deep Ks-band magnitude of ∼22 mag; see Fig. 25. This limiting magnitude allowed us to measure the location of the MSK; see Table 3 in Monelli et al. (2015). Note that the detection of faint stars in crowded stellar fields, such as the centers of GCs, is severely hampered by the large number of bright stars. However, as shown in Fig. 25, given the small PISCES field of view (FoV, ∼20") and the radial distance from the cluster center, we do not have sufficient sampling of the MSTO magnitude. We therefore used LUCI1 data (FoV = 4'×4') to properly determine the location of the MSTO. These LUCI1 data were also used to perform a proper calibration of the PISCES data to the 2MASS photometric system. After measuring the difference MSTO−MSK, we are ready to compute the absolute age of M15. We compare this number with theoretical relations, derived using a set of evolutionary isochrones provided by VandenBerg et al. (2014b), that link the variation of MSTO−MSK with the absolute age.
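Schematically, this last step amounts to interpolating the observed MSTO−MSK difference onto a model grid. In the sketch below the grid values are invented for illustration (they are tuned so that the example reproduces the numbers quoted next); the real calibration must be taken from the adopted isochrones.

import numpy as np

# Placeholder Delta(Ks)_{MSK-MSTO} vs. age calibration (illustrative only).
delta_grid = np.array([2.60, 2.70, 2.80, 2.90, 3.00])  # mag
age_grid = np.array([10.0, 11.0, 12.0, 13.0, 14.0])    # Gyr

def age_from_delta(delta_obs, delta_err):
    # Linear interpolation of the observed difference onto the grid;
    # the error is propagated through the local slope (Gyr per mag).
    age = np.interp(delta_obs, delta_grid, age_grid)
    slope = np.mean(np.gradient(age_grid, delta_grid))
    return age, abs(slope) * delta_err

print(age_from_delta(2.97, 0.14))  # -> (13.7, 1.4) with this toy grid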
Using only NIR data, we derived an absolute age for M15 of 13.7±1.4 Gyr, which is compatible with, but has a smaller uncertainty than, the ages obtained using the classical MSTO method in the NIR (14.0±3.1 Gyr) or in purely optical HST bands (12.8±2.0 Gyr). This old age provides a lower limit to the age of the Universe and, in turn, an upper limit to the Hubble constant H0, since the former parameter is roughly the inverse of the latter (Monelli et al., 2015).

Conclusions and final remarks

There is mounting evidence of a difference between estimates of the Hubble constant based on direct measurements (Cepheids plus supernovae: Riess et al. (2016); Freedman et al. (2010); Beaton, this conference) and on indirect methods (CMB, BAO, lensing: Suyu et al. (2013); Bennett et al. (2014); Calabrese et al. (2015); Planck collaboration 2015). This critical issue has been addressed in several recent papers suggesting a difference that ranges from almost 2σ (Efstathiou, 2014) to more than 3σ (Riess et al., 2011, 2016). The quoted uncertainties on the Hubble constant, once confirmed, can open the path to new physics concerning the number of relativistic species and/or the mass of neutrinos (Dvorkin et al., 2014; Wyman et al., 2014; Luković et al., 2016). Moreover, the quoted range in H0 implies an uncertainty on the age of the universe t0 of the order of 2 Gyr. This uncertainty has a substantial impact not only on galaxy formation and evolution, but also on the age of the most ancient stellar systems; i.e., globular clusters can play a crucial cosmological role. Given that stellar age is not a direct observable, the different stellar "clocks" that can be applied to date globular clusters provide a unique opportunity to constrain the micro- and macro-physics adopted to construct evolutionary models. This comparison becomes even more rewarding when comparing main sequence stellar structures with white dwarf models. It might be possible that, in the era of "precision cosmology", we could use cosmological parameters to constrain the physics of stellar interiors. In the meantime, it is clear that clusters to which we can apply different age diagnostics (MSTO, MSK, white dwarf cooling sequence, cosmochronometry) become fundamental astrophysical and cosmological laboratories. In this investigation we reviewed the most popular methods to estimate both relative and absolute cluster ages. We focussed our attention on the error budget and critically discussed the pros and cons of the different age diagnostics. In particular, we outlined the key advantages of a new NIR age diagnostic, the main sequence knee, and its application to M15, the main limitation being the faintness of this anchor. However, there are two new observing facilities that are going to play a fundamental role in the use of the MSK. i) Multi-conjugated adaptive optics - The development of multi-conjugated adaptive optics systems at 8-10 m class telescopes is a real quantum jump. They provide NIR images that approach the diffraction limit over a field of view of the order of one arcminute. This means that they can provide accurate and deep NIR CMDs of the innermost crowded regions of GCs. Recent findings based on NIR images collected with GeMS at Gemini indicate that the combination of NGS and laser guide stars allows us to reach the MSK in a sizable sample of Galactic globulars (Massari et al., 2016a; Turri et al., 2016).
The empirical scenario becomes even more compelling if we take into account the next generation of AO-assisted NIR instruments at the VLT (ERIS). This instrument will simultaneously cover the NIR bands and the L-band, providing, in turn, a unique opportunity to identify the MSK in the most crowded and most reddened regions of the Galactic Bulge. ii) JWST - JWST is going to revolutionize the view of resolved stellar populations in the nearby Universe. The coupling between its field of view and its NIR/MIR bands will provide a unique opportunity to identify the MSK in a significant fraction of nearby dwarf galaxies. This means the opportunity to determine homogeneous ages for old stars in old stellar systems (dwarfs, globulars), to investigate whether they formed, as suggested by cosmological models, at the same epoch. The cosmic distance scale and the age-dating of nearby stellar systems have been, for more than half a century, the two fundamental pillars on which quantitative astrophysics is built. At the beginning of the new millennium, they are still awaiting consolidation. The near future appears quite bright, not only thanks to the next Gaia data releases, but also to the upcoming ground-based (LSST, ELTs) and space-based observing facilities (JWST, WFIRST, EUCLID). The same outcome applies to the wide spectroscopic surveys in the optical and NIR regimes (DESI, PFS, 4MOST, MOONS-GAL, APOGEE, WEAVE).
Genome-wide association study using specific-locus amplified fragment sequencing identifies new genes influencing nitrogen use efficiency in rice landraces

Nitrogen is essential for crop production. It is a critical macronutrient for plant growth and development. However, excessive application of nitrogen fertilizer is not only a waste of resources but also pollutes the environment. An effective approach to solving this problem is to breed rice varieties with high nitrogen use efficiency (NUE). In this study, we performed a genome-wide association study (GWAS) on 419 rice landraces using 208,993 single nucleotide polymorphisms (SNPs). With the mixed linear model (MLM) in the Tassel software, we identified 834 SNPs associated with root surface area (RSA), root length (RL), root branch number (RBN), root number (RN), plant dry weight (PDW), plant height (PH), root volume (RV), plant fresh weight (PFW), root fractal dimension (RFD), number of root nodes (NRN), and average root diameter (ARD), at a significance level of p < 2.39×10–7. In addition, we found 49 SNPs that were correlated with RL, RBN, RN, PDW, PH, PFW, RFD, and NRN using genome-wide efficient mixed-model association (GEMMA), at a significance level of p < 1×10–6. Additionally, 193 significant SNPs were associated with eight traits using the multi-locus random-SNP-effect mixed linear model (mrMLM), and 272 significant SNPs were associated with 11 traits using IIIVmrMLM. Within the linkage intervals of the significantly associated SNPs, we identified eight genes known to be related to NUE in rice, namely, OsAMT2;3, OsGS1, OsNR2, OsNPF7.4, OsPTR9, OsNRT1.1B, OsNRT2.3, and OsNRT2.2. According to the linkage disequilibrium (LD) decay value of this population, there were 75 candidate genes within the 150-kb regions upstream and downstream of the most significantly associated SNPs (Chr5_29804690, Chr5_29956584, and Chr10_17540654). These candidate genes included 22 transposon genes, 25 expressed genes, and 28 putative functional genes. The expression levels of these candidate genes were measured by real-time quantitative PCR (RT-qPCR): the expression levels of LOC_Os05g51700 and LOC_Os05g51710 in C347 were significantly lower than those in C117, while the expression levels of LOC_Os05g51740, LOC_Os05g51780, LOC_Os05g51960, LOC_Os05g51970, and LOC_Os10g33210 were significantly higher in C347 than in C117. Among them, LOC_Os10g33210 encodes a peptide transporter and LOC_Os05g51690 encodes a CCT domain protein, both potentially involved in NUE in rice. This study identified new loci related to NUE in rice, providing new genetic resources for the molecular breeding of rice landraces with high NUE.

Introduction

Nitrogen is one of the macronutrients necessary for rice growth and development, and its contribution to crop yield can reach 40%-50% (Daniel et al., 2007; Sylvester-Bradley and Kindred, 2009). However, excessive nitrogen application can not only cause eutrophication and soil acidification but also increase the cost of agricultural production (Allen et al., 2004; Rodrigo, 2012). NUE is an inherently complex trait, and a number of quantitative trait loci (QTLs) controlling it have been identified (Sheng et al., 2006; Qian et al., 2015). Based on this, many efforts have been made to improve NUE through agronomic practices or genetic dissection (Liu et al., 2023). Previous studies have demonstrated the potential of manipulating genes directly responsible for N uptake and assimilation to improve rice NUE (Gojon, 2017).
For example, there has been much progress on QTLs related to nitrogen utilization, such as the discovery of qNUE6 (Yang et al., 2017), qRDWN6XB (Anis et al., 2019), and qRDW-6 (Anis et al., 2018) on rice chromosome 6. Similarly, the introduction of indica OsNR2 into Nipponbare improved its effective tiller number, grain yield, and NUE (Gao et al., 2019). Increased OsNRT2.1 transcription from the OsNAR2.1 promoter led to significant increases in rice yields and NUE. It has been found that the overexpression of the glutamine synthetases OsGS1.1 and OsGS1.2 in rice reduces grain yield and disrupts C metabolism (Cai et al., 2009). The introgression of OsTCP19-H into two japonica cultivars significantly increased tillering, indicating the potential for improvement of rice NUE with OsTCP19-H (Liu et al., 2021). Furthermore, OsDREB1C overexpression promotes early flowering and increases NUE (Wei et al., 2022). The above evidence suggests that manipulating genes directly involved in nitrogen uptake and assimilation is a good way to improve NUE in rice. Rice mainly absorbs nitrate and ammonium from the soil. NRT1s and NRT2s are the major nitrate transporters in rice. According to their affinity for nitrate, NRTs can be divided into two categories: low-affinity and high-affinity. The NRT1/PTR family belongs to the low-affinity nitrate transporters (Table 1). OsNRT1 was the first low-affinity nitrate transporter identified in rice, and it regulates nitrate uptake in roots (Lin et al., 2000). OsNRT1.1 has two splice forms, OsNRT1.1a and OsNRT1.1b, which differ in their splicing patterns (Xiaorong et al., 2016). OsNRT1.1A can improve NUE and promote flowering (Wang et al., 2018). NRT1.1B affects the NUE of rice by regulating the rice root microbiome. Moreover, NRT1.1B not only functions in nitrate uptake and transport but also in sensing nitrate signals (Zhang et al., 2019). OsNPF2.2 can export nitrate from the xylem and can also transport nitrate from root to stem, affecting the overall growth and development of the plant vascular system. OsNPF2.4 is mainly expressed in the rice epidermis, xylem tissue, and phloem sieve tube companion cells, playing critical roles in NO3− absorption, inter-organ transport, and redistribution. OsNPF4.5 is specifically expressed in mycorrhizal arbuscules and participates in the mycorrhizal symbiotic pathways of nitrate uptake and mycorrhizal formation (Wang et al., 2020). OsNPF5.16 encodes a pH-dependent, low-affinity nitrate transporter that positively regulates rice tiller number and yield by modulating cytokinin levels. OsNPF6.1 encodes a nitrate transporter with two haplotypes, OsNPF6.1HapA and OsNPF6.1HapB. OsNPF6.1 is induced by nitrate and is mainly expressed in rice lateral roots, root epidermal cells, and leaf nodes. Under low nitrogen conditions, the expression of OsNPF6.1 is higher in NILs-OsNPF6.1HapB, and its transport activity is also higher (Tang et al., 2019). OsNPF7.2 is mainly expressed in the elongation and mature zones of roots, especially in the sclerenchyma, cortex, and stele. It controls tiller bud growth and root development by regulating cytokinin levels and cell cycles (Hu et al., 2016). Both spliced transcripts of OsNPF7.7 affect nitrogen uptake and distribution, and they positively regulate rice tillering and NUE. Overexpression of OsNPF7.7-1 can promote root nitrate influx and concentration, while overexpression of OsNPF7.7-2 promotes the influx and concentration of ammonium ions in the root system.
OsNPF7.7RNAi and osnpf7.7 plants showed increased amino acid content in the leaf sheath and decreased amino acid content in the leaves, thereby affecting nitrogen distribution and plant growth (Weiting et al., 2018). The overexpression of OsNPF7.1 or OsNPF7.4 can promote nitrate uptake. OsNPF7.1 overexpression can moderately increase the nitrate and amino acid concentrations, which in turn increases seedling biomass and yield. However, excessive nitrate in OsNPF7.4-overexpression plants may lead to the accumulation of amino acids in leaf sheaths, thereby inhibiting seedling biomass; moreover, the reduced nitrate reutilization rate in seedlings also limits the accumulation of plant biomass (Weiting et al., 2019). The NRT2 family comprises the high-affinity nitrate transporters, among which OsNRT2.4 is a dual-affinity nitrate transporter. OsNRT2.1, OsNRT2.2, and OsNRT2.3a are transcriptionally upregulated by nitrate supply; they need to interact with the chaperone OsNAR2.1 to take up nitrate over a range of concentrations. On the contrary, OsNRT2.3b and OsNRT2.4 can still function in the absence of NAR2 (Tang et al., 2012). OsNRT2.1 is involved in nitrate-dependent root elongation by regulating auxin transport to the root. The overexpression of OsNRT2.1 enhanced the effect of NO3− treatment on root growth, which required active polar transport of auxin (Misbah et al., 2019). OsNRT2.3a is responsible for nitrate loading in roots and nitrate transport to shoots, but it has no effect on nitrate uptake in roots (Tang et al., 2012). OsNRT2.3b is mainly expressed in the phloem of the stem, playing a role in pH and ion homeostasis; it can also cause membrane potential depolarization and cytoplasmic acidification under NO3− supply conditions. The genes responsible for nitrate uptake in roots have been clearly identified as OsNRT1.1a, OsNRT1.1b, OsNRT1.1A, OsNRT1.1B, and OsNPF2.4, while OsNPF2.2, OsNRT2.3a, and OsNRT2.3b were found to be responsible for nitrate transport. While rice can absorb nitrate nitrogen, paddy soil is typically flooded for prolonged periods, resulting in high concentrations of ammonium nitrogen. Therefore, ammonium nitrogen is considered to be the main form of nitrogen absorbed by rice (Wang et al., 1993). The absorption of ammonium in rice requires ammonium transporters (Table 1). In rice, there are 12 hypothesized ammonium transporters that can be classified into five categories (Nicolaus et al., 2000; Cheng-Hsun and Yi-Fang, 2010): OsAMT1.1, OsAMT1.2, OsAMT1.3, OsAMT2.1, OsAMT2.2, OsAMT2.3, OsAMT3.1, OsAMT3.2, OsAMT3.3, and OsAMT4 (Sonoda et al., 2003a; Sonoda et al., 2003b; Suenaga et al., 2003). These ammonium transporters can provide a stable nitrogen source for rice, especially when the rice root is submerged in water for a long time. Nitrogen absorbed by rice requires transformation before it can be utilized. OsAMT1.1 is a low-affinity NH4+ transporter, and the OsAMT1.1-mediated NH4+ uptake and transport are not affected by intracellular or extracellular pH but are regulated by feedback from substrate accumulation (Yang et al., 2015). Under low ammonium conditions, both root growth and ammonium uptake are inhibited after the knockout of OsAMT1.3. OsAMT1.1, OsAMT1.2, and OsAMT1.3 synergistically regulate ammonium uptake in rice under low nitrogen conditions.
When ammonium supply is low, the single mutants show unaltered growth and nitrogen accumulation. In contrast, the amt1.1:1.2 double mutant exhibits a 30% decrease in stem growth and nitrogen content, while the amt1.2:1.3 double mutant is not affected. The triple mutant has the most significant phenotype, with 59% inhibition of stem growth and a 72% decrease in nitrogen accumulation (Konishi and Ma, 2021). These results suggest that OsAMT1;1, OsAMT1;2, OsAMT1;3, OsAMT2;1, and OsAMT3;1 are responsible for ammonium uptake in rice roots (Bao et al., 2014). On the other hand, OsGS2 is mainly expressed in the chloroplasts of leaves and plays a dominant role in the reassimilation of the NH4⁺ released by photorespiration (Wallsgrove et al., 1987). The majority of the absorbed NH4⁺ is assimilated in the form of glutamate and glutamine and transported to the aerial part. When the external NH4⁺ concentration increases, the expression of OsGS1.2 and OsNADH-GOGAT2 in root epidermal cells and outer cortex cells increases significantly, thus rapidly assimilating NH4⁺, and the resulting glutamine and glutamate are transported to the aerial part (Tamura et al., 2011). The processes of nitrogen uptake, transport, assimilation, and regulation involve complex gene regulatory networks. Many transcription factors have been identified as participating in the regulation process (Table 1), such as NLP (Wu et al., 2020), NAC42 (Tang et al., 2019), BTB (Araus et al., 2016), and GRF4 (Li et al., 2018). OsNLP3 is a core transcription factor gene involved in nitrate signaling. It can translocate to the nucleus and initiate the transcription of NUE-related genes. OsNAC42 can activate OsNPF6.1, especially OsNPF6.1HapB, while OsNAC42M can only transcriptionally activate OsNPF6.1HapB, indicating that OsNPF6.1HapB is more sensitive to OsNAC42 and its mutants (Tang et al., 2019). GRF4 is a positive regulator of plant carbon-nitrogen metabolism. It can promote nitrogen uptake, assimilation, and transport, as well as photosynthesis, carbohydrate metabolism, and transport, thereby promoting plant growth and development (Li et al., 2018). Additionally, a study found that the expression of OsNR1.2 is controlled by a zinc finger transcription factor called DROUGHT AND SALT TOLERANCE (DST) (Han et al., 2022). To identify new genes related to NUE, this study used genome-wide association analysis to identify loci affecting 11 traits of 419 rice landraces, including RSA, RL, and RBN. Using the MLM model, we detected 834 significantly associated SNPs (p < 2.39×10⁻⁷), and using GEMMA, we identified 49 significantly associated SNPs (p < 1×10⁻⁶). Finally, RT-qPCR was used to validate the genes involved in NUE, providing additional genetic resources for the cultivation of new rice landraces with high nitrogen efficiency.
Material planting
The experimental materials were collected from 419 rice landraces in Guangxi, including 330 indica rice, 78 japonica rice, and 11 other rice. Three biological replicates were set up for the NUE experiment, with 12 plants in each replicate. To break dormancy, the rice seeds were soaked in water for 24 hours at an ambient temperature of 28°C. Then, they were placed on a damp cotton cloth at the same temperature for 24 hours to allow germination. The germinated seeds were sown in 96-well culture boxes and cultured with the normal nitrogen-level nutrient solution (1 mM NH4NO3).
The nutrient solution was a thousand-fold dilution of Yoshida culture solution B (Yoshida, 1976). The seedlings were all placed in an incubator with an ambient temperature of 28°C, humidity of 80%, light intensity of 40%, and a light/dark cycle of 13/11 h. The seedlings were cultured for 20 d.
Phenotyping
The phenotype parameters included plant height, plant fresh weight, plant dry weight, root length, root number, root branch number, root volume, number of root nodes, root fractal dimension, average root diameter, and root surface area. Plant height was measured from the base of the rice plant to the leaf tip. Plant dry weight was obtained by weighing the dried samples. The number of root nodes is the sum of the root number, the root branch number, and the number of points at which the roots cross. The root fractal dimension is a direct indicator of root development: a higher root fractal dimension indicates a more developed root system, while a relatively small root fractal dimension suggests a weaker root meristem ability. The root data were obtained by scanning and analyzing the roots with the LA-S root scanning system (WSeen, Hangzhou, China).
Specific-locus amplified fragment sequencing and SNP genotyping
Specific-locus amplified fragment (SLAF) sequencing was performed on an Illumina HiSeq 2500 system. The clean reads were clustered using BLAT software to obtain polymorphic SLAF tags. Then, BWA software was used to align the polymorphic SLAF tag sequences to the Nipponbare reference genome (http://rice.uga.edu/). GATK and SAMtools were used for SNP calling. A total of 208,993 SNPs were obtained based on a minor allele frequency (MAF) > 0.05 and a missing rate < 0.5 (Yang et al., 2018).
Genome-wide association analysis
The genetic relationship between samples was calculated using the Centered_IBS module of TASSEL. The population structure was analyzed with ADMIXTURE software. We conducted GWAS using the TASSEL software on the 208,993 SNP genotypes and the seedling phenotype data. The MLM used a (Q+K) model, where Q was the population structure and K was the kinship coefficient. SNPs with p < 2.39×10⁻⁷ were considered significantly associated. Manhattan and Q-Q plots were generated in the R environment. At the same time, GEMMA, another software package commonly used for GWAS, was used to conduct association analysis on the same 208,993 SNP genotypes and seedling phenotype data. SNPs with p < 1×10⁻⁶ were considered significantly associated. Manhattan and Q-Q plots were also generated in the R environment.
Candidate gene prediction
The Nipponbare genome was used as the reference sequence (http://rice.uga.edu/). The candidate regions were selected based on the LD decay of the 419 rice landraces, with 150-kb intervals upstream and downstream of the SNPs showing a significant association.
Total RNA extraction and RT-qPCR
The materials for extracting RNA were the variety with the smaller RN, C117, and the variety with the larger RN, C347. Total RNA was extracted with an RNA extraction kit following the kit instructions. For reverse transcription, 1 μg of RNA was used as the template to synthesize cDNA. A 20 μl reaction system was prepared following the kit instructions, consisting of 1× RT buffer, 1 mM dNTPs, 0.5 μM oligo-dT primer, and 0.5 U RNase inhibitor. The 20 μl RT-qPCR system was prepared with 2× Universal Blue SYBR Green qPCR Master Mix (Servicebio, Wuhan, China), plus 0.4 μl each of the forward and reverse primers, 2 μl of cDNA product, and 7.2 μl of nuclease-free water. Actin was used as the internal reference gene (Gaur et al., 2012). All primers used for RT-qPCR are listed in Supplementary Table 1. The RT-PCR reactions were carried out on a BIO-RAD T100 Thermal Cycler (Bio-Rad, California, USA), and the RT-qPCR reactions were carried out on a BIO-RAD CFX96 Touch fluorescence quantitative PCR imaging system (Bio-Rad, California, USA).
Statistical analysis
RT-qPCR data were analyzed using the 2^(−ΔΔCt) method (Thomas and Kenneth, 2008). Statistical analysis and plotting were performed using Origin 2022b and GraphPad Prism 8 software.
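To make the relative-expression calculation concrete, the following is a minimal sketch of the 2^(−ΔΔCt) method named above; the function name and Ct values are illustrative assumptions, not study data, with C117/C347 and Actin following the study's naming.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-expression calculation.
# Ct values below are invented for illustration.

def ddct_fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in a test sample relative to a control
    sample, each normalized to a reference (housekeeping) gene such as Actin."""
    dct_test = ct_target_test - ct_ref_test   # ΔCt in the test variety
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl   # ΔCt in the control variety
    ddct = dct_test - dct_ctrl                # ΔΔCt
    return 2.0 ** (-ddct)

# Example: a candidate gene in C347 (test) vs. C117 (control)
fold = ddct_fold_change(ct_target_test=24.1, ct_ref_test=18.0,
                        ct_target_ctrl=26.5, ct_ref_ctrl=18.2)
print(round(fold, 2))  # >1 means higher expression in C347 than in C117
```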
Rice phenotypic analysis under normal nitrogen levels
The scope of research on NUE covers the entire process of nitrogen uptake, transport, assimilation, redistribution, and signal transduction in plants, which are essential for the improvement of NUE. In this study, 11 traits relating to N uptake, transport, and assimilation were investigated, and the genes associated with these traits were identified under unoptimized N conditions. The seedlings of the 419 rice landraces had significant differences in traits such as PH, PFW, and PDW after 20 days of hydroponic cultivation (Table 2). The data distributions of PH and RFD were less dispersed, with small coefficients of variation, indicating small genetic variation for these traits. We used Origin 2022b software to analyze the correlation among the 11 traits and found that PFW and PDW were highly positively correlated. Additionally, RSA and RV were also highly positively correlated; RBN was significantly positively correlated with RN, NRN, and RL, while NRN was significantly positively correlated with RBN and RL. On the other hand, ARD was significantly negatively correlated with PH, as well as with multiple other traits. Overall, the correlations among the root traits were significant (Figure 1). To reflect NUE, we measured PH, PFW, PDW, RSA, RBN, RFD, RN, NRN, ARD, RV, and RL under the normal nitrogen level (Figure 2). After 20 days of cultivation, the 419 rice landraces showed differences in the 11 traits. C51 has the largest PFW, RFD, RV, PDW, and RSA. C93 has the largest ARD. C339 has the largest PH. C349 has the largest RBN, RN, RL, and NRN.
Genome-wide association analysis
Phylogenetic tree construction and principal component analysis
The construction of the phylogenetic tree and the principal component analysis for the 419 rice landraces were performed in a previous study (Yang et al., 2018).
Population structure and LD analysis
To analyze the population structure of the entire population based on the screened SNPs, we used the ADMIXTURE software and tested K values from 1 to 10. As a result, the 419 rice landraces were divided into five populations (Figure 3A), with K = 5 giving the minimum cross-validation error rate (CV error), indicating that all the samples might belong to these five populations (Figure 3B). Next, genome-wide SNPs were used to analyze the LD level of the total population, taking the physical distance corresponding to half of the maximum correlation coefficient (r²). The result showed that the LD decay distance of the total population at the genome-wide level was 150 kb (Figure 3C).
Genome-wide association analysis
By using the MLM model, we identified 834 SNPs that were significantly associated with RSA, RL, RBN, RN, PDW, PH, RV, PFW, RFD, NRN, and ARD, at a significance level of p < 2.39×10⁻⁷. These significantly associated SNPs were distributed on the 12 rice chromosomes (Supplementary Table 2 and Figure 4).
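The MLM threshold used here is numerically consistent with a Bonferroni correction over the retained markers (0.05/208,993 ≈ 2.39×10⁻⁷), although the text does not state this explicitly. The following is a minimal sketch, with a hypothetical helper function and toy genotypes, of the MAF and missing-rate filters and the threshold arithmetic described in the Methods.

```python
# Minimal sketch (assumed workflow, not the authors' exact pipeline) of SNP
# quality filtering and the significance-threshold arithmetic.
import numpy as np

def filter_snps(geno, maf_min=0.05, max_missing=0.5):
    """geno: (n_samples, n_snps) matrix coded 0/1/2 with np.nan for missing.
    Returns a boolean mask of SNPs passing the MAF and missing-rate filters."""
    missing_rate = np.isnan(geno).mean(axis=0)
    allele_freq = np.nanmean(geno, axis=0) / 2.0      # frequency of the counted allele
    maf = np.minimum(allele_freq, 1.0 - allele_freq)  # minor allele frequency
    return (maf > maf_min) & (missing_rate < max_missing)

# Toy genotypes: 6 samples x 3 SNPs (SNP 1 is monomorphic, SNP 3 is 50% missing)
geno = np.array([[0, 2, np.nan],
                 [0, 2, np.nan],
                 [0, 1, np.nan],
                 [0, 2, 0],
                 [0, 2, 1],
                 [0, 2, 2]], dtype=float)
print(filter_snps(geno))   # [False  True False]

# Bonferroni-style threshold over the retained markers:
print(0.05 / 208_993)      # ~2.39e-07, matching p < 2.39x10^-7
```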
Among all the traits, RV was associated with the most significant SNPs (157). In terms of the chromosomal distribution of significant SNPs, PH and ARD had few: the significant SNPs for PH were distributed only on chromosomes 1 and 6, and those for ARD on chromosomes 2, 4, 9, and 10, while the significant SNPs of the other traits were distributed on all 12 chromosomes; the minimum number of significant SNPs on a chromosome was one (chromosome 1) and the maximum was 22. Regarding locus effects, multiple traits shared the same significant SNPs. For example, Chr2_32491137 was associated with RL, RBN, and RN; Chr10_21705001 was associated with RSA, PDW, RV, and PFW; Chr1_35486889 was associated with both RSA and RV. A peak SNP, Chr5_29956584, was associated with RN, RL, and RBN (Figures 4A, C, E), and the most significant SNP, Chr5_29804690, was also associated with RSA, RFD, PDW, PFW, and RV (Figures 4B, D, F, G, I). Chr10_17540654 was detected for RL, RN, and NRN (Figures 4A, C, H). In addition, a highly significant SNP, Chr6_20714026, was identified for PH (Figure 4J). We identified a total of 49 SNPs correlated with RL, RBN, RN, PDW, PH, PFW, RFD, and NRN using GEMMA, at a significance level of p < 1×10⁻⁶. These SNPs were distributed on all chromosomes other than 9 and 11 (Supplementary Table 2 and Figures 5A-K). We also found that the same significant SNPs were detected for multiple traits. For example, Chr5_6117508 and Chr5_6117514 were identified for RL, RBN, RN, and NRN. The most significant SNP, Chr1_32712980, was also identified for PH (Figure 5J). We identified 272 final SNPs correlated with RSA, RL, RBN, RN, PDW, PH, RV, PFW, RFD, NRN, and ARD by using IIIVmrMLM (Supplementary Table 2). Eight common SNPs were identified from both GEMMA and MLM, such as Chr5_6117508 and Chr5_6117514, which were associated with RL, RBN, RN, and NRN, as well as Chr1_32712980 for PH. There were eight SNPs in common between mrMLM and IIIVmrMLM, five SNPs in common between MLM and mrMLM, and nine SNPs in common between MLM and IIIVmrMLM. There were four SNPs in common between GEMMA and mrMLM and only one SNP in common between GEMMA and IIIVmrMLM.
Candidate gene analysis
According to the level of LD decay, candidate genes were selected within 150 kb upstream and downstream of the significant SNPs (a minimal sketch of this window selection is given below).
Figure 1: Pearson correlation coefficient plot for 11 rice traits. The color box in the upper left corner represents the correlation size. *p < 0.05, **p < 0.01, and ***p < 0.001.
In the MLM model, the cloned NUE genes OsAMT2.3 and OsNRT2.3 (Gaur et al., 2012; Fan et al., 2016) were identified in the linkage intervals of the SNPs Chr1_35486889 and Chr1_29156411 on chromosome 1, which were significantly associated with RSA and RV. On chromosome 2, OsGS1 was identified in the linkage interval of Chr2_30716371, which was associated with RSA; OsNR2 was identified in the linkage interval of Chr2_32491137, which was associated with RL, RBN, RN, and PDW; OsNRT2.2 was found in the linkage interval of Chr2_691901, which was associated with RV (Lee et al., 2013; Gao et al., 2019). On chromosome 4, OsNPF7.4 was identified in the linkage interval of Chr4_30171095, which was associated with RL, RBN, RN, and PDW (Leran et al., 2014).
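As flagged above, here is a minimal sketch of the 150-kb window selection applied on each chromosome in this subsection; the gene coordinates are invented for illustration, and real coordinates would come from the Nipponbare annotation.

```python
# Minimal sketch of selecting candidate genes within the 150-kb LD-decay
# window on either side of a significant SNP (hypothetical coordinates).
WINDOW = 150_000

genes = [  # (gene_id, chromosome, start, end) -- invented positions
    ("LOC_Os05g51690", "Chr5", 29_700_000, 29_705_000),
    ("LOC_Os05g51780", "Chr5", 29_820_000, 29_826_000),
    ("LOC_Os10g33210", "Chr10", 17_500_000, 17_505_000),
]

def candidates_near(snp_chrom, snp_pos, gene_table, window=WINDOW):
    """Genes overlapping [snp_pos - window, snp_pos + window] on the same chromosome."""
    lo, hi = snp_pos - window, snp_pos + window
    return [g for g, c, s, e in gene_table if c == snp_chrom and s <= hi and e >= lo]

print(candidates_near("Chr5", 29_804_690, genes))
# ['LOC_Os05g51690', 'LOC_Os05g51780'] -- genes near the peak SNP Chr5_29804690
```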
On chromosome 5, there were 14 candidate genes identified within the linkage interval of Chr5_29956584, which was associated with RN, RL, and RBN; in addition, there were 37 candidate genes in the linkage interval of Chr5_29804690, which was associated with RSA, RFD, PDW, PFW, and RV. On chromosome 6, OsPTR9 was identified in the linkage interval of Chr6_29837500, which was associated with RL, RBN, RN, and PDW (Fang et al., 2013). In the linkage interval of Chr6_20714026, 47 candidate genes were identified, which were associated with PH. On chromosome 10, OsNRT1.1B was identified in the linkage interval of Chr10_21705001, which was associated with RSA and RV. In the linkage interval of Chr10_17540654, 38 candidate genes were identified, which were associated with RL, RN, and NRN. In GEMMA, 41 candidate genes were identified within the linkage interval of Chr1_32712980, which was associated with PH. Finally, based on the SNP p-values and gene annotations, the candidate genes associated with Chr5_29956584, Chr5_29804690, and Chr10_17540654 were selected (Supplementary Table 3), because these SNPs appeared in multiple traits and their candidate genes might have high research significance.
Expression analysis of candidate genes
In the MLM model, Chr5_29956584 was the most significant SNP associated with RN, RL, and RBN; Chr5_29804690 was the most significant SNP associated with RSA, RFD, PDW, PFW, and RV; Chr10_17540654 was associated with RL, RN, and NRN. These three significant SNPs appeared in multiple traits, and therefore the candidate genes associated with these SNPs were chosen for subsequent expression analysis. The Q-Q plots of the MLM and GEMMA results showed that the model fit for RN was the best in both analysis methods, and therefore RNA was extracted from materials contrasting in RN (Figures 4, 5). Among the 419 landraces, we selected varieties differing in both RSA and RN to measure the expression of 22 important genes in the linkage intervals of Chr5_29804690, Chr5_29956584, and Chr10_17540654 using RT-qPCR (Supplementary Figures 3, 4). The results showed that LOC_Os05g51720, LOC_Os05g51820, LOC_Os05g51860, and LOC_Os05g51900 had little or no expression in C117 or C347. The expression levels of LOC_Os05g51690, LOC_Os05g51750, LOC_Os05g51754, LOC_Os05g51790, LOC_Os05g51800, LOC_Os05g51810, LOC_Os05g51830, LOC_Os05g51850, LOC_Os05g51870, LOC_Os05g52080, and LOC_Os05g52090 were not significantly different between C117 and C347 (Supplementary Figures 5A-K). The expression levels of LOC_Os05g51700 and LOC_Os05g51710 were significantly lower in C347 than in C117, and the expression levels of LOC_Os05g51740, LOC_Os05g51780, LOC_Os05g51960, LOC_Os05g51970, and LOC_Os10g33210 were significantly higher in C347 than in C117 (Figures 6A-G).
Figure 4: Genome-wide association analysis based on MLM. (A-K) Manhattan plots and quantile-quantile plots of the MLM model. Red arrows indicate significant sites associated with cloned nitrogen efficiency genes, and green arrows indicate significant sites that can be further investigated. The red line indicates the significance threshold at p = 2.39×10⁻⁷. The blue line indicates the significance threshold at p = 4.78×10⁻⁸.
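The expression comparisons reported above (Figures 6A-G) carry significance stars without naming the statistical test; a two-sample t-test on replicate relative-expression values is one common choice, sketched below with invented replicate values.

```python
# Minimal sketch of a between-variety expression comparison for one gene,
# applying the study's significance conventions to the resulting p-value.
from scipy import stats

expr_C117 = [1.00, 0.92, 1.08]   # relative expression, 3 biological replicates
expr_C347 = [3.10, 2.85, 3.40]

t, p = stats.ttest_ind(expr_C117, expr_C347)
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
print(f"t = {t:.2f}, p = {p:.4f} ({stars})")
```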
Discussion
In recent years, GWAS has become an effective technology to detect complex trait loci in rice. In this study, we used the MLM model to detect 834 SNPs significantly associated with the 11 traits, and we used the GEMMA model to identify 49 SNPs significantly associated with eight traits. We identified 272 final SNPs by using IIIVmrMLM, and 193 final SNPs by using mrMLM. Because the SNP screening criteria and p-value thresholds differed between the TASSEL and GEMMA software, the numbers of detected associated SNPs also differed.
Association of known nitrogen utilization genes with this study
Among the candidate genes associated with significant SNPs from the MLM model, eight were known NUE-related genes, including OsAMT2.3, OsGS1, OsNR2, OsNPF7.4, OsPTR9, OsNRT1.1B, OsNRT2.3, and OsNRT2.2. OsAMT2.3 encodes an ammonium transporter protein. When nitrogen input is increased, the number of grains per panicle, the thousand-grain weight, the percentage of nitrogen in biomass, and the protein content in grains show significant differences between nitrogen-efficient and nitrogen-inefficient varieties; the expression of OsAMT2.3 also differed in the flag leaves at the filling stage (Gaur et al., 2012). In this study, OsAMT2.3-associated SNPs were detected for RSA and RV, suggesting that OsAMT2.3 may also affect these two parameters. OsGS1 encodes a cytoplasmic glutamine synthetase. OsGS1 is expressed in all rice tissues and is highly expressed in leaves. When rice uses ammonium as the main nitrogen source, OsGS1 plays an important role in coordinating the entire metabolic network and can affect the normal growth and grain filling of rice (Tabuchi et al., 2005; Kusano et al., 2011). OsGS1-associated SNPs were detected for RSA. OsGS1 is also expressed in roots and is important for root growth. OsNR2 encodes a nitrate reductase that exhibits high enzyme activity and is known for its sensitivity to chlorate as well as its ability to absorb high amounts of nitrate. OsNR2 promotes the expression of OsNRT1.1B in the indica rice 9311, and OsNRT1.1B in turn enhances the expression of OsNR2 (Gao et al., 2019). OsNR2-associated SNPs were detected for RL, RBN, and RN. OsNR2 is expressed in the vascular tissue of rice shoots and roots, as well as in the elongation zone of young roots, but OsNRT1.1B was not found in the linkage intervals of the SNPs significantly associated with these three traits. It is possible that the interaction between OsNR2 and OsNRT1.1B was not obvious in the rice landraces included in this study. OsNPF7.4 encodes a nitrate transporter. Knockout of OsNPF7.4 can increase seedling biomass, tiller number, seed number, and yield per plant, and excessive nitrate in plants with high OsNPF7.4 expression may lead to amino acid accumulation in leaf sheaths, thereby inhibiting seedling biomass (Weiting et al., 2019). OsNPF7.4-associated SNPs were detected for PDW. Since high expression of OsNPF7.4 can inhibit seedling biomass, it may also affect seedling PDW. OsPTR9 encodes a peptide transporter that localizes to the plasma membrane. The expression of OsPTR9 is regulated by exogenous nitrogen and the circadian rhythm. The overexpression of OsPTR9 can increase ammonium uptake, promote the formation of lateral roots, and increase yield; the downregulation of OsPTR9 has the opposite phenotypic effect (Fang et al., 2013). OsPTR9-associated SNPs were detected for PDW. The positive effect of OsPTR9 on lateral root formation may increase PDW.
OsNRT1.1B encodes a nitrate transporter, which is mainly expressed in root hairs, the epidermis, and vascular tissues, and it is also highly expressed in epidermal cells and stele cells adjacent to the xylem in roots. OsNRT1.1B affects the NUE of rice by regulating the microorganisms with nitrogen-transformation ability in roots (Zhang et al., 2019). OsNRT1.1B-associated SNPs were detected for RSA, PDW, RV, and PFW. Since OsNRT1.1B is expressed in roots and regulates the root microbiome, and root surface area and volume can affect the contact area between roots and microorganisms, OsNRT1.1B may be involved in the regulation of RSA and RV. OsNRT2.2 encodes a high-affinity nitrate transporter, which is upregulated by nitrate and inhibited by NH4⁺ and high temperature; its expression level is increased by light or exogenous sugar treatment. OsNRT2.3 is localized on the plasma membrane and is mainly expressed in the parenchyma cells of the root xylem. It is responsible for nitrate loading in the root and transport to the aerial part and has no effect on the uptake of nitrate in the root (Tang et al., 2012). SNPs associated with both OsNRT2.2 and OsNRT2.3 were detected for RV, so both genes may have a certain effect on RV.
Association of candidate genes with this study
The key to plant development is the uptake of micronutrients and macronutrients by the root system. The use of these resources, which are often unevenly distributed in the soil, is optimized as root architecture responds to nutrient availability (Kemo et al., 2017; Hans et al., 2019). Previous GWAS studies have successfully elucidated the adaptive mechanisms of root structure to nutrients (Bouain et al., 2018). In a previous study, 96 varieties of Arabidopsis thaliana were screened under nitrogen treatment, and the GWAS results on seven root traits showed that only one-third of the genes were associated with the same trait (average lateral root length) under two different nitrogen concentrations (Miriam et al., 2017). In this study, we analyzed the RSA, RL, RBN, RN, PDW, PH, RV, PFW, RFD, NRN, and ARD of 419 rice landraces in a nutrient solution. The phenotypic data showed large genetic diversity and strong correlations between different traits. Through gene expression analysis, we found that the expression of LOC_Os05g51700 and LOC_Os05g51710 in C347 was significantly lower than that in C117, while the expression of LOC_Os10g33210 in C347 was significantly higher than that in C117.
Figure 6: Expression analysis of candidate genes in C117 and C347. (A-G) The expression amounts of LOC_Os10g33210, LOC_Os05g51700, LOC_Os05g51710, LOC_Os05g51740, LOC_Os05g51780, LOC_Os05g51960, and LOC_Os05g51970, respectively. The x-axis represents the material used to detect the amount of gene expression, and the y-axis represents the amount of gene expression in the material. *p < 0.05, **p < 0.01, and ***p < 0.001.
Since LOC_Os10g33210 was found for RL, NRN, and RN, it is likely to participate in the regulation of these traits. Although there are expression differences among LOC_Os05g51700, LOC_Os05g51710, LOC_Os05g51960, and LOC_Os05g51970, the functions of the encoded proteins are unknown, and they are annotated as hypothetical proteins. LOC_Os10g33210 has been identified before (Jie et al., 2010; Leran et al., 2014). An intuitive tool called the CRISPR-adapted Functional Redundancy Checker has been proposed to facilitate functional genomics in rice (Hong et al., 2020).
LOC_Os10g33210 was found to have protein sequences and expression patterns similar to several nitrate transporter proteins (https://cafri-rice.khu.ac.kr/inspector). LOC_Os05g51780 encodes a zinc finger protein, which belongs to a family of zinc finger transcription factors that can transmit signals. Some studies have found that zinc finger transcription factors are involved in nitrogen assimilation; under conditions with sufficient water, zinc finger transcription factors could up-regulate the target genes OsNR1.2 and OsPrx24, which can improve nitrogen assimilation and promote the opening of stomata. Under osmotic stress, zinc finger transcription factors could down-regulate the expression of OsNR1.2 and OsPrx24, leading to inhibited nitrogen assimilation and stomatal closure, which enhances the drought tolerance of rice. LOC_Os05g51780 was also associated with RSA, RFD, PDW, PFW, and RV. As for LOC_Os05g51690 and LOC_Os05g51830, although there was no significant difference in the expression of these two genes between C117 and C347, they were associated with RSA, RFD, PDW, PFW, and RV. LOC_Os05g51690 is involved in the response to the lack of macronutrients, and it affects rice growth. Through alternative splicing, LOC_Os05g51690 produces two transcripts with the same 5' end, NRRa and NRRb, which encode 308-aa and 223-aa proteins, respectively. NRRa has one more CCT domain at the C-terminus than NRRb. NRRa and NRRb can regulate the structure of rice roots to improve macronutrient absorption; they also play a negative regulatory role in root growth and regulate the heading time of rice (Yu-Man et al., 2012; Yuman et al., 2013). LOC_Os05g51830 is located in the nucleus and expressed throughout the life cycle of rice. It can regulate rice seed germination under abiotic stress conditions. The overexpression of LOC_Os05g51830 reduced the responses to abscisic acid (ABA), salt stress, and osmotic stress during seed germination and delayed seed germination.
Differences among studies
When analyzing the population structure, we divided the 419 rice landraces into five groups, while other studies divided the materials into six groups. The reason for the different groupings may be that the criteria used to filter SNPs were different. The GWAS results from the MLM model and the GEMMA model were clearly different, and the MLM model generated many more significant SNPs than the GEMMA model. The different results of the two models may be caused by the stricter SNP filtering in the GEMMA model: more than 200,000 SNPs were used in the MLM model, while fewer than 50,000 SNPs were used in the GEMMA model. On the other hand, the alleles associated with phenotypic diversity might occur at low frequencies, which makes them hard to detect by GWAS (Korte and Farlow, 2013).
Conclusion
In this study, we used 208,993 SNPs to perform GWAS on 419 rice landraces. With the MLM model of the TASSEL software, we detected 834 SNPs associated with RSA, RL, RBN, RN, PDW, PH, RV, PFW, RFD, NRN, and ARD at the p < 2.39×10⁻⁷ significance level, and they were distributed on the 12 chromosomes. With GEMMA, we detected 49 SNPs associated with RL, RBN, RN, PDW, PH, PFW, RFD, and NRN at the p < 1×10⁻⁶ significance level, and they were distributed on 10 chromosomes (all except chromosomes 9 and 11). RT-qPCR was used to detect the expression levels of the candidate genes.
The expression levels of LOC_Os05g51700 and LOC_Os05g51710 in C347 were significantly lower than those in C117, while the expression levels of LOC_Os05g51740, LOC_Os05g51780, LOC_Os05g51960, LOC_Os05g51970, and LOC_Os10g33210 were significantly higher in C347 than in C117. Comprehensive analysis indicated that LOC_Os10g33210 and LOC_Os05g51690 might be important candidate genes affecting NUE in rice. This study provides a theoretical basis for the genetic improvement of NUE in rice.
Author contributions
XY acquired the funding and participated in supervision; DL participated in supervision; ZL and YQ conducted the field trials, data collection, and data analysis; ZL and XX carried out data visualization; BN and ZZ were in charge of the investigation; ZL, XY, and DL wrote, reviewed, and edited the draft. All authors contributed to the article and approved the submitted version.
Funding
This project is funded by the National Natural Science Foundation of China (32060476, U20A2032, and 31860371).
Acknowledgments
ZL thanks all those whose support was a prerequisite for the completion of this study. ZL is also very grateful to the supervisors Danting Li and Xinghai Yang for their valuable advice on the formulation of the research questions and research methods. Finally, ZL would like to thank ZL's parents and friends, whose help and support are the driving force to move forward.
2023-07-17T15:06:36.306Z
2023-07-14T00:00:00.000
{ "year": 2023, "sha1": "43446f00af5643b8b7832884653e3207159bb2b0", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2023.1126254/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9105d3a7dee1491a5f834ce039a32bc6c713498", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
226444017
pes2o/s2orc
v3-fos-license
ASSESSMENT OF GENOTYPE × ENVIRONMENT INTERACTION FOR GRAIN PROTEIN CONTENT IN SHORT-SEASON SOYBEAN GENOTYPES
The protein content is an important parameter of the technological quality of soybean grain. Therefore, the selection work is aimed at creating a genetic basis for obtaining varieties not only of high yield but also of improved grain quality. In order to ensure sustained progress in breeding, it is necessary to find stable sources for breeding for the desired traits. The aim of this study was to examine the value of the genotype × environment (G×E) interaction for protein content in 14 Maize Research Institute „Zemun Polje" short-season soybean accessions and to identify stable sources that can be used in breeding for protein content improvement. The G×E interaction for grain protein content was analyzed using a linear-bilinear AMMI-1 model. The influence of genotype and environment on the total variation of protein content was approximately equal, while the smallest variation was attributed to the genotype × environment interaction. A number of genotypes with different protein contents (Korana, PI 180 507, Kabott, Krajina, Canatto) showed a small contribution to the interaction in the studied environments, the most important of which were the genotypes with above-average protein content, as potential sources for future breeding programs.
Introduction
The modern processing industry requires soybean varieties characterized by good parameters of the technological quality of the grain, the most important of which are protein and oil content. Therefore, the breeding work is aimed at creating a genetic basis for obtaining varieties not only of high yield but also of improved grain quality (Miklič et al., 2018). The major problem in obtaining high-yielding and high-protein soybean cultivars is the negative correlation between oil and protein content as well as the negative correlation between protein and grain yield (Cober and Voldeng, 2000). Furthermore, seed protein content is a quantitative trait determined by a number of genes with minor or major effects (Hyten et al., 2004), and it largely depends on environmental factors as well as genotype × environment interactions (Miladinović et al., 1996; Balešević-Tubić et al., 2011). The genotype × environment interaction implies an inconsistent reaction of the genotype to changes in environmental conditions (Baker, 1988). From the breeding aspect, the interaction is an aggravating factor in selection, because the participation of the interaction component in the total variability reduces the heritability of the trait, and thus the reliability of selection based on the main components (Kelly et al., 1998; Kang, 2004). The interaction obscures the agronomic value of the introduced material (Giaufrett et al., 2000), which has recently become increasingly important in soybean breeding, given the dramatic narrowing of the genetic basis due to breeding within elite lines. The introduced germplasm is generally poorly adapted, so for the successful integration of genes into elite soybean varieties, it is necessary to determine the stability of the introduced sources (Palomeque et al., 2009). A large number of statistical models have been developed to assess genotype × environment interactions, with AMMI (Additive Main effects and Multiplicative Interaction) and GGE models being the most commonly used to determine genotype response patterns across different environments (Gauch and Zobel, 1998; Yan and Rajcan, 2002).
In this study, the AMMI-1 model was applied in order to identify genotypes with above-average grain protein values and good stability, which may potentially contribute to breeding, and to analyze the influence of agroecological factors in individual environments on the variability of seed protein content.
Material and methods
The experiment included 14 soybean genotypes of maturity group 00 (very early varieties), maintained in the soybean collection of the Maize Research Institute "Zemun Polje". Field trials were set up over two years (2011 and 2012) at two locations (Zemun Polje and Pančevo), according to a randomized complete block (RCB) design with three replications and an experimental unit area of 5 m². The harvest was carried out with a plot combine. The content of total protein in grain was measured on an NIRT (near infra-red transmission) analyzer ("Infraneo"®, Chopin Technologies) and expressed as a percentage (%) on a dry matter basis. The data were analyzed by a linear mixed model of classical analysis of variance with a random effect of blocks within the environment. Differences between genotype pairs over the 4 environments and differences between the means of environments were tested using Tukey's multiple comparison test. The interaction of genotype and environment for grain protein content was analyzed using a linear-bilinear AMMI-1 model (Crossa et al., 1990).
Meteorological conditions
The years and locations of the experiment varied greatly regarding meteorological conditions (Table 1). In general, both years were dry, while 2012 was among the driest growing seasons in the history of meteorological observations. The sum of precipitation in June, July, and August 2011 was three times higher than in 2012 at the location Zemun Polje and almost four times higher than in 2012 at the location Pančevo. In these months (June-August), soybean passes through the reproductive stage and is highly sensitive to drought (the critical period for water).
Grain protein content
Analysis of variance (mixed model) revealed that the influence of genotype, environment (season and location), and their interaction on soybean grain protein content was highly significant (P < 0.01). The almost equal influence of genotype and environment on the variation of protein content suggested that the examined soybean genotypes were less responsive to environmental changes. Similar findings were reported by Miladinović et al. (2006), who tested 4 Serbian soybean cultivars in 6 environments and found that the influence of genotype and environment on the total variation of protein content was approximately equal, while the smallest part of the variation was attributed to the genotype × environment interaction. In research by Sudarić et al. (2006) and Vollmann et al. (2000a), environment proved to be the most important source of variation, while less variation was attributed to the effects of genotype and the genotype × environment interaction. Average seed protein content per environment varied from 39.1% to 41.4% (Table 3). Genotypes tested at the location Zemun Polje during the year 2012 had the highest mean protein content. At the same time, this environment had the lowest precipitation in June, July, and August, suggesting that high temperatures and water deficiency could favour protein synthesis. In soybean genotypes of early maturity groups, average to high protein contents were found in years with high temperatures and moderate rainfall, while seed protein concentration was reduced in years with greater precipitation during the period of seed filling (Vollmann et al., 2000b). Similar findings were reported by Dornbos and Mullen (1992), who found that severe drought increased protein content by 4.4 percent, while oil content decreased by 2.9 percent. Average seed protein content of the 14 genotypes over two years and two locations varied from 37.9% to 43.2%. Genotypes Progres, Mini Soja, and Canatto had a significantly higher protein content compared to the other genotypes in the group, representing potentially valuable sources for breeding for protein content. Among different models for G×E interaction assessment, the AMMI-1 method has proven to be effective in predicting the performance of soybean genotypes in different environments (Faria et al., 2016; Souza et al., 2015), providing the possibility of a graphical representation of the interaction on a biplot, where the values of the main effects (genotypes, environments) are presented on the abscissa and the values of the first interaction axis (IPC1) on the ordinate (Crossa et al., 1990). A large part of the variation (59.9%) of the genotype × environment interaction for grain protein content in the 14 soybean genotypes was explained by the first interaction axis of the AMMI-1 model (Figure 1).
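To illustrate the decomposition behind the biplot, here is a minimal sketch of AMMI-1 under invented cell means (not the study's data): additive genotype and environment effects are removed, and the doubly-centred interaction table is decomposed by SVD, with IPC1 scores near zero indicating stable genotypes.

```python
# Minimal AMMI-1 sketch: main effects + SVD of the doubly-centred G x E table.
import numpy as np

# rows = genotypes, cols = environments: mean protein content (%) -- invented
means = np.array([[40.1, 39.5, 41.0, 40.3],
                  [42.8, 41.2, 43.5, 42.9],
                  [39.0, 39.2, 38.8, 39.4]])

grand = means.mean()
g_eff = means.mean(axis=1) - grand   # genotype main effects
e_eff = means.mean(axis=0) - grand   # environment main effects
resid = means - grand - g_eff[:, None] - e_eff[None, :]   # G x E interaction

U, S, Vt = np.linalg.svd(resid, full_matrices=False)
ipc1_genotypes = U[:, 0] * np.sqrt(S[0])   # genotype scores on IPC1
ipc1_envs = Vt[0, :] * np.sqrt(S[0])       # environment scores on IPC1
explained = S[0] ** 2 / np.sum(S ** 2)     # share of G x E variation on IPC1

print(ipc1_genotypes)
print(ipc1_envs)
print(round(explained, 3))   # analogous to the 59.9% reported above
```

Plotting each genotype's mean on the abscissa against its IPC1 score on the ordinate reproduces the AMMI-1 biplot described in the text.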
The differences in the main effects of the four environments were not large, since all environments had a protein content close to the general mean, which was 40.1%. The interaction effect of the four environments on the genotypes was mostly variable. Both test sites showed a positive interaction effect in 2011 and a negative interaction effect in 2012. Genotypes tested in Pančevo in 2011 were the most unstable, while the genotypes tested in 2012 showed approximately equal stability at both sites. Most of the studied genotypes were distributed around the average protein content, while they differed mainly in the value of the interaction component. The genotypes with interaction scores close to zero contributed little to the interaction and were considered stable (Korana, PI 180 507, Kabott, Krajina, Canatto). Among them, the genotypes with above-average protein content (Kabott and Canatto) are potentially of the greatest importance for designing parental combinations in breeding for protein content. Although the Mini Soja and Progres genotypes were characterised by very high protein content, the high value of the interaction limits their importance and use in the selection program to improve the technological quality of grain.
Conclusion
According to the results of our study, the AMMI-1 method could be used as an efficient tool for better prediction of the phenotypic stability of genotypes grown in different environments. Among the number of genotypes which showed good stability, two genotypes, Kabott and Canatto, were distinguished for being stable and having a high mean grain protein content. These genotypes could represent a potential source in breeding for enhanced quality of soybean grain.
2020-07-30T02:02:52.374Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "54cdaeeb659309b4d439360cdcb9bdd5adc73e8d", "oa_license": "CCBYSA", "oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0354-5881/2020/0354-58812001052P.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "538c1320003930a68b474006ff0b1a542ea1c4c7", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
268927526
pes2o/s2orc
v3-fos-license
Expanding access to healthcare for people who use drugs and sex workers: hepatitis C elimination implications from a qualitative study of healthcare experiences in British Columbia, Canada
Background: Hepatitis C virus (HCV) is a major health threat in Canada. In British Columbia (BC) province, 1.6% of the population had been exposed to HCV by 2012. Prevalence and incidence of HCV are very high in populations of people who use drugs (PWUD) and sex workers (SW), who may experience unique barriers to healthcare. Consequently, they are less likely to be treated for HCV. Overcoming these barriers is critical for HCV elimination. This research sought to explore the healthcare experiences of PWUD and SW and how these experiences impact their willingness to engage in healthcare in the future, including HCV care.
Methods: Interpretive Description guided this qualitative study of healthcare experiences in BC, underpinned by the Health Stigma and Discrimination Framework. The study team included people with living/lived experience of drug use, sex work, and HCV. Twenty-five participants completed in-depth semi-structured interviews on their previous healthcare and HCV-related experiences. Thematic analysis was used to identify common themes.
Results: Three major themes were identified in our analysis. First, participants reported common experiences of delay and refusal of care by healthcare providers, with many negative healthcare encounters perceived as rooted in institutional culture reflecting societal stigma. Second, participants discussed their choice to engage in or avoid healthcare. Many avoided all but emergency care following negative experiences in any kind of healthcare. Third, participants described the roles of respect, stigma, dignity, fear, and trust in communication in healthcare relationships.
Conclusions: Healthcare experiences shared by participants pointed to ways that better understanding and communication by healthcare providers could support positive change in the healthcare encounters of PWUD and SW, who are at high risk of HCV infection. More positive healthcare encounters could lead to increased healthcare engagement, which is essential for HCV elimination.
Background
Canada has committed to eliminating hepatitis C virus (HCV) infection as a public health threat by 2030 [1]. Chronic HCV infection can progressively damage the liver, potentially resulting in cirrhosis and liver cancer. HCV had infected an estimated 1.6% of the population in British Columbia (BC), Canada by 2011-2012 [2]. People who currently or formerly use drugs (PWUD) and people with current or former work in the sex trade (sex workers, SW) have particularly high HCV incidence and prevalence [3,4]. These two populations are not mutually exclusive, and the PWUD population in BC is difficult to define and estimate. Most figures for PWUD relate to a subset, people who inject drugs (PWID). A recent study estimated that 65% of BC's PWID will be exposed to HCV in their lifetime, and that the PWID population comprised 1.2 to 1.5% of British Columbians [5]. In BC at the end of 2015, 45% of people diagnosed with HCV were PWID, and recent research estimated that 80% of incidence was in PWID [4,6]. A prospective cohort of PWID in the largest urban area of BC found an HCV incidence of 3.1/100 person-years (PY) between 2006 and 2012, despite the widespread availability of free harm reduction supplies [7]. HCV can be transmitted by non-injection drug use as well, although less efficiently [8].
Estimating the SW population in BC is speculative, so prevalence is uncertain [9]. However, two recent studies in Vancouver measured HCV antibodies in cohorts which included SW. Goldenberg et al. found that 44% of 759 SW in their Vancouver study had been exposed to HCV [3]. Incidence between 2010 and 2014 in this SW cohort was 3.8/100 PY. Incidence was elevated in participants using non-injection crack (6.3/100 PY) and was 23.3/100 PY for participants using injection drugs. Shannon et al. found that among 3074 youth who injected drugs in Vancouver, 44% of those who did not work in the sex trade had evidence of HCV infection, which rose to 60% of youth involved in survival sex work [10]. The Treatment as Prevention (TasP) paradigm, initially a strategy to reduce HIV incidence in BC through early treatment of all eligible persons, can also apply to HCV elimination [11][12][13]. HIV and HCV differ in two ways relevant to TasP: curability and reinfection. As HIV is a lifelong infection, HIV TasP focuses on reducing transmission through case-finding and rapidly suppressing and maintaining suppressed viral load [14]. HCV TasP concentrates on case-finding, treatment, and follow-up as needed to reduce the risk of, or promptly treat, reinfection [15]. The microelimination approach complements TasP by structuring the response to ongoing incidence, identifying potential transmission networks and offering testing, treatment, and prevention simultaneously to all people in them [16][17][18][19]. BC took a critical step in operationalising HCV TasP in 2018 by removing disease-stage eligibility criteria for care covered by the province's universal medical services plan. This publicly funded HCV care covers antibody and RNA testing, diagnostic investigations, direct-acting antiviral (DAA) and other needed treatment, and follow-up at no cost to patients [20,21]. Expanding eligibility resulted in increased treatment uptake but not equitable access [22,23]. Treatment uptake in high-incidence populations remained under 50% in BC in recent data [22]. Eliminating HCV as a public health threat requires greater healthcare engagement with PWUD and SW populations, who bear a disproportionate HCV burden [1,24]. Simple and highly effective DAA treatment created a prospect for elimination, although critical health system and service barriers are hindering access to HCV care in these high-incidence populations. Many PWUD and SW are disengaged from healthcare, with stigma often cited as the primary reason for reluctance to engage [25][26][27]. Understanding the barriers these populations commonly face when seeking healthcare, prominently including stigma from healthcare workers, and how some members of these populations have overcome them provides an opportunity to promote access so that those at high risk of HCV can receive equitable care. To this end, this research explores the healthcare experiences and relationships of PWUD and SW, and how positive and negative experiences affect their willingness to engage in future healthcare, including HCV care.
Theory and methodology
Interpretive Description, a qualitative research approach for applied health research, guided this project [28]. The Interpretive Description methodology is suited to this research as it can incorporate professional knowledge and theoretical frameworks to guide interpretation toward pragmatic rather than theoretical understanding [28][29][30]. Therefore, we designed interviews to elicit accounts of specific experiences, and other material contributing to the understanding of the participants' experiences as they related to the accessibility of healthcare in general and for HCV specifically, to inform recommendations for increasing healthcare engagement. The Health Stigma and Discrimination Framework (HSDF) proposed by Stangl and colleagues to link stigma and health outcomes is the theoretical framework we used [31] (see Fig. 1). The HSDF builds on previous work on health-related stigma in the Goffman intellectual tradition [32][33][34]. As this framework is not specific to particular health or life conditions, nor to a place or time, it can be used in applied research to trace the flow of antecedents of stigma through multiple steps and levels to their impact on individual and population health outcomes. The framework allows theory-based identification of potential intervention points. The framework includes 'drivers' which reinforce stigma but also 'facilitators' which can decrease stigma. The HSDF informed this study's interview guide and a priori coding, and led us to focus on drivers leading to stigma manifestations and consequences, and on facilitators which can ameliorate stigma, rather than on the experiences of stigma as such.
Fig. 1 legend: Text in this figure was drawn directly from data and may include items not quoted. BC, British Columbia; ED, emergency department; HCV, hepatitis C virus; HIV, human immunodeficiency virus; PWUD, people who use drugs; STI, sexually transmitted infection; SUD, substance use disorder; SW, sex workers.
The research team included two persons with lived or living experience of key aspects of the populations studied. Methodological and subject matter experts filled out the remainder of the authorship team. A checklist consolidating qualitative criteria proposed by Tong and colleagues influenced the reporting processes [35]. Ethical approval for this research study was obtained through the Simon Fraser University Research Ethics Board, approval H20-02176.
Sampling and data collection
The inclusion criteria for this study included informed consent, being at least 19 years of age, self-identification as someone who had past or present experience of drug use or work in the sex trade, and willingness to participate in interviews in English on their experiences in the BC healthcare system.
Sampling for this study was purposive. We sought potential participants between May 2021 and July 2022 using various strategies. First, research team members with experience of drug use, sex work, and HCV contacted participants in person at harm reduction sites in cities and towns in the rural parts of BC and through their personal networks, and provided them with information about the study and contact information. Second, regional Drug User Groups and harm reduction and supportive service organisations for PWUD and SW posted printed and electronic posters with the study's recruitment text and contact information. These included the Northern BC Network of People Who Use Drugs, the AIDS Network Kootenay Outreach and Support Society, and Harm Reduction Saves Lives. Third, in chain sampling, interviewees could pass study information onwards to others in their social networks. Sampling was adaptive, to ensure the participation of people with experience outside of the main metropolitan area of BC, as Metro Vancouver PWUD are overrepresented in BC health research relating to drug use. We also prioritised SW, who have been underrepresented in HCV research. NC had no relationship with participants prior to the commencement of the study. JL and AS were acquainted with some of the participants. Potential participants contacted the lead author by telephone or email to inquire about the study and receive a consent form. Consent forms were delivered to participants via their choice of email attachment, mobile telephone multimedia message, or on paper. Participants returned signed forms electronically or on paper, or informed the team that they could not return them. At this contact, an interview was scheduled for at least 24 h later. Participants who could not return a consent form gave verbal consent before the interview began. The investigators did not require identity documentation, allowing complete anonymity. No participants dropped out or declined to answer questions. All contact with potential and actual participants was virtual, due to COVID-19 pandemic restrictions, with the exception of participants who collected the honorarium in person, which was done outdoors. Interviews were recorded with Zoom® or GoToMeeting® videoconference software, with or without video depending on participant preference or equipment availability. NC initiated calls in a private room at a secure location, and JL joined some calls from a private room. Participants joined from a place of their choice. Each participant took part in one semi-structured interview of 40 to 120 min and was compensated a minimum of CAD$30 per hour, in cash or by bank transfer, for their time and contributions. The interview guide was pilot tested jointly by NC and JL, reviewed by VDL and KS, and twice revised. Interview questions evoked the quality of healthcare relationships and encounters, and the factors that improved or detracted from these experiences.
Analysis
Following the phases for rigorous thematic analysis outlined by Nowell et al., NC transcribed interview recordings verbatim (deleting some filler words) and annotated them immediately following each interview [36]. NC wrote field notes in transcripts, a research journal, and QSR NVivo® 12 software [37]. KS read transcripts; NC and JL, a community researcher, reviewed transcripts multiple times, becoming familiar with the data. A priori codes had been posited from peer knowledge and theory (cf. the interview guide in Appendix 2). These codes were revised and further codes generated in a deductive-inductive iterative process. We sought themes related to a priori codes (e.g., healthcare avoidance) and the HSDF in a deductive process. Inductive analysis constructed themes (e.g., fear and trust in healthcare relationships) which emerged from the data through theory and the researchers' intuition from lived experience. Patterns and connections between experiences and actions (e.g., consequences of having drug use identified in medical care and refusal of care; building trust with healthcare providers and greater willingness to engage) were recorded in notes and memos as they became evident in the coding. We collated patterns from participants' answers into themes. Proposed themes and sub-themes were reviewed, rearranged, renamed, and some eliminated during rounds of analysis and discussions between NC and KS, JL, and AS. NC managed the study data, including transcripts, field notes, versions of codebooks, and analytical memos in NVivo®, and reflexive notes in a research journal. NC deidentified transcripts during transcription, after which recordings were securely deleted. Deidentification concealed places, dates, other persons, work, and non-salient medications and health conditions. Participants did not comment on or correct transcripts, but they could request a printed copy of their deidentified interview; two participants did. The data collected satisfy the definition of meaning saturation [38]; however, the goal was not theoretical or thematic saturation. Following Interpretive Description, we considered sampling sufficient when the breadth of experiences, including geographical spread and diverse or contrasting cases, was appropriate to create knowledge to inform the practices relevant to this research [39,40]. KS contributed to methodology choices and identifying a priori codes. NC, KS, and JL read transcripts. KS, VDL, NC, and JL developed, piloted, and revised the interview guide. NC and JL contributed to identifying a priori and emergent codes, coding, analysis, and interpretation. NC drafted the paper. All authors reviewed drafts and contributed to the interpretation. NC made the final selection of themes to be presented and examples to illustrate each theme.
Sample
Twenty-five participants were interviewed, including 15 women and nine men; two participants used neutral pronouns, one of whom also used male pronouns. Of the 11 HCV infections discussed (including one case of reinfection and one in a close relative of a participant), six were cured and five were not treated. Participants brought up their status in five populations recognised as having elevated exposure to HCV: 24 participants spoke of their use of drugs (previous or current), two mentioned Indigenous identity, three men spoke of sex with men, 12 had previous or current sex work, and 12 had experience in correctional institutions. In addition, 11 participants mentioned mental health diagnoses and 11 mentioned experience of being unhoused.
Themes
Participants described their experiences accessing healthcare, their willingness to engage in care, and the critical importance of communication by healthcare professionals in their experience. Their relationships, whether brief in a single encounter or extended over a hospital stay or primary care attachment, were shaped by patterns of communication that healthcare workers may not be conscious of. We present the findings in three major themes: (1) "Other than, lesser than": access to healthcare, which collects data on whether or not participants received care; (2) "It's hard to reach out for help": choices of healthcare avoidance or engagement, in which the emphasis is on whether or not participants wanted care and under which conditions; and (3) "Treat me like a human": communication and relationships in healthcare, in which participants describe the qualities of verbal and non-verbal communication shaping their experience in healthcare and contributing to their willingness to seek healthcare. Some participants' answers emphasised individual-level factors contributing to healthcare encounter quality, and others brought in institutional- or societal-level factors.
Theme 1: "Other than, lesser than" access to healthcare
This theme on participants' access to healthcare collated cases in which participants described their efforts to seek healthcare, their success or failure, and the impact of their perception of institutional culture. While almost all participants had some experiences of healthcare in BC which they labelled as good, the times when they did not receive such care stood out to them. Participants described common failures of healthcare, including delays or refusal of care for infection, illness, or injury, and inadequate or absent pain management, along with some counterexamples. Notably, many of the experiences described involved multiple healthcare providers within the institutions providing care. In one example of delayed HCV care, a maternal health team diagnosed Participant 13 (PWUD, SW) with chronic HCV but offered no counselling or path to treatment: "… [T]he kids… I've been at risk over the years." She pursued HCV care through a low-threshold clinic after her primary care provider was slow to act when she became symptomatic:
Participant 13: "I got frustrated when I wasn't getting any results back … I had to go down on the [inner city] where a low-barrier hep C program is. I got my name on the list and that's how I got treated."
While delay in HCV care was more common among participants than timely care, it should be noted that this sample was not representative of the PWUD and SW populations in BC. Nevertheless, it was particularly striking that so many of the participants had not completed even the first step in HCV care, knowing their HCV status, despite their high probability of exposure. HCV is rarely an emergency, but participants also spoke of being refused care in emergency departments (EDs) for serious conditions. Participants perceived the refusal of care to be related to their status as PWUD. The following quotes include one participant who worked in an ED and described the institutional culture regarding PWUD in the EDs where they have worked. Participant 4 (PWUD) was turned away from an ED with untreated bone fractures:
Participant 4: "Yeah, broken [bones] for three weeks. And I didn't [go to another hospital] because when I went … they did nothing to help me, and they dismissed me as a dirty drug user."
When she was seen again three weeks later, she was scheduled for surgery.
Participant 11 (PWUD, SW) was repeatedly refused adequate care in an ED over the course of several days as her health deteriorated, putting her long-term health at risk. She perceived that the delay in access to life-saving healthcare by multiple healthcare providers was due to her being identifiable as a PWUD.

A phrase frequently used was "lesser than", i.e., not being seen or treated equitably by healthcare professionals. Devaluing the health of PWUD could be fatal, as described by Participant 12 (PWUD, SW). Participant 12 was waiting in an ED when another patient alerted medical staff that a third patient was showing diminished consciousness and other early signs of toxicity. The second patient suggested the nurse check his vital signs. Participant 12 heard an ED nurse falsely claim to have already checked him. The third patient went into the washroom and had a cardiac arrest with the door locked. Participant 12 saw a team responding to a 'code blue', indicating he required resuscitation. She saw the team using a defibrillator, but she did not know whether he survived. She could not be sure whether attentive staff could have averted the incident, but she witnessed the lack of urgency. She attributed the staff's slow reaction to an institutional culture which dismissed the health and life of a PWUD:

Participant 12: "They had the curtain, everything, shocking him and everything. The time they took to get that [washroom] door open because he was a dumb little addict is too long. It was about 20 minutes by the time they figured out how to get that door open. … And if she had done his vitals before, when the … lady asked her to?"

Three further examples illustrate aspects of a particular kind of care refused in primary care and hospital settings. Pain management after injury or surgery could be insufficient or denied to participants who had been identified as PWUD. The first quotation depicts a typical example of a participant denied pain relief by healthcare providers who were more concerned with the danger of addiction than with the intense pain. Another example describes healthcare providers deliberately cutting off pain medication, apparently for their own amusement. In each of these scenarios, the healthcare staff devalued the extreme pain suffered by the participants, creating an immediate problem and long-term mistrust.

Participant 8 (PWUD, SW) received only paracetamol with codeine in hospital after abdominal surgery, which she found inadequate to relieve her pain. She was denied this and any further prescriptions once she left the hospital, leaving her in severe unrelieved pain. For her, this was a stigmatising experience which she generalised into a profound reluctance to seek healthcare:

Participant 8: "I hate them so much. It was that thing where you just feel so demeaned and so 'other than' and you're just looking to get your needs met when you're in pain. I had a 7-inch-long scar down the middle of my belly … and they wouldn't give me my medications. … So now when I'm sick or something's going on … I'm like, 'No, they're not going to help me anyway.'"

Participant 1 (PWUD, SW) described hospital staff deliberately exposing them to intense pain. Two hospital staff mocked up a morphine pump and dislodged their IV pain medication supply when transferring them to and from another care site. Participant 1 told how staff members ignored their distress:

Participant 1: "They said, 'Hey, when you're with us you get this. You get that extra pump of morphine every five minutes.' … It wasn't hooked up to anything.
… I really got in my head about it for a long time afterward. I was like, 'What would motivate someone to do that?' … Well, prejudice against people who use drugs. … I started pouring sweat and … they were basically laughing at me. … It was like everyone was in on the joke."

It was alarming to Participant 1 that the medical staff had evidently planned together to deprive them of pain relief, implying that neglecting the pain of PWUD patients was condoned by institutional culture.

Participant 18 (PWUD) described multiple occasions on which healthcare providers refused to provide pain relief after injuries or invasive medical procedures, even years after he stopped taking any drugs except prescribed buprenorphine-naloxone. He perceived this to be due to the providers' judgment that PWUD wanted the medicines for enjoyment rather than for pain therapy:

Participant 18: "It's horrible. … It's really unfair and not right that people should have to suffer in pain because [healthcare providers] think they're getting something out of it by giving it to them. When I really am just getting relief. I don't know. That's a hard topic to talk about because I suffered so much."

Participants also described effective pain management. Participant 15 (PWUD, SW) was concerned about taking opioids when he had surgery within a year of stopping drug use. Worried about relapsing, he tried to recover from surgery without asking for analgesia. He felt ashamed to ask for medication, but eventually he could not stand the pain. When he did ask, healthcare staff quickly administered morphine, saying, "You don't have to wait for it to be that bad. If you need help, we can help you." Other participants reporting effective post-surgical pain care had their addiction specialist or family doctor communicate with the surgical team to plan the pain therapy.

Theme 2: "It's hard to reach out for help" choices of healthcare avoidance or engagement

This theme gathered the variety of participants' desired and actual levels of engagement with the healthcare system. Participants fell into four categories. Some avoided healthcare while acknowledging, and sometimes suffering, the risks of remaining untreated or treating themselves; these participants would only use emergency care, and some avoided even that. Others were able to retain a primary care provider who kept them engaged in healthcare even throughout years of problematic drug use, precarious housing, or work in the sex trade; they highly valued these long relationships. Between these endpoints were participants who relied on urgent-care or walk-in clinics for primary care. Some participants using walk-in clinics would have preferred a regular family physician but were unable to find or retain one. Finally, others preferred walk-ins because they could choose how much information to reveal. As seen in Theme 1, being identified as a PWUD could limit the care available, and some participants did not disclose their history. For these participants, BC's patient-centred care policy did not provide the care they desired. Centring the patient asks healthcare providers to look at the whole person, not just the health condition. Some participants, including Participant 10 (PWUD, SW), found the "whole person" approach intrusive: "I don't need you to tell me what's wrong with my life. … I just need some medical intervention."
Rejecting such intrusion, Participant 10 told of treating an infection with prescription antibiotics on her own and asserted that she would have sacrificed the limb rather than go to a hospital where she expected to face stigma from healthcare providers:

Participant 10: "I had an abscess once in an injection site. No way. I probably would have lost that arm before I would have gone into a hospital and said, like, 'I've been injecting drugs with a dirty needle.' … I had access to antibiotics. I medicated myself."

Participant 13 refused to go with an ambulance whose crew tried to bring her to an ED after she escaped a murder attempt with injuries. She adamantly refused further treatment because of how she had been treated in the past.

Participant 13 (PWUD, SW): "I was covered in blood, … and I would not let them take me to the hospital. … I would have felt like I got raped over again, you know what I mean? The way how I've been treated in the past. I was not going to fucking put myself in a situation like that again."

In a case of a well-engaged person, Participant 9 (PWUD) attributed her consistent seeking of healthcare to good experiences in her youth. She was able to maintain a connection to care despite long periods of uncontrolled drug use and other challenging situations. "When I was in addiction, as soon as I noticed anything, in I went." She attributed her survival to this strong engagement, as she rapidly sought treatment for a life-threatening antibiotic-resistant soft-tissue infection and received therapy promptly.

Participants described times when they were conflicted: they thought the correct thing to do was to seek care, but they did not. These participants chose to treat their own medical conditions or go without care rather than seeking care from EDs or urgent care clinics as they felt they 'should'. Participant 17 (PWUD) described in detail how he used household tools to set his own broken finger rather than seek professional care. Participant 5 (PWUD) ended up hospitalised with an overdose after treating herself with medicines from a trusted friend. Participant 24 (PWUD) frequently injured himself at his job and treated himself when he could. He described a cut which bled for four hours while he tried to glue it shut. "I know I should go for stitches, but if I can crazy-glue 'em, that's where I'm at. If I have a broken toe or hands and shit, I just don't go. … Oh yea, yea, I know."

Participants also changed their engagement in care. Participant 22 (PWUD) knew he had HCV, but his primary care providers did not engage him on it, so he "just set it aside". After family and friends had good experiences with DAA therapy, he sought treatment: "I might as well give it a chance and not let [HCV] take too much of my health away. Before it's too late…".

Theme 3: "Treat me like a human" communication and relationships in healthcare: Participants' perceptions of the roles of respect, dignity, stigma, trust, and fear

This theme of communication and relationships in healthcare examines how the relational aspects of respect, dignity, stigma, and trust were enacted or conveyed, and the effect of fear on communication between participants and healthcare providers. While most healthcare interactions explicated in the two themes above involved two-way communication, participants focused their descriptions on other aspects. In this theme, we look more closely at participants' perceptions of the effects of verbal and non-verbal communication.
Contrasting descriptions of attentive and dismissive one-on-one communication with a healthcare provider are seen in the following quotations. Participant 6 (PWUD, SW) described how verbal and non-verbal communication made a first encounter with a new family physician positive:

Participant 6: "The first time I met him, he sat down and we discussed like all of my health concerns for an hour. And he sat there at my level and actually like he listened to me and explained everything in his perspective, and just, I felt really validated."

Participant 4 (PWUD), in contrast, spoke of encountering dismissive attitudes in healthcare settings where she thought more attentive healthcare providers should pick up non-verbal communication from patients who were not ready to communicate fully. In her experience, fear prevented her from saying what she needed to healthcare providers.

Participant 4: "… people get really dismissed in a medical setting because the doctor knows best, and that's it. So they're not really listening to what you are saying. Or they're not really listening to the things you're not saying, which is: 'I'm scared. I'm terrified. This is too much information for me to take in all at one time. Slow down.' We don't say those things."

Participant 23 (PWUD), who had untreated HCV, spoke of not being able to get the better of his fear when encountering healthcare providers during a drug-using phase of his life, preventing him from communicating the extent of his drug use: "Yeah, in active addiction, probably wasn't the most honest guy, you know. I was always fearful." This experience was echoed by Participant 22 (PWUD), who described the dynamic of active PWUD who "are in protection mode all the time. It's a learned behaviour. Trust, vulnerability, are off the table."

Participant 2 (PWUD) pointed out that trust needed to be established on both sides. Healthcare providers frequently inquired about drug use more than 10 years after she ceased taking drugs. "They always just assume that you still could be using and just not saying anything, right?" This perceived mistrust detracted from her healthcare relationship. Participant 10 (PWUD, SW) had a long-standing relationship with a family physician who retired before DAA eligibility expanded to include her; she did not have enough trust in healthcare providers to speak to a new physician about HCV.

Participants 10 (PWUD, SW) and 4 (PWUD) were among those who spoke about how communication about issues outside the ones the participant wished to raise could be perceived as judgmental and stigmatising. Participants tried to keep the discussion away from their history of drug use or sex work, and on the medical complaint they came for.

Participant 10: "It's just it's hard to reach out for help when you're going to be stigmatised."

Participant 4: "When you go in so broken … if they don't handle [your history] well … you start feeling really embarrassed and shameful. So, you already got enough of that, trying to get out - even [>10 years] in sobriety - you already have enough of that to last a lifetime. You don't need that from your healthcare professionals."

Respect can be expressed in verbal and non-verbal communication as well as in actions. Participants found explicit communication important for establishing a respectful relationship and recognition of their dignity. Participant 5 used a phrase that came up in many interviews: the wish to be spoken to and treated "like a human". "[They assume I have no education.] They won't talk to me like I'm a human, really. Oh yeah, it's awful."

Non-verbal communication was particularly important in whether people felt they could maintain their dignity. Participant 11 (PWUD, SW) contrasted her perception of lacking dignity when she was laughed at with her later experiences:

Participant 11: "I went into the washroom and used while being in the ER. And I had …
a small seizure … and the security were coming in, they started laughing at me. I was then put into a room with restraints … I was treated very poorly and with no dignity. Like, I felt like the scum of the earth. And I can definitely tell I was treated like that because I was in active addiction, because I've gone to the hospital after that while being clean and been treated totally different. Like, with morals, compassion, empathy. And I did not have that experience before that."

Participant 20 (PWUD, SW) maintained a strong relationship with a primary care provider during periods of drug use and sex work. One night she needed emergency care. A nurse's comment had a near-fatal result and left an indelible memory:

Participant 20: "I had an infection in my arm because of intravenous using and the [triage nurse] that was admitting me actually said, 'Well it's your own damn fault.' … If I could've stopped, I would've stopped. … I was so filled with shame and guilt, I attempted suicide that night after I left the hospital. I'll never forget her saying that to me."

Participant 19 (PWUD) was one of the participants who appreciated a healthcare provider drawing diagrams to explain their care, in a combination of verbal and non-verbal communication:

Participant 19: "She explained how everything was going to go … drew out diagrams for me … 'this is what this is, and this is what that is.' … Like she explained everything and what the [drugs] would do. It just… that really is reassuring. And you're knowing what your medical journey is. It's being totally explained to you, instead of living in the dark."

Participant 4 (PWUD) gave another positive account of an individual healthcare provider countering the effect of previous experiences. Her doctor asked her why she had avoided all healthcare for 10 years. After hearing of the times when she experienced indignity in healthcare, he explicitly took a position: "[He said,] 'I'm so sorry, you should never have been treated like that…. There's no way that should have happened.'" Participant 4 noted that she needed to have the courage to build trust with her healthcare provider, tell the truth about her drug-use history, and be honest about her fear. She described how her physician showed he did not judge her and recognised her efforts, saying, "Look, you know, these things happen. And you know you're changing that around now…." He gained further trust by asking whether she would try things, contrary to what she had feared; she had expected him to force treatments on her.

Discussion

This study illustrated a wide range of healthcare experiences of PWUD and SW in BC. Negative experiences outweighed positive ones in participants' recall. Low healthcare engagement among PWUD and SW has been shown extensively in research, but most studies concentrate on healthcare avoidance on the part of PWUD and SW during active use and work, though there are exceptions [27, 41-44]. Our findings showed diminished access to healthcare through both participants' avoidance of care and providers' refusal to give care. Participants also reported the effects of negative experiences lasting for many years after drug use or sex work had ceased.
It has long been recognised that stigma detracts from many aspects of healthcare for people and populations that are labelled and devalued by healthcare professionals, reflecting general attitudes in their society [32, 34, 45-50]. Many negative experiences depicted in this study fell into the category of stigma manifestations in terms of the HSDF. Negative experiences were traceable to the HSDF's drivers of stigma, including lack of respect for PWUD and SW patients, lack of appropriate training, and an institutional culture allowing inequitable treatment of PWUD and SW. PWUD and SW generalised their negative experiences, resulting in low seeking and uptake of care. Each participant could also recall healthcare experiences meeting BC Ministry of Health standards, i.e., quality, appropriate, and timely health services [51, 52]. Participants appreciated listening, trust, understanding, encouragement, respect, empathy, and compassion. In terms of the HSDF, these are the results of facilitators such as healthcare worker training, trauma-informed care, a nonjudgmental institutional culture, and positive individual attitudes. Figure 1 shows the Health Stigma and Discrimination Framework with examples from this study [31].

Given the many efforts over decades to reduce stigma in healthcare, the severe and long-lasting effects of stigma detailed in our findings are all the more troubling. Our results add to prior studies' findings that the issue of stigma in healthcare was a high and consistent priority for PWUD and SW [53-56]. Like other studies exploring patient experience as a PWUD, SW, or person with HCV, we found current and former PWUD and SW populations presenting multiple reasons for low healthcare engagement, many of them credibly associated, at least in part, with stigma: experiences of dismissive attitudes, intrusive questioning, blaming, and other types of poor communication, delays in care, inadequate or inappropriate care, and withholding of care directly or indirectly reduced access to emergency, acute, and primary healthcare for participants [43, 44, 50, 57, 58].

Our findings offer positive and negative examples of how verbal and non-verbal communication affected healthcare relationships. Trust is recognised as an important aspect of healthcare [59-62]. Healthcare staff who spoke rudely, blamed participants for their own health issues, laughed at participants, asked questions not related to the medical intervention, or lectured participants about their life or past as a PWUD or SW created distrust and reluctance to engage in healthcare. Clinicians could build trust by sitting at the participant's level, speaking empathetically when learning of participants' history of negative experiences in healthcare, apologising for their institution, fully informing participants (often by explaining processes with diagrams), sharing decision-making, speaking nonjudgmentally about their past, and, most importantly, listening respectfully. Explicitly addressing past stigma and adverse healthcare experiences and demonstrating respect also built trust and dispelled fear. Participants in ongoing nonjudgmental healthcare relationships appreciated providers' questions about past experiences in healthcare.
In the literature on stigma in healthcare, fear is presented as felt by the more powerful party in an interaction, as a driver of stigma [31, 34, 63]. This study's results can alert healthcare providers to the likelihood of fear being felt by patients with a history as PWUD or SW, especially in early visits with a new provider. Fear in our data was not only fear of anticipated stigma but a generalised fear which inhibited participants' ability to communicate with healthcare providers.

Provider-initiated HCV care was remarkably low. Delay or refusal of treatment is contrary to a TasP approach. The lack of care described by participants contributes to the expansion of the HCV epidemic as long as transmission of HCV remains high in populations with active drug use and sex work [3, 4]. The first step in HCV care is diagnostic testing, and since 1997 Canadian guidelines have consistently recommended tests for people who inject drugs and MSM [64]. However, we found that many participants did not know their HCV status despite falling within testing recommendations. Among those who tested positive for HCV RNA, it was common to find care on their own initiative, or to go without care, rather than being offered diagnosis and treatment or referral by primary healthcare providers, as guidelines recommend [65-67].

Changes in communication in the ED have great potential, as the ED is the only contact with the healthcare system for many PWUD and SW [42]. The study's finding that negative experiences commonly occur in EDs suggests that more deliberate and respectful communication and efforts to build trust in emergency settings could be a step toward drawing people who avoid regular healthcare back into the primary care system.

Limitations of this study included the requirement to conduct interviews remotely, due to Ethics Board requirements during COVID-19 restrictions, which biased the sample towards people in more stable situations, which may be atypical for current PWUD and SW. This bias was mitigated by adaptively recruiting participants with living experience of drug use and sex work and by asking participants about past experiences. Another limitation was the use of a single main coder, increasing the risk of systematic personal bias; this was mitigated by the co-review of transcripts and coding by JL, a research team member with lived and living experience of the conditions of interest. NC not being a member of the communities of interest was another limitation, mitigated by having two team members with lived and living experience of HCV, drug use, and sex work. The ability to explore experiential issues, such as how engagement in sex work shapes PWUD experiences, was limited by the choice of Interpretive Description as an approach, which directed attention away from deeper understanding of the experiences of stigma and toward implications for healthcare practice. A strength of this research was that it included experiences from across the province, in contrast to the majority of research with PWUD and SW in BC, which concentrates on Vancouver's metropolitan area or Downtown Eastside, which has been described as one of the most heavily researched populations in the world [68]. Another strength is the inclusion of people known to have a high probability of exposure to HCV whether or not they had been tested, thus capturing more of the experiences of people who avoid healthcare and do not know their HCV status.
Conclusions

Our study builds on previous evidence that healthcare engagement among PWUD and SW is low and that stigma and other negative experiences decrease willingness to seek or accept healthcare. Low healthcare engagement will slow HCV elimination, as scale-up of HCV TasP and implementation of microelimination depend on a large proportion of people being willing to engage in offered HCV testing and treatment.

In this study, collecting data on positive and negative experiences enabled us to identify potential points and means to support positive change in the healthcare encounters of two high HCV-incidence populations critical to the success of elimination. While few healthcare providers deliberately undertreat, reject, or stigmatise their patients, providers should understand that many of their patients with histories of drug use or sex work have experienced stigma or inadequate treatment when seeking healthcare. Such negative experiences may have become generalised in PWUD and SW attitudes to all healthcare providers, creating fear of rejection, stigma, coercion, or refusal to provide adequate care. Healthcare providers can actively work to reduce the effects of negative healthcare experiences once they are aware of patients' history and its long-term effects. Inquiring about past experiences, being aware of the tension between fear and trust, being explicit about accepting patients' past without judgment, and respecting their efforts to improve their health are all ways that healthcare providers can support patients with a history of drug use or sex work.

Fig. 1 Findings from BC PWUD and SW in the Health Stigma and Discrimination Framework
Interview guide

Interview guide for hepatitis C priority population healthcare experiences (revised 9 Nov 2021)

Thanks for this call. In this interview, we want to understand your experiences in the healthcare system in BC. The ultimate goal of this project is to improve the quality of the hep C care experience in BC. I am Nance Cunningham, and this is part of my PhD research. I am Jessica Lamb [Jess introduces herself]. You have chosen to talk with us for 30/60 min, but you can change your mind at any time, to speak for longer or shorter. This interview is about your experience in healthcare, including what you have witnessed and felt. We/I don't need to know your medical conditions, only about your view of your healthcare experience. What you tell us/me will remain anonymous, unless you have chosen to use your own name. We may quote you with the name you have chosen. The interview as a whole remains confidential and can be read only by the research team. Only quotations of a few sentences may be published. If you don't want to answer a question, that's no problem, just ask to go on to the next question. If you want to tell us/me something not asked, please do. Do you have any questions about the interview? [Answer any questions]

[As Jessica told you earlier], we/I will record and write out this interview to be sure of your exact words. I will start recording now. [Start recording] You can ask me to stop any time. Let us/me know if you need a break for any reason. [For those who have not been able to provide informed consent, ask for informed consent and pseudonym here: Please state whether you understand this study and consent to do this interview, and what name you would like to be used if we quote you.]

To start off, we/I'd like to ask you about your recent experiences in healthcare in BC. That is about experiences in any kind of healthcare setting; it could be a hospital stay, a clinic visit, something that happened in an emergency room, picking up a prescription, going to a lab, a 911 call, anything like that. You can talk about what you experienced yourself, or what happened to someone else when you went with them.
Managing innovation and standards

Preface

How can innovators manage the seemingly paradoxical relationship between creating radical innovations and complying with external requirements that aim to fix solutions in place? Although businesses face this question whenever they want to bring a new product to market, there is surprisingly little research on the topic. This observation motivated me to investigate how innovative companies deal with standards, as a key example of the external requirements that businesses face.

A review of the literature in Chapter 1 shows that standards indeed have a substantial impact on innovation. Depending on the specific standards, these effects can be positive (e.g. facilitating market access, defining interfaces to supporting infrastructures) but can also hinder innovation (e.g. through lock-in). Although the relationship between innovation and standards is not as paradoxical as it first seems, the literature confirms its importance for innovators. To understand how innovators address this topic, I conducted an in-depth grounded theory case study of the micro Combined Heat and Power (mCHP) technology's development in Europe. As Chapter 2's introduction to the case shows, this radical sustainable innovation is ideal for understanding standards in the context of innovation. Based on in-depth interviews with the key involved actors, I was able to trace in much detail how the technology, standards, and regulation co-evolved. Studying the case yielded some unexpected insights: it shows that standards' link to regulation can be more central than the literature suggests (Chapter 3). It also suggests that aligning innovations, standards, and regulation involves activities on multiple levels.

External requirements, such as standards, impose limitations on actors' behaviour (Fligstein & McAdam, 2012; Polanyi, 2001; Stiglitz, 2001). This implies that such requirements need to be carefully managed to ensure that businesses succeed within these boundaries. In this context, we want to understand how innovative companies manage standards, as an important example of such external requirements, while they are developing new products.

Standards have a profound impact on the development of new technologies, services, and other novel ideas. Extant literature finds that standards are often important factors supporting innovations but can also hinder them in other cases. The arguably most fundamental positive effect is that standards often facilitate or even enable innovative products' and services' entry into the market.
Other positive effects include, for example, the ability of standards to diffuse knowledge (e.g. Swann, 2010), standards' potential for facilitating collaboration (e.g. Allen & Sriram, 2000), and their role in creating bandwagons for new technologies (e.g. Belleflamme, 2002; Farrell & Saloner, 1985). On the other hand, examples of standards' negative effects include their potential to restrict creativity and the implementation of new ideas (e.g. Kondo, 2000; Tassey, 2000), as well as the danger that they lock users into using old technologies (e.g. Allen & Sriram, 2000; Tassey, 2000). These potentially far-reaching effects imply that innovators need to manage standards carefully so that they support, rather than hinder, innovation.

Extant literature considers how standards can co-evolve with new technologies to facilitate their emergence (Featherston, Ho, Brévignon-Dodin, & O'Sullivan, 2016; Ho & O'Sullivan, 2017). These studies focus on the timing when specific types of standards are required to support a technology's further development and on technology roadmapping approaches that can help develop strategies for standardising new technologies. They therefore mostly look at new standards needed for an emerging technology and pay little attention to already existing standards that might affect an innovation and to the processes needed to develop and/or adapt standards for the innovation. This is an important limitation of the extant literature because many of the negative effects of standards found in the literature, such as lock-in or limitations on creativity, arise in situations where an innovation is confronted with existing standards. Furthermore, these situations may be particularly challenging to manage because of the dynamics and resistance innovators are likely to encounter when challenging existing standards that may still serve the interests of other actors (see Wiegmann, de Vries, & Blind, 2017).

To generate insights into how companies deal with both existing and new standards, we conduct an exploratory case study of a major innovation within an established industry where many standards apply. In this study, we take the perspective of innovating companies to understand how they manage this topic and its potentially important ramifications for their work. We study the micro Combined Heat and Power (mCHP) technology in the European heating industry. In this case, several companies developed new products in parallel, which were based on the mCHP technology. These products were aimed at existing markets where relevant standards already existed but only partly supported the new technology. Our study shows in detail how this innovation was affected by various standards. It also explores how these companies managed the relevant existing and new standards, which industry dynamics resulted from their activities, and how these events impacted on the companies' new product development (NPD) activities. Based on this in-depth study, we develop new theory about managing the co-evolution of innovation with standards and regulation. The resulting theoretical contributions are based on the fundamental finding that activities related to aligning an innovation with relevant standards and regulation occur on three nested levels: (1) the company, which is part of (2) an industry, which in turn is situated in (3) a wider context.
Building on this insight, we identify company- and industry-level activities which are needed to effectively use standards and regulation to align the innovation with needs and demands originating from the wider context. We also pinpoint supporting factors that are needed to carry out these activities successfully and establish through which channels events at each level impact on what happens on the other two levels. We therefore contribute a more detailed and dynamic view to the debate on how to manage standards in innovation contexts, both at company and industry levels.

To firmly root our study in previous findings, we provide a more detailed review of the literature that we summarised in the previous paragraphs. We first look into the extant findings on the links between standards and innovation (Sect. 1.1). Following this discussion, we consider existing insights on how standards can be managed in innovation contexts in Sect. 1.2, which culminates in identifying several important theoretical gaps that motivate the study.

1.1 Standards' Effects on Innovation

Standards, which according to de Vries's (1999, p. 15) definition specify "a limited set of solutions (…) to be used repeatedly", at first sight appear to oppose innovation, which aims to create new solutions rather than reuse a limited set of existing ones. In their literature reviews, Dahl Andersen (2013) and Swann and Lambert (2017) found many different ways in which standards impact on innovation. Despite the intuitive expectation that standards are at odds with innovation, Dahl Andersen (2013) reports that around 60% of the papers included in his review found a positive link between standards and innovation.

Standards can be distinguished according to their economic functions, which include (1) specifying interfaces and providing compatibility; (2) defining minimum quality and safety requirements; (3) reducing variety; (4) disseminating information; and (5) defining measurements (Blind, 2004, 2017; Swann, 2010). Egyedi and Ortt (2017) provide a further refined classification, according to which all standards have the primary functions of (1) reducing variety and (2) providing information. They then identify secondary functions, according to which standards can be distinguished: (1) ensuring compatibility; (2) providing reference measures and defining measurement methods; (3) establishing classifications; and (4) codifying behaviour protocols (Egyedi & Ortt, 2017). The impacts of standards differ substantially, depending on which of these categories they fall into (Blind, 2004, 2017; Egyedi & Ortt, 2017; Swann, 2010). Consequently, most of the literature that we cite below focuses on specific types of standards and their effects.

Standards can also be distinguished according to whether they are 'design based' (prescribing a particular specification) or 'performance based' (requiring a certain performance level without specifying how this should be achieved) (Tassey, 2000). Generally speaking, design-based standards are more often constraining for innovation whereas performance-based standards usually are more supportive of innovation (Tassey, 2000). This distinction is therefore similarly important to the distinction between the economic functions for understanding the effects of standards on innovation.

Effects of standards occur at all stages of innovation. They affect the incentives for companies to innovate (e.g. de Vries & Verhagen, 2016; Maxwell, 1998); have implications for the technological development process (e.g.
Allen & Sriram, 2000); and influence the innovation's eventual diffusion in the market (e.g. Allen & Sriram, 2000; Tassey, 2000). Since our research question concerns the management of standards in the NPD process, i.e. after the decision to innovate has been made, we are particularly interested in the effects of standards on the latter two phases. We provide an overview of these effects in Table 1.1 and outline them in more detail in Sects. 1.1.1 and 1.1.2.

1.1.1 Standards' Effects on the New Product Development Process

Standards play a key role in supporting the development of a new technology. They contribute to the institutional foundations between the involved actors and give them a common understanding of the technology (Bergholz, Weiss, & Lee, 2006; Foray, 1998; Van de Ven, 1993). More concretely, three key effects of standardisation on NPD activities have been documented in the literature: (1) limiting options available to innovators; (2) acting as a source of information, including about performance requirements; and (3) facilitating (and sometimes requiring) collaboration and division of labour in innovation.

Standards Limiting Available Options

The first (and most obvious) effect of standards is limiting the options that are available to an innovation's developers and restricting their choices and freedom in designing their product (e.g. Kondo, 2000; Tassey, 2000). Paradoxically, this may be positive in some situations because it can reduce the search costs involved in solving technological problems (Foray, 1998); ensure that different parties working on an innovation follow a common direction (Swann, 2010); and guide individual actors' investments (Van de Ven, 1993). Furthermore, the degree to which standards limit the available options differs depending on whether they are design- or performance based: while design-based standards are very restrictive, performance-based standards leave more freedom (Kondo, 2000; Tassey, 2000). Process standards that are written in this way may even increase creativity and motivation and thus lead to superior results (Kondo, 2000).

Standards as an Information Source

Second, standards are a useful source of information for innovation (Allen & Sriram, 2000; Bergholz et al., 2006; Blind, 2004; Featherston et al., 2016; Schmidt & Werle, 1998; Swann, 2010; Van de Ven, 1993). This information is particularly important when developing new technologies and/or products in networked industries where the innovation must work seamlessly with other elements of a network (Bergholz et al., 2006; Blind, 2004; Schmidt & Werle, 1998). Standards can also be used to disseminate results from basic research to facilitate their application in an innovation (Allen & Sriram, 2000; Bergholz et al., 2006) and can facilitate the interface between developing new products and developing the production processes needed to manufacture them at large scale (Lorenz, Raven, & Blind, 2017). This also makes standards a potential external source of innovation for open innovation, in addition to the ones outlined by West and Bogers (2014).

Especially for design-based standards, the degree to which this information is useful for developing innovations depends on two factors. (1) Technological solutions included in standards are sometimes related to someone's intellectual property rights (IPR). If this is the case, this IPR must be available for licensing so that the information can be used by actors who are developing an innovation (Tassey, 2000).
(2) The information disseminated through the standard should be up to date and have been included in the standard when the underlying technology was sufficiently mature. Outdated information may no longer be useful and may even lock innovators into using old technological solutions (Allen & Sriram, 2000; Swann, 2010; Tassey, 2000). Information included in standards that were passed too early in a technology's lifecycle may constrain its further development or be incomplete (Tassey, 2000).

When standards are performance based, the information included in them is valuable to innovators because it specifies targets that an innovation has to meet (Abraham & Reed, 2002; de Vries & Verhagen, 2016; Swann, 2010). However, when these requirements and testing procedures are not harmonised internationally, they can also lead to substantial additional efforts. In such cases, required tests need to be repeated for each country where the innovation is intended to be sold (Abraham & Reed, 2002).

Standards Facilitating Collaboration and Division of Labour

Third, standards support and sometimes also require collaboration and division of labour in innovation. Standardised interfaces in complex systems enable companies to focus their innovations on particular elements of these systems (Chen & Liu, 2005; Tassey, 2000) and to base these innovations on complementary assets provided by other parties (see, e.g. Teece, 1986, 2006). Furthermore, standardised interfaces between companies also facilitate collaboration between them in innovation projects, as Allen and Sriram (2000) demonstrate in the case of the Boeing 777's development. However, standards may also necessitate collaboration and a systemic approach to innovation when the requirements set in performance standards are higher than what one actor can achieve individually, as de Vries and Verhagen's (2016) case of the Dutch building sector shows. In such cases, achieving the required performance level may involve reconfiguring a system's underlying architecture, rather than only innovating parts of it, and therefore require the input of all actors who are involved in the system (de Vries & Verhagen, 2016). From an innovator's point of view, this may signify substantial additional cost and effort.

1.1.2 Standards' Effects on Technology Diffusion

In addition to the effects on developing an innovation, standards may also enable or hinder the innovation's eventual success in the market. While they have the positive effects of providing legitimacy and access to the market and supporting the development of complementary assets, they can also potentially impede an innovation's diffusion by causing lock-in.

Standards Providing Legitimacy, Market Access and Supporting Complementary Assets

Standards are central to framing markets for technologies by defining and codifying rules, norms, and values that actors in these markets should follow (Delemarle, 2017). By doing so, they fulfil a key function of legitimising solutions (see Botzem & Dobusch, 2012; Tamm Hallström & Boström, 2010). This legitimation is likely to be particularly important for innovations where actors may be sceptical and still uncertain about the benefits. In such a context, testing the product according to respected standards can help signal an innovation's quality to the market (Tassey, 2000) and thus legitimise it. In Europe, such testing standards can also help to prove an innovation's regulatory compliance to the authorities and therefore provide access to the market.
In technological areas that are covered by the 'New Approach', following standards which have been recognised by the European Commission gives actors a 'presumption of conformity' (Borraz, 2007; European Parliament & Council of the European Union, 2002; Frankel & Galland, 2017). An additional way in which standards can contribute to an innovation's legitimacy is by signalling that it is likely to be adopted by many players (Farrell & Saloner, 1985; Van de Ven, 1993). This expectation is based on the broad support needed for a solution to emerge as a standard (see Wiegmann et al., 2017) but also on other factors, such as the role that standards play in government procurement and the associated demand (Blind, 2008; Edler & Georghiou, 2007; Rosen, Schnaars, & Shani, 1988). Standards can therefore help to "build focus and critical mass in the formative stages of a market" (Swann, 2010, p. 9), prevent market fragmentation, and support exploiting network effects (Bergek, Jacobsson, Carlsson, Lindmark, & Rickne, 2008). If standards contribute to the widespread use of an innovation in this manner, this can also lead to substantial additional revenues for the innovation's developers from licensing fees paid on IPR that is declared standard essential (Kang & Motohashi, 2015).

Finally, innovations often rely on complementary assets and/or supporting infrastructures for their success (Teece, 1986, 2006). In addition to creating critical mass which encourages others to supply these assets (Rosen et al., 1988), standards can also play a more direct role in their provision. By disseminating information about the innovation, standards help others to produce the required complementary assets in the manner outlined in Sect. 1.1.1 (Schmidt & Werle, 1998). When standards are incorporated into the innovation's development in this manner, they also allow the innovation to make use of existing complementary assets and supporting infrastructures.

Standards Causing Lock-In

Although standards can contribute positively to an innovation's diffusion, they can also create lock-in that prevents users from adopting the new product (e.g. Allen & Sriram, 2000; David, 1985; Farrell & Klemperer, 2007; Tassey, 2000). A classic example of lock-in is the QWERTY keyboard, which persists in usage despite better alternatives being available (e.g. Allen & Sriram, 2000; David, 1985). In cases of lock-in, large parts of the market use a solution based on an outdated standard and face high switching costs (David, 1985; Rosen et al., 1988). These switching costs prevent the users from adopting the innovation, even if it is superior to the solution prescribed by the existing standard.

1.2 Managing Standards in Innovation Contexts

The effects of standards on innovation outlined in Sect. 1.1 make them an important element of innovation management. In Sect. 1.2.1, we summarise the limited available literature about company-level standards management. Other literature provides some insights into how standards and innovation co-evolve on the industry level (see Sect. 1.2.2) but neglects important dynamics, which may, e.g., result from conflicting stakes. In Sect. 1.2.3, we argue why these dynamics are likely to occur and what implications they may have for managing standards in innovation contexts. Finally, we summarise the important gaps in the literature that form the basis for our study (Sect. 1.2.4).
1.2.1 Managing Standards on the Company Level

Although the literature about managing standards on the company level mostly does not specifically address innovation (the paper by Großmann, Filipović, & Lazina, 2016, being a notable exception), several authors (Adolphi, 1997; Axelrod, Mitchell, Thomas, Bennett, & Bruderer, 1995; Blind & Mangelsdorf, 2016; Foukaki, 2017; Jakobs, 2017; van Wessel, 2010; Wakke, Blind, & De Vries, 2015) offer insights that are also likely to apply in this context. On a fundamental level, they argue that managing standards needs to be aligned with the overall business strategy. To do so, companies should formulate a standardisation strategy (Adolphi, 1997; Großmann et al., 2016), which may be driven by the company's organisational culture (Foukaki, 2017). Based on this, organisational structures need to be put in place that enable activities on the tactical and operational levels which help achieve the strategic goals (Adolphi, 1997; Foukaki, 2017). The resulting organisational structures need to facilitate a number of day-to-day tasks, such as applying standards, monitoring the application of standards within the firm, informing company-internal stakeholders about standards, and influencing standard development processes (Adolphi, 1997). In the specific innovation context, Großmann et al. (2016) argue that these day-to-day tasks mainly concern screening existing standards regarding their relevance for the innovation and activities related to feeding the innovation's results into new standard development. These activities should then be related to specific decision points in the NPD process (Großmann et al., 2016).

Adolphi (1997) argues that companies face 'make-or-buy' decisions whenever they encounter a situation where a standard is needed, meaning that they can either implement existing standards or contribute to developing new ones. Decisions to engage in standard development can be based on a number of strategic motives, such as facilitating market access, influencing regulation, seeking knowledge, maximising compatibility, or enhancing prospects in international trade (Axelrod et al., 1995; Blind & Mangelsdorf, 2016; Foukaki, 2017; Jakobs, 2017; Wakke et al., 2015). Following this decision, companies need not only to participate in forums where standards are developed but also to carry out supporting activities, such as eliciting requirements and defining success criteria according to which the standardisation work's outcomes can be evaluated (Jakobs, 2017). Alternatively, companies can implement already-existing standards. Van Wessel (2010) identifies four necessary activities in this context, each of which needs to be carefully managed: (1) selecting appropriate standards, (2) implementing them, (3) using the standard, and (4) assessing the outcomes. One key aspect of managing these activities is that all affected company-internal stakeholders need to be involved throughout the process in order to ensure alignment with their needs (van Wessel, 2010).

1.2.2 Co-evolving Innovation and Standards at Industry Level

Because standards are key to framing markets for new innovations, they need to co-evolve with emerging technologies (Delemarle, 2017). Some existing studies consider how this (should) happen at the industry level (Featherston et al., 2016; Ho & O'Sullivan, 2017). These studies argue that specific types of standards (e.g. semantic standards or interface standards) are needed at various stages as a technology evolves from pure basic research to its application in the market.
In this context, the interface between the R&D process and standardisation and the involvement of scientists and practitioners are particularly important to ensure that standards reflecting both the state of research and practical applications are developed. A technology roadmapping approach can be used to plan such a process and ensure that the necessary standards are developed at the right point in time (Featherston et al., 2016; Ho & O'Sullivan, 2017). Featherston et al. (2016) and Ho and O'Sullivan (2017) develop a framework that links required standards to specific activities in the technological trajectory and allows actors to plan the standardisation process(es) alongside a technology's development.

These existing approaches to co-evolving standards and innovation at industry level focus on the development of new standards needed to support an innovation. While there are cases where scientific discoveries lead to an entirely new technology being developed with no pre-existing standards, such as the example of nanotechnology that Delemarle (2017) and others use, many innovations are developed in areas where relevant standards already exist. If these standards have the positive effects on innovation cited in Sect. 1.1, this is not an issue. However, standards with negative effects, such as lock-in, need to be updated to increase an innovation's chances of success. In this context, current literature offers some insights into how standards can be changed when needed.

Changes to standards occur on a regular basis: for example, 40% of the standards examined in a study of IT standards were subject to changes at some point in their lifecycles (Egyedi & Heijnen, 2008; Schmidt & Werle, 1998). Such an evolution of standards often follows from innovations and is driven by four key reasons: (1) new user requirements; (2) anticipation of new technology features; (3) requirements from new technological development; and (4) new applications of existing technologies (Egyedi, 2008). These changes can manifest themselves in deviating ways of implementing the standard (Egyedi & Blind, 2008), which implies that there is no formal process to change the standard, and an alternative implementation may become a de facto standard if it is adopted by a large number of players (see, e.g. den Uijl, 2015). Furthermore, these changes can also result from more formalised, and therefore also more manageable, processes. Many standard setting organisations (SSOs) have procedures to update standards, e.g. by releasing updated versions and/or withdrawing outdated standards and replacing them with new documents (Egyedi & Blind, 2008). Due to the time needed for these procedures, such changes in standards are likely to occur with some delay after the corresponding technological development (see Adolphi, 1997, p. 41).

1.2.3 Dynamics Affecting the Management of Standards in Innovation Contexts

Standardisation in innovation contexts is often a contentious issue. The standardisation process is likely to include a range of stakeholders and may also be influenced by external factors, such as societal debates and trends (Delemarle, 2017). When establishing new standards to support an innovation, these actors are likely to attempt to influence standards in a way that gives them an advantage in the innovation's further development (e.g. Blind & Mangelsdorf, 2016; Delemarle, 2017; Rosen et al., 1988; Teece, 2006; Van de Ven, 1993).
Furthermore, changing standards frequently leads to issues like added complexity, reduced interoperability, and problems for standard implementation (Egyedi & Heijnen, 2008). Actors with no stake in the innovation may therefore resist changes in standards needed for the innovation's success in order to avoid such issues. Such competing interests have strong implications for a standardisation process, e.g. conflicts in SSOs (e.g. Jain, 2012), fierce battles in the market (e.g. den Uijl, 2015), or government involvement in the process (e.g. Meyer, 2012). The resulting dynamics may even be amplified when multiple of the three modes of standardisation (committee based; market based; government based) are involved (Wiegmann et al., 2017). This results in a challenge for innovators: influencing standards in such a way that they eventually support, rather than hinder, the innovation.

Gaps in the Literature

The available literature provides a good foundation for understanding how to manage standards in innovation contexts, but nevertheless leaves important questions unanswered. Our literature review suggests that a more complete understanding is needed of (1) the company level, where the 'managing' is done, and (2) industry-level processes, which are likely to result from these management activities but also shape them to some extent. The management of standards in innovation contexts is therefore preferably studied at both levels. Specifically, we identify three gaps in the literature:

(1) The literature on standards management at company level (see Sect. 1.2.1) mostly does not specifically address the context of innovation, even though we show in Sect. 1.1 that this is an area where the impacts of standards on companies' activities are particularly strong. On the other hand, the literature that considers how standards and innovation co-evolve (see Sect. 1.2.2) largely treats companies as 'black boxes' and does not consider the extensive activities that are likely to happen inside them.

(2) Given the lack of attention to the company level, the literature on the co-evolution of innovation and standards also misses the dynamics within and between the company and industry levels, which we expect to be a major factor in this co-evolution.

(3) Finally, the approaches to the co-evolution of standards in innovation contexts cited in Sect. 1.2.2 pay relatively little attention to conflicting interests and the resulting dynamics in the process (see Sect. 1.2.3). Because most innovative products are arguably aimed at existing markets with existing standards, and with actors who may oppose the innovation, such conflicts can be expected to often be critical when managing standards in this context.

These omissions motivate our case study. Our study design, as outlined in Chapter 2, allows us to capture activities on both levels of interest, the resulting dynamics, and their effects on an innovation. We therefore contribute a first step towards addressing these three gaps in the literature.

Grounded Theory Methodology

As outlined in Chapter 1, we are interested in a detailed exploration of how innovators manage external requirements (imposed by standards), the dynamics that result from this, and how this affects NPD activities. Specifically, we want to explore how this occurs on the company and industry levels and how these two levels interact.
The lack of literature addressing these questions makes an in-depth exploratory case study, which uses inductive reasoning to derive a grounded theory, the most suitable research design (Glaser & Strauss, 1973). This grounded theory approach allows us to conceptualise patterns that we find across the data to generate our theoretical contribution (Glaser & Strauss, 1973). In Sect. 2.1.1, we explain our case selection. Section 2.1.2 shows how we collected our data. Finally, Sect. 2.1.3 summarises our approach to analysing these data.

Case Selection: Theoretical Sampling

In line with this grounded theory approach, we selected our case on theoretical grounds rather than through random sampling. Following on from our research question and the identified gaps in the literature, we defined five criteria that the case would have to meet. (1) It needed to be a case of an innovation for which both existing standards are relevant and new standards are required. (2) The innovation needed to represent a substantial technological leap. This maximised our chances of observing standards having a major impact on the innovation, and the involved actors' approaches to managing these impacts. (3) Our specific interest in NPD activities also means that the innovation in our case needed to be at a stage where companies developed products intended to be sold on a large scale; the initial fundamental research stage should therefore already have been concluded. (4) Furthermore, NPD activities concerning the innovation should preferably be pursued in parallel by several companies, as this would allow us to compare their potentially different approaches to managing the relevant standards. (5) Finally, for practical reasons, data about the case needed to be accessible, and the case should be relatively recent to ensure that informants would be able to recall the needed information.

We found a suitable case which meets all five requirements in the development of micro Combined Heat and Power (mCHP) technology. Several companies in the European heating industry simultaneously developed innovative natural gas powered central heating boilers which convert excess heat into electricity, making them embedded units in the case. Standards were relevant both because interfaces with other supporting infrastructures (e.g. the electrical installation in a building and the electricity grid) are needed for the innovation to be of value, and because important safety and efficiency issues make this a technology that is covered by the European Commission's 'New Approach'. 1 When mCHP was developed, generating electricity was an entirely new feature for the industry, meaning that it was a substantial departure from existing technologies. Nevertheless, there already were several existing standards affecting the technology, because the market that it was aimed at and the supporting infrastructures (gas, electricity, water) were already in place. Lastly, the case also satisfies the practical requirements outlined above.

Data Collection

The largest share of our data was collected in interviews. Following two interviews with existing contacts, we used snowball sampling and contacted actors whom we identified as relevant in desk research (e.g. additional companies with mCHP products) and when attending an industry conference. This approach resulted in approximately 26 hours of interviews conducted between April 2015 and August 2017, as detailed in Table 2.1.
These interviews gave us insights into the perspectives of all groups of actors who were involved in developing mCHP-related products and/or managing standards to facilitate the technology, as well as perspectives from different countries which are key markets for the new technology. In order to ensure that the main topics of interest were covered in each interview while leaving the interviewees enough leeway to 'tell their stories', we used a semi-structured format. Gioia et al. (2013) highlight the importance of the interview guideline in ensuring that this results in useful data for deriving theoretical patterns. This guideline was adjusted for each interview to cover all important topics (the interviewee's involvement in the case, views on relevant standards, companies' processes for managing the topic, interactions with other stakeholders, results of their activities, etc.). Using these guidelines, we obtained detailed accounts of the interviewees' activities in the case and their views on the events. Where possible, we recorded the interviews and transcribed them verbatim in the language in which the interview was conducted (English for Interviews 1, 8, 9, 12, and 14; German for all other interviews).

In addition, some interviewees provided us with internal company documents. Furthermore, we considered European Union policy documents related to the standards in the case, which provided us with additional information on the evolution of standards in relation to the European directives that they were supposed to support. A final source of information was attending an industry conference hosted by the European industry association for co-generation of heat and power (COGEN Europe) in March 2016. At this conference, we gained further insights into the major topics of interest for industry actors and gained background information on how mCHP fits into the wider industry context. The conference also provided us with an opportunity to have informal discussions with important actors in the case.

Data Analysis

In line with our study's inductive reasoning, we based our data analysis on a grounded theory approach (Glaser & Strauss, 1973). We initiated our data analysis in parallel to data collection so that the information from earlier interviews could inform subsequent data collection efforts. In order to come closer to Glaser and Strauss's (1973) ideal of developing grounded theory without preconceived notions of existing theory, two assistants performed most of the open coding (see Alvesson & Sköldberg, 2009; Gioia et al., 2013) under the author's supervision. All coding was performed on transcripts in the languages in which the interviews were conducted (German and English, see Sect. 2.1.2) in order to stay as close as possible to the empirical evidence at this stage. In parallel with coding, we started the further data analysis by 'integrating categories', as suggested by Glaser and Strauss (1973, pp. 108-109). Clear themes that later became the key concepts of our theory emerged from the data at this stage, although we did not follow the strict template provided by Gioia et al. (2013). These theoretically saturated (see Glaser & Strauss, 1973, pp. 111-113) key themes are based on the main discussion topics across our interviews and reflect the elements that our interviewees emphasised.
Chapters 3, 4, and 5 are structured along these themes and use extensive quotes from the interviews and, where available, supporting evidence from other sources to ensure that our constructs are deeply rooted in empirical observations. 2 In parallel to identifying these key concepts, we also looked for relationships between them (see Alvesson & Sköldberg, 2009, pp. 68-69; Glaser & Strauss, 1973, pp. 109-113). As suggested by Glaser and Strauss's (1973) description of the constant comparative method, we did so by alternating between noting down our ideas about such links and verifying in the data whether these ideas were supported by the evidence. This verification was based on whether we could identify a plausible explanation for each relationship in the data, for example by comparing different firms (embedded units) in our case, or by searching for interviewees' explanations of the reasons behind certain activities and events. This process ultimately resulted in the theory that we present in Chapter 6 and makes this theory firmly rooted in the empirical observations from our case.

These appliances would typically be used in single-family houses (EHI, 2014). The technology is a major innovation in the European heating sector. In addition to providing hot water and heat for buildings, mCHP boilers also generate electricity. This additional functionality represented a major technological leap for the European heating industry, which previously did not make any electricity-generating products. In order to provide context for our analysis of how products using this technology were developed and standards were managed during this process, we cover background information that is important for a good understanding of the case. We first portray the European heating industry and mCHP's role in it (Sect. 2.2.1). Following this, we give a brief overview of different technological approaches to mCHP and how the relevance of standards differed between them (Sect. 2.2.2).

The European Heating Industry and the Market for mCHP

Heating of buildings is estimated to be responsible for around 40% of the EU's energy consumption and 36% of its CO2 emissions (European Commission, 2017). Consequently, boiler manufacturers and other actors in the European heating industry have been facing expectations from the market and political actors to make their products more energy efficient and contribute to efforts to combat climate change. In response to these demands, the European heating industry developed several technologies to eventually succeed the established condensing boilers for domestic applications, including heat pumps, solar thermal systems, and mCHP. Which of these technologies is most energy efficient depends, for example, on heat demand and the local electric power generation mix where an appliance is installed. The technologies therefore address different market segments. A key advantage of mCHP products compared to heat pumps and solar thermal systems is that they can be integrated in existing buildings more easily if they are designed to match the existing infrastructure in buildings. This made mCHP a potentially promising technology for attaining higher energy efficiency in the replacement market, which one interviewee described as existentially important for the companies in the industry:

We live off the existing [building] stock and replacement. The relation between newly built buildings and existing buildings in Germany in a year is approximately 1:10.
This means that, for every boiler or heating appliance that we sell into a newly built house, we sell ten into existing buildings. (translated from German)

The European heating industry is distinctive in that the established players and market leaders are mostly owned by the founding families or by foundations with a mission to ensure the business's long-term viability. This gives the companies and the entire industry a long-term outlook, which also manifested itself in the way standards were managed during the development of mCHP. However, it also means that the industry is relatively conservative and "not really known for being particularly innovative [and consisting of] rather traditionally shaped enterprises" (translated from German). Developing mCHP brought the involved actors into contact with several new key technological fields (see Sect. 2.2.2) and the players involved in these areas, requiring the industry to adopt new approaches to innovation and standardisation and become more open to dealing with actors outside the industry, as outlined in Chapters 4 and 5. Within the industry, these developments were driven by a range of actors. In addition to the boiler manufacturers (OEMs) who developed and eventually sold complete mCHP appliances, suppliers of key components, certification bodies, engineering consultants, industry associations, and research institutes were all involved in the process. The OEMs developing mCHP and the component suppliers included established players in the industry as well as new entrants which were specifically founded as start-ups to develop mCHP appliances and components. Our interviews cover all key players in the case as well as some more peripheral actors (see the characterisations of companies covered by our interviews in Table 2.1).

Technological Solutions for mCHP

Four technological approaches exist to realise the functionality of mCHP appliances: (1) Stirling engines; (2) fuel cells; (3) internal combustion engines; and (4) steam expansion engines (EHI, 2014). While internal combustion engines and steam expansion engines have barely been used for mCHP applications, products based on both Stirling engines and fuel cells have been developed and marketed. All interviewed OEMs have been developing fuel-cell-based mCHP appliances, although not all of them had brought them to the market at the time of writing. Some OEMs have additionally been developing and offering Stirling-based mCHP appliances. The OEMs that never developed the Stirling technology, or exited its development, cited technological challenges and doubts about whether mCHP appliances using Stirling engines could reach the same levels of efficiency as those using fuel cells as the reasons behind the decision to pursue only fuel cells. On the other hand, the companies that have been pursuing the Stirling engine in parallel to fuel cells see the two technologies as catering to distinct market segments:

I expect there will be different technologies in parallel, and they could serve different market segments. That has to do with the question how the ratio is between heat demand and power demand. That's one issue. And especially when the heat demand is high compared to the power demand then nowadays already Stirling engine could be a better solution than the fuel cell.

Technologically, the two approaches are fundamentally different: (1) Appliances with a Stirling engine add this engine (and some control electronics) to a conventional condensing boiler.
Such a boiler produces more heat than is needed to cover the demand for heating and hot water. The excess heat is then converted to AC electricity by the Stirling engine, which is tuned to the frequency of the national electricity grid (50 Hz in Europe), meaning that the produced electricity can be fed directly into the grid. (2) Fuel-cell-based appliances contain a reformer that extracts hydrogen from natural gas. This hydrogen is then used to power a fuel cell which produces both heat and DC electricity. An inverter converts this DC electricity to AC electricity that can be fed into the electricity grid. In addition, fuel cell appliances usually include a conventional gas boiler to cover peak heat demand.

Some aspects of these technologies were already known to the involved companies and had been used in their products for decades. In particular, the condensing boiler units that provide the heat for Stirling engines to operate were very similar to the ones used in the industry's existing products. However, both Stirling engines and, in particular, fuel cells were new and very complex technologies for all actors in the heating industry. Furthermore, regardless of the technological approach to mCHP, its implementation required the industry to get involved in entirely new technological aspects, such as access to the electricity grid, technologies for communication with other devices, or grid stability. These fields presented a steep learning curve, in terms of both technology development and standardisation, as Chapters 4 and 5 show. Most relevant standards and regulatory requirements (see Chapter 3) applied equally to Stirling- and fuel-cell-based mCHP appliances and had similar implications for both technologies' development. The standards for connecting appliances to the national electricity grid are a key exception to this. Some changes to them that occurred while mCHP was being developed posed additional challenges for devices using Stirling engines but had a smaller impact on the development of fuel-cell-based mCHP (see Chapter 3 and Sect. 5.2 for details).

CHAPTER 3
Standards, Regulation and Conformity Assessment for mCHP

Abstract micro Combined Heat and Power (mCHP) relies on standards in around a dozen technical areas, related to topics like product safety, electricity grid access, and environmental performance. This chapter provides an overview of the relevant standards and their effects on mCHP. Under the European 'New Approach', many of these standards define 'essential requirements' in line with European regulation. This link makes standards important elements for conformity assessment and for proving mCHP appliances' regulatory compliance.
Standards are therefore key enablers for mCHP's developers to place the technology on the European market. The chapter concludes with an overview of the effects of standards and regulation on innovation in the mCHP case.

Keywords Standards · European regulation · European New Approach · Effects of standards on innovation · Conformity assessment · Regulatory compliance

Standards, together with regulation and conformity assessment, have been crucial for the development of mCHP. While our study initially focussed on the role and management of standards for the innovation, it soon transpired from our interviews that standards are inextricably linked to European and national regulation and to the conformity assessment of mCHP appliances. In Sect. 3.1, we outline which standards have been relevant for the technology's development. Section 3.2 explores the link between standards and regulation and its effects on mCHP. Following this, we discuss the need for conformity assessment and the role that standards and regulation play in this context (Sect. 3.3). Finally, we shed light on additional effects that standards had on the development of mCHP in Sect. 3.4.

Relevant Standards for mCHP

Standards posed requirements for key aspects of mCHP technology, such as product safety, energy efficiency, and connections to the electricity grid, which needed to be fulfilled in order to provide the intended value for buyers and gain approval for market entry. A list of all relevant standards that were mentioned during the interviews can be found in Table 3.1. Many of these standards are interrelated. The standards identified in Table 3.1 broadly fulfilled two main functions in mCHP's development process. The first function is defining the interfaces that link mCHP to complementary technologies, such as the national electricity grid and the electrical and gas installations in buildings. These infrastructures were essential to enable the innovation to deliver the new aspects of its value proposition: generating electricity that can be used by a device's owner and/or fed into the electricity grid. The second main function of standards for the innovation is to support proving the compliance of mCHP appliances and their components with regulatory requirements (e.g. gas and electrical safety, energy efficiency, and requirements for connecting devices to the electricity grid). This function has been key for the development of mCHP, based on the link between standards and regulation in the case, which we outline in detail in Sects. 3.2 and 3.3.

All interviewees stressed the particular importance of the product standard (EN 50465, "Gas appliances - Combined heat and power appliance of nominal heat input inferior or equal to 70 kW") for the development of mCHP. This product standard addresses key elements of the technology, such as safety and energy efficiency, and defines minimum performance requirements for these dimensions of mCHP appliances. It has been key in outlining how mCHP appliances can meet regulatory requirements (see Sect. 3.2) and in supporting the conformity assessment of the appliances (see Sect. 3.3). When the technology's development started, this standard did not yet exist in its current form and did not cover all technological approaches to mCHP:

At first you have to deal with the product standard. But at the moment that we did the development, it wasn't there.
We did the development, the basic development, we started by the end of 2005 and at that moment there was no standard.

This initial absence of the key standard had important implications for the technology's development and made writing this standard a priority for the industry in managing the standards related to the innovation, as we outline in Sect. 5.2.2.

Regulation for mCHP and Its Relationship with Standards

Relevant regulation for mCHP covers the areas of product safety, energy efficiency, and grid connections (see Table 3.2 for a list of all regulatory texts that were mentioned as relevant during the interviews). This regulation defines 'essential requirements' which mCHP appliances must meet if they are to be sold on the European market. In line with the European 'New Approach', these essential requirements laid down in the regulation are formulated on a relatively abstract level and do not prescribe technical details or solutions that need to be implemented to fulfil them. Standards provide important guidance on how to meet these requirements, as outlined below.

Harmonised Standards Providing 'Presumption of Conformity'

Under the European 'New Approach', the high-level requirements formulated in directives are supported by harmonised standards. These standards provide detailed specifications of the essential requirements, such as the test methods to be used in assessing whether a product meets them. Such harmonised standards are developed by the ESOs following requests by the European Commission. The European Commission then assesses whether the contents of these standards satisfy the essential requirements. If a standard passes this assessment, it is listed in the Official Journal of the European Union along with the directive against which it is harmonised. Once a standard has been harmonised in this procedure, complying with the standard gives a product 'presumption of conformity' with the associated European directives. This means that any product which implements a harmonised standard is assumed to meet the essential requirements imposed by the directive:

Someone who develops such a product (…) can work with the standards and can then assume that he also fulfils the requirements from the directives in this way. (…) This is called 'presumption of conformity' if a standard is listed under a directive in the Official Journal (…) which helps from a technical point of view. (translated from German)

Fulfilling 'Essential Requirements' Without Relying on Harmonised Standards

Although relying on harmonised standards is a straightforward and commonly used way of proving compliance with regulatory requirements, their use remains voluntary (European Commission, 2017). Manufacturers are also permitted to demonstrate in other ways that they reach a performance level that satisfies the regulation's essential requirements. A first way of doing so is implementing other standards developed by the European Standardisation Organisations (ESOs: CEN, CENELEC, and ETSI), even if they are not harmonised. These standards are assumed to reflect the current state of technological development, meaning that implementing them in an innovation is seen as following good practice. This also applies to the key product standard in the mCHP case (EN 50465). Due to conflicts between the European Commission and the European heating industry regarding the calculation methods for mCHP appliances' energy efficiency (see Sect.
5.2.2), this standard had not yet been harmonised at the time of writing. Nevertheless, it has emerged as the generally accepted standard detailing the essential requirements from the relevant European directives for mCHP appliances. In addition to, or as an alternative to, relying on standards, manufacturers may also demonstrate their product's equivalent performance to the level described in the standard without using any standard:

If his [a manufacturer's] product has a solution that is not covered by the standard (…), this is not forbidden. (…) [But] it has to be written in the development documentation that he (…) fulfils the requirements of the directive. (…) When he, as a manufacturer, prints the CE-mark 1 on the device he confirms at this time that all relevant directives are fulfilled (…) and this has been proven through the standard and (…) his own specifications. (translated from German)

Such an approach of not relying on the standard shifts the burden of proof that the mCHP appliance meets the regulatory requirements to the manufacturer:

The burden of proof that this [the product fulfilling the essential requirements] is actually the case then lies with him [the manufacturer]. (…) When he uses a harmonised standard, the presumption of conformity applies. This means that if he uses the standard, he may assume that he fulfils the essential requirement. If this [fulfilling the essential requirement] is not the case, the burden of proof does then not lie with him but with the European Commission. This is all about who is liable. (translated from German)

In addition to the issues surrounding liability when deviating from the solutions defined in a standard, taking such an approach would also require substantial additional effort and slow down the NPD process:

[Standards] rather lead to speeding up a development process, because the requirements are clear. Imagine there were no standards and we only had the directives. Because directives are laws and safety-related laws always exist. (…) Then you first would have to translate: What does such a legal requirement mean for materials, for testing, for technology, for time response? So standards, because they are general specifications, are actually accelerating means for the development. (translated from German)

In practice, the interviewed manufacturers therefore based the designs of their mCHP appliances on standards wherever possible and avoided using other technical solutions which would have required them to demonstrate compliance with regulatory requirements in other ways. This further underlines the importance of standards for the innovation and also had implications for the management of standards, where the industry sometimes invested substantial resources in influencing standards rather than implementing alternative solutions in its products (see Chapter 4).

Assessing Conformity to Essential Requirements in the mCHP Case

Because the essential requirements in the relevant regulation are mandatory (see Sect. 3.2), mCHP appliances can only be sold in the European market once their compliance with these requirements has been proven. While a declaration by the manufacturer, confirming that the requirements are met, is sufficient for many product groups, this is not the case for mCHP.
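Before turning to how conformity was assessed in practice, the routes to demonstrating compliance outlined in Sect. 3.2 can be summarised in a small decision sketch. The following Python fragment is purely illustrative: the route names and the simplified three-way structure are our own distillation of the material above, not a formal representation of the legal procedure.

    from dataclasses import dataclass

    @dataclass
    class Product:
        implements_harmonised_standard: bool  # standard listed in the Official Journal
        implements_other_eso_standard: bool   # e.g. EN 50465 before harmonisation
        has_own_proof: bool                   # manufacturer's own development documentation

    def conformity_route(p: Product) -> str:
        # Route 1: a harmonised standard gives 'presumption of conformity';
        # the burden of proof shifts away from the manufacturer.
        if p.implements_harmonised_standard:
            return "presumption of conformity"
        # Route 2: non-harmonised ESO standards reflect the state of the art,
        # but the manufacturer must still document compliance.
        if p.implements_other_eso_standard:
            return "good practice via ESO standard, compliance to be documented"
        # Route 3: deviating solutions are allowed, but the full burden of proof
        # for meeting the essential requirements lies with the manufacturer.
        if p.has_own_proof:
            return "own proof of equivalent performance"
        return "no basis for demonstrating conformity"

As Sect. 3.2 shows, the interviewed manufacturers avoided the third route wherever possible because of the liability questions and additional effort involved.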
Due to the inherent safety risks of gas-powered appliances, conformity assessment for mCHP must be carried out by an accredited certification body which has been authorised by the government to carry out this assessment for the relevant European directives. This party issues a certificate if the requirements are met 2 :

[For] a gas appliance, a manufacturer cannot simply develop an appliance, produce it, and sell it. He needs third-party certification. This means he must go to an accredited testing laboratory. The product is tested on its conformity, strictly speaking to the directive but in practice to the standard. Then, a notified body issues the certificate. Only once he has this, he can sell it in Europe. (translated from German)

Such independent test laboratories (often referred to by the legal term 'notified bodies') assess the technology against the essential requirements in the relevant directives. Notified bodies choose an appropriate basis for certification, which defines both the requirements that mCHP appliances must fulfil and the methods used to assess their fulfilment. Usually, the product standard (EN 50465 in the mCHP case) is used for this purpose. It defines both requirements and test methods, but (at least in theory) test laboratories may also deviate from it:

This inspector, who is employed by this institute, decides which basis he brings forward or draws upon to conduct the assessment. And in this, he is relatively free. So, if he says… He could still say today 'the 50465 is not sufficient for me'. This would not correspond to the facts, but he could always draw on another standard if this was necessary in his opinion. (translated from German)

This discretion in choosing the basis for the certification process led to different approaches among testing institutes in the early stages of mCHP's development, when EN 50465 did not yet exist in its current form and therefore no standard detailed the essential requirements for mCHP appliances. In interviews with OEMs, we were told about various related standards (e.g. for conventional condensing boilers) being used as a preliminary basis for testing by the notified bodies. Another approach, which was described in an interview with a notified body, was developing a test regime directly based on the relevant directives:

When we started this process, typically for fuel cell systems, there was no standard. So we had to certify directly on the directive. We have the essential requirements of the directive. So what we did, we created our test plan and said 'okay if you meet this, then we can certify against the Gas Appliance Directive'. So there was a lot of freedom for us, but in the end, as a competent notified body, we had to make a decision 'it's safe enough'. So, we could handle different technologies which were not addressed by standards. But it also means a very good relation between us and the manufacturer to really understand the technology and for them to understand what our safety requirements are.

Standards Providing Certainty for Conformity Assessment

The potentially different approaches to certifying mCHP appliances that could be followed in the absence of standards meant some uncertainty for the NPD process, because the exact requirements for market access only became clear when the notified bodies were involved in the companies' NPD activities. As Sect.
4.2.2 shows, the stages of development at which notified bodies were involved varied between companies, meaning that the magnitude of the resulting uncertainty also differed across actors in the industry. Nevertheless, having standards (in particular EN 50465) in place to provide more detailed information about essential requirements, as outlined in Sect. 3.2, helped all involved parties' NPD activities because this reduced the leeway for different interpretations of the essential requirements:

It is very important for industry that not everybody interprets the directive differently every day and at the end the certification laboratory differently than the manufacturer. (translated from German)

In this way, standards provided important information about the required performance and the test procedures to prove this performance, which could be used during the technology's development. Standards thus reduced the effort needed for mCHP appliances to pass the certification process: they reduced the need for extensive proofs of technical solutions meeting the essential requirements and provided a basis for a common understanding of these requirements. In fulfilling this function in the certification process, standards supported mCHP's access to the European market and therefore played an essential role in enabling the technology's diffusion. While using standards remains voluntary and other solutions are acceptable, there was a widespread sentiment among the interviewees that adhering to standards related to the applicable European directives (see Table 3.2) was almost a necessary condition for bringing mCHP technology to market and that other solutions should only be chosen in exceptional cases.

Standards' Additional Effects on mCHP's Development and Diffusion

Interviewees reported that the standards which were relevant for mCHP (see Table 3.1) had both positive and negative effects on their innovation activities. They emphasised the effects of standards on the certainty regarding regulatory requirements and certification (see Sects. 3.2 and 3.3). These aspects were a major focus of their activities related to managing standards (see Chapters 4 and 5). In addition, the experts also reported other effects of standards on both the development and diffusion of mCHP. The positive effects named in this context include standards often being useful information sources; standards supporting access to complementary infrastructures (e.g. the electricity grid); standards allowing the industry to signal mCHP's benefits to other actors; and standards helping build economies of scale for the innovation. Negative effects on the innovation were usually perceived when standards were out-of-date or when required standards were missing. These perceived effects were the basis for how actors in the industry managed standards in the case (see Chapters 4 and 5). We explain the effects that standards had in the case in detail below.

Support of Standards for mCHP's Development

Standards often served as useful information sources in the development of mCHP, not only about regulatory requirements and testing procedures (see Sects. 3.2 and 3.3), but also about other topics.
Especially in technological areas where the companies had no previous experience, like safety mechanisms related to short circuits and switching the device off in emergencies, or measuring the amount of electricity produced, interviewees explained that they could make use of standards in their designs:

For the new functionality, especially for the generation of electricity, of course, they were new aspects for us. (…) For the things which are only new to us but which are self-evident, you have to follow them. So then standards are a good help to show you what you have to do.

In addition, because "experience that has accumulated over decades is behind standards, especially in the electro-technical and gas areas" (translated from German), this information also supported more commonplace design decisions in the innovation process:

When I do not need to ponder every time 'this material, this screw and this seal - may I or may I not?' This is definitely helpful. (translated from German)

A second way in which interviewees perceived standards to support the innovation was the role that they played in defining interfaces to link mCHP appliances with other elements, such as the electricity grid; electrical and gas installations in buildings; and communication between electricity-producing devices (see Table 3.1). These standards have not only provided technical information for the companies' NPD activities but have also supported the innovation's eventual diffusion by offering certainty for the industry, and eventually for customers, that the appliances would work with other elements as intended, and by limiting the investment customers needed to make in changing elements like the gas installations in their houses. However, interviewees pointed out that, for important interfaces, this support was only available at later stages of mCHP's development because the needed standards either did not exist at all (e.g. communication between electricity-producing devices) or needed to be adapted (e.g. standards for the internal wiring of buildings, see below), making these interfaces an issue to be considered in the management of standards (see Sect. 5.1).

In addition, standards were also described as supporting mCHP's diffusion by helping to signal mCHP's qualities and benefits to other actors, like consumers and governments. This particularly applies to the product standard (EN 50465), which also covers the energy efficiency of the appliances and supports the requirements of the Energy Labelling Directive (see Table 3.2). EN 50465 includes a formula for calculating the energy efficiency of mCHP devices. This formula is intended to form the basis for determining an mCHP appliance's energy label, which the directive requires it to carry (although this formula was a major point of contention during the development of EN 50465; see Sect. 5.2.2).

Finally, standardisation supported the heating industry in reaching economies of scale for mCHP technology. By relying on existing components from other products and standardising new key components, such as the Stirling engine, between manufacturers, the industry was able to reach higher production numbers much more quickly than would otherwise have been feasible. This brought the technology's costs down, making the price-performance ratio more competitive with other heating solutions and enabling faster adoption in the market than might otherwise have been possible.
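To illustrate what an energy-efficiency formula such as the one in EN 50465 accomplishes for labelling, the following sketch computes a simple combined (heat plus electricity) efficiency for a CHP appliance and maps it to a label class. The actual EN 50465 calculation method and the directive's label thresholds are more involved and were contested (see Sect. 5.2.2); the formula and thresholds below are simplified, hypothetical placeholders for illustration only.

    def chp_total_efficiency(heat_kwh: float, electricity_kwh: float, fuel_kwh: float) -> float:
        # Simplified total efficiency: useful heat plus electricity per unit of fuel input.
        return (heat_kwh + electricity_kwh) / fuel_kwh

    def label_class(efficiency: float) -> str:
        # Hypothetical thresholds, purely for illustration; not the directive's values.
        if efficiency >= 0.95:
            return "A+"
        if efficiency >= 0.85:
            return "A"
        return "B or below"

    # Example: 14 kWh of heat and 3 kWh of electricity from 18 kWh of natural gas.
    print(label_class(chp_total_efficiency(14.0, 3.0, 18.0)))  # ~0.94 -> "A"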
Hurdles to mCHP's Development from Standards and Related Issues

Standards were sometimes also seen as hindering the development of mCHP. Some standards contained requirements which were based on outdated assumptions and which were difficult to implement in the innovation or would have severely limited its value to users. For example, pre-existing standards for electrical installations within buildings were written under the assumption that a building contains only devices that consume electricity and no electricity-producing devices. These standards would have required substantial changes to a building's electrical installations in order to install mCHP appliances in existing buildings, thus adding to the technology's costs and making it less attractive to consumers in the crucial market for replacing heating boilers in existing buildings. Another example of outdated assumptions underlying standards concerned test procedures fixed in a standard, which may assume a certain device architecture and specify the assessment of certain components of an appliance that may no longer be part of a new design, having been replaced by other components.

A second notable area where standards imposed requirements that the interviewed companies sometimes found difficult to fulfil in mCHP appliances is access to the electricity grid:

Standards can also be used to hinder technologies. The 'Network Code Requirements for Generators' is in many areas… I don't want to say designed to… but I say it makes it very difficult, in particular for small electricity generators. (translated from German)

Another interviewee described these requirements for generators as "a real problem for small generators, because it now sweeps up any generator in Europe that is greater than 800 W in power output". One key example of a difficulty resulting from this network code is the requirement for dealing with changing network frequencies, which changed while mCHP was under development (see Sect. 5.2.1) due to technological developments in other realms. While it was traditionally required to switch an electricity-producing appliance off in the rare cases when the grid's frequency deviates from the usual 50 Hz, the new rules required generators to remain online and adjust their own frequencies in line with the grid's:

Now it wants you to operate things from 47 Hz to 52 Hz or something, so it's much, much broader than frequency swing, which is very difficult for a tuned Stirling engine, free-piston Stirling engine. In fact, we can't operate over that wider band.

Standards which imposed hurdles for mCHP in this manner required (sometimes extensive) action during the technology's development, either by adapting the technology or the standard, in order to avoid negative effects on mCHP's eventual chances of reaching large-scale diffusion in the market (the frequency example is illustrated in the sketch below). Although hurdles for mCHP's development sometimes arose from standards (the two examples above being the most notable ones mentioned by the interviewees), there was consensus among the interviewees that the most serious standard-related obstacles to the innovation actually resulted from the absence of needed standards (either completely or at the European level). The absence of the product standard (EN 50465) outlined in Sect. 3.1 was key for the development of mCHP and necessitated substantial efforts when the industry engaged in standardisation for the technology (see Sect. 5.2.2).
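The change in grid-connection requirements quoted above can be made concrete with a short sketch. Under the earlier regime, a generator simply disconnected when the grid frequency left a narrow window around the nominal 50 Hz; the new network code requires generators to remain online across a much wider band. The 47-52 Hz band is taken from the interviewee's account; the narrow 49.5-50.5 Hz window for the old rule is our assumption for illustration.

    def stays_connected_old(f_hz: float, nominal: float = 50.0, tolerance: float = 0.5) -> bool:
        # Old regime (assumed narrow window): disconnect outside roughly 49.5-50.5 Hz.
        return abs(f_hz - nominal) <= tolerance

    def stays_connected_new(f_hz: float, low: float = 47.0, high: float = 52.0) -> bool:
        # New network code (per the interview): remain online from 47 Hz to 52 Hz.
        return low <= f_hz <= high

    # A free-piston Stirling engine mechanically tuned to 50 Hz struggles with the wider band:
    for f in (48.0, 49.8, 51.5):
        print(f, stays_connected_old(f), stays_connected_new(f))  # e.g. 48.0 -> False True

For a Stirling engine tuned to 50 Hz, it is the width of the new band, rather than the nominal frequency itself, that creates the design problem.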
In other key areas, such as natural gas composition, exhaust emissions, access to the electricity grid, and financial compensation for energy that is fed into the electricity grid, standards only did (and to some extent still do) exist at the national but not the European level. The following quotes are three out of many in our interviews that address this issue:

So, each country has its own requirements and when you go through them, then Germany has a certain standard which involves some protections that should be in. For instance (…) how to test if you are connected to the grid. (…) So, indeed, in the United Kingdom is forbidden what is required in Germany.

And this feeding into the grid is something which I still do not completely understand. On the European level, a standard exists on this topic. This standard basically consists of a rather large number of national appendices. And it explicitly states that the respective connection requirements in the individual countries, or even regions and network operator environments (…) must be taken into account. And this varies tremendously across Europe. (translated from German)

And then there are the specific parts, in particular for the flue gas evacuation. There, we have a European patchwork which cannot be outdone. (translated from German)

Such differences across countries meant that different versions of mCHP appliances needed to be developed and certified for each country where they were intended to be sold. This implied additional development effort and made it more difficult to achieve economies of scale for the components that needed to be adapted for the local versions. However, one interviewee at the European association of the heating industry pointed out that this might not be completely against the interests of the OEMs:

Honi soit qui mal y pense. Of course, the manufacturers do not want the movement of goods to be as free as the consumer might think. There are also price differences between countries and they are thereby being blocked a little bit. (translated from German)

Overall Impact of Standards on mCHP's Development

In terms of their overall impact on the development of mCHP, interviewees saw standards as mostly positive. Although there were some negative effects, as outlined above, there was consensus among the interviewees that these were by far outweighed by the positive aspects. This sentiment is represented by the following quote, which characterises standards' function as providing a foundation for the innovation's development:

The aim of standardisation is very clear. At this moment, at this early stage of the technology, it is to lay a good foundation for this technology, so that this technology can be accepted by the market. (translated from German)

Based on these characterisations of the support and hurdles arising from standards, the standards can be grouped according to (1) their link to regulation and (2) whether the innovation can conform to the standard or not. While the first characteristic determines the strength of the impact on mCHP, the second determines whether this impact is positive or negative (see Table 3.3). Furthermore, several standards that were needed to market mCHP appliances did not yet exist when the technology's development started. While already-existing supporting standards were relatively straightforward to manage, standards that hindered the innovation and/or were still missing required substantial attention during the technology's development.
We portray these management activities in Chapters 4 and 5.

Reference
European Commission. (2017, September 25). Harmonised standards. Retrieved from http://ec.europa.eu/growth/single-market/european-standards/harmonised-standards_en.

CHAPTER 4
Managing Standards for mCHP on Company Level

Abstract micro Combined Heat and Power (mCHP) technology was developed in parallel by several established companies and start-ups. This chapter provides detailed insights into the different companies' innovation management approaches. Based on in-depth interviews, it compares how these firms managed standards and regulation while developing their mCHP products. It shows the types of awareness, expertise, and resources needed to provide a solid foundation for addressing standards and regulation that affect an innovation. Building on this, the chapter shows how these factors enable managers to introduce their innovations into highly regulated markets.

Keywords Innovation management · New product development · Regulatory compliance · Standards · Regulation

The findings outlined in Chapter 3 show the importance of standards for developing mCHP technology and bringing the appliances to the market in Europe, making standards a key issue to manage as part of this development. Processes to manage these standards occurred on two levels: (1) Each of the involved companies had its own internal NPD process, as part of which standards were addressed. (2) In parallel to these company-internal activities, the industry collaborated on developing new standards and adapting existing ones to enable mCHP's development, where needed. Both levels interacted throughout the process, i.e. work within the companies reflected the industry-level developments, and the activities to adapt standards were driven by the individual actors in line with their internal activities. In this chapter, we focus on the company-level activities related to managing standards for mCHP (see Chapter 5 for a description of the collaboration between actors in the industry). There was variety in the approaches to managing standards and regulation and in the degree to which they were seen as important, as the following quote from an interviewee at a notified body illustrates:

You see differences. Some manufacturers, they - I mean if we have this pre-assessment we push them to really read standards and then you see that some of them, they even haven't bought one. 1 And others, they already read it three times. So there is a difference in experience and seeing the need of using these standards.

We summarise these different approaches in Table 4.1 2 and outline them in more detail below. In Sect.
4.1, we focus on the companies' general approaches to standards and regulation. This includes aspects such as their awareness of the topic and the degree to which it is handled strategically, as well as how standards and regulation are embedded in the companies' structures. Section 4.2 then shows how the interviewed companies incorporated standards and regulation into the mCHP development process, covering aspects like the timing of their management, how the companies identified relevant standards, and how they incorporated input from the industry level into their development activities.

1 Actors wishing to access the contents of standards developed by the ESOs and their national member bodies must buy the documents from the publishing arms of the standardisation organisations.
2 We omit component suppliers from this table because all three interviewed component suppliers' activities related to regulation and standards were tightly linked to those of the appliance manufacturers, rather than standing on their own.

Companies' Approaches to Managing Standards and Regulation

As the quote in the introduction to this chapter shows, companies in the industry differ substantially in their fundamental approaches towards standards and regulation. Their awareness of the topic's importance varies (Sect. 4.1.1) and they are able to devote different amounts of the required expertise and resources to managing the subject (Sect. 4.1.2). As we outline in Sect. 4.1.3, these different foundations affect the grounding of managing standards and regulation, both in terms of strategic focus and in terms of integration into the organisation.

Awareness of Standards' and Regulation's Importance

A first factor driving companies' approaches to managing standards in the context of mCHP was the degree to which they were aware of the topic's importance for developing the technology. This differed according to the functions of standards and regulation, such as certification and providing market access, or acting as information sources.

Awareness of Standards for Certification and Related Issues

Standards and regulation can have a major impact on the certification, market access, and liability questions related to a technology like mCHP (see Chapter 3). One interviewee described this significance as follows:

Both for the technology and the company - the success and the safety of a company - standardisation is an elementary topic. And companies and start-ups must be aware of this. (translated from German)

Most established companies acted in line with this view on standardisation and regulation. Based on their experience in the industry, they treated managing standards and regulation as a necessary condition for successfully developing new products and bringing them to the market. On the other hand, new entrants to the market sometimes did not understand the importance of standards and the European system, as the following quote from an interview with an engineer from a notified body, who had conducted conformity assessments of many companies' mCHP appliances, shows:

Basically, these boiler manufacturers, they already know standards, they know certification processes, so they were from that perspective better prepared. But on the other hand, the start-ups or the Japanese or the Americans are not familiar with the European situation. They were not that focused yet in standards, although some manufacturers were already (…) prepared but some of them were not prepared.
Especially the start-ups - for them it's new to read and understand these standards, seeing the complete picture is difficult for them. And that's also the case for all parties outside Europe, they don't understand our system with directives and standards.

While none of the companies that we interviewed lacked awareness to the degree described in this quote, two of the smaller start-up companies explained that their awareness developed over the course of mCHP's development. When these two companies initiated their activities in the field, they did not yet know about the need to consider standards, which caused some duplication of effort in the NPD process (see Sect. 4.2).

Awareness of Non-certification-related Functions of Standards

Regarding the functions that standards can fulfil which are unrelated to certification, such as providing useful information for the technology's development or defining interfaces, we observed more variation in awareness among our interviewees. Interviewees at smaller companies mostly focussed their attention entirely on standards related to certifying the product. They therefore did not seem to have a high degree of awareness of standards' other functions. In established companies, interviewees were aware that standards can also fulfil non-certification-related functions. For example, interviewees brought up standards defining interfaces between a heating boiler and a building's pipework, standards providing information about the characteristics of materials for certain applications, and standards reducing variety in components like control electronics. When these functions were mentioned, this was an aspect 'on the side', and interviewees saw them as a given when developing new products. They considered them such a basic element of their companies' internal innovation processes that they did not warrant much attention as part of managing standards, and therefore these functions did not play a major role in the interviews. Nevertheless, the non-certification-related functions of standards were significant for developing mCHP in the collaboration between parts of the industry that we describe in Chapter 5. Examples include reducing variety by standardising the Stirling engine component across different companies' products, facilitating collaboration in technology development (see Sect. 5.1.1 for both), and defining interfaces with the electricity grid (see Sect. 5.2.1). In addition, developing a standard to provide information about appliances' energy efficiency was a major focus of the industry's collaboration (see Sect. 5.2.2).

Expertise and Resources for Managing Standards and Regulation

In addition to a company's awareness, its available expertise and resources are key to its ability to manage standards and regulation effectively. As outlined below, we found in our interviews that this work requires specific expertise which can only be provided if a company has substantial resources at its disposal.

Required Expertise for Managing Standardisation and Regulation

Our interviews show two distinct topic areas in managing standards and regulation that require different types of expertise: (1) topics with a technical, subject-related focus, and (2) topics on a higher, strategic level.
The first area comprises all work that is directly connected to the technical contents of the standards, such as contributing to the development of technical requirements in standards and regulation, assessing their implications for product design, and implementing them in technical development. It therefore often requires in-depth subject knowledge. Tasks related to the second type include, for example, following ongoing developments in standardisation and regulation, assessing their significance for the company, and deciding whether and how the company should engage in standardisation and regulation initiatives. This also aims to coordinate the company's standardisation and regulation initiatives, e.g. in terms of ensuring that input into a standard for one technology does not result in issues for another technology in the portfolio. One interviewee described his work in this context as follows:

I am responsible for the strategic association work (…). Strategic association work distinguishes itself from operational association work because it is concerned more with which associations we should be part of: Where do we need to represent our interests and, if we have interests there, what are our positions in the respective topics which are covered by the associations? (…) In addition to the strategic association work, the area of political lobbying belongs to association work. (translated from German)

In addition to the skill sets required for these distinct activities, interviewees agreed that effective management of standardisation and regulation and representing the company in external working groups also necessitate staff with a high level of social skills, as the following quote shows:

It is equally important that one has the appropriate standing in these committees. Social skills in the widest sense. Because otherwise one leaves these committees with a lot of confusion and little results. (translated from German)

Required Resources for Managing Standardisation and Regulation

Providing the required expertise for managing standardisation and regulation is resource intensive. Especially in the early phases of a technology's development, many issues related to the topic must be resolved. There was consensus among interviewees that new technologies, such as mCHP, require substantial initial effort until the needed standards and regulation are established and all involved parties (manufacturers, notified bodies, regulators, market surveillance authorities etc.) are familiar with the technology. Once a technology has been established, the effort required for managing standards and regulation (e.g. following ongoing developments and contributing to keeping standards and regulation up-to-date) is much smaller. Accordingly, interviewees reported using substantial resources for managing standards and regulation in mCHP's development. One interviewee stated that his company invested several man-years of work time into mCHP-related standardisation and regulation questions as part of developing the technology. Another interviewee estimated that the work of one out of approximately 30 full-time-equivalent positions involved in developing mCHP at his company (i.e. roughly three per cent) was related to the topic. Overall, all interviewees whose companies participated in standardisation and regulation work estimated the effort to be somewhere between three and ten per cent of the overall time and effort for developing mCHP.
Standardisation- and regulation-related activities therefore comprised a relatively small but still significant share of all the work needed to bring mCHP technology to the market. In larger established companies, these resources were usually available as needed, although one interviewee explained that it could sometimes be difficult to convince the direct superiors of the required experts to make their staff available for standardisation work because the benefits may be long-term and/or difficult to measure. Smaller start-up manufacturers explained that their limited resources sometimes hindered their ability to effectively manage standards and regulation, even if they were aware of the topic's importance. Especially participation in standard development and lobbying for changes to regulation was often unfeasible for them, as the following quotes show:

This [participation in standardisation], especially for a small enterprise, is very difficult. Such a new product development by itself already needs a great deal of resources and providing them in a company of our size is already, in my opinion, a considerable achievement. (translated from German)

Definitively, this [participation in standardisation] is an enormous advantage, clearly. But, as I already said, there always is a balancing act at our company regarding what personal and financial resources are available. If one wants to participate there, participate really constructively, then one also has to invest quite a bit. And for us, it is always a balancing act whether our means can be used for that or whether they can better be used in another place for the actual development work. (translated from German)

Unfortunately, they [the company's clients] didn't pay you to do that [participating in standardisation] and within [company name] we never had enough people. Again, this is where it's difficult to do a lot of product development and standards development from within a small company because we don't have the people, we don't have the money. Yeah, it would be nice to.

4.1.3 Strategic and Organisational Grounding of Managing Standards and Regulation

The degree of companies' awareness of standards and regulation and/or the available expertise and resources determined how the topic was grounded in the companies' organisations. This in turn was linked to the degree to which the companies could address the topic strategically. Some companies addressed these issues in an ad hoc manner whereas others had very clear structures and procedures for addressing standards and regulation. The smaller start-ups we interviewed fell on the 'ad hoc end' of this spectrum. Their lack of dedicated resources meant that they were only able to address the most pressing standardisation and regulation issues at the point when they occurred and could rarely address the topic in a very strategic way. Other companies spent substantial resources to put clear structures in place that support managing issues related to the topic in a strategic and coherent manner. In between these two extremes, other companies implemented some elements to steer their standardisation efforts while using fewer resources to do so. We outline these observations in detail below, focusing (1) on the organisational structures for the management of standards and regulation, and (2) the intra-company networks to facilitate these activities.

Organisational Structures for Managing Standards and Regulation
In order to provide the skills needed to fulfil the tasks outlined in Sect. 4.1.2, the companies attached standardisation and regulation activities to different parts of their organisational structures. The first, subject-specific area of activities was directly linked to the product development activities for mCHP at all interviewed companies. It was often stressed during our interviews that it is essential for effective management of standardisation and regulation that a company's representatives have in-depth technological knowledge. The following are only a few of many quotes in the interviews which stress this importance:

It is very important that in meetings where these topics [standardisation and regulation] are discussed, the technical expertise is present to talk about these topics, so that one does not just stop and say 'I am going to discuss this and come back next time' but that one is immediately in a position to make the required points. (…) Otherwise (…) one has to rework everything back at the company, [then] goes back [to the committee], but they are already further. This really hinders the process. Especially the technical expertise and social skills of those who work there and their internal network in the development departments are very important. One cannot simply send any - I don't want to say business economist - who is detached from the technology. (translated from German)

He [the company representative in standardisation] was extremely close to the project team [and] was very, very deeply involved in the development activities. This means it was not like we had a separate department which assumed the standardisation activities. Instead, the people who were very close to the project also did this. (translated from German)

It has always been important that one directly implements this experience which one has gained in [product] development in the standard. This is extremely important. This is also why the employees who have contributed to the standardisation committees - they all were employees from the new product development area. (translated from German)

And it can absolutely go so far that developers come along to, for example, the ministry of economic affairs to present a topic, explain a topic, precisely because these relationships are partly not trivial and are also not immediately accessible to civil servants, even if they have been at home in this subject area for a long period. Using development engineers for such communication tasks in our association work is something that we have been doing relatively often in the last years. (translated from German)

All interviewed companies assigned subject-related tasks in managing standards and regulation to the development engineers whose work already addressed these technological questions. In contrast, they differed regarding where in the organisational structure the responsibility for the more strategic questions was located. Specifically, we observed three different ways in which this was addressed: (1) Companies at the very ad hoc end of the spectrum of standardisation approaches did not address strategic questions at all, usually because of lacking awareness and/or resources. (2) In companies falling in the middle of this continuum, the topic was often covered as an additional activity by one or a few employees who were also otherwise involved in managing standardisation and regulation.
For example, these tasks were handled in one company by a senior product developer and in another one by the head of the department responsible for product certification:

At [company name], we have a division which mainly occupies itself with certification, conformity declaration and so forth. And the head of this department dealt with the coordination [of standardisation activities] in close consultation with the development projects. (translated from German)

(3) Finally, two companies stood out because they had dedicated teams and can therefore be located at the very strategic and professional end of the continuum. The members of these teams to some extent also had a formal function to guide their companies in choosing where to engage and in defining common positions that should be followed by all staff representing these companies in standardisation and regulation. In the first example, the company established a team, directly responsible to the head of product development, which focuses on the strategic questions related to standardisation. In the second example, a team within the company's department of public relations is charged with these topics.

I am responsible for the strategic association work (…). And we are embedded in public relations. (translated from German)

Intra-company Networks for Supporting Standardisation and Regulation Work

The organisational structures outlined above mean that the subject-specific questions are potentially addressed by many different experts. While some of the necessary alignment of their activities is ensured by the staff who address the strategic level of a company's standardisation activities, a consistent approach to standardisation also requires communication among the company's experts. In addition, some of the quotes above also show that there is a need for them to remain connected to other engineers who do not participate in standardisation themselves. In several companies, we observed informal networks to ensure this communication. For example, we learned that one company's engineers who participate in standardisation keep each other informed about their activities through regular e-mail exchanges and other informal communication. Beyond such an informal approach, interviewees at a company that falls on the professional end of the standard-management spectrum also explained that they support this intra-company network with a database which keeps track of all of the company's standardisation activities and the experts who are involved in this work:

Interviewee 1: [We were talking] of the integration and transmission of information from mainly standardisation committees or maybe also associations into our company structure. For standardisation, we have a network where we can approach specific people through a matrix if we have specific topics. (…) And in this network different people are named with different focus topics. And they are simply involved if you have such a topic. They then get the information.

Interviewee 2: This is the same for industry associations. (…)

Interviewer: This means a product development team can say 'we now have this problem here, we are now searching the database for the relevant person and approach him'?

Interviewee 1: This as well, exactly. [And] you can also share information between, I say, stakeholders who are located in different parts of the company. And they know through this (…) company internal network who has also dealt with this specific topic. (translated from German)
4.2 Incorporating Standards and Regulation into mCHP Development

Following our outline of the general approaches that the companies in the case took towards standards and regulation, we now describe how they incorporated the topic into their development activities related to mCHP. Because most of the interviewees focussed on standards that are relevant for safety and obtaining certification for their mCHP appliances, we also emphasise these areas in our description. Our interviews reveal four core themes in this context: (1) identifying applicable regulation and standards (Sect. 4.2.1), (2) using them in specifying the company's product (Sect. 4.2.2), (3) evaluating the product's conformity to applicable standards and regulation (Sect. 4.2.3), and (4) the degrees of freedom for technology development afforded by standards and regulation (Sect. 4.2.4).

4.2.1 Identifying Applicable Regulation and Standards

In a first step of managing standards and regulation for mCHP, the companies needed to identify which regulatory texts and standards would be applicable to the technology's development. Doing so was important because companies entered new areas where they were unfamiliar with the requirements for the technology. In addition, regulation and standards are not static, meaning that the companies needed to stay aware of changing requirements. We observed two fundamentally different approaches to identifying applicable standards and regulation: (1) an active approach used by the established companies, and (2) a more passive approach used by the smaller appliance and component manufacturers. Following an outline of these two approaches, we explain how companies in the industry anticipated changing and new requirements for mCHP.

Active Approach

Established companies usually started with an initial identification of areas of requirements that apply to the technology.

At a very early stage when one defines the product specifications, it has to be clear which standards need to be fulfilled. (translated from German)

This involved the question of which European directive(s) applied. Although the characteristics of the technology meant that a number of directives were already set for mCHP (see Table 3.2 for an overview), companies had some leeway in deciding which of them should be the "leading directive" (translated from German). All of the interviewed companies chose the Gas Appliance Directive for this purpose, due to their experience with previous products that had been certified based on this directive. This primary choice of directive(s) then guided much of the further search for standards. The following quotes from different interviews illustrate this approach:

Before we address standards, one actually has to go a step back. Before one does this at all, one has to say in today's environment 'which directive do I even want to comply with?'. (…) And accordingly, I then have to look which standards are available. (translated from German)

For us, it was clear relatively quickly that we want to work according to the Gas Appliance Directive. The Machinery Directive was also being discussed. But since we certify all our other appliances according to the Gas Appliance Directive, it was actually clear quite soon that we want to go in that direction. (translated from German)
It always has been clear that the Gas Appliance Directive plays a role because the appliance will always have a gas connection, that the Low Voltage Directive will play a role because the appliance always will have an electricity connection, that the EMC Directive plays a role because the appliance has electronic components which can emit or receive electromagnetic interference. These three directives are always a given, they are also always a given for our current heat generators, you always have to go by them. (translated from German)

The companies were already familiar with these directives from their previous products and they also knew most applicable standards in that context, e.g. for gas safety. In other areas, e.g. related to the electricity-producing aspects of mCHP, a relative lack of knowledge and experience meant that additional applicable regulation and standards had to be identified after the initial search. In an iterative approach, the search for regulation and standards was linked to the NPD process, where moving on to new technological topics also led to the discovery of new standards and regulation for mCHP. The following quote illustrates this:

[At the time] we don't have any experience of or knowledge on electricity generation. So there you're treading a kind of 'terra incognita' and we have to find our way. We're discovering things - some from the outset and we see already at the beginning… 'How does that work with the grid?', 'How to connect with the grid?', 'And what are the requirements?'. And some [topics] we are discovering a bit later, for instance domestic wiring. So, it's a mix in fact of thinking ahead and discovering while you're going your way.

Passive Approach

Smaller companies relied to a large degree on other parties to identify the applicable requirements for their products. For example, the interviewed start-up appliance manufacturers used the support of notified bodies and/or consultants:

Interviewee: At this point […] it was about standards and which standards we have to comply with. And then we hired two consultants, one in [the country where the company's R&D department was based] and one consulting company in the Netherlands. This consultancy company is [name of a notified body].

Interviewer: And they in essence created a kind of list for you of the standards that were relevant for the topic?

Interviewee: Exactly. And at this point they have accompanied us very well. (translated from German)

Interviewee: We had to find out for ourselves first which standard - if we wanted to have the mCHP appliance tested as a whole with the aim to obtain a CE-mark - which one would apply there at all.

Interviewer: And how did you proceed to determine what applies in this case?

Interviewee: On the one hand we got in touch with the test laboratories which are active in this area and discussed with them according to which standards they would conduct the tests or which standards apply according to their opinion. And then, in parallel, we also conducted our own search based on these insights. (translated from German)

This role of the test laboratories was confirmed by our interviewee at a notified body:

The process starts very often with the, we call it pre-assessment meeting, where we (…) discuss (…) the complete overview of relevant standards.

Component suppliers also used help from external parties.
Because component suppliers were mostly not directly involved in the certification process, they largely relied on the appliance manufacturers to inform them about the requirements arising from regulation and standards. The following quote illustrates this approach:

When this specification sheet is created (…) these are on one hand market requirements (…) but of course also legal requirements. Especially for gas and electricity there are clear safety requirements that must be fulfilled. There is no way around this. The thing is that we get this from our cooperation partner - because he is responsible for bringing [the appliance] in circulation - in a relatively nicely condensed way from one source. That makes it easier. (translated from German)

This reliance on appliance manufacturers to provide lists of applicable standards is partly explained by their ultimate responsibility for the entire product's safety but also by their better knowledge of the application area. For example, one fuel cell manufacturer supplied fuel cells to both mCHP and automotive applications. Our interviewee at that company noted that the standards and regulation in these areas differ to a large extent, making it difficult for suppliers to stay up-to-date and understand the specific requirements without their customers' support.

Anticipating Future and Changing Requirements

In addition to identifying current standards and regulation for mCHP, companies in the industry also needed to anticipate future requirements for the technology:

If suddenly any new requirements, which impact on our development, come out of the standard, then it is extremely important to know this at an early stage. (translated from German)

Because mCHP's development took several years and the products needed to be certified according to the requirements in place at the time when they were released to the market, it was essential to already anticipate these requirements during the design process. Participating in standardisation and other working groups is key for learning about - and influencing - these developments (see Chapter 5). In addition to information about upcoming standards and regulation, this participation also provided the companies with further knowledge. In many cases, participation in standardisation committees brought them in contact with stakeholders outside the heating industry. This provided insights into these stakeholders' needs, their views on mCHP, and implications for the products' design in order to make the technology acceptable for these external stakeholders and even provide additional value for them (e.g. in the context of electricity grid stability, see Sect. 5.2.1). While much of this information about upcoming requirements and other stakeholders' views was obtained by participating in standardisation, the participation's resource intensiveness sometimes made this unfeasible.
Established companies sometimes relied on external consultants who participated in standardisation committees on their behalf, whereas the smaller companies again largely relied on notified bodies to obtain information before new standards and regulation were made publicly available. Especially for the smaller companies with insufficient resources, this was the only way of accessing advance information about upcoming standards, putting them at a disadvantage compared to established players who could directly participate in the process or hire consultants to do so on their behalf:

Of course, we always got access to this [information about developments in standardisation] a bit later. This is clear. I would say that there have been tips from time to time in which direction this goes or similar things. But this is, as I already said, a process which you have to accompany continuously if you want to be really close to it. And this does not always work when you also have to deal with every-day problems. (translated from German)

4.2.2 Specifying the Product

Following the identification of requirements for mCHP, their implications for the product needed to be specified. This specification of the requirements had far-reaching consequences for mCHP's further development, the product's viability, and thus eventually also the technology's success. A first step in specifying the requirements was 'translating' them into concrete technical terms and including them in the product's specification sheet, which took substantial effort in itself:

We had requirements from the standards but the process [within the appliance], the appliance, the concept must first undergo a risk analysis from which requirement specifications are derived: 'What do the controls look like? Which sensors are required? What is the performance? Which failure models?' (translated from German)

As part of this activity, the established companies3 also faced the question of whether to apply the existing standards and regulation to the technology or whether to attempt influencing the requirements (see Chapter 5 for a description of how they did so):

You have the product and you have the regulations and finally they have to comply, either by changing the product, adapting the product to the regulations or by adapting the regulations and standards to the product.

External Support for Specifying Requirements

Because of the importance and complexity of specifying the requirements, most interviewed companies again called on external support, as they did in identifying the requirements. This support came from (1) notified bodies, (2) external consultants, and (3) using pre-specified components. Again, the smaller start-ups relied on notified bodies' help to understand the contents of relevant standards and regulation. The notified bodies' consulting activities accompanied these players' development of mCHP products and included an important element of explaining the requirements:

We started with this pre-assessment, then the consultancy phase, to assist them in understanding the requirements and the standards. Our consultancy is really focussing on the standards, on the content of the standards.

Although the notified bodies performed such consulting activities, these activities were limited in scope and could not cover the full specification process in order to avoid conflicts of interest when eventually certifying an mCHP appliance.
The notified bodies could not go as far as proposing design solutions or supporting the companies' risk assessments, which they would have to assess at a later stage in the certification process. This made some of the notified bodies' consulting work a 'grey area', as our interviewee at a notified body acknowledged, and they needed to be careful not to exceed their role:

Of course, there is a grey area. (…) We cannot do a risk assessment of an appliance because afterwards we have to assess this risk assessment. That's not allowed, so the consultancy we do is advising them on the requirements in the standards. (…) So, we give them some guidance but we cannot say 'you have to change this'. That's not our role.

Because of these limits to the support that the notified bodies could provide, several companies, including all major actors whom we interviewed, also relied on an independent consultant in the field. Several interviewees named him as the leading expert for standards and regulation for mCHP. This consultant described his focus as "consulting companies during the development of a safety-related concept" (translated from German). He was involved in various ways in the product development of the different companies to support them in implementing the standards and regulation. Sometimes he was involved only at selected points in the companies' NPD processes to address specific issues, e.g. when notified bodies pointed out problems during the certification process that the companies could not address without help. In other cases, his input into technology development was much more substantial:

My development work in many of these projects is writing the safety-related specifications of the requirements. There you write in detail: 'Which standards, which features and how are they implemented?' In some cases, I also write the safety-related concept for the software. (…) My consulting goes up to successful certification. (translated from German)

In addition to hiring external experts for support in the specification process, companies could also rely on pre-specified components from suppliers for certain safety-critical parts of the appliance. Especially smaller companies made use of this option. This allowed them to meet key requirements from standards and regulation without spending scarce resources on their own developments and specifications:

There are certain safety devices. This is, for example, the automatic firing device which we do NOT develop ourselves. This is a purchased part from companies like [company names] which have been established in that area for years. These developments cost a lot of money because they include building failsafe controls and software. They are inspected by a notified body and we then rely on ready-made products. We cannot afford to develop such things ourselves. (translated from German)

4.2.3 Evaluating Conformity to Regulation and Standards

In order to make their final products conform to the regulation and standards, companies also needed to evaluate this conformity at different stages in the development process. Below, we outline what we learned about (1) the initial evaluation at the outset of their development projects, and (2) the review procedures throughout the development process.
Initial Evaluation of Regulation and Standards for mCHP

Especially the established companies, with their high awareness of regulation and standardisation and their professional approach to managing the topic, already addressed standards and regulation as an issue in their initial appraisal of mCHP technology's potential. When making the business case for mCHP and deciding whether to invest in its development, an analysis of the degree to which standards and regulation would support or hinder the technology was essential:

A certification capability analysis, doing this is a standard procedure. Is this product even capable of being certified at all? Are there any hurdles from a standard or regulatory point of view? This is something one does very early. (translated from German)

Such evaluations often considered not only regulation and standards that were directly relevant for certification but could also be wider in scope. The following example shows how important such analyses can be: One interviewed company first assessed the technology's potential in 2000, when it concluded that the regulation for feeding electricity into the electricity grid was unfavourable, only allowing an insufficient return on investment for buyers of mCHP appliances. Because of this insight, the company decided not to invest in developing mCHP technology at that point in time. The company then re-evaluated mCHP technology in 2004. At that time, the requirements had changed and it was deemed feasible to manage remaining issues during the NPD process so that regulation and standards would no longer hinder mCHP when the technology would be ready for market entry. Following this assessment, the company initiated its development activities.

Evaluating Conformity Throughout the NPD Process

Following the decision to initiate the NPD process for mCHP, most interviewees stressed the need to assess regularly whether the developed solutions were in line with requirements from regulation and standards. At most interviewed companies, this was incorporated into the project management tools used to manage mCHP's development, e.g. by including the topic in the progress evaluation at regular milestones or in the companies' stage-gate processes. Doing so was seen as a way to prevent the duplication of effort that would have been caused by not addressing the issue throughout the process and then having to adapt the product in the late stages of development to make it acceptable for certification and market introduction. In several instances, the ongoing evaluations of conformity throughout the NPD process were also advised by the notified bodies and the independent consultant mentioned in Sect. 4.2.2. Especially the smaller players relied on the advice of notified bodies to identify areas that they needed to address before their products were ready for the certification process, as the following quotes from interviews with a start-up and a notified body show:

We definitely tried to develop the first prototype in 2004 in a standard-compliant way. We also collaborated with a test laboratory which supported us in a consulting manner but we did not really try to get the CE-mark yet for this prototype because it was clear that we still would need fundamental revisions. (translated from German)

And after that [the initial pre-assessment meeting] we dig into the technology itself and we check for what the risks are and where some parts of the system do not meet the standards, so the safety - this is purely focussing on safety.
And then what follows is very often a kind of consultancy phase where they are further developing the system. So they say 'we have this safety concept' (…) and then we say 'OK, it does fit for 90% and this 10% does not fit'.

4.2.4 Degrees of Freedom for mCHP's Technological Development

A final theme related to managing standards and regulation in mCHP's development that recurred in our interviews was the degrees of freedom that the requirements left for developing innovative solutions. As we outlined in Sect. 3.2.2, not following standards carries substantial additional effort for the NPD process. Although "undertaking this effort" can "sometimes [be] worthwhile if one has corresponding cost savings" (translated from German), it became clear during our interviews that companies rarely did so in developing mCHP. Usually, standards were perceived as leaving sufficient freedom to develop the technology, and notified bodies were flexible in interpreting them, as the following quotes show:

Standards usually leave the latitude to get equivalent solutions accepted - this is often the case. (translated from German)

[Name of notified body] in this context paid attention to the content of the standards and not the wording of the standards. So the content - safety - was more important than narrowly [following the standard word-for-word]. Our engineers enjoyed the product-oriented interpretation of standards. (translated from German)

Despite this generally positive view on standards and regulation across all interviewees, we did observe some disagreement on two aspects related to how they should best be handled in the NPD process to provide optimal freedom for the innovation. This disagreement concerned (1) dealing with missing standards, and (2) the timing of involving standards in the NPD process.

Handling Missing Standards in the NPD Process

As outlined in Chapter 3, some important standards for mCHP were missing when the industry started the technology's development and key requirements were therefore unknown at the outset of mCHP's development. Some of the interviewed companies saw the resulting uncertainty as a major problem for the whole NPD process. They therefore focused their efforts (see Chapter 5) on creating certainty as quickly as possible by engaging in standard development. However, other companies valued this situation as an additional degree of freedom for the engineers in developing the technology. They took this opportunity to experiment with new approaches to product safety, which they later contributed to the standardisation process:

Interviewee 1: To the contrary, we could shape the standards very well based on our experience and the freedoms which we had [when the standard was still missing]. Especially not being regulated, overregulated and restrained too much in the beginning gave us much space to develop our safety concepts and develop ideas that we might not have had if there had been a relatively fixed standardisation frame. And this was very positive. At this point, we started using HAZOP analysis (…) a very interesting tool which we got to know in the USA and then brought to Germany (…). And this is now also anchored in the standard. (…) And this has helped us a lot to be certain that we are on a good way with this new technology.

Interviewee 2: In collaborating with the Americans (…) - they had a different safety philosophy.
(…) And with the standard as we have it now, there is on one hand clearly the European strategy of prevention but through the risk analysis we now have a bit more free space. (translated from German)

Timing of Handling Standards and Regulation in the NPD Process

A second aspect related to freedom for product development where the views diverged was the question of at what stage in the development to start addressing questions related to standards and regulation. In particular, one interviewee stressed that doing so too early would restrict the ability to develop novel solutions, and that standards only became helpful at a later stage in the process when the prototype mCHP appliances were transformed into production models:

He [the manager of the development process] attached great importance at this point to avoid restricting the innovation through standards. (…)

In contrast to this strong view, all other interviewees advocated addressing standardisation and regulation early in the development process, as demonstrated by the very early first assessment of requirements outlined in Sect. 4.2.3 and shown by the following exemplary quotes:

Interviewee: It's really important that with your first step this pre-assessment [involving the notified body] takes place in a very early stage of the development.

Interviewer: So, is there already a prototype or even before that?

Interviewee: Even before that is better. But in practice, I think, half of the cases, they already have a prototype.

The interviewees who favoured this approach of addressing standards early reasoned that this avoided duplicate effort in developing the technology. According to this reasoning, the limitations in freedom for innovation imposed by standards only restrict the development of solutions that are not suitable for certification and therefore would need to be replaced by other approaches at later stages anyway (or require changing the standards). This is also reflected in the experience of one interviewee whose start-up encountered substantial rework in its early technology development projects because of not considering standards and regulation early enough and changed its development approach based on this experience.

CHAPTER 5

Industry-Level Collaboration in mCHP Standardisation and Regulation

Abstract This chapter provides in-depth insights into the extensive collaboration across multiple actors in the European heating industry during micro Combined Heat and Power's (mCHP) development. Actors in the industry cooperated both in developing mCHP technology and in the related standardisation/regulation processes. The chapter outlines the role of non-company actors (e.g. industry associations) and the industry's approach to intellectual property rights (IPRs) in facilitating this cooperation.
This chapter gives a detailed account of the particularly dynamic and contentious processes of standardising and regulating access to the electricity grid and requirements for energy efficiency labels. These examples show how innovators can jointly create conditions that support their innovation, even if major stakeholders (including government) oppose the technology. The examples also show how innovators can handle important policy and societal issues.

Keywords Cross-company collaboration · European Commission · Energy efficiency policy · Electricity grid access · Intellectual property rights · Co-opetition

In addition to the internal activities described in Chapter 4, the actors in the industry also reached outside their companies as part of managing standards and regulation for mCHP. This resulted in extensive collaboration between actors in the industry. In Sect. 5.1 we provide an overview of these activities, outlining aspects like the venues where this collaboration took place, the involved actors, the topics of cooperation, and how intellectual property rights (IPRs) were considered in this context. In Sect. 5.2 we then describe how standards and regulation for mCHP evolved as a result of this collaboration and the input of other stakeholders, based on two examples that were central to the case.

5.1 Collaboration Across Actors in the Industry

Having identified standards as an important issue for the development of mCHP, the actors in the industry also recognised that successfully bringing mCHP to market would be very difficult if companies tried to do so without collaboration in the industry. For example, the conflicts which we describe in Sect. 5.2 would have been extremely difficult to resolve by any company from the industry on its own. This awareness resulted in extensive collaboration within the industry, both to develop the technology and its market, and to pursue standardisation and regulation-related activities together. This collaboration took place in a number of formal and informal settings with different aims and varying involved parties, many of which engaged in multiple collaborations with others. Table 5.1 provides an overview of the most important collaborations that were mentioned in our interviews. We outline these collaborative efforts in more detail below. We first consider the initiatives which were specifically initiated for mCHP and included aspects related to technology development, but also standardisation and market development for the technology (Sect. 5.1.1, the four rows at the top of Table 5.1). We then outline the efforts in already established forums (concentrating on industry associations) which focussed much more on standardisation and regulation instead of technology development (Sect. 5.1.2, the two rows at the bottom of Table 5.1). These efforts led to some interesting 'group dynamics' between actors in the industry which we outline in Sect. 5.1.3. Finally, such collaboration also raises the question of how the involved actors handled intellectual property. We take a closer look at the approach to this topic in Sect. 5.1.4.

5.1.1 Collaborating in Technology Development

Collaborations to develop mCHP technology began already in the early stages of development, before the engagement in standardisation started, and took place in settings that were specifically established for mCHP.
Throughout our interviews, many instances of collaborating with suppliers and others to develop components were mentioned. Three of these technology development collaborations stand out because of their links to market development, standardisation, and regulation: (1) a collaboration between a Japanese fuel cell manufacturer and a major established German OEM; (2) a German industry forum for domestic fuel cell applications and two associated field trial projects for mCHP appliances; and (3) a collaboration between several parties to develop Stirling-based mCHP technology.

In the first example, a Japanese manufacturer of fuel-cell-based mCHP appliances brought its extensive knowledge of the technology into the partnership. While this manufacturer produces entire mCHP appliances in Japan (where the technology has already reached widespread diffusion), it partnered with a German appliance manufacturer because of its limited knowledge of both European market requirements and European regulation and standards for mCHP. In this partnership, the Japanese company supplies the fuel cell components which are integrated into the appliance by the German appliance manufacturer, who has also been responsible for questions related to standards and regulation.

In the second case, the German industry forum ('Initiative Brennstoffzelle', IBZ) brought together a large number of mCHP appliance manufacturers and other stakeholders, including academic research institutes, utility operators, industry associations, and a German government body in charge of promoting fuel cell technology ('Nationale Organisation Wasserstoff- und Brennstoffzellentechnologie', NOW). Its aims included information exchanges between actors and raising awareness for the technology, but also developing technical specifications and political lobbying for the technology (see also Initiative Brennstoffzelle, 2017). The IBZ also had links with two large field trial projects ('Callux' and 'ene.field') which aimed to gain experience with the technology and to test prototypes in the field, but also linked to standardisation and regulation. The field trials relied on standards (e.g. for communication between the involved appliances), and produced findings that fed into further standardisation efforts later on.

The third major collaboration in the case aimed to develop Stirling-based mCHP technology. It involved the major appliance manufacturers which pursued the technology (although some of them stopped their engagement before bringing Stirling-based mCHP appliances to the market, see Sect. 2.2.2). This collaboration took place in the early stages of development, as the following quote shows:

In the beginning, meaning before our actual product introduction phase, we developed this Stirling engine together with competitors, mainly with two competitors from the European industry. And then at some point we separated, so these common meetings eventually did not take place anymore. (translated from German)
In addition to the appliance manufacturers, a manufacturer of Stirling engines played a key role in the collaborative development of Stirling-based mCHP appliances, being "very deeply involved in that process, from the very first contact with [name of one OEM] right through to them producing and certifying their first model". In this context, the manufacturer not only developed the Stirling engine as an individual component but also was involved in integrating it into the appliances. This collaboration between the appliance manufacturers and the manufacturer of Stirling engines culminated in the appliance manufacturers jointly buying the Stirling manufacturer together with an external investor when the original owner (a large utility firm) decided to leave the mCHP appliance business.

One important motivation for this close cooperation between competitors was increasing the speed at which economies of scale could be reached for mCHP technology. The collaboration allowed them to standardise new components that were not shared with other products, such as the Stirling engine component or control electronics, across manufacturers. In addition, considerations about creating the market and being able to manage standards and regulation were further reasons for this collaboration. An interviewee at the company that initiated this collaboration explained why they decided to share their innovation with others, rather than protect it through patents and licenses:

We were also active at that time to enlarge the circle of companies coming with micro CHP. So, we invited competitors because we thought it would be good that, when you have to create a new market for a new kind of product - if it is only the product of [company name] then it would be very much like the regulations had to be tailor made for [company name], for one company. And that was not the issue if it was for a sector. So, we collaborated with these different companies - also in lobbying on the regulations.

This sentiment of needing to collaborate in order to jointly develop the technology and the environment in which it is placed was also echoed by other interviewees, as the following quote shows:

If I had tried to distinguish myself from a competitor in this way and I wanted […] to prevent him from implementing his technology - that would be absolutely counterproductive. The market first has to develop. The market for mCHP is not developed yet. It is a small plant and it needs to be watered well for it to start growing. (translated from German)

Based on these initial technology development efforts with their links to standardisation, the industry also engaged in established standardisation bodies and industry associations to further coordinate its activities in standardisation and regulation processes, as detailed in Sect. 5.1.2.

5.1.2 Collaborating in Standardisation and Regulation

In addition to the technology-focused collaborations outlined in Sect. 5.1.1, which also affected standardisation and regulation to varying degrees, there were a number of collaborative efforts directly concerning standardisation and regulation. They took place in different forums, such as the IBZ; the national and European industry associations1; and standardisation committees, which were only "one part of the network surrounding this technology" (translated from German).
While there also was collaboration in the standardisation committees, it is particularly interesting to consider how collaborating in already established industry associations supported the industry's standardisation activities and provided the actors with access to regulatory processes. Especially the established appliance manufacturers engaged in the mCHP working groups at the industry associations, but some smaller players were members as well. By using the opportunities that these working groups provided, the industry was better able to cooperate in pursuing standardisation and regulation for mCHP beyond what would have been possible by only engaging in committees. Below, we outline how they used their membership in these associations in the context of both (1) standardisation and (2) regulation processes.

Industry Associations in the Standardisation Context

Several interviewees reported that the actors in the industry used the associations to develop a common position which they could then pursue in standardisation committees, making them a venue to jointly prepare standardisation activities. For this reason, the companies were often represented by the same people in standardisation committees and the industry associations' working groups:

It is often the case that there is an overlap of around 70% in people, who are on one hand active in standardisation topics and on the other hand in topics related to the associations. Yes, I would say that between 50% and 70% of these people are identical. (translated from German)

In order to facilitate this process, a representative of the European heating industry's associations participated in many relevant standardisation committees as an observer without voting rights. This allowed him to identify potential areas of conflict and facilitate compromises between the association's members in these areas. He also saw it as part of his role to ensure that the interests of smaller companies in the industry, who were not directly represented in standardisation committees, were also taken into account in these agreements. In instances when these interests were at threat in the committees, he intervened in the discussions. The following excerpt from an interview sums up this role:

Interviewee: In the expert group, where the standard is being drawn up, only experts are present. This means that everyone has the same weight and everyone may speak or not speak - whatever they want.

This role of the industry associations was mostly appreciated by the interviewed companies, although a few clashes on minor topics with the association's representative were mentioned by one interviewee. This may also have been related to the representative working for both the German national and the European industry associations, making it sometimes unclear for actors from other countries on whose behalf he was speaking. In addition to these activities related to facilitating compromise and finding common positions for standardisation, the associations played one more role in standardisation for mCHP. Their staff also attended standardisation committees on topics which did not warrant the manufacturers' participation but were nevertheless relevant for mCHP, and reported back on progress in these committees. In some (mainly electrotechnical) areas of standardisation that were important for mCHP, this collaboration went even further than only agreeing on common positions for standardisation.
In technological fields where actors in the industry sometimes lacked the necessary expertise and direct participation in standardisation would have been too resource intensive, they hired an external consultant through an industry association to act on their behalf in standardisation committees2:

There is an international standardisation committee where a strong electrotechnical aspect was included. There, we are not directly involved, but only through a consultant who we have mandated, together with our competitors, to represent our interests there. Doing this, with meetings in Tokyo and I don't know where else, is of course very resource intensive. This is why Mr [name of the consultant] is there. And Mr [name of the consultant] is paid for not by us as [company name] but by us as industry to represent our interests in international standardisation. (translated from German)

An additional reason for choosing the external consultant, rather than a member of the association's working group, to represent the entire industry was his neutrality resulting from having no links to a particular company:

I was approached whether I could represent these bundled interests. It was also clearly said that it is better if a neutral non-producer of appliances does this instead of an appliance manufacturer. (translated from German)

Industry Associations in the Regulation Context

While engaging in the industry associations was (partly) complementary to directly participating in standardisation committees, it played a much more central role for the manufacturers in gaining access to regulatory processes. This access was needed in particular when developing a calculation method for energy efficiency (see Sect. 5.2.2). With the exception of one appliance manufacturer which is part of a larger conglomerate that operates its own substantial lobbying presence at the EU level, none of the actors in the industry would have had much clout in policy making on their own.3 While the European Commission and other policy makers could be accessed by individual companies at industry roundtables and similar consultations about new regulation, the existing contacts of the industry associations helped to get more direct access:

I think first they [the industry associations] know the way, they are close to the process, so they know what happens, they have the contacts already and so this is how this usually works indeed. […] I must say, I have also been to - sometimes the European Commission themselves are organising a kind of round table meeting where you can register yourself. I have also been to that meeting but then there were 25 people in too small a room, and no individual talks.

In such instances, when members of the industry got access to policy making through the channels of the industry associations, they did so after a common position had been determined between the members of the associations' working groups. They were then speaking on behalf of the entire group, also reflecting the reasoning for collaboration quoted in Sect. 5.1.1:

The first time I was there [at the European Commission], that was through EHI - also with other people - and representing EHI. I've also been there later when EHI and COGEN Europe joined forces. I was there on behalf of and also together with people of EHI and COGEN Europe.
So the general secretary of EHI was there, a colleague of [name] was there, […] the general secretary or director of COGEN Europe was there together with someone who was responsible for micro CHP and I was there.

In particular, the interviewee who initiated much of the collaboration in the industry, and who was also described by others as the leading force behind many of the common activities, was chosen to represent the industry in this manner, together with staff of the associations (and, in some cases, additional external experts who were jointly hired by the industry).

'Group Dynamics' in the Industry Resulting from the Collaboration

All interviewed parties who were involved in the collaborative efforts outlined above described them as very trusting. This trust was built throughout all of these efforts (i.e. technology cooperation, standardisation activities and collaboration in consortia and industry associations). The following quote from our interview with an academic engineering researcher, who participated in the process without commercial stakes and therefore played a more neutral role, sums up this sentiment:

The nice thing about standardisation is that one tries there to work together and not against each other. This means that the idea of competition is secondary in a standardisation committee once the door closes. Evidently, everyone represents the interests of their company. This is clear. Nevertheless, one knows 'okay, one somehow has to enter compromises', otherwise nothing comes out and one eventually wants to have something on the table. This is similar to conducting a common research project where it is clear that one enters the whole thing as partners and tries to do something together. And this is the same in standardisation, at least in the micro CHP area, where - according to my experience - there are fewer conflicts and diverging positions. Instead, the industry is saying - especially at such a new technology - 'okay, we pull together and we want to advance our niche products and our not yet established technology'. (translated from German)

This was sometimes also described as resulting in strong 'group dynamics' where all involved actors know each other very well and it may be difficult for outsiders to join these efforts.

Some interviewees also saw these collaborations not only as a way to facilitate mCHP's development but also to fend off demands for requirements in the standards which would have been problematic for the technology. For example, one interviewee mentioned NGOs who participated in standardisation committees and who tried to raise the minimum levels for safety and exhaust emissions in the standards to such a high level that the industry would not have been able to produce mCHP appliances at a price point with sufficient market demand.

A final purpose of these collaborations was strengthening mCHP's position in the competition with other technologies, such as heat pumps. The following excerpt from an interview illustrates this:

This means that we need to show the competition, which has competing products, for example heat pumps, that our technology is a good one. And then, once our technology - micro CHP - is established and has reached a certain market penetration, we can start competing against each other once again. (translated from German)

Particularly one interviewee, who was leading many of the efforts to cooperate to promote mCHP, stressed repeatedly that the aim of these efforts was to achieve fair treatment for mCHP vis-à-vis other technologies, whose backers he accused of using unfair practices in some instances to give these technologies an advantage over mCHP or to disadvantage mCHP unfairly. Many of the activities outlined in Sect. 5.2 were driven by this motivation, for which the following quotes are exemplary:

We don't need a bonus, we only need a fair treatment. And the advantage shouldn't come and isn't from the standard, but the advantage is from the real world and the standard should reflect the real world in a fair way.

I had the suspicion that they wanted to get a privileged position of, for instance, electrical heat pumps by pushing micro CHP down.

Industry Actors Not Supporting mCHP

Despite these observations of broad collaboration in the heating industry to drive mCHP forward, this did not concern the entire industry. One major appliance manufacturer with little involvement in mCHP technology was critical about these efforts. Representatives of this company participated in standardisation committees and working groups at the industry associations in order to prevent what they saw as the formulation of rules which would give mCHP an unfair advantage over other technologies. An interviewee working for this company relayed the opposite narrative to that of the supporters of mCHP, claiming that their activities were geared towards giving mCHP unfair advantages over other technologies:

I am not a friend of the manner in which one tried this [Stirling-based] appliance with the corresponding label - because all of this no longer has anything to do with physics. This is just about marketing. And in this place - I know we also have to sell our products - but we as [company name] still try it in a reasonably fair way and this is not fair anymore. (translated from German)

The interviewee voiced his admiration for what he saw as one company with particularly strong interests in the technology pulling an entire industry on their side. He claimed to also speak on behalf of other companies that were sceptical about the rest of the industry's efforts but which were too small to effectively participate in the activities related to standardisation and regulation. This difference in viewpoints about mCHP technology and the cooperation in the industry then led to major conflicts during the development of standards and regulation (see Sect. 5.2.2).

The Role of Intellectual Property in the Industry's Collaboration

Based on our literature review, we expected IPRs to play an important role in the collaboration between different actors in developing mCHP. In particular, we assumed that they would be important in standardisation for mCHP. We therefore specifically asked interviewees how they had dealt with IPR as part of their NPD and standardisation activities.

Protecting Intellectual Property Related to mCHP Technology

The interviews show that IPR was indeed an issue that the companies considered and that they aimed to protect their innovations where possible. Based on these observations, the interviewed companies can be divided into (1) two companies which considered IPR an important strategic issue and (2) a larger group where IPR was dealt with as a lower-level issue.

Two of the interviewed smaller start-ups stressed that it had been essential for them to think about IPR strategically while building their business.
One of them was initially launched with the aim of building entire mCHP appliances but later focused on supplying advanced fuel cells to others in the industry. In this role, keeping the IPR of the fuel cell designs and either producing them on behalf of the customers or licensing the designs was key to the company's business model. The other company in this group also carefully considered how to best use IPR protection to support their business, as the following quote shows:

We talked about the GSE board, the burner control and the essential air sensor where we place great importance on having the [intellectual] property ourselves. We therefore have patents. We are interested in the Hot BOP, Hot Balance of Plant, we wanted the stack ourselves. There we wanted to have ownership. In this area, in coatings, in compositions and the burner itself, we have patents. We want to be the owner of key parts. But otherwise - and this is part of our strategy, also to keep costs down in this area - we developed the relevant parts together with our suppliers. We have often done this and then afterwards made the part available to our competitors or other actors in the market. (translated from German)

The larger part of the interviewed companies, including the large established players, treated the IPR issue in a more matter-of-fact way. They saw the topic as one that needed to be taken into account when managing mCHP's development but did not portray it as a topic with strategic relevance similar to how this was seen by the first group. The following quote illustrates this approach:

In some parts we built [intellectual property] ourselves and applied [for patents] ourselves. And we naturally conducted patent searches. This is even more important, to make sure that you do not introduce something as a product which you may not introduce, quasi conducting a patent violation with the product. This is something which belongs to a product development process by default. The patent search about what one wants to introduce, what one wants to develop. This is an item in the product development process. (translated from German)

(Not) Using IPRs in Standardisation for mCHP

While interviewees recognised the importance of IPR in developing mCHP in general, they did not consider the topic relevant for standardisation. Indeed, when asked about how IPR issues were addressed in the standardisation process, interviewees saw no link whatsoever between the two topics and sometimes were even surprised that such a link was suggested. They claimed that practices such as declaring patents as standard-essential and basing standards on an individual party's IP had not been used in the mCHP context and were even unheard of in the European heating industry, as the following excerpt from an interview shows:

Interviewee 1: There was no such thing [attempts to place IP in standards] here, no.

Interviewer: Okay, this means that this is not common in your industry?

Interviewee 2: No. In any case not in the context of standards. Of course, obviously one tries to protect one's intellectual property, maybe also if one sees that one can trigger something at the competitor. But especially in the fuel cell area and standardisation, or CHP and standardisation, this was not a big topic. (translated from German)

Beyond this, the interviewees even considered bringing IPR issues into the standardisation debate counterproductive and contradictory to the purpose of standardisation. They shared an approach to standardisation which strove to write standards that support all companies in designing their own mCHP appliances, rather than applying solutions that were covered by one party's IPRs. Interviewees also argued that it would not be in their own long-term interest to place their IP in the standard, thereby limiting other companies' options in developing their technological approaches for mCHP, because this would weaken the development and eventual chances of market acceptance of the technology as a whole.

The reason why such an approach was seen as weakening the innovation was that it might have caused other actors in the industry to lose interest in mCHP. Following on from the reasoning for collaborating across the industry (see Sect. 5.1.1), this was seen as a potential problem because it would have left the company alone in promoting the technology, e.g. in discussions with government, which would have been unlikely to succeed:

It would have been an extreme risk to weaken the technology in this way and suddenly being left as the only vendor, which would definitively not have been constructive. If the entire [German industry association] had not been interested, [company name] could also not have gone to Berlin on its own to accomplish anything there. Because of this, the others, the competitors had to remain interested in the whole thing. (translated from German)

The Overall Impact of IPR on mCHP's Development

Overall, IPRs were considered an important element of managing mCHP's development by the industry. We observed broad consensus among interviewees that protecting their own technological developments was important, also when cooperating with other parties. However, there was equally broad consensus among interviewees that IP had no place in the development of standards for mCHP. The interviewees who spoke on this topic all agreed that including proprietary knowledge in the standard would have been counterproductive and would eventually have resulted in substantial difficulties for the technology's development and eventual success.

Conflicting Interests in Standardisation and Regulation for mCHP

As outlined in Chapter 3, several standards needed to be changed or newly developed in order for mCHP to be sold into the European market with the intended value proposition. On most questions, such as electrical installations in buildings, other players in standardisation committees adopted a constructive approach towards the innovation. With their support, standards were adapted so that they would accommodate mCHP and provide a basis for the technology's safe and efficient operation. However, two areas of standardisation turned out to be controversial because of competing interests by actors from other technological fields: (1) questions related to connecting to the electricity grid and (2) developing a calculation method for mCHP's energy efficiency based on the European Union's requirements for energy labels (part of the product standard EN 50465). In addition, several interviewees identified reuse, recyclability, and reparability (RRR) as a new field of standardisation with relevance for mCHP where they expect potential conflicts of interest in the future:

According to a new mandate, RRR - meaning reuse, recyclability and reparability requirements - must also be included in the standard. What exactly this contains is now under discussion.
(translated from German)

Because the questions related to the electricity grid and the efficiency calculation method are recurring themes across our interviews and many interviewees stressed their importance for the development of mCHP, we focus our discussion of standards' and regulation's evolution on these two areas.

Standards and Regulation for Connecting to the Electricity Grid

As outlined earlier, being able to connect mCHP appliances to the electricity grid and feeding the generated power into the grid were key to implementing the innovation's value proposition. This key importance made the topic one of the focus areas in the standardisation and regulation efforts. During this engagement, the actors from the heating industry encountered a range of stakeholders from other industries, most importantly the electricity grid operators, who were used to a different approach to standardisation:

There are various actors, typically settled in the energy business, or around the energy business. And for them [the actors from the heating industry], these are quite uncharted waters although meanwhile they have been acting more and more confidently. (translated from German)

Feeding into the electricity grid is usually shaped monopolistically because utility companies typically used to have monopoly structures. (…) They were not used to developing standards in the same way as, for example, in the gas or (…) household appliance industries, where notified bodies, manufacturers and users sit together in standardisation committees and are looking for compromises. For feeding into the grid, this is different. It has been a long process and we have not yet arrived at the goal that there is equal representation in committees. (…)

In the remainder of this chapter, we describe the industry's efforts in dealing with the opposing interests in this field. We start by outlining the environment in which the industry found itself and the conflicting and converging interests resulting from this. We then explain how the stakeholders interacted and how the conflicts between them were eventually resolved.

Background: Electricity Grid in Transition

At the time when mCHP's developers worked on the topic, several parallel developments occurred, such as the spread of renewable energy sources and the exit from nuclear power in Germany. These developments had (sometimes substantial) implications for the electricity grid. Traditionally, the electricity grid was built around a small number of large power stations, meaning that electricity production could be relatively easily balanced with demand for electricity. With the new developments, a large number of small electricity-producing appliances (including mCHP appliances, solar panels, wind turbines, etc.) started appearing in the grid, which resulted in substantial changes to the grid's structure:

Around 20 years ago, we had maybe, say, 1000 generators in Germany and now we have 20 million or 15 million or some number in that range, if you include all the solar panels that feed into the grid. (translated from German)

Furthermore, the spread of renewable energy also means that parts of the electricity production can no longer be adjusted to demand fluctuations because it depends on factors like sunshine and wind. This made mCHP one of several factors in a major transition, which challenged grid operators' and utility firms' traditional approach to managing the electricity grid. According to most interviewees, mCHP was therefore met with certain degrees of resistance by some of these actors, while others participated in partnerships to develop the technology (see below).

If you look at what the four big [German utility companies] have lost in market capitalisation through shutting down nuclear power stations, through the increase in photovoltaic, through the prioritisation of renewables before [other energy sources], and the fact that for economic reasons the most modern gas-fired power stations are not operated anymore today, even though they would produce the lowest emissions out of the fossil [fuels]. And then, politics exerted such a massive influence on the industry that they [grid operators and utility companies] fight helping any other sector tooth and nail. They have so many problems of their own (…) and that's why they resist helping even the smallest CHP or even developing understanding. If you want to see it positively, it is slowly beginning [to change], but much too slowly. (translated from German)

Given this background, some interviewees reported that the established players in the grid field sometimes made demands based on their experience with large power stations, which the interviewees interpreted as aiming to hinder mCHP's development by imposing unreasonable requirements in the standards and regulation:

Interviewee 1: In standardisation and regulation on the electrical side (…), they crack nuts with sledgehammers and we often came across attempts to prevent technology through standardisation.

Interviewee 2: They really put obstacles in one's way. I am thinking of one example regarding how the amount of electricity that is produced by an mCHP appliance should be measured and where the measurement device should be placed. Traditionally, it is clear that, if you build large equipment, then you have some (…) measurement device (…) and if this is not directly on the turbine it is in an electrical cabinet far away. And one tried to transfer this concept to a small electricity generator [even though there] you do not have a separate electrical cabinet (…) but everything that is needed for the operation has to be built into the appliance, into one enclosure. (translated from German)

On the grid connection side we had the occasional discussion because the utility companies inherently have a different view on the technology. I remember a discussion (…) where the utility companies (…) wanted to draw upon a standard to enable communication between the fuel cell and a higher-level control unit to create a 'virtual power station' (…) and where we said 'wow, that's totally excessive, they want to impose a standard on us that can communicate with a network control centre and that would ask way too much from our appliance'. (translated from German)

Converging and Competing Interests with Other Technologies

As the development of mCHP coincided with other technologies' emergence, the actors in the heating industry were not only confronted with the traditional grid operators and utility firms, but also with the interests of these other technologies' developers. Most importantly, the needs of renewable energy sources (which also enjoyed some political support) were a major factor in the development of standards and regulation for grid access. In some cases, the heating industry's interests converged with those of these other actors.
For example, mCHP was seen as a potential technical solution to ensure grid stability in the future when renewable energy would make up a large part of the electricity generating capacity, thus providing complementary value:

The idea is basically that one can smoothen the volatile energy production of renewables a little bit with a large number of mCHP appliances in the grid. Because when you look at the energy generation curve of an mCHP appliance, this is quite complementary to a photovoltaic module. (…) When the sun is shining heavily, I don't need heat and the mCHP appliance does nothing. When a lot of heat is required - usually in the winter, in the evening, or in the morning - then I have electricity generation from the mCHP appliance. (translated from German)

The interests of mCHP's developers and other technologies' proponents conflicted on other questions. One example that was mentioned in several interviews is the requirements for dealing with frequency changes outlined in Sect. 3.4.2, which pose a substantial hurdle for Stirling-based mCHP appliances. The introduction of this requirement was driven by the expectation that large sudden changes in wind or sunshine would make the grid frequency volatile when many renewable energy electricity generators are connected.

Activities in Standardisation and Regulation for the Electricity Grid

Given this background of an electricity grid in transition and other technologies developing in parallel, the interviewed actors aimed to influence standards and regulation so that workable solutions for mCHP could be found. Our interviewee at the European industry association summarised this goal as follows:

To be able to feed the one kilowatt [of an mCHP appliance] into the grid, the supporting conditions must be right. There must not only be supporting conditions for 500 kilowatt [appliances]. This is like traffic on the roads. If you have lots of racing cars on the roads, they of course have other interests, they drive at different speeds than (…) a small car in between which can only drive 100 instead of 250. (…) And therefore, a compromise has to be found where we say 'he may also use the road, but he may only drive in the right hand lane'. (translated from German)

To reach this goal, the actors engaged in standardisation and regulation pursued various activities to increase the impact of this engagement. These activities can be grouped as (1) forming coalitions, (2) establishing evidence about the technology and informing other stakeholders about its needs, and (3) adapting mCHP technology itself where necessary and possible.

The first group of activities (coalition forming) was in many cases based on the collaboration forums outlined in Sect. 5.1. For example, the 'Callux' project that was undertaken as part of the IBZ in Germany included several energy suppliers as collaboration partners. Especially smaller, local energy suppliers sometimes saw mCHP as an opportunity to shift the balance of power generation away from centralised power stations owned by their large competitors. Gas suppliers who "were interested in selling gas" (translated from German) were also supportive of mCHP in questions related to grid access. However, being able to form these coalitions and operate these field trials was not always easy, as the following quote shows:

It already started with having to find people who conducted field trials together with us. Of course, these appliances then also have to be approved, that is clear. But these were people who, let's say, accommodated us with a certain goodwill and then maybe also interpreted grid connection rules generously and did not make it impossible from the start. Because they knew that these were small appliances with initially small quantities. (…) [And these people] also saw new business opportunities in the technology [although] it took a while for the utility companies to recognise these opportunities. (translated from German)

Such collaborations across stakeholders also were directly linked to informing stakeholders, making them aware of the technology, and establishing evidence about it. This second group of activities was necessary because many actors involved in developing requirements for grid access were unaware of the technological characteristics of mCHP:

But they [the grid operators] of course have their large power stations and rotating machines with their inertia in mind. Feeding into the grid with a small appliance - the needs that exist there were not in their focus. And there we needed to vehemently [argue] on the European level when the Network Code Requirements for Generators [were developed]. (…) And it was not easy to convince these circles that mCHP behaves in a special way. When you switch an mCHP appliance off, you need to restart the thermic process. But they assume that the rotating machine runs anyway or that a solar panel can immediately feed electricity into the grid when you switch the semiconductor. (…) A fuel cell needs to be restarted. This takes minutes and they want to switch it on immediately at the right frequency. These are basic principles which are difficult to convey. (translated from German)

Neither did we have experience with the electricity generating sector, nor did the electricity generating sector know anything about these small generating appliances. And only once the electricity producers realised that these small generating appliances must be taken seriously, that they are not a temporary phenomenon (…) [but] actually enter the market, then one also reacted accordingly in that group, respectively started trying to establish the rules. (translated from German)

To support this information of other stakeholders, the developers of mCHP relied on evidence created by field trials, such as the 'Callux' project mentioned above where "a few hundred fuel cell mCHP appliances were brought into the field" (translated from German) and their effects on the electricity grid were measured on behalf of utility companies by an independent research institute.

Finally, the developers of mCHP also adapted their technology to make it more acceptable to other stakeholders in the electricity grid. Some interviewees stressed that the interaction with these stakeholders helped their understanding of the issues faced by the electricity grid operators and mCHP's possible positive and negative impacts. This increased awareness allowed them to accommodate these other stakeholders' concerns and sometimes even work out technical solutions jointly with these actors, as the following quote shows:

For example, there was the need to cover wider ranges of grid frequency and different technical solutions existed for this [issue]. And the one which we preferred and also finally implemented (…) [was based on] considerations which we worked out together with the grid operators and the power station operators in this VDE [Verband der Elektrotechnik, German association for electrotechnology] committee.
(…) [And there would have been other solutions which] would not have been so accommodating for us, which would have been much more expensive. (translated from German)

Limited Influence on Standards and Regulation for the Electricity Grid

Despite the efforts to influence the development of standards and regulation, the actors in the heating industry remained relatively small players in the field with limited influence on the process. Some interviewees acknowledged this as a problem for dealing with issues related to these requirements. Consequently, the actors in the heating industry were not entirely successful in reaching their goals. The rules for dealing with grid frequency changes mentioned in Sect. 3.4.2 are an example where the heating industry's limited influence on the process made it unable to prevent a change in the standard that was against their interests. These rules were introduced during the development of mCHP, replacing earlier requirements that were easy to fulfil for Stirling-based mCHP appliances:

The requirements for connecting to the grid. (…) There was a standard and we complied with that standard and then what was previously required was now forbidden or the other way around. So there, the standards are not fixed situations, they are temporary.

Technical solutions to design Stirling-based mCHP appliances in line with these changed requirements have a high impact on the devices' costs and efficiency. At the time when we conducted our interviews, the companies using Stirling engines relied on provisions in the grid access regulation which exempt new, innovative technologies from certain requirements and allow them to continue operating according to the old requirements (see European Commission, 2016, secs. 66-70). However, these temporary provisions only apply until a limited number of appliances using the new technology have been connected to the electricity grid. Consequently, the actors relying on Stirling technology were still in the process of working on this issue at the time of our interviews:

We've been fighting that [the new requirements] for two years and there's hopefully a special dispensation within that.

Conflicts Surrounding the Calculation Method for mCHP Appliances' Energy Labels

A second major topic of standardisation was the calculation method for assessing mCHP appliances' energy efficiency, which underlies the efficiency label that each appliance needs to carry according to the ErP and Energy Labelling Directives (see European Parliament & Council of the European Union, 2009). The topic was particularly important and contentious due to its relevance for European legislation and the European Commission's involvement in the standardisation process. The calculation method is part of the product standard (EN 50465, the latest version of which was published in 2015), which did not yet exist when the technology's development started (see Sect. 3.1). This standard "specifies the requirements and test methods for the construction, safety, fitness of purpose, rational use of energy and the marking of micro Combined Heat and Power appliance[s]" (CENELEC, 2017). While development of most of the standard's elements proceeded relatively smoothly, there were major conflicts regarding the energy efficiency calculation methods:

Within standardisation, the range of opinions about calculating the efficiency was, in my opinion, the biggest problem. (translated from German)

These conflicts related to two fundamental issues: (1) There was disagreement about the formula which underlies the calculation and for which different options were being discussed. (2) The way in which the European Commission was involved in the process was seen by most actors as exceeding the role that it should play in developing harmonised standards (also see the explanation of harmonised standards in Sect. 3.2.1).

Actors from the heating industry were the major players when developing EN 50465. Because this standard only covers mCHP appliances, parties who had high stakes in the technology (mostly overlapping with the actors covered in Sect. 5.1) dominated the relevant committees where it was developed. In addition, European consumer and environment protection NGOs were involved although, according to the interviewees' depiction of the process, these actors did not have a major impact on the outcomes. The European Commission was not represented in the committees but nevertheless influenced the standard's development in a major way. Below, we first outline the conflicting positions regarding the calculation method. We then summarise the conflicts between the heating industry and the European Commission during the development process. The chapter then ends by describing the process's outcome and giving an outlook to future developments expected by our interviewees.

Conflicting Positions Regarding the Calculation Method

Deriving a calculation method to assess mCHP appliances' efficiency was not trivial because this formula needed to incorporate both the heat and electricity produced by mCHP appliances and at the same time give a result which would allow consumers a meaningful comparison with other heating technologies:

And now you have an additional problem: How do you grade this new segment, which delivers two forms of energy as an output, among the existing heat generators and energy products? (translated from German)

Consequently, there were different views regarding how the electricity produced by an mCHP appliance should be rewarded when assessing the appliance's energy efficiency:

There were companies who wanted to have this calculated in specific ways. We even had three different methods before we finally agreed on one in a compromise [within the industry association]. (translated from German)

Most of the industry agreed on this compromise, which was developed in standardisation committees and industry associations' working groups. However, a minority of industry actors, including one major appliance manufacturer (also see Sect. 5.1.3), was in favour of a different method, which was also supported by the European Commission. These different preferences for calculation methods resulted from different views on how to consider aspects like the produced electricity, reduced needs for electricity from (relatively inefficient) power stations, and where to draw the boundary of the system for the purpose of assessing its efficiency:

There were long discussions about where the system boundary of the appliance lies. How do you actually calculate the efficiency of such a Stirling product? Do you include the efficiency of the boiler or do you only take the efficiency [of the Stirling engine]? And finally, we brought ourselves to write into the standard that the entire system is considered.
(translated from German)

The parties disagreeing with the industry compromise argued that using this formula is inappropriate for assessing an mCHP appliance and that the underlying approach would only be suitable for assessing the energy efficiency of an entire building but not of a standalone heating appliance. They accused other actors in the industry of pushing this formula through in order to make their appliances look more energy efficient than they actually are, stating that "this no longer has anything to do with physics [and] is all about marketing" (translated from German).

On the other hand, interviewees supporting the industry compromise argued that this was the best way to reflect physical realities and ensure that the results enable consumers to compare mCHP to other technologies. They claimed that the alternative formula did not sufficiently factor in the electricity produced by mCHP appliances in addition to heat.

And this [the alternative formula] was in such a way that electrical heat pumps were clearly treated preferentially in the resulting efficiency values, compared to micro CHP. And then we intervened and said: 'The micro CHP appliance cannot be nearly put on the same level as classic condensing boilers. And a heat pump has an efficiency value up to a third higher compared to the micro CHP, this is not reasonable.' That a heat pump has a higher efficiency than a classic condensing boiler is clear. (…) This is absolutely OK. But how does an mCHP appliance fit into this? (translated from German)

This view of the alternative calculation method being wrong was also supported by an interviewee at an academic engineering research institute based at a German university:

One of the colleagues made a nice example calculation. (…) Same primary energy in, (…) identical amount of useful energy out. And then he (…) applied the EU calculation for the labels. And for a heat pump-based solution he got an A++ and for the micro CHP-based solution, he got an A+. This means that the methodology of the European Commission is wrong insofar as two different technologies generate the same useful energy with the same input of primary energy but get different labels. And there, the working group said: 'No, that cannot be the case, this is physically wrong. And it is also confusing the customer.' (translated from German)
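To make the disputed mechanics more concrete, the following minimal sketch illustrates how the choice of system boundary and of the credit given to exported electricity can, on its own, move one and the same appliance across label bands. All numbers, names, and the simplified formula are our own illustrative assumptions; they reproduce neither the EN 50465 method nor that of Commission Communication 2014/C 207/02.

```python
# Illustrative sketch only: hypothetical numbers and a deliberately
# simplified efficiency formula, not the actual EN 50465 or Commission
# Communication 2014/C 207/02 methodology.

FUEL_INPUT = 100.0       # primary energy fed into the appliance (kWh, assumed)
HEAT_OUT = 70.0          # useful heat delivered to the dwelling (kWh, assumed)
ELECTRICITY_OUT = 20.0   # electricity exported to the grid (kWh, assumed)

# Assumed credit factor: kWh of primary energy a central power station
# would burn to generate one kWh of electricity (displaced generation).
PRIMARY_ENERGY_FACTOR = 2.5

# Narrow system boundary: exported electricity counts only at face value.
eta_narrow = (HEAT_OUT + ELECTRICITY_OUT) / FUEL_INPUT

# Wider system boundary: exported electricity is credited with the primary
# energy it displaces elsewhere in the electricity system.
eta_wide = (HEAT_OUT + ELECTRICITY_OUT * PRIMARY_ENERGY_FACTOR) / FUEL_INPUT

print(f"Narrow boundary: {eta_narrow:.0%}")  # 90%
print(f"Wide boundary:   {eta_wide:.0%}")    # 120%
```

Under label thresholds defined on such percentages, the same appliance could thus fall into different bands depending purely on the accounting convention, which is the crux of the dispute described above.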
Interactions Between the Industry and the European Commission

Throughout the standardisation process (including before a formal standardisation request was made to CEN/CENELEC), the European Commission promoted a calculation method that was, in most interviewees' eyes, unjustified. Together with the 'group dynamics' outlined in Sect. 5.1.3, this caused strong resistance among mCHP's developers and also made the topic highly emotional for some of them. In their view, the European Commission had overstepped its role by supporting this contentious formula:

There was a high level of frustration within the standardisation committee because the engineers simply said: 'Hey, we are (…) calculating in the physically correct way. And if anybody can calculate correctly, that is us, the engineers, and not the civil servants.' (translated from German)

It is not so easy for them [the European Commission] to see what their real role is. You see a kind of imperialistic approach. On the one hand, the Commission wants to regulate technical details and technical content which is not according to the New Approach and where they don't see their role. Are they a stakeholder? Are they forcing something? So, I think (…) there's a problem area here.

Initially in the process, the industry faced unclear guidelines from the European Commission:

At certain moments in that standardisation group we saw [that] we seem to be shooting at a moving target. There was from the side of the Commission and the consultant, which the Commission had appointed, a kind of calculation model which became more complex and more complex and more complex (…). And then, at a certain moment, the Commission changed their ideas about the calculation procedure and then it seemed that we were (…) shooting at a moving target. So then, in the standardisation committee, we said 'we will put this on ice for a certain time, first see where the Commission will move and where the negotiations between the associations and the Commission will move'. And then, finally, we had an agreement with the Commission that we would propose a standard and then we would discuss it. And then we went ahead and took the initiative again.

As the ambiguity of the European Commission's position on this issue eventually ended, it became obvious that the European Commission favoured a different calculation method than the compromise supported by most of the industry (see above). Given this situation, the members of the standardisation committee nominated two representatives (one of our interviewees at an appliance manufacturer and the consultant who accompanied the industry) to negotiate directly with the European Commission (also see Sect. 5.1.2). Both of them described these negotiations as very difficult because the process lacked transparency from their perspective. They had the impression that other parties' lobbying and political interests not directly connected to mCHP influenced the European Commission's position to a large extent, but it was not transparent to them who was behind this influence and which arguments were used by these parties. Nevertheless, there was a clearly visible bias in favour of renewable energies at the expense of mCHP:

I have seen many drafts [from the European Commission] of these requirements over the last five years. And in one draft, they had an explanatory memorandum. And there (…) they said: 'Micro CHP is an efficient technology but it is not renewable, it is not solar or wind power (…). And therefore (…) it should come to a result which is lower than renewable.' And then they said 'renewable is defined if the efficiency is at minimum 115%, so the efficiency should be below 115%'. Completely not logical, and it shows indeed that they were very biased. And finally, at some point there was a comment from the European Commission - of course only verbally and not in writing - 'we don't need to discuss this anymore, micro CHP ought not be better than A+, full stop.' (translated from German)

The European Commission's support for its preferred calculation method was documented in Commission Communication 2014/C 207/02 (European Commission, 2014). This communication took many actors in the industry by surprise:

I saw the latest draft which was going to the parliament and then I saw these words and I thought: 'Oh, what now? Now they're choosing already although we had the agreement that we would first have a discussion and then be able to exchange arguments etcetera. And now they have done it this way.' So at first instance, it was very disappointing.
Around ten months after publishing the Commission Communication with its preferred calculation method, the European Commission released a formal Standardisation Request on the matter (European Commission, 2015). This request asked industry, among other things, to develop a standard that specifies energy efficiency calculation methods for mCHP. Several interviewees pointed out that this request was released with a tight deadline and "came when the standard was almost finished". Furthermore, they mentioned that the earlier events implied that the standard was expected to use the European Commission's calculation method as a foregone conclusion.

While this conflict with the European Commission was ongoing, there were also discussions within the industry about the best way to proceed. As part of this process, some actors sought expert advice about the legal implications of a Commission Communication, which revealed that it was only an opinion of the European Commission and was not legally binding. This encouraged these actors to keep pursuing the compromise found earlier within the industry. However, other actors were in favour of proceeding with the European Commission's formula, as the following exemplary quote from our interview with a representative of the industry association shows:

There were definitely also different opinions [in the industry]. And some also gave up and said: 'No, this is not the way it goes. I am sticking my head in the sand, just do whatever you want.' Again, the standard is [based on industry] consensus and all [industry actors] committed to it. But especially for the efficiency calculation [where] the Commission had different ideas, there also were actors [who said] 'it doesn't matter what our opinion on this is, the Commission wants this and then we do this'. And there were others who said: 'No, we don't do it this way. We got an answer from the Commission which (…) in our opinion is completely wrong. We want it our way.' (…) We had two meetings with heated discussions about which method is more correct. (translated from German)

Much of this discussion revolved around whether to prioritise the standard's harmonisation or a physically correct calculation of mCHP's energy efficiency. One interviewee highlighted that it was foreseeable that the European Commission would not harmonise a standard with the formula favoured by most of the industry. According to this position, which was shared at the time by the British national mirror committee, it could not be in the interest of anyone in the industry to develop a standard that would eventually not be harmonised by the European Commission. Other interviewees did not see this as a major problem. Because the energy labels are based on self-declaration, appliance manufacturers would be able to choose which formula to base their labels on, even if the standard was not harmonised. In this scenario, it was uncertain whether and how the national market surveillance authorities would react, but the majority of the industry considered the risk of negative consequences small. They expected that applying a standard developed by an ESO would give them good arguments in a hypothetical investigation by the market surveillance authorities, even if the standard was not aligned with the European Commission's position. They therefore saw a calculation method that was, in their eyes, fairer as more important than the standard being harmonised under the ErP and Energy Labelling Directives. In addition, they expected that the product standard could still be harmonised under the Gas Appliance Directive due to its gas safety aspects.

At the end of these discussions, the supporters of the European Commission's calculation method were outnumbered and the committee put a draft standard to a vote at CEN/CENELEC. This draft included the energy efficiency formula supported by the majority of the industry and was transparent about the issues in the standardisation process. This caused the European Commission to intervene in CEN/CENELEC's voting process, although this intervention was eventually unsuccessful:

Finally, we have written a foreword to the standard to make completely transparent - for the people who had to vote on the standard - that the standard was deviating from the Commission Communication, which is an opinion of the Commission without binding effect. And the standard was finally accepted but the Commission several times tried to intervene and really obstruct the voting process. So, they first asked - (…) As joint working groups, as technical committee, we had decided 'we are going for a formal vote'. We sent it to CENELEC for formal vote and first the Commission asked CENELEC not to send it for formal vote but CENELEC did. Then, they asked CENELEC to stop formal vote, even in the middle of the process. And finally, in the last step, after the vote was positive, there was a ratification by the technical board of CENELEC. And they tried to influence the technical board not to ratify the standard. So, in fact, three times they really tried to obstruct the standard and they didn't succeed.

There also was the story that CEN/CENELEC published the standard and the EU Commission reprimanded CEN 'how can you publish something that has nothing to do with our mandate?' Whereupon the top level of CEN got into the game and said: 'Just a moment, slowly. You may give us a mandate but we are completely independent about how we write our standards and what we write in them. Because it is us who have the technical expertise, and you don't.' There was a quite interesting exchange of letters between the Commission and CEN where the top level of CEN distanced itself and said (…): 'We are writing technical standards. And if our engineers consider this standard correct from a technical point of view, then it is correct from a technical point of view.' (translated from German)

Outcome of the Conflicts and Outlook to Future Developments

Looking back at the process, most interviewees remained critical of the European Commission's role. However, two interviewees in particular also reflected critically on the industry's activities. One of these interviewees questioned whether it was wise to accept the European Commission's standardisation request, given the development of the process up to that point:

The problem is that one does not (…) occupy oneself sufficiently with the mandates [before accepting them]. The mandate goes to CEN/CENELEC, goes to the working groups [and] the committees, there is an appeal period when one can say 'this is nonsense, we are not interested'. This did not happen in this case and then, at some point, [the mandate] is accepted. And then it is on the table and one is stuffed. (translated from German)

The second interviewee concluded that involving additional stakeholders in the process might have been helpful in addressing the issues with the European Commission:

This clearly is something that did not go well.
Maybe we would have had to involve the national governments much more strongly? Because the Commission is not deciding on its own and it is always easy to say 'yes, the European Commission (…), that circle does not appreciate our course of action'. But if we had activated the country representatives of different countries at an early stage, for example [commissioner] Oettinger in our case… (translated from German)

Nevertheless, EN 50465 was eventually published including the calculation method favoured by most of the industry. As foreseen during the standardisation process, this meant that the European Commission did not harmonise the standard under the ErP and Energy Labelling Directives. When the standard was published, the UK mirror committee included a national foreword in the British version of the standard, in line with its earlier position, advising against the use of the calculation method included in the standard:

The UK committee advises, for the calculation of ηs and ηson of cogeneration space heaters the methodology described in the Commission Communication, reference 2014/C 207/02 should be used. This method is robust, scientific, provides a fair comparison across all technologies and is aligned with the established methods for assessing and comparing cogeneration performance. (BSI, 2015)

(Clearly, this foreword to the standard was written before the Brexit referendum…) Nevertheless, some interviewees also found this remarkable:

Interviewee: As I already said, as often in Europe, the Brits think that they need to do their own thing. And they do this thoroughly.

Interviewer: (Laughing) Only this time with the unique situation that they share an opinion with the European Commission.

Interviewee: Yes, in this case they agree with the European Commission. This really is - one should make a big poster of this and put it up on the wall somewhere. Happens seldom enough… (translated from German)

Despite this standard not being harmonised, most companies in the industry have so far been using it in calculating their appliances' energy efficiency for the self-declared energy label, without negative consequences from the national market surveillance authorities:

The Ecodesign and the Energy Labelling Legislation have started to be applicable from September 2015, so that is two years ago now. And I think (…) the vast majority of companies have been using the standard and also the calculation method of the standard. I know of one exception which is using the Commission Communication and the regulation and which really, I think, is using it to their own advantage.

In our final interview in August 2017, we also learned that the European Commission has in the meantime started its regular review of the directives in question. As part of this review, the Commission also ordered an assessment of the directives' impacts:

Interviewee: Currently, the process of review of the legislation is starting, or has started some months ago. The European Commission has already announced that to us as CHP representatives. Now the regulation is written but then you have new chances. They had their attempt to change physics but they were open for review and improvements of the legislation during that official review, which was announced that it should be ready, I think, five years after adoption of the regulation. (…) At least, they have ordered a consultant to make an evaluation. (…)

Interviewer: And then, potentially it could be harmonised after the review changes this legislation?

Interviewee: Yes, perhaps. Or, perhaps, the legislation will even be changed more so that the other standards have to follow anyhow.

Depending on this assessment's outcome, the European Commission may therefore change its position on the calculation formula. In addition, fundamental changes to the directives are also possible if the review finds that they need to be improved. This outcome would possibly also require the industry to develop entirely different standards. The future development of this issue is therefore still open.

Interviewees' Evaluation of the mCHP Case

In Chapter 3, we presented the various ways in which standards and regulation influenced the development of mCHP, which triggered the extensive company- and industry-level activities depicted in Chapter 4 and Sects. 5.1 and 5.2. We also asked every interviewee to evaluate the effects of these activities on mCHP and the relevant standards and regulation. Because all mCHP appliances must fulfil the same set of requirements, these evaluations were similar across manufacturers despite the sometimes different approaches to managing standards and regulation.

Most applicable standards and regulation were already available and supported mCHP's development before the industry actors initiated their activities (see Chapter 3). These activities therefore mainly focussed on topics where standards and regulation were still missing and/or not supporting mCHP. Because of these efforts, standards and regulation now support mCHP technology in three additional ways: (1) The new requirements for access to the electricity grid provide a workable solution to connect mCHP appliances to the grid. (2) The new product standard defines requirements for safety, energy efficiency, and related topics for mCHP, which support conformity assessment of the technology. (3) Despite the conflicts with the European Commission detailed in Sect. 5.2, the energy efficiency calculation methods in the product standard support the industry in fulfilling the requirements of the European directives related to energy efficiency.

Furthermore, some interviewees also mentioned supporting effects of these new standards beyond now being able to fulfil regulatory requirements. They also help the companies in the field to communicate the technology's benefits to their customers and provide confidence to adopters of the innovation.

These changes in standards and regulation enabled the industry to market mCHP appliances in Europe. All interviewees at major manufacturers stressed the importance of aligning their company-level management with the industry-level work to reach this outcome, estimating that they might not have been able to sell mCHP products in the European market at all without the activities at both levels:

Interviewer: Can you already estimate whether this collaboration between new product development and standardisation was successful or not? Or is the result still pending?

Interviewee 1: This is positive.

Interviewee 2: Yes.

Interviewee 1: It definitely is. We can say that we most likely would not have a product if one had not intensively worked on this. This is definitely very, very crucial, also specifically the network connection requirements (…). It could absolutely have been the case, if we had not worked on this topic and had not been interested in it, that we would not have had a product at some stage. Or a product that does not conform to these standards.

Interviewee 2: This could have happened, yes.
Interviewer: OK, this means that the worst-case scenario would be that you could not sell it?

Interviewee 2: Yes, exactly.

Interviewee 1: Exactly, exactly. (translated from German)

Consequently, apart from one company which favoured other technologies in its product portfolio, the interviewed major appliance manufacturers have mCHP appliances in the market at the time of writing. While some companies exited the development of Stirling-based mCHP appliances (see Sect. 2.2.2), this was due to reasons unrelated to standards and regulation.

Although the smaller companies did not participate in the industry-level activities to develop standards and regulation, they still benefitted from the changes that resulted from these activities. While the interviewed start-ups did not yet produce mCHP appliances at full commercial scale when we interviewed them, they were confident that their products could be marketed under the partly revised requirements from standards and regulation:

Last year, we reached a milestone which was important for us. We received the CE batch approval for the system. This means that we can install the system in limited numbers across Europe. The next step, which we are taking in parallel to the system's market introduction, is that we seek the full CE mark. This means that we can build an unlimited number of appliances but on the other hand we may then change nothing on the appliance [without having to re-certify it]. (translated from German)

As I already said, we are now at the stage of commercialising [where] it [the appliance] goes to the first customers and the first field tests [and] once it goes out, everything will be 100 per cent adapted to the standards. (translated from German)

In line with these results, the interviewees generally were very happy with the outcomes of their activities but had reservations about the steps needed to get there, as the following quote summarises:

I'm happy with the results [of the process], I'm not often happy with what we needed to do to get these results. Sometimes, it was really tough and time-consuming, and involving a lot of lobby work and convincing people etcetera. It would have been nice if that had been more efficient.

CHAPTER 6

Building a Grounded Theory on Managing Standards in Innovation Contexts

Abstract This chapter combines the patterns identified in the earlier chapters into a generalisable grounded theory and identifies the relationships between them. This grounded theory is based on a framework of three nested levels: (1) the company, which is part of (2) an industry, which is in turn part of (3) its wider context.
The theory focuses on supporting factors and activities needed on the company- and industry levels to facilitate effective management of standards and regulation in innovation contexts. This chapter also shows how the three levels are linked together. The grounded theory explains how innovators can deal with demands and influences from the wider context by engaging in industry-level collaboration.
Keywords Innovation management · New product development · Cross-company collaboration · Co-opetition · Managing standards and regulation · Managing societal needs
The empirical insights presented in the earlier chapters provide an excellent base for building theory on our research question and allow us to address the theoretical gaps outlined in Sect. 1.2.4. To do so, we develop a process model of this management. This model includes the activities needed to successfully introduce an innovative product to a regulated market where standards are needed, and a number of underlying structural elements that enable these activities. As we already expected in Sect. 1.2, these activities occur at different levels. Figure 6.1 shows our general framework of the three nested relevant levels. In this framework, (1) a wider context encompasses (2) several industries, which in turn are made up of (3) a number of companies. Concerted activities on all three levels are necessary to align innovation and standards/regulation as achieved in the mCHP case (see Sect. 5.3). Our further theorising fills in the blanks of Fig. 6.1 by looking closely at each level and identifying the factors which eventually lead to such an outcome. We build detailed theory about the company level (Sect. 6.1) and the industry level (Sect. 6.2). Finally, we consider how all of this relates to developments and the associated processes that occur in the wider context of an innovation (Sect. 6.3). Following these theory-building efforts, we end the chapter with some final thoughts on our findings (Sect. 6.4).
Managing Standards and Regulation on the Company Level
The strong implications that the different types of standards, and by extension also regulation, have for innovations make them key issues to manage in NPD contexts. We first consider the company level. In general, the observations from our case show that a number of supporting factors need to be in place as necessary conditions to form the foundation for managing standards and regulation successfully (shown in the bottom half of Fig. 6.2 and discussed in Sect. 6.1.1). Building on this, companies need to carry out several activities to ensure that an innovation fulfils all standard- and regulation-related requirements (shown in the top half of Fig. 6.2 and discussed in Sect. 6.1.2). These activities ultimately determine the degrees of freedom for the innovation, as we show in Sect. 6.1.3.
Supporting Factors: Necessary Conditions for Managing Standards and Regulation
We observed a number of recurring themes across the interviews (see the data presented in Sect. 4.1), which form the foundation for companies' activities. Having such a foundation in place appears to be a precondition for successfully addressing standards and regulation. On the most fundamental level, companies exhibit three key characteristics (awareness of standards' and regulation's importance, expertise, and availability of financial resources).
These three key attributes drive the degree to which the company adopts a strategic orientation, which in turn influences the organisational support structure for managing standards and regulation. We provide more detail about each of these aspects below.
Key Characteristics: Awareness, Expertise, Financial Resources
Awareness of standards' and regulation's importance is the first key characteristic of companies that our data shows to be relevant. Our interviews demonstrate that companies differ substantially on this aspect (also see Sect. 4.1.1 and the characterisations of companies in Table 4.1). Some degree of awareness about this topic's importance is likely to emerge in any company by the time that the product enters conformity assessment. However, our case shows substantial variation in how aware companies actually are. Some firms' awareness was limited to the regulation-related aspects and only emerged once they addressed their product's certification. Companies at the other end of the scale showed deep knowledge of standards and regulation. Expertise is a second key characteristic. Relevant knowledge can be grouped into two main categories: (1) operational and (2) strategic. The operational expertise covers technical knowhow (which companies that are able to develop an innovation are likely to have) and topics related to effective participation in standardisation committees and industry collaborations (e.g. negotiating skills). We observe much more variance in companies' strategic expertise (e.g. abilities related to coordinating standardisation activities for different technologies in the company's portfolio, and contributing to the industry-level processes discussed in Sect. 6.2). This strategic expertise is needed for assessing the effects of standards and regulation and effectively managing the company's input in standardisation. While much of this expertise is company-internal, all interviewed companies also relied on external expertise in areas where their knowledge was insufficient (in our case mainly coming from consultants and notified bodies). This observation suggests that being aware of the limitations of one's own expertise and seeking outside help where needed is important for successfully managing standards and regulation for innovation. It also suggests that a company's ability to manage these topics relies to some extent on the industry structure, and in particular the supporting institutions (see Sect. 6.2.1), which can substantially facilitate the company's work. Providing support for the company is hence one key pathway through which the industry level impacts the company level. Financial resources are the final key element underlying the management of standards and regulation that we identify in our data. Here, we see a contrast between established companies and the smaller start-ups whose limited financial resources constrain their ability to participate in standardisation and lobby for changes in regulation.
Strategic Orientation and Organisational Support Structure
The three key characteristics of companies identified above determine to what degree they are able to orient their standards- and regulation-related work strategically. Our observations in Sect. 4.1.3 suggest that companies with little awareness, expertise, and financial resources tend to take a less strategic and more ad hoc approach. We therefore infer that these elements' presence is a necessary condition for a strong(er) strategic orientation.
This manifests itself in aspects of the management, such as the degree to which standardisation activities are coordinated across the company and planned in advance. This strategic orientation also forms the basis for an organisational support structure, which helps ensure that the innovation is systematically developed in line with requirements. An important function of this structure is assigning responsibilities both for operational management of standards and regulation, and for coordinating these activities across the company. In all interviewed companies, responsibility for operational tasks was tightly linked to the engineers developing a product. This appears to be good practice because of these tasks' technical nature and the close relationships between technical development work and standardisation/regulation efforts (see Sects. 4.2 and 6.1.2). In companies with a strong strategic orientation, the organisational support structures also encompass clearly defined responsibilities for tasks related to planning and coordinating standardisation/regulation-related work.¹ In our case companies, these roles were attached to various organisational functions, including the new product development, regulatory affairs, and certification departments. Our data does not indicate that any of these affiliations is preferable per se, as long as the staff fulfilling this role are sufficiently influential within the company. Furthermore, companies can strengthen this organisational support by investing additional resources in full-time staff and tools supporting their work, such as the database tracking expertise related to specific standardisation/regulation topics that we observed at one company.
Activities for Managing Standards and Regulation
The factors discussed in Sect. 6.1.1 provide the basis for effectively managing standards and regulation in the innovation. The activities (depicted in the top half of Fig. 6.2) can be grouped into (1) core activities that are directly related to new product development (identifying regulation and standards, specifying the product, evaluating conformity to requirements) and (2) activities related to engaging at the industry level.
Core Activities: Identifying Regulation and Standards, Specifying the Product, Evaluating Conformity to Requirements
Based on the data outlined in Sects. 4.2.1, 4.2.2, and 4.2.3, we identify three core activities for managing standards and regulation which are part of the new product development process: (1) identifying applicable regulations and standards, (2) specifying the product, and (3) evaluating the product's conformity to the requirements. Carrying out all three in some form is necessary to ensure that the final product conforms to all applicable requirements. Nevertheless, we observe variation in how exactly firms pursue these tasks. This has implications for the degrees of freedom in new product development, as we outline below. Before firms can take any action towards addressing standards and regulation in their NPD process, they need to know which requirements apply to their product, making identifying regulation and standards an essential task. Our observations suggest that companies should do so at a very early stage, possibly already when deciding whether to invest in a new technology. This enables them to shape their product in a way which meets the requirements from the outset.
Firms need to continue identifying requirements throughout the NPD process because rules are subject to change, and because not all technological aspects where standards/regulation apply may be foreseeable at the outset of the NPD process. We also observe that not all companies are able to do so on their own, due to lacking awareness and expertise. This may result in an ad hoc approach to the topic and missing organisational support. However, such firms can draw on supporting institutions from the industry (see Sect. 6.2.1) to 'outsource' this activity and rely on third parties (e.g. consultants, notified bodies, and, in the case of component suppliers, clients) to identify relevant requirements on their behalf. At the same time, our case shows that doing so has two drawbacks for the subsequent activities: (1) In some situations, companies may have discretion over which standards and regulation they apply to their innovation, e.g. when multiple directives could be applied. To take advantage of this opportunity, they need to be aware of potential alternatives and evaluate these alternatives' consequences. (2) Relying on an external party to stay informed about changing requirements may delay the point in time when companies learn about new developments. Consequently, all companies in our case that followed a strategic approach to managing standards and regulation emphasised the importance of identifying regulations and standards for the subsequent activities. The requirements identified in this first step are fed into the process of specifying the product, which includes 'translating' the contents of standards and regulation into concrete requirements, and designing the product in such a way that it meets these requirements. The case shows that especially safety-related requirements often take a very high level of expertise to implement. Consequently, all interviewed companies relied to some degree on external expertise in this step and also used standardised components which had been proven beforehand to meet the requirements. This activity therefore, again, benefits from a well-developed industry structure with supporting institutions (see Sect. 6.2.1). Finally, companies need to evaluate their product's conformity to the requirements as part of the NPD process. Our case shows that firms should ideally carry out a first evaluation when deciding whether to invest in a technology and then repeat the assessment at regular intervals throughout the process. An initial appraisal of the innovation's potential to conform to the requirements enables companies to estimate the needed effort to address the topic in the NPD process and, in the worst case, prevents them from investing in technologies that cannot be marketed due to the barriers discussed in Sect. 3.5. A firm's ability to effectively conduct such an initial appraisal relies on its strategic orientation, because of the understanding needed to assess factors such as the likely impact of standards and regulation and their potential future developments. Once companies invest in developing a technology for which standards and regulation are relevant, the case suggests that they should regularly review its conformity, potentially with the help of industry-level supporting institutions if the company's own expertise is insufficient. Doing so throughout the process reduces the need for duplicating development work if the results are fed back into the product specification process in a timely manner.
Engaging in Standardisation and Regulation
Engaging in standardisation and regulation is an additional, optional outward-looking activity (see Sect. 4.2.2), which provides the main path for companies to influence their environment. The examples of the smaller start-up manufacturers in our case show that developing a product which is acceptable for the market is possible without directly influencing standards and regulation. However, doing so opens up additional opportunities because it allows companies to contribute to developments on the industry- and wider-context levels and provides them with the additional option of attempting to adapt standards and regulation rather than the innovation when conforming to them is impossible or difficult (see Sect. 3.5). These activities rely heavily on a strong foundation (see Sect. 6.1.1) because they are relatively resource- and knowledge-intensive (both in terms of money and expertise), and also require the company to adopt a strategic outlook on the technology. The hurdles for mCHP's market introduction would most likely have been too high (locking the technology out of the market) if none of the companies had taken the initiative to develop standards and influence regulation. Although this is clearly a benefit of this engagement, actors who did not contribute also benefit to a large extent from the results (see Sect. 5.3). This implies that companies need a high degree of strategic vision and long-term thinking, aiming to develop a 'large pie for everyone' rather than a 'small pie for themselves' (at the risk of 'having no pie at all'), to invest in influencing standards and regulation for a new technology. Such long-term thinking, both within the company and at industry level, is also needed to successfully navigate the dynamic processes related to this topic (see Chapter 5, Sects. 6.2 and 6.3).
Degrees of Freedom for New Product Development
The aspects outlined so far have strong implications for the degrees of freedom for developing a new product. Depending on how they are handled, companies may enjoy a large scope for developing their own solutions or may be somewhat more restricted in key areas. The company in our case that perceived standards mainly as limiting its freedom in developing mCHP (see Sect. 4.2.4) is also the one that was the least invested in the activities outlined above and relied to a very large degree on notified bodies and consultants (also see Table 4.1). Even though the interviewee at this company commended the notified body for its flexible approach in conformity assessment, the company's relatively low level of activity made it more dependent on external parties. This may have contributed to reducing the room to implement its own solutions. The data clearly shows the benefits of taking an active approach towards the tasks outlined above. By doing so, firms can create a substantial amount of 'space' for innovating. In particular, three factors explain how this 'space' can be created: (1) The leeway in identifying regulation and standards (see the discussion earlier in this chapter and Sect. 4.2.1), (2) the open nature of many standards and different ways of demonstrating conformity (see Chapter 3), and (3) the potential to influence standards and regulation (see the discussion above and Sect. 4.2.2 and Chapter 5). Companies in the case who managed the topic strategically combined these factors in various ways (see e.g. the example of bringing new methods for ensuring product safety into the standard in Sect.
4.2.4) in order to develop innovative solutions while ensuring the final product's fit to the requirements. Consequently, all interviewed actors who followed such an approach agreed that they enjoyed a relatively large degree of freedom for developing the innovation while benefitting from the relatively stable basis offered by standards and regulation described in Chapter 3.
Industry-Level Structure and Processes
Following the theoretical analysis of the company-level management in the previous section, we now turn our attention to the industry level. Activities on the industry level are likely to focus on the standards which have the strongest impact on an innovation. In highly regulated markets, these standards are often linked to regulation (see Chapter 3). Figure 6.3 summarises our findings regarding the work at the industry level. Again, we observe a number of underlying factors which contribute to an industry structure that facilitates activities in which standards and regulation are addressed (see the bottom half of Fig. 6.3 and Sect. 6.2.1). These activities are shown in the top half of Fig. 6.3 and discussed in detail in Sect. 6.2.2.
(Fig. 6.3: Industry-level structure and processes for addressing standards and regulation)
Furthermore, developments in the wider context influence the industry-level activities and vice versa, as we show in Sect. 6.2.2 and discuss in more detail in Sect. 6.3.
Key Elements of the Industry Structure
Our case clearly shows that the industry-level activities happen against the background of certain industry structures that may support (as we observed) or hinder the process. While the industry structure obviously consists of many elements, most of which are beyond the scope of this study, the data presented in Sect. 5.1 reveal three fundamental elements: supporting institutions, the approach to IPR, and backing for the innovation among firms (shown at the bottom of Fig. 6.3). These elements explain much of the success that we observe in our case. Below, we elucidate them and show how they contribute to an industry structure that is conducive to addressing standards and regulation for an innovation. We also briefly consider how such an industry structure can emerge.
Fundamental Elements: Supporting Institutions, Approach to IPR, Backing for Innovation
First, throughout our data in Chapters 4 and 5 it becomes apparent how crucial a number of supporting institutions were for all aspects of the case. Their influence extends to company-internal management (as discussed in Sect. 6.1), industry-level collaboration, and attempts to influence standards and regulation. Table 6.1 summarises the supporting institutions which we encountered in the mCHP case and the functions that they fulfilled. The list of institutions and functions in Table 6.1 is specific to our case and therefore unlikely to be exhaustive. For example, it is conceivable that NGOs could support an innovation with social and/or environmental benefits, and contribute to the management of standards and regulation by influencing policy makers and the public debate in the wider context (see Sect. 6.3) in that technology's favour. Although the composition and functions of supporting institutions are case-specific, the presence of such institutions in general is likely to be important in managing the co-evolution of innovation, standards, and regulation. Our case suggests that these supporting institutions' contribution to the process is even larger than the sum of the individual functions listed in Table 6.1.
One reason for this is these institutions' lack of a direct (financial) interest in the technology's success, which lends the industry's claims and actions credibility. In addition to facilitating much of the necessary work on both the company- and industry levels, they can therefore be seen as amplifying the impact of the innovators' own activities. Second, we identify the approach to IPR as core to an industry structure which supports managing standards and regulation effectively. As we show in Sect. 5.1.4, actors in the case placed a high importance on IPR in technology development partnerships. However, they consciously decided to leave the topic out of activities directly related to standards and regulation. While the best way of handling IPR issues may be case-specific, our data shows that an industry needs to ensure that the chosen approach does not discourage others from joining the industry's efforts. Because collaborating in technology development and standardisation/regulation is key to the industry activities (see Sect. 6.2.2), the IPR regime must support them. This means that, on the one hand, all contributors' IP must be protected. On the other hand, no party should be able to use its IP to dominate the cooperation in a way that causes potential developers to refrain from or stop contributing to the technology. In addition, such domination by one party would likely also make the resulting standards unacceptable to other key stakeholders on whose support the innovation depends. Especially when these standards are linked to regulation (see Table 3.3), the approach to IPR must also be acceptable to regulators and other stakeholders. For example, standards which are used to specify essential requirements under the 'New Approach' should not incorporate IP that is subject to licensing. When addressing standards with no link to regulation, approaches to IPR that involve standard-essential patents (as commonly discussed in the literature, see Sects. 1.1 and 7.3.2) may be more acceptable. The case suggests backing for the innovation among firms to be the third key element of the industry structure that determines to what extent the processes for addressing standards and regulation can be effective. Whether the majority of key firms in the industry or only a few players support the innovation influences the extent of industry-internal conflicts, and how the innovation's legitimacy is perceived by outside actors. Furthermore, the degree of backing has ramifications for the 'group dynamics' that we discuss in Sect. 6.2.3.
Emergence of the Industry Structure
The three fundamental elements discussed above make up the parts of the industry structure that are relevant for the processes that we discuss in Sect. 6.2.2. When, as we observed in our case, these attributes are well aligned (i.e. a good network of supporting institutions is available, a fitting approach to IPR is employed, and there is widespread backing among firms), this structure provides a solid foundation for these processes. On the other hand, if some of the elements identified above are missing, this is likely to hinder the industry-level work needed to ensure alignment between the innovation and standards/regulation. In addition, such missing elements may have negative implications for company-level work.
Although our data does not offer detailed insights into how this industry structure has been built over time, it clearly is the result of a long-term development on which the companies were able to draw in the present case. Ultimately, this long-term development is likely to have been driven to a large extent by the individual companies in the industry who have been contributing to setting up supporting institutions, such as industry associations, and establishing an effective approach to IPR. The backing for the technology also requires a long-term commitment, as our case shows. Individual companies can try enlisting their competitors in contributing to establishing these key fundamental elements, but are unlikely to succeed in building them on their own. Furthermore, some elements that can be leveraged in this context (e.g. NGOs as supporting institutions) may also appear without industry actors' direct involvement.
Industry-Level Processes for Facilitating the Innovation
The elements of the industry structure outlined in Sect. 6.2.1 underlie the joint industry-level activities that eventually lead to the changes in standards and regulation needed to support an innovation. In our case, we categorise industry-level activities (see Chapter 5) into three core processes: (1) collaborating in technology development, (2) collaborating in standardisation and regulation, and (3) resolving conflicts. As the case and our further discussion below show, it is essential for achieving the needed changes in standards and regulation that these processes are jointly driven by companies from the industry (unless one innovator is strong enough to 'push them through' alone), and that they are coordinated well in order to deliver the desired results. The findings from Sect. 5.1 suggest that collaborating in technology development both helps actors in the industry to jointly overcome technological challenges in some areas and provides a basis for the further activities. Through their joint engagement in developing an innovation, actors in an industry (1) share a strong interest in the technology's success, (2) develop a common outlook on standardisation and regulation issues, and (3) can more easily address technological issues that arise in the process of developing standards/regulation together. These points also contribute to a tight link between technology development and collaborating in standardisation/regulation. For example, evidence created in technology development cooperation projects was directly used in discussions on standards with other stakeholders in the mCHP development process (see Sect. 5.2.1). Both types of collaboration benefit from a well-developed industry structure (see Sect. 6.2.1). Supporting institutions facilitate the cooperation because they provide already established forums where the work can take place, help coordinate the activities, and provide expertise and access to policy makers. An appropriate approach to IPR ensures that participating in cooperation is viable in terms of protecting one's own input while avoiding that certain actors can dominate the technology's development through their patents. Nevertheless, even when these factors are present, some conflicts may occur. Conflicts are particularly likely if important actors in the industry do not back the innovation (as could be observed in our case, see Sect. 5.1.3). Furthermore, the developments in the wider context about which we theorise in Sect.
6.3 may also contribute to conflicts, as could be observed in our case. This makes resolving conflicts a final key activity on the industry level to ensure that the changes in standards and regulation needed for an innovation can be achieved. Our data also shows the industry structure's importance for this key activity, with supporting institutions playing central roles in helping to resolve such conflicts (see Table 6.1).
Individual Companies' Contribution to Industry-Level Processes
The industry-level processes are chiefly driven by individual companies' contributions. Although the case shows that these processes often last several years and companies need a strategic long-term view to navigate them effectively, their results are much more immediate than building the industry structure outlined earlier. Furthermore, the industry-level processes enable companies to collaborate on those activities that are needed to align the technology, standards, and regulation, which cannot be carried out at company level. Especially for companies which have insufficient clout on their own to drive changes in standards/regulation and engage with the wider context (see Sect. 6.3), contributing to these processes is the key path to influencing developments at the industry- and wider-context levels.
'Group Dynamics' in the Industry
As we observed in Sect. 5.1.3, the industry structure and collaboration processes in the mCHP case resulted in certain 'group dynamics'. In our case, the strong support within the industry and the obstacles to implementing the innovation, which were perceived in common across most involved actors, led mCHP's backers to form a very closely-knit group. They adopted a strong 'us vs. them' mentality when dealing with any parties not supporting the innovation. On the other hand, a lack of support and conflicting perceptions of the technology's environment may result in very contentious 'group dynamics'. Our case shows that such 'group dynamics' cause the involved companies to adopt a common outlook on the technology and what was needed to make it successful. Consequently, in such a setting, few disagreements between firms are likely to occur and the processes for resolving conflicts are mainly needed in dealing with the wider context instead of addressing industry-level issues. This common outlook and 'us vs. them' mentality also enables an industry to speak with one voice when addressing topics in the wider context. However, such a closely-knit group of actors may also have drawbacks. First, it may put the industry in danger of entering a 'groupthink' mode of acting. More importantly, it may impact on how the industry is seen by stakeholders in the wider context. 'Group dynamics', such as the ones observed in the mCHP case, carry the risk that the industry is perceived as a colluding group, which writes its own rules and engages in regulatory capture. Our data does not show whether mCHP's backers were indeed perceived in this manner, but the discussion on how to interpret the industry's own energy efficiency calculation method in the wake of the Volkswagen Diesel scandal (see Sect. 5.2.2) shows that some actors were aware of this risk. Potentially, the credibility given to the technology by some of the supporting institutions (see Sect. 6.2.1) may also counteract this threat, although more research is needed to investigate this.
Despite these possible pitfalls of acting as an overly closely-knit group on the industry level, our case suggests that doing so generally supports the industry-level processes. The benefits of reduced conflicts and 'speaking with one voice' are potentially substantial and supported mCHP's development considerably. The collaborations to develop the technology and in particular the successful handling of the European Commission's intervention in the energy-labelling issue would have been hampered by other possible constellations of actors. Similar benefits are also likely to apply to other cases.
Developments and Associated Processes in the Wider Context
As a final area within the three levels of our framework (see Fig. 6.1), our case shows the importance of developments in the innovation's wider context beyond the industry, and the associated processes of managing them. All our interviewees repeatedly stressed the importance of managing links with interests and actors outside the industry, such as regulators and developers of other technologies. Furthermore, our data reveals the aspects of standardisation related to the wider context to be both the most contentious topics in the mCHP case and the ones demanding the most attention from the innovators (see the introduction to Sect. 5.2). In the mCHP case, we observed three such important developments, which were also intertwined at some points: (1) one related to changes in access to the electricity grid, (2) the trajectories of other innovations that were emerging simultaneously in that space (e.g. renewable energy generation, see Sect. 5.2.1), and (3) events related to political agendas and policy objectives that drove regulators' activities (e.g. reducing CO₂ emissions and promoting renewable energy, see Sect. 5.2.2). In addition, several interviewees expected trends relating to re-use, recyclability and reparability (RRR) to become similarly impactful in the future. Beyond these examples, other types of developments could play similar roles in other cases. For example, both important societal debates² and scientific findings on risks associated with an innovation³ could have substantial implications for a technology's standards and regulation. Overall, these types of trajectories in the wider context are therefore highly relevant elements for theorising as part of the three levels in our framework. Our case offers a clear picture of how these developments interact with the activities on which we focus in this study. While the case does not provide detailed insights into these trajectories themselves, it does offer an excellent basis for theorising about their interactions with standards in an innovation's development. Figure 6.4 shows these interactions and provides a more detailed look at the link between the industry level and the wider context shown in the topmost part of Fig. 6.3. In Sect. 6.3.1, we discuss the relevance of these developments further and shed light on their effects on an innovation's development. We then theorise in Sect. 6.3.2 about strategies that actors in an industry can use to influence developments in the wider context.
Relevance and Effects of Developments in the Wider Context
The types of trajectories outlined above are driven by interests which, in many cases, may not be aligned with the needs of a specific innovation, and can directly lead to new requirements. For example, the data presented in Sect.
5.2.1 shows how designers of renewable energy generation technologies and grid operators drove changes to grid connection standards with which mCHP had to comply. In terms of the standards that innovators may encounter (see Sect. 3.5), such processes in the wider context are by definition always relevant for standards that relate to regulation (which is made by policy makers and other actors who are part of the wider context). However, work in areas with no link to regulation may equally be impacted by the wider context, for example when standards define interfaces to a larger system, such as the electricity grid in the mCHP case. Such external influences can be positive or negative for the innovation, and may therefore ultimately lead to conflicts. This depends on the interests that are at stake. In our case, we identify six relevant types of interest (see Table 6.2 for one example of each from the interactions concerning mCHP's grid access⁴). (1) Innovators have their own interests in how the wider context should develop. (2) These interests may be shared with other actors who have a common interest. (3) Actors may also have complementary interests, which can be supported by developments that are in line with the innovators' own interest. On the other hand, there may be (4) competing interests which aim to achieve an outcome that is incompatible with the innovators' needs. Finally, there may be (5) conflicting interests that collide head-on with the innovators' goals. In addition, there may be (6) indirect interests, which are only indirectly linked to achieving outcomes in the wider context that support the innovation. As the examples in Table 6.2 show, the interests and associated actors that are involved in the industry's wider context are likely to be highly diverse, making the developments that take place there very dynamic. Depending on how these interests are distributed among the actors in the wider context, these developments may be contentious issues. This requires an innovation's supporters to adopt a careful approach, as we outline in the following section.
Influencing Developments in the Wider Context
The kinds of development outlined above are often embedded in major movements, such as the efforts to reduce CO₂ emissions. They may involve many stakeholders with diverse interests from different industries, governments, NGOs, consumers, and other actors. The logics of change in different wider contexts also vary and may not always be completely transparent, as the interaction with the European Commission in our case shows (see Sect. 5.2.2). Consequently, innovators tend to hold relatively little sway over external developments, although the exact extent to which they can influence them is case-specific. For example, the developers of mCHP had a much smaller influence in developing standards for access to the electricity grid than when handling the requirements for energy labelling (see the data in Sect. 5.2). Within the bounds of this influence, innovators can take an active approach to managing these developments as part of the process of resolving conflicts (see Figs. 6.3 and 6.4). Our case exhibits four basic strategies that can be used as part of such an active approach, which we summarise in Table 6.3.⁵ These four strategies are not mutually exclusive. They can be used in parallel, even for influencing one development in the wider context, as the interactions with the developments regarding grid-access standards in our case show.
This reflects the multitude of interests and associated actors involved that we outlined in Sect. 6.3.1. Each of the four strategies has certain prerequisites, which to a large extent relate to the interests of other actors and the structure of the wider context (see Table 6.3). Actors with common or complementary interests can therefore be involved in coalitions, whereas competing and conflicting interests may be addressed by lobbying (if the associated actors are open to discussions) and/or by adapting the technology accordingly. Furthermore, actors with competing and conflicting interests may sometimes be unable to act on these interests. In such cases, persisting with one's own preferences may be an appropriate course of action. Through the consequences named in Table 6.3, the four strategies contribute to the outcome of innovators' attempts to resolve conflicts. Three such results are possible: (1) In the best case, conflicts with actors in the wider context are resolved, leading to the development of standards that are suitable for the innovation (i.e. standards with which the innovation can conform, see Sect. 3.5). In our case, we observed this outcome in many technical areas which were key for grid access, where small generators could eventually be connected to the electricity grid (see Sect. 5.2.1). (2) In addition, suitable standards can be developed after innovators persist with their preferences. In this situation, which we observed in our case on the efficiency calculation issue (see Sect. 5.2.2), latent conflicts with other actors in the wider context may remain. Even though this outcome initially supports the innovation's market introduction, any latent conflicts may re-emerge later on and potentially lead to new problems. For example, in resolving the questions related to the calculation method in our case, it was initially unclear how market surveillance authorities would treat the industry's use of its own standards instead of the European Commission's method and whether this would lead to further issues. (3) Finally, industry actors may also fail to resolve conflicts to their satisfaction and face resulting standards with which the innovation cannot easily conform. As we observe on the issue of grid frequency (see Sect. 5.2.1), this is a likely outcome for issues where there are too few actors in the wider context with whom alliances can be formed and where competing/conflicting interests are too strong. In conclusion, developments in the innovation's wider context are driven by a large variety of actors with diverse interests that may favour an innovation or oppose it. Depending on how these interests are eventually balanced, this context can boost an innovation or pose substantial barriers. Innovators tend to have limited influence on the wider context, which also depends on factors like the interests at stake and the logic according to which changes in a development happen. While avenues for actively influencing these developments are available, their success ultimately depends on the characteristics of the specific development.
Final Thoughts on Our Grounded Theory
In the introduction to this chapter and Fig. 6.1, we claimed that innovators' activities on the company-, industry-, and wider-context levels need to be concerted in order to achieve alignment between an innovation and the applicable standards/regulation. Our discussion shows this to be true.
While an innovation is ultimately driven by the individual companies that develop the technology, any needed changes in standards and regulation require action on the other levels. We already expected the link between the company- and industry levels but also discovered the significance of the wider context. As our theory shows, these links mean that the processes which we study are not linear but highly dynamic. They depend on the input of a large variety of actors, in addition to the companies developing the innovation. These actors may have very different stakes in the innovation and diverse functions to fulfil. These functions include, for example, industry associations providing forums for collaboration and supporting lobbying efforts, governments offering stability for the innovation, or consultants and researchers supplying expertise in key areas. Furthermore, not all actors involved in the process may be in favour of the innovation. This poses some of the most significant challenges for aligning the innovation, standards, and regulation. Beyond this, our findings also mean that aligning the innovation with standards and regulation is not a goal in itself. The mCHP case shows that doing so may often be a necessary condition for introducing a technology into the market. Additionally, the observations in Sect. 6.3 suggest that the function of standards and regulation goes much further. Arguably, standards and regulation fulfil a key function of translating the large trends and needs in a technology's wider context (e.g. reducing CO₂ emissions, building a stable electricity grid) into concrete technical requirements for a product. This means that aligning an innovation with standards equally contributes to aligning the innovation with the demands of key actors in the wider context on whom it ultimately depends for its success. The theory, which we have built based on the evidence from the mCHP case, offers guidance on how this can be achieved. This makes our theory a theory at the core of developing an innovation, going beyond the theory about managing standards that we anticipated building when we initiated this study.
A first contribution of our study therefore lies in the new insights it provides into the effects of standards on innovation (see the discussion in Sect. 7.1). It clearly demonstrates their critical implications and provides new insights into some of the causal mechanisms behind the effects. In order to address them, our study shows that managers need to align the innovation with the relevant standards by adapting the technology, standards, and/or regulation.
Our grounded theory approach revealed that this 'managing', which motivated our interest in the topic, does not only happen on the company level. In addition, processes that happen beyond the company at the industry level and in the wider context turned out to be more important than expected. We can therefore relate these findings to Van de Ven's (2005) concepts of 'running in packs' and 'political savvy'. Furthermore, while our study focuses on the 'managing', it also links to related topics like sociotechnical systems (e.g. Geels, 2004; Smith & Raven, 2012; Smith, Voß, & Grin, 2010), and the functions of standards and regulation in establishing markets (Polanyi, 2001). At the outset of our study, we identified three important gaps in the existing literature (see Sect. 1.2.4) addressing our research question about managing standards, which guide our subsequent discussion: (1) a lack of attention to activities at the firm level, (2) few findings about companies' interactions with the industry level, and (3) limited findings about industry-level dynamics. Our study's detailed findings and open insights allow us to contribute to closing all three gaps. In addition, our study also highlights the importance of dynamics that are associated with the innovation's wider context. In Sect. 7.2, we discuss our theoretical contribution on the company level. Sect. 7.3 addresses the dynamics that affect the industry level and wider context.
Standards' Effects on Innovation
As we show throughout our study, standards have very profound effects on innovation. Our contribution to the literature on these effects is threefold. First, we show the causal mechanisms behind these effects and demonstrate the importance of coherent sets of standards for an innovation (Sect. 7.1.1). Second, we add to existing findings on the circumstances under which standards are likely to have the strongest effects on innovations (Sect. 7.1.2). Finally, we identify the lack of standards as a key source of ambiguity and uncertainty for an innovation (Sect. 7.1.3).
Existing Standards' Effects on Innovation
In Table 1.1, we summarised extant findings on how standards can support and/or hinder innovation. Our study adds to these findings by providing more detailed insights into the causal mechanisms behind the effects already identified by the current literature. In particular, legitimacy and market access (see, e.g. Borraz, 2007; Botzem & Dobusch, 2012; Delemarle, 2017; Tamm Hallström & Boström, 2010) and creating supporting infrastructures (see Teece, 1986, 2006) are key to our study and illustrated in much detail by our case. Furthermore, the mCHP case exemplifies other effects found in extant literature, e.g. standards being an important information source for NPD activities (see, e.g. Allen & Sriram, 2000; Egyedi & Ortt, 2017; Featherston, Ho, Brévignon-Dodin, & O'Sullivan, 2016; Van de Ven, 1993) or their role in specifying testing and performance requirements (see Abraham & Reed, 2002; de Vries & Verhagen, 2016; Swann, 2010). Interestingly, some of the effects outlined in Table 1.1 and Sect. 1.1 were not recognised by the experts in our interviews. For example, the literature (e.g. Kondo, 2000; Tassey, 2000) states that standards limit the available options for innovation. Most interviewees clearly stated that standards as such did not prevent them from any choices that they deemed beneficial for the technology and left considerable degrees of freedom for innovating (see Sects. 4.2.4 and 5.3).
What they did criticise was particular standards posing difficult requirements or reflecting strategic moves by other actors who were attempting to use standards to block the technology (also see Sect. 7.3). This shows that at least some of the effects identified in the literature (both positive and negative) do not apply to all standards per se. Instead, whether a particular standard has positive or negative implications for an innovation depends on that standard's contents. In particular, it depends on whether the innovation can be designed in such a way that it conforms to the standard (see Sect. 3.5) and how easily this can be done. While each distinct standard that touches on an innovation is relevant on its own in this context, our study and existing literature (Featherston et al., 2016; Ho & O'Sullivan, 2017) show that innovations can depend on large sets of standards. Innovations therefore do not only depend on a small number of individual standards but often must incorporate requirements laid down in a variety of standards. Even for a relatively simple technology like mCHP (compared to systemic innovations like autonomous driving or Smart Cities), this set encompasses a substantial number of standards coming from all categories in Table 3.3 and covering multiple economic functions (see Blind, 2004, 2017; Egyedi & Ortt, 2017; Swann, 2010). Even more extensive arrays of standards are likely to become relevant for technologies that are more complex. In many cases, these sets may include different standards formulating requirements for related aspects of a product and/or standards that relate to and build on each other. This underlines the need for coherence among standards (see de Vries, 1999; Featherston et al., 2016; Ho & O'Sullivan, 2017) and the architectures on which individual standards are based (see, e.g. van Schewick, 2010) in order to realise their potential positive effects. Overall, our study suggests that the positive effects of standards on innovation by far outweigh the negative ones. The case clearly shows that standards not only impact on innovation positively in many ways, but may even be a necessary condition for bringing a new technology to the market. This also relates to our observation in Sect. 6.4 that standards fulfil the important function of specifying technological requirements that result from the needs of actors in the wider context. There is some previous standardisation literature which relates to this observation: Delemarle (2017), Botzem and Dobusch (2012), and Van de Ven (1993) discuss the role of standards in forming markets and legitimising innovations. Tassey (2000, p. 588) describes standards as "a balance between the requirements of users, the technological possibilities (…) and constraints imposed by government for the benefit of society in general". De Vries and Verhagen's (2016) case of energy performance standards for houses shows how standards that impact on innovation can directly result from demands associated with trends in a technology's wider context. Nevertheless, despite Geels's (2004) recognition of the function that standards fulfil in technological transitions, extant standardisation literature does not explicitly link to this stream of research. Our observations suggest that standards may fulfil a role in facilitating technology transitions by helping to define technological niches and providing protective space (see Smith & Raven, 2012; Smith et al., 2010).
Strength of Standards' Effects on Innovation
While all standards that are relevant for an innovation have some impact, our study also shows that the strength of this impact differs across standards. Several such factors can already be derived from the existing literature: Multiple authors (e.g. Tassey, 2000) argue that the progress of the technological trajectory at the point in time when a standard is developed influences the standard's eventual effect on the innovation. Tassey (2000) also points out that 'design-based' standards have potentially much more profound constraining effects than 'performance-based' standards (see Sect. 1.1). Another factor mentioned in this context is the degree to which a technology is subject to network effects and switching costs, which determines the degree to which lock-in poses issues for innovations (e.g. David, 1985). Based on the types of standards that we encountered (see Table 3.3), we add the strength of the link between a standard and regulation as a factor that amplifies both the potential positive and negative effects of the standard. Increases in positive effects driven by standards that support regulation mainly relate to an innovation's market access. In this context, support from standards goes beyond legitimising innovations in the eyes of potential users and other stakeholders (as already discussed by, e.g. Botzem & Dobusch, 2012; Delemarle, 2017; Tamm Hallström & Boström, 2010). Our study shows that close connections between standards and regulation substantially facilitate the proof of an innovation's regulatory compliance and provide additional (legal) certainty to innovators and other stakeholders alike. Such standards therefore arguably enable the innovation to be offered in the market in the first place. On the other hand, closer links between a standard and regulation also make implementing solutions that do not conform to the standard more difficult (e.g. because of expensive documentation and testing procedures to prove such solutions' equivalent performance). Particular standards which might hinder an innovation therefore become difficult to avoid or de facto compulsory in this situation. Whereas a hindering standard with no link to regulation only requires an innovator to invest in developing an alternative solution and/or find other ways of legitimising the product, a hindering standard with strong links to regulation may effectively lock a product out of the market.
Uncertainty Resulting from Missing Standards
All of the above assumes that the contents of standards are known. However, our study shows that this is not always the case and relevant standards may not yet exist at the point in time when they are needed to support the innovation. As far as we are aware, only a few contributions in the current literature offer insights about the effects of standards being unavailable when needed for a technology's further development. In particular, they find that missing terminology standards contribute to a proliferation of heterogeneous terminology. Our study goes further by clearly showing that lacking standards are a core source of uncertainty for both innovators and other stakeholders (users of the innovation, component suppliers, complementors, etc.), similar to the ambivalence resulting from regulatory uncertainty (see Hoffmann, Trautmann, & Schneider, 2008). This therefore underlines the argument that markets need clear rules guiding the actors within them (Fligstein & McAdam, 2012; Polanyi, 2001).
Such unavailable standards lead to a multitude of ambiguities for innovation, such as unclear requirements for the technology, risks of supporting infrastructures not fitting the product, and users not understanding its benefits. These ambiguities are further amplified by the importance of the entire set of standards that applies to an innovation (see Sect. 7.1.1). For any missing standard in such a set, aspects like how it will relate to other standards once it emerges, which economic functions it will fulfil, or where it will fall within our taxonomy may be unknown a priori. Such missing standards therefore impact on all stages of the innovation's development, including conceptualising the product, working with suppliers and others on the technology, and introducing it in the market. Once all relevant standards are known, much of this ambiguity is resolved. Although standards are subject to change under some conditions, as both this study and previous literature (Egyedi & Heijnen, 2008; Wiegmann, de Vries, & Blind, 2017) show, they resolve the instability and uncertainty that would otherwise hinder innovation.
Managing Standards, Regulation, and Innovation
Extant literature extensively documents the substantial effects of standards on innovation (see Sect. 1.1), yet it offers few insights about how companies can manage this important topic. Extant literature on company-internal standardisation management mainly addresses companies' engagement in standardisation (e.g. Axelrod, Mitchell, Thomas, Bennett, & Bruderer, 1995; Blind & Mangelsdorf, 2016; Jakobs, 2017; Wakke, Blind, & De Vries, 2015), and the implementation of standards within companies (e.g. Adolphi, 1997; Foukaki, 2017; van Wessel, 2010). However, Großmann, Filipović, and Lazina (2016) are, to our knowledge, the only researchers who address managing standards in the context of innovation. Furthermore, the literature on standards mostly omits the link to regulation that we show to be essential in many situations. Our grounded theory model of managing standards and regulation at the company level (see Fig. 6.2 and Sect. 6.1) contributes findings that add to the literature on both counts. Some aspects of these findings resemble existing theory about managing standards, showing that it also extends to the specific context of innovation. For example, our model distinguishes between short- to medium-term activities needed to address standards and regulation, and a number of supporting factors that enable these activities. This resembles the distinction between long-term governance and short-term management activities in van Wessel's (2010) framework, although the elements that make up these categories differ. On other aspects, our model significantly extends the extant theory on company-level management of standards, as we outline below. In particular, our discussion of our model's firm-level parts revolves around three aspects: (1) the company-level support structure for managing standards and regulation (Sect. 7.2.1), (2) firms' approaches to integrating standards and regulation into their NPD processes and these approaches' effects on an innovation (Sect. 7.2.2), and (3) their involvement in external developments through engaging in standardisation and related activities (Sect. 7.2.3).
Organisational Support for Managing Standards and Regulation
Existing literature already addresses some elements of the organisational support structure needed.
Adolphi (1997) focuses to a large extent on how firms integrate standardisation into their functional divisions. Van Wessel (2010) highlights the need for governance, which includes elements such as investment decisions and defining strategies, to support day-to-day activities related to standards. Foukaki (2017) identifies distinct 'standardisation management approaches' in companies that drive much of the subsequent activities. In line with this, several authors (Adolphi, 1997; Foukaki, 2017; Großmann et al., 2016; van Wessel, 2010) highlight the need for a strategic approach to standardisation. Our study confirms this need. In our theorising (see Sect. 6.1.1), we clearly argue that a strategic orientation towards standards enables companies to build an organisational support structure that contributes to handling standards and regulation in NPD. Our results suggest that such a strategic approach allows companies to coordinate their standardisation activities across their business and exploit the long-term effects of standards.

Beyond this confirmation of the need for a strategic orientation, our study makes two further contributions to the literature on organisational support for managing standards and regulation. First, we identify awareness, expertise, and financial resources as necessary conditions for developing a strategic orientation towards standards and regulation. These factors are in line with the findings of de Vries, Blind, Mangelsdorf, Verheul, and van der Zwan (2009) and Foukaki (2017), but we add further insights into how they contribute to successfully addressing standards and regulation. According to our findings, awareness of the topic's importance and expertise (in particular strategic expertise) help companies to assess standardisation in light of their business model and innovation activities. These factors therefore help them formulate a standardisation strategy (also see Adolphi, 1997; Jakobs, 2017), which covers aspects such as engaging in external standardisation and lobbying, and identifying areas where existing standards can be used. In addition, financial resources are essential for deriving such a strategy because of the associated costs (e.g. for qualified staff and travelling), which often are beyond the means of smaller companies.

Second, we show how a strategic approach helps to build the organisational support structure that underlies day-to-day activities, which may sometimes be underdeveloped even in large, otherwise professionally run companies (see Großmann et al., 2016). In this context, Adolphi (1997) focuses on different models regarding where firms incorporate standardisation work into their functional structures. Our study suggests that the specific organisational function (e.g. the R&D or production department) to which these tasks are attached is of secondary importance. While we observe different approaches across companies in that regard, none of them appears to be preferable per se. Instead, clearly defined responsibilities for planning standardisation work, and ensuring that the responsible staff have sufficient influence and authority to see these plans implemented, appear to be important for providing optimal support.

Integrating Standards and Regulation into the Innovation Process

The organisational support discussed above enables activities related to integrating standards and regulation into the innovation process. On a very fundamental level, we distinguish between active and passive approaches.
They somewhat resemble Foukaki's (2017) assertive and vigilant approaches to participating in standardisation, but go further because they also touch on aspects like product design and the involvement of third-party consultants. Whether a company adopts an active or passive approach is likely to be driven by the commonly held image of standards and regulation within the firm (i.e. whether they are seen as a welcome support or a necessary evil). Companies which appreciate the value of standards are more likely to adopt a (pro)active approach. Such approaches can be implemented, e.g. in terms of using the available leeway regarding which standards and regulation to apply, or exploiting the open nature of many standards (see the data in Sect. 4.2 and our theory in Sect. 6.1.2 for details). Our results suggest that doing so can lead to substantial degrees of freedom for developing an innovation. We therefore question to some extent the commonly held view that "firms need to strike a balance between both flexibility and standardization" (Lorenz, Raven, & Blind, 2017, p. 29). Instead, it appears to be a question of managing standards in such a way that they enhance flexibility rather than constrain it.

As we explained in Sect. 1.2.1, existing literature on how this can be done is extremely scarce. We are aware of only one earlier study (Großmann et al., 2016) that explicitly addresses the management of standards during an NPD process. This study therefore forms a 'benchmark' against which we compare our findings. Großmann et al. (2016, p. 322) integrate standardisation-related activities into a model of a generic stage-gate NPD process (covering six stages from idea to market introduction), which shares the needed core activities with our model (see Fig. 6.2) but differs in how these activities relate to each other. They suggest two specific standardisation-related tasks that take place in parallel to the core sequence of innovation development activities: (1) 'screening standards', which takes place in parallel to the early phases of the product's development, and (2) 'participating in standard setting committees', which happens alongside later stages. Both closely resemble activities that we identify in our model: 'identifying regulation and standards', and 'engaging in standardisation and regulation' (see Fig. 6.2). In addition, our model entails 'specifying the product' and 'evaluating conformity to requirements' as distinct necessary activities in this context. Großmann et al.'s (2016) model includes these activities within the regular stages of the core NPD process ('development', followed by 'testing & validation').

While we find similar necessary activities, our findings challenge the sequential approach of Großmann et al.'s (2016) model. Our theorising (see Sect. 6.1.2) shows that this is unlikely to work in situations which are characterised by factors such as uncertainty about future standards (see Sect. 7.1.3), technological learning by the company, and attempts by actors in the technology's wider context to influence standards and regulation (see Sect. 7.3). These circumstances imply, among other things, that some relevant standards and regulation are not known at the outset of the NPD process and are continuously subject to change (see, e.g. Wiegmann et al., 2017). Therefore, all activities related to standards and regulation need to be carried out iteratively or in parallel and throughout the entire NPD process. Similarly, we also identify testing as a continuous activity.
Starting testing early on and continuing it throughout the NPD process prevents potentially expensive re-work to change designs that do not conform to standards at a late stage in the process. Our study therefore highlights the need for an iterative approach in order to reap the benefits of standards outlined above.

Addressing External Developments on the Industry Level and in the Wider Context

One of Adolphi's (1997) key findings relates to companies facing a 'make-or-buy decision' when they require standards. Our study clearly shows that innovating firms frequently face a similar choice between adapting their technology to standards and regulation or (attempting to) adapt(ing) standards and regulation to the technology. This choice applies in particular when addressing uncertainties resulting from a lack of needed standards (see Sect. 7.1.3). While this choice has, to our knowledge, not yet been documented in the standardisation literature, it closely resembles some strategies identified in studies on regulatory uncertainty (e.g. Engau & Hoffmann, 2011a, 2011b; Fremeth & Richter, 2011).

Such attempts to influence standards and regulation are the core channel through which companies can affect the dynamics on the industry level and in the technology's wider context. In line with earlier findings (e.g. de Vries et al., 2009; Foukaki, 2017; Jakobs, 2017), we show that this option is only open to companies with sufficient awareness of the topic, financial resources, expertise, etc. (see the argument above). This means that companies without these supporting factors have a very limited impact (if any at all) on external developments. De Vries et al. (2009) argue that they can be represented by trade associations (as we observed to some degree in our case). However, relying on such proxies implies (1) that this element of the industry structure (see Sect. 6.2.1) is sufficiently developed and (2) that industry associations act in line with the interests of member companies that do not engage in standardisation. Even when there are strong industry associations, the second assumption may not always be true: our case shows that associations are likely to be dominated by the same companies that are active in standardisation, because engaging in them is similarly resource-intensive as participating in standardisation. Companies that engage neither in standardisation nor in industry associations are therefore often 'standard takers' rather than 'standard makers' (see the distinction by Meyer, 2012), and interactions between the company level and external developments are mostly inward-flowing for them through the activities discussed above.

Furthermore, companies that engage in standardisation and regulation need a long-term outlook. This is needed not only because standardisation and regulation processes tend to be lengthy, but also because of the 'public good nature' of standards (see Berg, 1989; Blind, 2006; Tassey, 2000). Standard takers eventually also enjoy many of the benefits from being able to access the market once standards and regulation have been adapted to the technology, but incur none of the costs. Standard makers need to accept that many (but not all) benefits of their work are public. Our study shows that they tend to be motivated by the opportunity to shape the contents of standards and regulation based on their individual preferences.
In addition, the required standards and regulation are unlikely to be developed if no company takes action and everyone waits for other players to take the initiative. Even when companies participate in standardisation and attempt to influence regulation, they are unlikely to succeed in doing so on their own. Cooperation with others is therefore needed. A fundamental decision in this context revolves around which forums for collaboration to engage in. Here, companies need to navigate potentially complex interdependent arrangements of organisations, including SDOs, industry trade associations, and consortia, that might span multiple modes of standardisation (see Wiegmann et al., 2017). While the motivations identified in the earlier literature for participating in these settings (Blind & Mangelsdorf, 2016; Jakobs, 2017) are confirmed by our study, it appears that different forums for cooperation may fulfil distinct functions in companies' strategies. For example, we observe an emphasis on technological knowledge sharing when firms participate in technology development consortia. In contrast, firms' activities in SDOs and industry associations appear, in our case, to be more geared towards ensuring conformity to regulatory requirements and arranging compatibility with other elements of a large system. Ultimately, all of these activities observed in our study were driven by the goal of building a market in which the technology could succeed. This market required rules in the form of standards (also see Fligstein & McAdam, 2012; Polanyi, 2001) as well as a critical mass for the technology.

Cooperation in technology development and pursuing changes to standards and regulation is one side of firms' engagement on the industry level and in the wider context. On the other side, firms remain rivals and compete with each other once their products enter the market. Participating in the processes at the industry level and beyond therefore requires firms to follow a co-opetitive approach (see, e.g. Bengtsson & Kock, 2000; Gnyawali & Park, 2009, 2011; Van de Ven, 2005; Walley, 2007). We explore the dynamics that occur in such co-opetitive relationships in Sect. 7.3.

Dynamics on the Industry Level and Beyond

While the needed well-functioning system of standards (see Sect. 7.1.1) may often be taken for granted, it actually is the result of a very dynamic process. We expected in our literature review that this process would mainly take place at the industry level (see Sects. 1.2.2 and 1.2.3). Unexpectedly, our study revealed that the industry's wider context (which covers stakeholders outside the industry where the innovation is developed) also plays a very important role. This reflects research approaches which highlight the embedding of markets in society (Fligstein & McAdam, 2012; Polanyi, 2001). Addressing influences coming from this wider context is facilitated by strong cooperation among stakeholders in support of the innovation, both within the industry and across its boundaries. Our study contributes to the literature on these dynamics in three ways: (1) We show what causes these dynamics (Sect. 7.3.1). (2) We then reveal industry-level approaches to address these dynamics (Sect. 7.3.2). (3) Following on from this, we argue that these dynamics allow standards to fulfil their function of aligning the innovation with the needs of the wider context (Sect. 7.3.3).
Sources of Dynamics in the Industry and Wider Context

Much of the dynamics in the process of establishing standards and regulation for an innovation is caused by conflicting interests of the involved stakeholders. In our case, the aims of the parties involved in developing the technologies were aligned, but even an innovation's developers do not always agree on a common direction. For example, strong differences could be observed among the developers of GSM (e.g. Bekkers, 2001) or in the case of e-mobility charging (Bakker, Leguijt, & van Lente, 2015; Wiegmann, 2013). Our study shows that this picture is further complicated by stakeholders who are not involved in developing the technology but are nevertheless affected by it. The types of interests pursued by these stakeholders can be very diverse and relate to many topics, such as preserving a status quo that works for them, facilitating another technology that emerges in parallel, or a government achieving its policy objectives. This wide variety of interests and stakeholders, which can potentially be affected by the standardisation and regulation of an innovation, causes the core of the dynamics in the process. All involved parties can potentially intervene in the process at any time (see Wiegmann et al., 2017), either to support the innovation or to hinder it. In that context, we observed many different tactics to reach these goals. This wide range of tactics includes attempts to use standards as a tool to actively block a technology (also see Delaney, 2001), coalition building (also see Axelrod et al., 1995), and lobbying the government to intervene (also see Wiegmann et al., 2017). This potential variety of tactics also causes challenges for managing standards and regulation on the industry level, as we outline below.

Industry-Level Approaches for Addressing Dynamics in the Process

The dynamics discussed above challenge the view taken by some that the development of standards to support an innovation can be planned and coordinated by a central actor, such as a government (Featherston et al., 2016; Ho & O'Sullivan, 2017). Although governments (or other actors) sometimes play such a central role, others still can use a range of channels to challenge this (this study; Wiegmann et al., 2017). It may be possible to forecast at what stage of a technology trajectory certain standards will be needed through roadmapping and other tools (Featherston et al., 2016; Ho & O'Sullivan, 2017). However, the actual emergence of such standards depends on whether the involved parties reach a balance of interests and whether they can sustain this compromise. Nevertheless, our study shows that there are a number of ways to facilitate this outcome, if not to plan it. Strong collaboration among a technology's supporters and with industry-external actors who share the same or complementary interests is at the core of this. Our study highlights several factors that can support such cooperation and help the industry as a whole to navigate the dynamics in a way that increases the likelihood of establishing standards and regulation which support an innovation. Below, we discuss the role of supporting institutions and an optimal approach to IPR as factors that stand out as particularly important for this collaboration. Following this, we address our findings regarding the resulting 'group dynamics'.

Supporting Institutions for Effective Collaboration

A first core element of our findings is the importance of an industry's supporting institutions, e.g. industry associations.
They can enhance cooperation in a number of ways, e.g. by providing forums in which actors can agree on common positions to pursue (similar to the role of consortia observed by Baron et al. (2014) in ICT standardisation), or by implementing common technology development initiatives. In addition to facilitating industry-internal alliances, such supporting institutions may also have established links to actors in the wider context (e.g. governments, trade associations in other industries) that can be used strategically to influence standards and regulation in the technology's favour.

The Importance of Intellectual Property Rights in Effective Collaboration

A second factor underlying effective collaboration is an appropriate approach to IPR. Here, our study questions whether the widely held view of a tight link between standards and patents (e.g. Bekkers, 2017; Bekkers, Iversen, & Blind, 2011; Großmann et al., 2016; Lerner & Tirole, 2014; Rysman & Simcoe, 2008) always applies. Patents have been identified as a core element of many standardisation processes. However, giving them a similar role in our case would have undermined both effective collaboration within the industry and the degree to which the resulting standards would have been perceived as legitimate by others. Indeed, the involved parties aimed to keep patents as separate from standards as possible, although they still gave them a prominent role in the collaborations to develop the technology. The industry in our study managed to find a fine balance between protecting firms' intellectual input into the technology's development and not crowding others out of the process.

To understand these different findings, we contrast our case with others where intellectual property played a more important role, such as mobile telecommunications (see, e.g. Bekkers, 2001; Funk & Methe, 2001; Leiponen, 2008), Ethernet (see Jain, 2012; von Burg, 2001), and optical disks (see den Uijl, Bekkers, & de Vries, 2013). This suggests that the type of standards being developed is core to the importance of patents in the process: many cases where patents were important concern interface standards (see the classifications by Blind, 2004, 2017; Egyedi & Ortt, 2017; Swann, 2010), which are by definition solution-prescribing (see, e.g. de Vries, 1998; Tassey, 2000). Such solutions are based on concrete designs that are usually patentable. On the other hand, most standards in our case fulfilled economic functions related to safety and measurement and were performance-based, meaning that little (if any) of their content could be patented. However, not all standards in our case were performance-based: for example, standards for connecting to the electricity grid had important interface elements and therefore incorporated patentable solutions. Nevertheless, we also did not observe an important role of IPR in these standards' development. This can be explained by the 'standardisation culture' that applies in a specific context (see Wiegmann et al., 2017). In the industries in our case, this 'culture' clearly is collaborative and long-term oriented, and most standards that we found link strongly to regulation. This would make any attempt to bring patents into standardisation unacceptable to many stakeholders. In other industries, such as ICT, most standards arguably concern interfaces that are based on private intellectual property and have few links to regulation.
Under such circumstances, it is no surprise that the common approach to standardisation emphasises patents more. In summary, the different emphasis on patents in standardisation is initially likely to result from the types of standards that prevail in an industry. This emphasis is then likely to perpetuate itself and become part of the industry's 'standardisation culture'.

'Group Dynamics' Resulting from the Collaboration in an Industry

The activities (both in terms of technology development and standardisation/regulation) which make up the cooperation in the industry contribute to certain 'group dynamics'. In our case, we observed a strongly united industry with an 'us vs. them' mentality in its relations to other stakeholders. In other cases, these group dynamics may vary depending on the distribution of interests and contextual factors like the 'standardisation culture' (see Wiegmann et al., 2017). Our study suggests that such group dynamics affect the degree to which the innovators' activities are perceived as legitimate (see Botzem & Dobusch, 2012; Delemarle, 2017; Tamm Hallström & Boström, 2010) by other actors in the wider context. In particular, Botzem and Dobusch's (2012) concept of standards' input legitimacy is likely to be strongly affected by the composition of an innovation's group of supporters and their activities. For example, in our case, the industry speaking with one voice signalled that mCHP was a genuine technological development for which changing standards and regulation was warranted, rather than a single company's attempt to get special treatment. However, this approach also carried the danger of being perceived as an industry that writes its own rules, similar to the European car industry in the wake of the Volkswagen diesel scandal (see Neslen, 2015). Our study therefore suggests that the collaborative activities of an innovation's supporters have an important impact on its perceived legitimacy. Future research could compare different approaches and their effects in this regard, e.g. by involving more stakeholders (see Sect. 7.5).

Dynamics' Support for Aligning the Innovation with the Wider Context

In Sects. 6.4 and 7.1.1, we argued that standards fulfil an important function in aligning the innovation with the needs of relevant stakeholders in the technology's wider context. Arguably, the dynamics discussed in this chapter are core to standards fulfilling this function, because they culminate in the balance of interests that stakeholders must reach for a standard to emerge (see Wiegmann et al., 2017). In that sense, the dynamic processes in standardisation and regulation that we observed are an important element of the wider sociotechnical transition needed to make an innovation successful. In such sociotechnical transitions, innovations either move out of the niches in which they emerge by reaching alignment with the sociotechnical system that they are part of, or they eventually fail (e.g. Geels & Schot, 2007; Smith & Raven, 2012; Smith et al., 2010; van den Ende & Kemp, 1999). By specifying clear technological requirements that result from the needs of other actors in the sociotechnical environment and the sociotechnical system (in our case, e.g. related to CO2 emission targets, or the needs of other users of the electricity grid for grid stability), standards and regulation contribute to this alignment. This function explains the high stakes at play that lead to the dynamics that we observed.
Simultaneously, we argue that standards would not be able to fulfil this function in support of sociotechnical transitions without these dynamics. A less dynamic process could most likely only be achieved if it failed to take into account some of the diverse interests typically involved in sociotechnical transitions. The resulting standards would therefore not align the innovation with the needs of its wider context and would miss important benefits for the innovation outlined in Sect. 7.1.

Managerial Implications

Our findings also have strong implications for managerial practice. In particular, we offer insights on three topics that are highly relevant for innovative companies: (1) We highlight important effects of standards (Sect. 7.4.1). (2) We show how innovators can successfully address standards and regulation (Sect. 7.4.2). (3) We identify impactful dynamics on the industry level and beyond, and show how they can be managed through cross-company collaboration (Sect. 7.4.3).

Important Effects of Standards

Standards can have major positive effects on innovation, such as supporting the technology's legitimacy, securing the links between complementary products, and facilitating proof of regulatory compliance. On the other hand, standards which are not in line with an innovation's needs can impose substantial hurdles, e.g. if standards lock the market into an old technology or reflect vested interests that oppose the innovation. However, we find no support for the popular assumption that standards in general limit the freedom to innovate. Instead, the freedom for innovating depends on how well standards are managed and integrated in the innovation process (see Sects. 6.1.3 and 7.2). In the European context, standards often are linked to regulation. This link further amplifies their effects on innovation. Harmonised standards which are in line with an innovation's needs can be used to show regulatory compliance and give innovators a high degree of legal certainty. On the other hand, innovators can face substantial costs and difficulties in proving regulatory compliance if harmonised standards are not in line with their innovation's needs. The required effort may sometimes even be prohibitively high, meaning that such standards can effectively lock an innovation out of the market. The possible magnitude of standards' effects makes them a topic that innovation managers need to be aware of. Furthermore, it also means that missing standards are an important factor causing uncertainty when innovating. Fortunately, an innovation's developers can actively manage standards and their effects. Our study provides managers with useful insights into how this can be done effectively, as we outline in Chapter 6 and Sects. 7.2 and 7.3.

Implications for Company-Internal Management

Our study shows successful approaches that companies can use to manage the effects of standards on their innovations. Within these approaches, we distinguish between the organisational foundation and the specific management activities. In the long term, companies need to prepare themselves for dealing with standards and regulation. To do so, they should establish a solid organisational foundation that allows them to take a strategic approach to standards and regulation. Such a foundation is rooted in awareness, expertise, and financial resources. For large companies, this may mean establishing a department that is responsible for coordinating the topic.
Small companies should aim to have at least some staff members with awareness and basic knowledge of standardisation and regulation. Such internally developed competences can be complemented by external experts (e.g. consultants, notified bodies). However, our study shows that relying on them too heavily may limit the company's freedom in innovating. Such a foundation helps companies to carry out the activities needed to manage the topic: (1) identifying regulation and standards, (2) specifying the product, (3) assessing whether modifications in standards/regulation and/or the product design are needed, and, if necessary, (4) engaging in standardisation. Because firms operate in a dynamic environment, these activities need to be carried out concurrently and throughout the NPD process. This means that companies should identify potentially relevant regulation and standards as early as possible and then continue scanning for potential changes or additional requirements that they missed at first. It also means that the NPD process should involve regular checks of whether the design is capable of meeting all requirements. Doing so in parallel avoids both being blindsided by changes in standards and regulation and having to redo large parts of the innovation if certain requirements cannot be met.

A further key decision is whether companies limit themselves to applying standards and regulation to their innovations or whether they also attempt to influence standardisation and the passing of new regulation. Companies that do not engage in such external activities still benefit from the results of others that do. However, our findings suggest that this engagement has benefits which often may justify the necessary expenditure. Most importantly, companies that contribute to external standardisation and regulation processes have an opportunity to participate in shaping the balance of interests enshrined in standards in their favour (see Sect. 7.3). This may substantially increase the company's freedom in innovating.

Implications for Cross-Company Collaboration

Our study shows that these company-external processes are likely to be highly dynamic. These dynamics result from a potentially large number of stakeholders with conflicting interests, all of whom are likely to attempt influencing standards and regulation in their favour. Our study shows that even relatively simple and small innovations like mCHP can have substantial links to the wider context and affect many parties' interests. In addition to stakeholders from innovators' own industries, these stakeholders therefore often include actors from the wider context (e.g. regulators, developers of other technologies, NGOs). Few companies (if any) are likely to be strong enough to shift standards on their own under these conditions. Cooperation in developing both the technology and the relevant standards is therefore at the core of influencing external standardisation and regulation. Consequently, innovative companies need to find partners who can complement their own strengths. This cooperation fulfils multiple functions, such as aligning industry actors to pursue a common line in standardisation, and legitimising the technology in the eyes of outsiders. Reaching these goals can be supported by an industry structure that enables effective collaboration. We identify three elements of the industry structure that are important in this context: (1) a network of supporting institutions (e.g.
industry associations, consultants, research institutions), (2) an approach to IPR that facilitates cooperation, and (3) broad support for the innovation among firms in the industry. These three elements can support collaboration in many ways. For example, they can help resolve conflicts (or even prevent them from occurring), unlock additional sources of helpful expertise, and provide access to regulators. Companies and other actors in an industry are therefore advised to build these elements in time, so that they are available when needed. We also show that basing industry-level collaboration on this support structure helps innovators to assert themselves in dealing with the complex dynamics of their industry's wider context, as the following three examples show. (1) Industry associations can help unite the industry behind an innovation, giving it a stronger voice when dealing with other stakeholders. (2) Involving other supporting actors who have no direct commercial interest in the technology (e.g. researchers) can strengthen the innovation's legitimacy and credibility. (3) Using suitable approaches to IPR in standardisation may make it more acceptable to link the resulting standards to regulation.

This also makes our findings important for actors other than companies. Industry associations especially can assume an important role in coordinating the collaboration between their members. For example, they can offer forums for industry to find a common position to pursue in standardisation committees and vis-à-vis regulators. They can also represent industry when dealing with external stakeholders on aspects that are not central to the innovation but nevertheless need to be considered.

Limitations and Scope for Further Research

Our detailed grounded theory study provides novel insights into the management of standards as an example of the external requirements which innovative companies face. First, this raises the question under which conditions our theory is likely to apply (Sect. 7.5.1). Furthermore, the results raise intriguing questions for future research (Sect. 7.5.2).

Generalising Our Theory

Our theory is based on a single nested case. This means that the company-level findings have undergone an initial replication, whereas the industry-level elements of our theory are derived from a single observation. Nevertheless, we expect that similar observations can be made in other cases which share several key characteristics that likely determined parts of what we witnessed in our case. These key features of the case are (1) its European scope (due to the relationships of standards and regulation under the 'New Approach'); (2) the highly regulated nature of the industry on aspects like product safety, which contributed to the particular importance of standards in the case; (3) the relationship with policy issues (energy and environmental policy in our case); and (4) the relatively long-term outlook of the key players in the case, which contributes to the industry's culture of collaboration. Other areas where we expect cases with similar characteristics to exist include, e.g. the European medical and aerospace sectors. In addition to the factors outlined above, the 'self-evident' support for standards in our case most likely makes it a 'best practice case'. Future research therefore needs to confirm the extent to which our findings apply to both similar and other contexts which do not share the four characteristics identified above.
It also needs to establish the extent to which not following the practices identified in our case affects innovation.

Questions for Future Research

Many of our study's new insights raise questions that could lead to exciting new research. Some of them question findings in previous standardisation literature, whereas others point to links with other streams of literature that have not yet been explored extensively. One issue that raises questions for future research is IPR's relatively low importance for standardisation in the heating sector (see Sect. 5.1.4). This raises doubts about the standardisation literature's emphasis on IPR. This emphasis may be related to the literature's empirical evidence largely coming from the ICT sector (see Wiegmann et al., 2017). Future research in other settings could establish whether our case is an anomaly and IPR is indeed as important for standardisation as the literature claims, or whether this importance only applies to ICT contexts. In doing so, such research should also consider factors like the type of standard at stake and the 'standardisation culture' that we identify as potentially important for the role of IPR in standardisation (see Sect. 7.3.2).
2018-11-24T04:35:12.220Z
2018-10-03T00:00:00.000
{ "year": 2019, "sha1": "9612a3bff4967f850b5d2c5ba7e0bf0e572a647a", "oa_license": "CCBY", "oa_url": "https://pure.tue.nl/ws/files/134323828/2019_Book_ManagingInnovationAndStandards.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "399d4514f313f170e82c77cb2e0cd6f2d2ad7b8d", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
13506083
pes2o/s2orc
v3-fos-license
Combined Drug Action of 2-Phenylimidazo[2,1-b]Benzothiazole Derivatives on Cancer Cells According to Their Oncogenic Molecular Signatures

The development of targeted molecular therapies has provided remarkable advances in the treatment of human cancers. However, in most tumors the selective pressure triggered by anticancer agents encourages cancer cells to acquire resistance mechanisms. The generation of new rationally designed targeting agents acting on the oncogenic path(s) at multiple levels is a promising approach for molecular therapies. 2-Phenylimidazo[2,1-b]benzothiazole derivatives have been highlighted for their properties of targeting oncogenic Met receptor tyrosine kinase (RTK) signaling. In this study, we evaluated the mechanism of action of one of the most active imidazo[2,1-b]benzothiazol-2-ylphenyl moiety-based agents, Triflorcas, on a panel of cancer cells with distinct features. We show that Triflorcas impairs in vitro and in vivo tumorigenesis of cancer cells carrying Met mutations. Moreover, Triflorcas hampers survival and anchorage-independent growth of cancer cells characterized by "RTK swapping" by interfering with PDGFRβ phosphorylation. A restrained effect of Triflorcas on metabolic genes correlates with the absence of major side effects in vivo. Mechanistically, in addition to targeting Met, Triflorcas alters phosphorylation levels of the PI3K-Akt pathway, which mediates oncogenic dependency on Met, as well as of Retinoblastoma and nucleophosmin/B23, resulting in altered cell cycle progression and mitotic failure. Our findings show how the unusual binding plasticity of the Met active site towards structurally different inhibitors can be exploited to generate drugs able to target Met oncogenic dependency at distinct levels. Moreover, the disease-oriented NCI Anticancer Drug Screen revealed that Triflorcas elicits a unique profile of growth-inhibitory responses on cancer cell lines, indicating a novel mechanism of drug action. The anti-tumor activity elicited by 2-phenylimidazo[2,1-b]benzothiazole derivatives through combined inhibition of distinct effectors in cancer cells reveals them to be promising anticancer agents for further investigation.

Introduction

Receptor tyrosine kinase (RTK) signaling has been implicated in tumor evolution for its capacity to influence cell fate through changes in key regulatory circuits [1-3]. As evidenced by cancer genomic studies, RTK signaling is one core pathway frequently altered in human cancer [4,5]. We have recently shown the relevance of signaling nodes interconnecting the RTK and p53 core pathways and the impact of targeting such nodes during tumor evolution [6,7]. The relevance of altered RTKs in oncogenesis has drawn tremendous interest to identify agents capable of restraining their activity and function. To date, molecular therapies for "RTK-addicted" cancer cells are mainly based on the application of compounds that selectively target the oncogenic RTK [1]. However, the success of these strategies has been limited, since inhibition of the "primary RTK-addiction" triggers a selective pressure on cancer cells to acquire resistance through "RTK swapping" [8,9]. These limitations impose the identification, or the combined use, of agents that not only target RTK signaling dependency, but also hamper adaptation caused by redundancy in the RTK signaling network [10].
One approach to circumvent "RTK swapping" could be the identification of drugs interfering with oncogene dependency by acting at multiple levels within the addiction path. An example of this concept is provided by Sorafenib, a small chemical agent able to inhibit several RTKs, including VEGFR, PDGFRβ, Kit, FGFR1, and Ret, as well as the intracellular Raf kinase [11]. The broad-spectrum activity of Sorafenib in several cancer models is likely due to the wide range of its targets. Nevertheless, Sorafenib activity in some types of tumor models is attributed to the concomitant inhibition of RTK-driven angiogenesis and of the RTK-downstream Raf/MAPK pathway [11]. The generation of agents which target oncogenic path(s) at multiple levels is not a simple issue, as distinct targets require a precise drug structure, and chemical modifications of drugs can affect selectivity (e.g. by targeting multiple RTKs), effectiveness, or toxicity. In contrast to other RTKs, the hepatocyte growth factor (HGF) receptor Met is characterized by unusual structural plasticity, as its active site can adopt distinct inhibitor binding modes [12]. Indeed, a wide range of small molecules have been discovered as Met inhibitors [13,14]. Nevertheless, efforts continue to uncover novel anti-Met agents for targeted therapies and associated resistance mechanisms [8,15-17]. To identify chemical agents capable of inhibiting oncogenic Met signaling in cancer cells, we previously applied a Met-focused cell-based screen. We had reasoned that such a strategy would offer the possibility of identifying compounds that may: a) elicit inhibitory effects directly on Met; b) target other essential components in the Met signaling cascade; c) be well tolerated due to limited toxic effects at biologically active concentrations [18-20]. We reported that new amino acid amides containing the imidazo[2,1-b]benzothiazol-2-ylphenyl moiety target Met directly and inhibit oncogenic Met function, without eliciting major side effects in vitro [19].

In this study, we explored the anticancer activity of one of the most active agents we have identified, Triflorcas (TFC), on a panel of cancer cells with distinct characteristics and investigated its mechanism of drug action by a range of complementary approaches. We show that Triflorcas targets cancer cells either carrying Met mutations or characterized by RTK swapping. We demonstrate that Triflorcas is well tolerated in vivo and does not significantly alter the expression of several cell toxicity and stress genes. Biochemical and phospho-screening array studies revealed that Triflorcas predominantly alters the phosphorylation levels of the PI3K/Akt pathway, which ensures oncogenic dependency on Met, as well as of Retinoblastoma (Rb) and nucleophosmin/B23. These alterations functionally correlate with changes in cell cycle progression underlying mitotic failure. Although Triflorcas anticancer activity correlates with its inhibitory effects on Met, its drug action mechanisms may not be merely restricted to the Met target itself. The unique ability of Triflorcas to modulate multiple pathways deregulated in tumor cells with aberrant Met signaling further strengthens the prospect of exploiting the flexible binding-mode capacity of the Met active site to identify new agents with inhibitory properties towards signaling targets required to execute the oncogenic program.
Finally, the assessment of the inhibitory-response profile on cancer cells through the National Cancer Institute anticancer drug screen suggests that Triflorcas is characterized by a novel mechanism of drug action. Bioinformatics studies indicate possible molecular signatures that correlate with cancer sensitivity to imidazo[2,1-b]benzothiazol-2-ylphenyl moiety-based agents.

Triflorcas Inhibits Survival and Anchorage-independent Growth of Human Cancer Cells Carrying Mutated Met

We first examined the effect of Triflorcas on two human non-small-cell lung cancer (NSCLC) cell lines carrying Met mutations: H2122 and H1437 cells, harboring point mutations at the amino acid residues N375S and R988C, respectively [21]. Triflorcas impaired survival and anchorage-independent growth of H2122 and H1437 cells, respectively (Figure 1A and B). None of the Met inhibitors used as reference compounds, such as SU11274, crizotinib, and PHA665752, interfered with survival and in vitro tumorigenesis of these cells (Figure 1A and B). In contrast, all tested inhibitors impaired survival and anchorage-independent growth of human gastric carcinoma GTL-16 cells, characterized by Met amplification (Figure 1A and B), as previously reported [19,21]. These data suggest that Triflorcas exerts a marked inhibitory effect on cancer cells with Met mutations, which are not sensitive to other Met inhibitors, in addition to cells carrying Met amplification.

Triflorcas Interferes with Met Phosphorylation, with Met Localization, and with PI3K-Akt Pathway Activation

We have previously shown that Triflorcas interferes with Met phosphorylation in living cells and with Met activation in vitro [19]. We therefore investigated the effects of Triflorcas on Met in H1437 cells by following Met phosphorylation on two tyrosine residues located in its kinase domain, Tyr1234 and Tyr1235. Immunocytochemical analysis revealed Met phosphorylation predominantly on the plasma membrane when H1437 cells were exposed to HGF stimulation (Figure 2A). Notably, we found that Triflorcas leads to changes in phosphorylated Met: a) down-regulation of its phosphorylation levels, and b) a predominant localization in intracellular compartments (Figure 2A). Treatment with chlorpromazine, a cationic amphipathic drug that inhibits clathrin-mediated endocytosis, restored phospho-Met localization at the cellular membrane, thus indicating that Triflorcas enhances Met internalization (Figure 2A). Reduced Met phosphorylation was also observed in protein lysates from H1437 cells, accompanied by reduced Met protein levels (Figure 2B). Densitometric analysis indicated that Met down-regulation through endocytosis causes the decrease in Met phosphorylation (data not shown). Consistently, we found reduced phosphorylation of Gab1, which is an immediate signaling target of Met (Figure 2B). The biological effect of Triflorcas on cells carrying Met mutations and Met amplification (Figure 1) [19] led us to evaluate the phosphorylation status of RTK downstream effectors. Among pathways required in cancer cells with oncogenic Met, it has been shown that only a subset of Met-activated pathways sustains the dependency of cancer cells on Met. In particular, Ras/ERKs and PI3K/Akt are two pathways that predominantly ensure dependence on oncogenic Met [22]. We therefore explored whether Triflorcas restricts the activation of these two pathways by following the phosphorylation levels of ERKs and Akt in H1437 cells.
No significant changes were observed in HGF-induced ERK phosphorylation when H1437 cells were exposed to Triflorcas (Figure 2B). In contrast, Akt phosphorylation was significantly reduced after Triflorcas treatment (Figure 2B). Reduced phospho-Akt was paralleled by a decrease in the phosphorylation levels of its downstream signal p70S6K (Figure 2B). Consistently, reduced phosphorylation levels of Akt and its downstream signals p70S6K and S6 ribosomal protein, but not ERKs, were also observed in GTL-16 cells (Figure 2C). We confirmed the functional relevance of intact PI3K/Akt signaling for anchorage-independent growth of H1437 and GTL-16 cells by pharmacologically blocking its activation with LY294002 (PI3K inhibitor) or A-443654 (Akt inhibitor) (Figures 2D, 2E, S1A, and S1B). The decrease of Akt phosphorylation after Triflorcas treatment was also observed in the ErbB1-addicted human breast cancer BT474 cells, where ErbB1 phosphorylation levels were unchanged (Figure S1C) [19]. Together, these results indicate that the reduction of PI3K/Akt pathway activation by Triflorcas is not merely a consequence of the inhibition of upstream RTK activity. We also found that Akt activity is not required for survival of BT474 cells (Figure S1D). These results provide insights into BT474 cell resistance to Triflorcas treatment by showing that their addiction to oncogenic ErbB1 is ensured by pathway(s) other than PI3K/Akt. Together, these findings reveal that the anti-tumor activity elicited by Triflorcas occurs through combined outcomes on distinct effectors involved in RTK-driven oncogenic dependency.

Triflorcas Impairs in vivo Tumor Growth of Human Cancer Cells Carrying a Met Mutation, Without Causing Major Side Effects

We have recently reported that Triflorcas is well tolerated by primary neurons and hepatocytes [19]. To evaluate further the potential therapeutic application of this compound, we assessed whether it is also well tolerated after in vivo administration. Triflorcas was intra-peritoneally injected into mice at a dose of 30 mg/kg each day. As body weight is a generic indicator of animal physiology influenced, for example, by metabolism, animal activity, and feeding behavior, the weight of Triflorcas-treated mice was followed over time. No significant differences were found versus controls throughout treatment (P > 0.05; Figure 3A). We also measured the weight of the heart, spleen, kidney, and liver of mice treated for 21 days, and no differences were found between the two groups (P > 0.05; Figure 3B). We previously showed that Triflorcas elicits tumor growth inhibition of GTL-16 cells in vivo (Figure S2) [19]. We therefore determined whether the anti-tumor action of Triflorcas observed in vitro on H1437 cells might also be evidenced in vivo using xenografted nude mice. H1437 cells (5×10^6) were sub-cutaneously injected into nude mice. After tumor formation, the mice were treated with Triflorcas, crizotinib, or vehicle alone, and tumor growth was examined during and after treatment. Notably, we found a 58.7% and 59% reduction in tumor volume when Triflorcas was administered at doses of 30 and 60 mg/kg, respectively (Figure 3C). A reduction of tumor weight was also observed in Triflorcas-treated mice (control: 147.1 ± 92.5 mg; 30 mg/kg Triflorcas: 79.1 ± 58.1 mg, P = 0.03; 60 mg/kg Triflorcas: 62.0 ± 38.9 mg, P = 0.01; Figure 3D).
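As an illustrative aside, and not the authors' analysis code, the short Python sketch below shows how the tumor-weight comparison reported above could be re-derived from the summary statistics alone. The group size n_per_group is a hypothetical placeholder, since the number of mice per arm is not stated in this excerpt.

```python
# Hedged sketch: percent reduction in mean tumor weight and a Welch t-test
# recomputed from the reported means and standard deviations (mg).
# ASSUMPTION: n_per_group is invented; the excerpt does not state group sizes.
from scipy.stats import ttest_ind_from_stats

n_per_group = 8  # hypothetical number of mice per arm

arms = {
    "vehicle":             (147.1, 92.5),
    "Triflorcas 30 mg/kg": (79.1, 58.1),
    "Triflorcas 60 mg/kg": (62.0, 38.9),
}

ctrl_mean, ctrl_sd = arms["vehicle"]
for name, (mean, sd) in arms.items():
    if name == "vehicle":
        continue
    reduction = 100.0 * (ctrl_mean - mean) / ctrl_mean  # % lower than vehicle
    t_stat, p_val = ttest_ind_from_stats(ctrl_mean, ctrl_sd, n_per_group,
                                         mean, sd, n_per_group, equal_var=False)
    print(f"{name}: {reduction:.1f}% lower mean tumor weight, Welch p = {p_val:.3f}")
```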
In contrast, no reduction in tumor growth was found in mice treated with crizotinib at a dose of 50 mg/kg every day (tumor volume: 118.6 ± 69.4 mm³; tumor size: 205.0 ± 84.8 mg; Figure 3C and D), consistent with previous studies [21]. Taken together, these findings demonstrate that in vivo Triflorcas elicits tumor growth inhibition of cancer cells with oncogenic Met. Moreover, the absence of side effects indicates that Triflorcas is well tolerated when injected into mice at the doses required to elicit its anti-tumor effects.

Figure 2. Triflorcas interferes with Met phosphorylation, its cellular localization, and activation of the PI3K/Akt pathway. (A) Met phosphorylation was analyzed by immunocytochemistry on H1437 cells untreated, treated with HGF, with Triflorcas (TFC; 10 µM for 24 hours), or with Triflorcas (10 µM for 24 hours) plus chlorpromazine (Chlor; 10 µg/ml for 2 hours) followed by HGF stimulation (20 ng/ml for 30 minutes). Triflorcas reduced the levels of Met phosphorylation induced by HGF. Note also that HGF-induced phospho-Met is localized at the plasma membrane of control cells, whereas it appears internalized in cells exposed to Triflorcas. Endocytosis inhibition with chlorpromazine restored phospho-Met localization at the cellular membrane. Arrows indicate clusters of phosphorylated Met at the plasma membrane (40X magnification). (B) HGF-induced (20 ng/ml) phosphorylation levels of Met, Gab1, Akt, and p70S6K were reduced in H1437 cells exposed to Triflorcas. In contrast, no changes were observed in ERKs phosphorylation levels. Similar expression levels of total Gab1, Akt, and p70S6K were also found, indicating that Triflorcas interfered with their phosphorylation rather than with their expression levels. Western blot analyses were performed on total protein lysates. (C) Phosphorylation levels of Akt, p70S6K, and S6 ribosomal protein, but not ERKs, were reduced in GTL-16 cells exposed to Triflorcas. Actin or Tubulin protein levels were used as loading controls in all experiments (lower panels in B and C). (D and E) Anchorage-independent growth of H1437 (D) and of GTL-16 (E) cells was impaired in the presence of Triflorcas (TFC), LY294002 (PI3K inhibitor), or A-443654 (Akt inhibitor). Values are expressed as means ± s.e.m. **P < 0.01; ***P < 0.001; Student's t-test. doi:10.1371/journal.pone.0046738.g002

Minor Changes in Gene Expression Profile of Stress and Toxicity Pathways by Triflorcas Correlate with Lack of Toxic Effects in vivo

As Triflorcas is well tolerated both in vitro [19] and in vivo (Figure 3A and B), we next followed gene expression levels on a human RT-PCR array focused on toxicity and stress pathways. The expression profile of 84 genes related to cell stress and toxicity was analyzed in GTL-16 cells exposed to Triflorcas, SU11274, or vehicle. SU11274 altered the expression of 39 genes by at least 2-fold (by increasing or decreasing them; P < 0.05). These genes belong to the apoptosis/necrosis (n = 10), inflammation (n = 7), oxidative/metabolic stress (n = 7), heat shock (n = 6), proliferation/carcinogenesis (n = 5), and growth arrest/senescence pathways (n = 4; Figure 4A and Table S1). In contrast, Triflorcas led to a statistically significant change in the expression of only 14 genes (P < 0.05). Among them, 13 genes overlapped with those altered by SU11274 and belonged to the apoptosis/necrosis (n = 3), oxidative/metabolic stress (n = 3), and growth arrest/senescence (n = 3) pathways (Figure 4A and Table S1).
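For readers unfamiliar with how a gene is called "altered by at least 2-fold (P < 0.05)" on such an RT-PCR array, the sketch below applies the standard 2^-ΔΔCt relative-quantification method. It is a generic illustration, not the authors' pipeline, and all Ct values are invented.

```python
# Generic 2^-DDCt fold-change calculation for one gene on an RT-PCR array.
# ASSUMPTION: all Ct values below are invented for illustration only.
import numpy as np
from scipy.stats import ttest_ind

def fold_change(ct_gene_ctrl, ct_ref_ctrl, ct_gene_trt, ct_ref_trt):
    """Return (fold change, p-value) of a gene, normalized to a reference gene."""
    dct_ctrl = np.asarray(ct_gene_ctrl) - np.asarray(ct_ref_ctrl)  # delta-Ct, control
    dct_trt = np.asarray(ct_gene_trt) - np.asarray(ct_ref_trt)     # delta-Ct, treated
    ddct = dct_trt.mean() - dct_ctrl.mean()
    return 2.0 ** (-ddct), ttest_ind(dct_trt, dct_ctrl).pvalue

# hypothetical triplicates for one up-regulated gene and a housekeeping reference
fc, p = fold_change(ct_gene_ctrl=[28.1, 28.3, 28.0], ct_ref_ctrl=[17.0, 17.1, 16.9],
                    ct_gene_trt=[24.9, 25.2, 24.8], ct_ref_trt=[17.0, 17.2, 17.1])
is_hit = (fc >= 2.0 or fc <= 0.5) and p < 0.05  # the 2-fold, P < 0.05 criterion
print(f"fold change = {fc:.1f}, p = {p:.4f}, called as altered: {is_hit}")
```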
No significant decrease of gene expression beyond 50% compared to control cells could be observed. Genes showing over a 2-fold increase in expression levels included tnf (3.83-fold; P = 0.0008), gdf15 (3.18-fold; P = 0.0007), egr1 (2.68-fold; P = 0.003), and serpine1 (2.42-fold; P = 0.005). Intriguingly, Triflorcas led to a robust and predominant up-regulation of the cytochrome oxidase cyp1a1 gene (611-fold; P = 0.0001; Figure 4A). Expression of the cyp1a1 gene was also increased by SU11274, but at significantly lower levels (22-fold; P = 0.02; Figure 4A). CYP1A1 is a member of the CYP1 family of cytochrome P450 enzymes implicated in cancer cell response to therapeutic agents. Western blot analysis confirmed the up-regulation of CYP1A1 protein by Triflorcas, which was impaired by its inhibitor acacetin (Figure 4B) [23]. CYP1A1 up-regulation by Triflorcas was independent of its action on Met, as it was also found in cells addicted to ErbB1 signaling, such as the human breast cancer BT474 cells (Figure 4C). Together, these studies underline a selective metabolic activity profile elicited by Triflorcas, linked to the up-regulation of a small subset of stress and toxicity genes. Thus, the minimal number of genes affected by Triflorcas indicates that this compound elicits a more selective action on stress and toxicity genes compared to other Met anticancer agents such as SU11274.

Triflorcas Mechanisms of Drug Action and its Effects on Cell Cycle Progression Leading to Mitotic Failure

The effects of Triflorcas on the PI3K-Akt pathway evidenced by biochemical studies (Figure 2B and C) suggest that Triflorcas anticancer properties may be associated with its activity on distinct signaling targets in addition to Met. One screening approach broadly used to identify targets of a given chemical agent is the KINOMEscan, which allows assessing the activity of compounds against a panel of kinases through binding assays [24]. Triflorcas was screened against 98 kinases at a single concentration of 10 µM, in agreement with standard protocols. However, the low solubility of Triflorcas in the buffer conditions used for this type of screen limited the possibility of identifying targeted kinases. Nevertheless, we found that Triflorcas reduces by more than 30% the binding constant of only 5 of the 98 kinases analyzed: Abl-1 (either the wild-type, or the E255K and T315I mutant forms), IKKbeta, JAK2, MKNK1, and ZAP70 (Figure 5 and Table S2). Although the pattern of kinases interacting with Triflorcas appears highly focused, these results must take into account that a proportion of targets were possibly not detected due to the low solubility of Triflorcas in the buffer conditions used for this screen. For example, Met was not identified, despite the fact that Triflorcas inhibitory activity on it was previously established using the Kinexus compound profiling service [19]. Therefore, to further investigate the mechanism of drug action of Triflorcas, we applied a cell-based assay, which allows evaluation of the agent's effects on multiple oncogenic signaling pathways in culture conditions compatible with Triflorcas solubility and biological activity. The phosphorylation levels of several signaling molecules were therefore examined by using the Kinexus phosphorylated protein screen array. In particular, we compared the phosphorylation levels of 44 signaling phosphoepitopes in GTL-16 cells treated or not with Triflorcas (Figure 6, Figure S3, and Table S3).
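A minimal sketch of how such a treated-versus-control phospho-array readout is typically reduced to a hit list follows: normalize each phosphoepitope signal to its control and flag changes of 2-fold or more. The epitope names echo those discussed in the text, but the intensity values are invented and this is not the Kinexus pipeline.

```python
# Assumed simplification of a phospho-array comparison: per-epitope log2 ratios
# of treated vs. control signal, with a |log2 ratio| >= 1 (2-fold) cut-off.
# ASSUMPTION: intensity values are invented for illustration only.
import numpy as np

epitopes = ["Akt1", "mTOR/FRAP", "p70 S6K beta1", "S6 ribosomal protein",
            "Rb (Ser/Thr)", "nucleophosmin/B23"]
control = np.array([1200.0, 950.0, 800.0, 1500.0, 1100.0, 600.0])
treated = np.array([510.0, 430.0, 390.0, 640.0, 480.0, 1350.0])

log2_ratio = np.log2(treated / control)
for name, ratio in zip(epitopes, log2_ratio):
    if abs(ratio) >= 1.0:
        direction = "down" if ratio < 0 else "up"
        print(f"{name}: log2 ratio {ratio:+.2f} ({direction} under Triflorcas)")
```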
Consistent with our biochemical studies, we found that the phosphorylation state of several components within the PI3K/Akt pathway was altered in cells exposed to Triflorcas. In particular, reduced phosphorylation levels were observed for Akt1, mTOR/FRAP, p70S6Kβ1, and S6 ribosomal protein (Figure 6A and B; red circles). Intriguingly, we found that Triflorcas significantly alters the phosphorylation status of two additional proteins: the phosphorylation of Rb on different Ser/Thr residues was reduced (Figure 6A and B; blue circles), while the phosphorylation of nucleophosmin/B23, a nucleolar protein found to be significantly abundant in tumors [25], was increased (Figure 6A and B; green circles). Western blot analysis confirmed changes in the phosphorylation state of the Rb and nucleophosmin/B23 proteins after Triflorcas treatment (Figure 6C). As both Rb and nucleophosmin/B23 are key regulators of cell cycle progression, we next evaluated whether these changes in phosphorylation state were paralleled by alterations of the cell cycle. Cells were stained with propidium iodide and their distribution in the different cell cycle phases was assessed by flow cytometric analysis. Untreated GTL-16 cells were proliferating, as confirmed by the flow cytometry pattern (Figure 7A and data not shown). Triflorcas treatment alters GTL-16 cell cycle distribution, with an increase of the G0/G1 cell population at the expense of the S and G2/M populations (Figure 7A and data not shown). As controls, treatment of GTL-16 cells with either SU11274 or nocodazole led to a block in G0/G1 or G2/M phase, respectively (data not shown). Thus, Triflorcas affects cell cycle progression of GTL-16 cells. Morphological analysis of cells exposed to Triflorcas showed a significant increase in the number of multinucleated cells, which indicates mitotic failure (Figure 7B and C). In contrast, no mitotic failure was observed in cells treated with SU11274, crizotinib, or PHA665752 (Figure 7C). Together, these findings show that the anti-tumor activity elicited by Triflorcas may in part be accounted for by phosphorylation changes of distinct signaling targets, such as components of the PI3K/Akt pathway, Rb, and nucleophosmin/B23, which correlate with alterations in cell cycle progression and mitotic failure.

Triflorcas Impairs Survival and Anchorage-independent Growth of Human Cancer Cells Characterized by RTK Swapping

One major limitation of molecular therapies using agents targeting distinct RTKs is the drug resistance mechanism, which can be either constitutive or acquired after treatment. In this context, it has been shown that Met, ErbBs, and PDGFRs can reciprocally substitute for each other to maintain the activity of RTK-driven oncogenic pathways [9,26-29]. We therefore evaluated the inhibitory properties of Triflorcas in cancer cell lines in which reciprocal substitution of Met, ErbBs, and PDGFRβ confers resistance to single RTK inhibition. Human glioblastoma-astrocytoma U87 cells were used as a model of RTK swapping [27]. Survival and anchorage-independent growth assays were performed by comparing the effectiveness of Triflorcas to that of other Met inhibitors. We found that Triflorcas impaired U87 cell survival in a dose-dependent manner compared to SU11274, crizotinib, and PHA665752 (Figure 8A).
Notably, Triflorcas drastically reduced anchorage-independent growth of U87 cells, whereas both SU11274 and Gefitinib (ErbB1 inhibitor) elicited only moderate anchorage-independent growth inhibition (Figure 8B and C), as previously shown [27]. By evaluating the compound IC50, we found that the Triflorcas inhibitory effects were elicited at lower doses compared to those required in GTL-16 Met-addicted cells (U87: 0.2 µM; GTL-16: 0.811 µM), and more effectively than those elicited by SU11274 or Gefitinib (1.9 µM and 9.5 µM, respectively; Figure 8C). We therefore investigated whether Triflorcas also acts on target(s) other than Met. ErbB1 and PDGFRβ were two obvious candidates, as they are responsible for RTK swapping. We excluded that Triflorcas acts on ErbB1 because: a) ErbB1 phosphorylation was unaffected by Triflorcas in BT474 cells (Figure S1C) [19], and b) ErbB1 is not predominantly phosphorylated in U87 cells under normal conditions (data not shown), as previously reported [27]. In contrast, we observed a drastic reduction in PDGFRβ phosphorylation when cells were exposed to Triflorcas (Figure 8D). Remarkably, Triflorcas efficiently reduced PDGFRβ phosphorylation at lower doses compared to Imatinib or Nilotinib (Figure 8D), two agents targeting PDGFRβ, c-Kit, and c-Abl [30]. Triflorcas almost abolished PDGFRβ phosphorylation in U87 cells at 0.3 µM, whereas Imatinib or Nilotinib only partially reduced PDGFRβ phosphorylation at 3 µM (Figure 8D). Together, these findings show that Triflorcas exerts its anti-tumorigenic activity also on cancer cells with oncogenic RTK swapping.

Triflorcas Elicits a Distinct Growth Inhibitory-response Profile in Cancer Cell Lines

To further elucidate the anticancer properties of imidazo[2,1-b]benzothiazol-2-ylphenyl moiety-based agents, we evaluated Triflorcas bioactivity by applying the disease-oriented NCI Anticancer Drug Screen [31]. This developmental therapeutics program has historically allowed the efficient capture of compounds with anti-proliferative activity. In a preliminary test, Triflorcas was assayed at a single concentration of 10 µM in the full NCI60 cancer cell line panel (Figure S4). As it satisfied the predetermined threshold inhibition criteria established for the NCI Anticancer Drug Screen, according to a minimum number of targeted cell lines, Triflorcas anticancer activity was then evaluated using a full range of concentrations, in agreement with standard protocols (10 nM, 100 nM, 1 µM, 10 µM, and 100 µM). Results were expressed as the percentage of living cells following 48 hours of incubation (Figure 9). A decrease in cell number was seen in a proportion of cancer cells, with a mean log10 GI50 (growth inhibition) of −5.5 ± 0.5 M (Figure S5). The mean log10 LC50 (lethal concentration) calculated for these cell lines was −4.3 ± 0.3 M (Figure S5). The COMPARE algorithm allows the identification of compounds whose pattern of growth inhibition is similar to the agent of interest [31]. Using this approach, we found that the Triflorcas activity correlated only minimally with that of known standard chemotherapeutic drugs (maximal correlation 0.335; Table S4). We further widened the comparison to the publicly available data from synthetic compounds screened on the NCI60 cancer cell line panel, and found that none of these compounds significantly matched Triflorcas, as the maximal correlation reached only 0.63 (Table S5).
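The COMPARE step above boils down to correlating two compounds' per-cell-line sensitivity fingerprints; a correlation near 1 would suggest a shared mechanism, while the 0.335 and 0.63 maxima reported here indicate no close match. Below is a minimal sketch of that computation, assuming sensitivity profiles are given as mappings from cell line to log10 GI50; the dictionaries and their values are illustrative placeholders, not data from the screen or the NCI software itself.

```python
# Minimal sketch of a COMPARE-style profile comparison: Pearson correlation
# between two compounds' log10(GI50) fingerprints over shared cell lines.
# The input dictionaries below are hypothetical placeholders.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def compare_score(profile_a, profile_b):
    # Use only cell lines screened for both compounds.
    shared = sorted(set(profile_a) & set(profile_b))
    return pearson([profile_a[c] for c in shared],
                   [profile_b[c] for c in shared])

triflorcas_gi50 = {"GTL-16": -5.9, "U87": -6.7, "H1437": -5.6, "BT474": -4.1}
reference_gi50 = {"GTL-16": -4.8, "U87": -5.0, "H1437": -4.9, "BT474": -4.7}
print(f"COMPARE-style correlation: {compare_score(triflorcas_gi50, reference_gi50):.3f}")
```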
Thus, the unique growth inhibitory-response profile on cancer cells corresponding to solid tumors and leukemia indicates that imidazo[2,1-b]benzothiazol-2-ylphenyl moiety-based agents are characterized by a novel mechanism of drug action. To gain insight into potential molecular signatures characterizing cancer cells sensitive to Triflorcas, we performed bioinformatics studies using a large set of signaling databases. In particular, we compared the response of the NCI60 cancer cell line panel to Triflorcas with NCI data resources from three databases: ''Microarrays'', ''All NCI dataset'', and ''Only Protein subset NCI'' (Figure 10). These studies highlighted a significant correlation between Triflorcas responsiveness and specific molecular changes (belonging to strong positive and weak positive correlation values). Signals with a high correlation score included: cytoskeleton-associated protein 4 (CKAP4), secernin 1 (SCRN1), mitogen-activated protein kinase kinase 2 (MAP2K2), myristoylated alanine-rich C kinase substrate (MARCKS), SMAD4, FIP1L1, p53, insulin-like growth factor binding protein 2 (IGFBP2), forkhead box O3 (FOXO3), and tuberous sclerosis 2 (TSC2) (Figure 11A). Notably, FOXO3 and TSC2 are known to be regulated by the PI3K/Akt pathway, which is targeted by Triflorcas (Figures 2 and 6). To further support the bioinformatics outcomes, we experimentally assessed Triflorcas effects on signals highlighted in the three lists: ''microarrays'', ''all NCI dataset'', and ''only protein subset NCI''. For this purpose, GTL-16 cells were transfected with luciferase reporter plasmids that enable measuring the activity of the p53, Smad2/3/4, AP-1 (as readout of the MAP2K2-JNK pathway) or NFAT (as readout of the PKC-MARCKS pathway) promoters. As Triflorcas does not affect ERK signaling (Figure 2B and C), an Elk1-SRF reporter plasmid was used as negative control. Luciferase activity was measured in cells after 48 hours of treatment with vehicle or Triflorcas. Consistent with the bioinformatics studies, we found that Triflorcas enhanced luciferase activity controlled by the p53, Smad2/3/4, AP-1, and NFAT, but not Elk1-SRF, promoters (Figure 11B). As JNK/AP-1 pathway activation can lead to distinct biological outcomes ranging from apoptosis induction to enhanced survival, tumor progression, and metastasis, according to its strength of stimulation and the signaling context [32], it will be relevant to assess JNK-AP-1 function in a panel of cancer cells sensitive to Triflorcas. Together, these studies show that compounds characterized by the imidazo[2,1-b]benzothiazol-2-ylphenyl moiety define a new class of chemical agents displaying anticancer activity towards distinct cancer cell types, according to molecular signatures indicated by bioinformatics studies.

Figure 4. The expression profile of 84 genes related to cell stress and toxicity was analyzed in GTL-16 cells. Cells were treated with either Triflorcas (black columns; 3 µM) or SU11274 (grey columns; 1 µM) for 24 hours, and gene expression was compared to that of untreated cells. Genes were grouped in clusters corresponding to oxidative/metabolic stress, heat shock, proliferation/carcinogenesis, growth arrest/senescence, inflammation, and apoptosis/necrosis signaling. Only statistically significant changes in gene expression are indicated (P < 0.05). Triflorcas altered the expression of only 14 genes compared to the alteration of 39 genes induced by SU11274 treatment. Notably, the expression of cyp1a1 was increased 611-fold in the presence of Triflorcas.
(B) Western blot analysis showing the up-regulation of CYP1A1 protein levels in cells exposed to Triflorcas (3 or 10 µM). Acacetin (ACA; 10 µM) treatment prevented CYP1A1 up-regulation by Triflorcas (TFC). (C) CYP1A1 up-regulation by Triflorcas also occurred in ErbB1-addicted cancer BT474 cells. Gefitinib (Gef; 10 µM) and SU11274 (SU; 2 µM) were used as controls.

Discussion

Aberrant Met signaling in tumors recapitulates all the biological events controlled by Met during embryogenesis [33-42] and regenerative processes [43,44]. To target oncogenic Met signaling, we originally generated a virtual chemical library of known anticancer agents and assessed their ability to interact with the Met active site through computer modeling studies [12]. We reasoned that the flexibility of the Met active site may offer the benefit of generating compounds in which anticancer properties and Met inhibitory features can be merged. The previously described imidazo[2,1-b]benzothiazol-2-ylphenyl compounds interact with the Met active site, as evaluated by in silico studies, interfere with Met phosphorylation, as assessed through biochemical and in vitro kinase assays, and hamper survival and anchorage-independent growth of Met-dependent cancer cells [19]. In the present study, we assessed the anticancer properties of Triflorcas, one of the most biologically active agents containing the imidazo[2,1-b]benzothiazol-2-ylphenyl moiety, on a panel of cancer cells. Our findings suggest that Triflorcas and its derivatives are promising agents to further exploit for targeting cancer cells: a) carrying Met amplification (such as GTL-16 cells, as previously shown [19]); b) carrying Met mutations (such as H2122 and H1437 cells); c) characterized by RTK swapping (such as U87 cells). By investigating the mechanism of drug action, we found that extinction of Met oncogenic signaling by Triflorcas occurs through at least three distinct mechanisms: a) by restraining Met activity [19], its phosphorylation, and the phosphorylation of its immediate downstream signals such as Gab1; b) by enhancing Met internalization and degradation; c) by decreasing the phosphorylation levels of Akt and of its downstream targets mTOR, p70S6K, and S6 ribosomal protein, one pathway known to ensure Met dependency of cancer cells [22]. It is possibly the combination of these three actions that allows Triflorcas to be an effective inhibitor of cancer cells with oncogenic Met. Recent studies have highlighted the importance of intracellular trafficking to the cellular response of activated Met in tumorigenesis [45,46]. As Triflorcas enhances Met internalization and degradation, it will be relevant to assess its properties on cancer cells carrying oncogenic forms of Met that render the receptor refractory to degradation [46]. We also show that the inhibitory properties of Triflorcas in cancer cells with RTK swapping can be partially attributed to its capacity to interfere with PDGFRβ phosphorylation. Concerning the PI3K/Akt pathway, it is tempting to speculate that Triflorcas reduces its activation to a threshold level that becomes non-permissive for cancer cells when combined with inhibition of other oncogenic signals. Importantly, the reduction of PI3K/Akt pathway activation by Triflorcas, rather than its complete inhibition with more potent and selective drugs, might have the beneficial effect of minimizing the side effects that limit the use of the latter in the clinic [47,48].
The effects on the PI3K pathway appear to be a direct action of Triflorcas on this pathway rather than merely a consequence of its effects on Met. Several findings support this hypothesis: a) selective inhibition of Akt function compromises in vitro tumorigenesis of H1437 and GTL-16 cells; b) decreased phosphorylation levels of the Akt, but not the Ras/ERK, pathway were observed in H1437 and GTL-16 cells; c) reduced Akt phosphorylation was observed also in ErbB1-addicted cells, which are resistant to Triflorcas. Besides its effects on the PI3K/Akt pathway, Triflorcas also influences the phosphorylation states of Rb and nucleophosmin/B23, two key regulators of cell cycle progression. Consistently, we found that Triflorcas treatment increases the G0/G1 cell population, leading to mitotic failure. Future studies will clarify how Triflorcas and its derivatives influence Rb and nucleophosmin/B23, whether there is a correlation between changes in their phosphorylation levels and alteration of the PI3K/Akt pathway, and whether these signaling alterations cause microtubular network dynamic instability and impaired mitotic spindle formation. Our stress and toxicity RT-PCR array studies evidenced two additional properties of 2-phenylimidazo[2,1-b]benzothiazole derivatives. First, in contrast to SU11274, Triflorcas changes the expression of only 14 out of 84 genes, which are predominantly related to oxidative/metabolic stress, necrosis/apoptosis, and growth arrest/senescence. This limited alteration of stress and toxicity gene expression by Triflorcas correlates with the absence of major side effects observed in cultured neurons and hepatocytes [19], as well as in vivo (Figure 3A and B). Second, the expression levels of CYP1A1 are drastically up-regulated in tumor cells treated with Triflorcas. CYP1A1 belongs to the CYP1 cytochrome P450 family and has been implicated in the cancer cell response to therapeutic agents by biotransforming them from prodrugs to active drugs [49]. CYP1A1 is the most up-regulated gene in cancer cells exposed to the benzothiazole derivative Phortress, its precursor (5F-203), or its desfluoro derivative (DF-203), chemotherapeutic prodrugs currently evaluated in clinical trials [50]. It is well established that these benzothiazole derivatives up-regulate, bind covalently to, and are metabolically bioactivated by CYP1A1 in sensitive cells [51,52]. As Phortress and Triflorcas contain a similar heterocyclic moiety (Figure S6), it is reasonable that this part of the molecule plays a relevant role in up-regulating CYP1A1 expression in cancer cells. Future studies will clarify whether CYP1A1 influences the effects of Triflorcas by generating specific metabolites. Comparing the profiles of NCI cancer cells responding to Triflorcas and to benzothiazole compounds [49,51], we intriguingly found a limited overlap, indicating that Triflorcas hampers cancer cells through mechanisms of drug sensitivity distinct from those of Phortress and its derivatives. We have also evaluated the functional relevance of CYP1A1 up-regulation by Triflorcas for its anti-tumorigenic activity and found that CYP1A1 pharmacological impairment did not significantly influence the inhibitory properties of Triflorcas on cell survival and anchorage-independent growth (data not shown).
Although the up-regulation of CYP1A1 does not appear to be the main event by which Triflorcas elicits its anti-tumor effects, we cannot exclude that, in some neoplastic cells, the modulation of anticancer pharmaceuticals by CYP1A1 may prove to be an advantage on top of the additional mechanisms of action of 2-phenylimidazo[2,1-b]benzothiazole derivatives. Genome-wide profiling and protein-network-based studies have recently established two important aspects related to cancer complexity. First, tumor evolution is often characterized by the acquisition of mutations in a small number of ''core pathways'' [4,5]. Therefore, agents targeting a core pathway at distinct levels could restrain its oncogenic contribution and possibly minimize resistance mechanisms. Second, RTKs share several effectors that participate in the oncogenic process and in the drug response [9]. This opens the possibility of designing anticancer therapies targeting mandatory signals in addition to hitting RTK activity directly. The benefit of combining drugs acting at distinct oncogenic levels is a well-established principle of cancer therapy, supported by several experimental settings [10]. However, determining which combinations would maximize effectiveness among the limitless possibilities remains a major challenge. Our studies establish that the flexibility of Met to accept different inhibitor binding modes offers the possibility to develop drugs targeting oncogenic Met signaling dependency at different levels as an alternative strategy. Although these agents might be less potent in directly impairing Met function compared to others discovered through exhaustive co-crystallographic studies, they may offer the possibility of lowering the activation of oncogenic dependency to non-permissive threshold levels, minimizing redundant pathways, either originally present in the tumor cells or acquired following treatment. Similarly to Sorafenib, and in line with the outcomes reported here for Triflorcas, it is conceivable that compounds containing the imidazo[2,1-b]benzothiazol-2-ylphenyl moiety may have the ability to influence the oncogenic program by modulating distinct targets, as also evidenced by the disease-oriented NCI Anticancer Drug Screen. Thus, its novel mechanism of drug action together with a favorable side effect profile makes the 2-phenylimidazo[2,1-b]benzothiazole a relevant moiety to be further explored for the treatment of a broad range of tumor types.

Cell Culture

Human non-small-cell lung cancer (NSCLC) H1437 and H2122 cell lines, human breast cancer BT474 cells, and human glioblastoma-astrocytoma U87 cells were acquired from the American Type Culture Collection. Human gastric carcinoma GTL-16 cells are subclones of MKN45 cells (from the Riken Cell Bank) obtained by limiting dilution [22]. H1437, H2122, BT474, and GTL-16 cells were grown in RPMI medium (Gibco-BRL), whereas U87 cells were grown in Eagle's Minimum Essential Medium (ATCC). Culture media were supplemented with 4 mM L-glutamine, 10% (v/v) fetal bovine serum (Gibco-BRL), 100 U/mL penicillin, and 100 µg/mL streptomycin. Cells were kept at 37 °C in a humidified atmosphere of 5% CO2. H2122 and H1437 cells were used for survival and anchorage-independent growth assays, respectively, according to their suitability for these biological assays.
Compound Treatments

SU11274, Gefitinib, LY294002, acacetin, and resveratrol were purchased from Calbiochem; chlorpromazine and nocodazole from Sigma; crizotinib (PF-2341066) from Active Biochemicals; PHA665752 from Tocris Bioscience. Imatinib and Nilotinib were kindly provided by E. Buchdunger and P. Manley (Novartis Pharma AG, Basel, Switzerland); A-443654 (Akt inhibitor) was kindly provided by V.L. Giranda (ABBOTT Laboratories, Illinois, USA). For survival assays (H2122, GTL-16, and BT474 cells) and cell cycle analyses (GTL-16 cells), cells were cultured in serum-free media for 24 hours prior to compound addition for 48 hours. Survival assays with U87 cells were carried out in 0.1% serum. Viability was assessed with the Cell-Titer-Glo Luminescent Assay (Promega). For in vitro tumorigenesis, soft agar growth assays were performed as previously described [19]. Data on biological assays are representative of three independent experiments performed in duplicate or triplicate.

In vivo Assays

Mice were kept at the IBDML animal facilities. All procedures involving the use of animals were performed in accordance with the European Community Council Directive of 24 November 1986 on the protection of animals used for experimental purposes (86/609/EEC). The experimental protocols were carried out in compliance with institutional Ethical Committee guidelines for animal research (comité d'éthique pour l'expérimentation animale - Comité d'éthique de Marseille; agreement number D13-055-21 by the Direction départementale des services vétérinaires - Préfecture des Bouches du Rhône). To evaluate compound toxicity in vivo, the weight of mice treated with Triflorcas (intraperitoneal (i.p.) injection: 30 mg·kg⁻¹) or vehicle was measured before, during, and after treatment. For heart, spleen, kidney, and liver weight, mice were sacrificed after 21 days of daily Triflorcas or vehicle treatment. Tumor xenografts were established by subcutaneous injection of H1437 cells (5×10⁶) in nude mice (S/SOPF SWISS NU/NU; Charles River). Treatment was initiated when tumors reached an average volume of 15 mm³ (approximately 7 days after cell injection; n = 10 mice per group). Mice were injected with: Triflorcas (i.p., 30 or 60 mg·kg⁻¹ of body weight) or vehicle every other day; crizotinib (oral gavage, 50 mg·kg⁻¹ of body weight, in agreement with the standard protocol) daily. Triflorcas was formulated in Cremophor-EL:DMSO (1:1, v/v) and diluted in sterile 0.9% (w/v) sodium chloride. This formulation is classically applied for the administration of different chemical agents and does not elicit toxic effects, as revealed by the absence of changes in mouse body weight during treatment. Mice were then sacrificed after 21 days of treatment. Tumor volume was determined from caliper measurements of tumor length (L) and width (W) according to the formula LW²/2. Tumor size was measured every week and at the end of the experiment. Tumor weight was established at the end of treatment. Two independent assays were performed (n = 8 mice per group). For tumor xenograft studies with GTL-16 cells, cells (10⁶) were i.p. injected in nude mice. Mice were treated with Triflorcas (i.p., 30 mg·kg⁻¹) or vehicle on day 1 and treatment was repeated every other day. Mice were then sacrificed after 21 days of treatment. Tumor nodules present in the peritoneal cavity were isolated and quantified according to their diameter and their total weight. Two independent assays were performed (n = 8 mice per group).
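The caliper formula above, V = LW²/2, is the standard ellipsoid approximation for xenograft volume. A small sketch of the computation is shown below; the measurement values are made up for illustration, chosen near the 15 mm³ treatment threshold and the ~118 mm³ volumes mentioned in the text.

```python
# Tumor volume from caliper measurements using the ellipsoid
# approximation V = L * W^2 / 2 described in the text.
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    # By convention, width is the smaller of the two measurements.
    length_mm, width_mm = max(length_mm, width_mm), min(length_mm, width_mm)
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical readings: treatment starts near the 15 mm^3 threshold.
for length, width in [(4.0, 2.7), (7.5, 5.6)]:
    print(f"L={length} mm, W={width} mm -> {tumor_volume_mm3(length, width):.1f} mm^3")
```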
Gene Expression Analysis

For gene expression analysis, GTL-16 cells were cultured in serum-free media for 24 hours at approximately 40% confluence, prior to compound addition for 24 hours (Triflorcas: 3 µM; SU11274: 1 µM). Total RNA was isolated from untreated or treated cells using the RNeasy Kit, processed with DNase (RNase-free DNase set), and purified on an RNeasy column (Qiagen). The quality of the RNA was tested using a Picochip (Agilent Technologies). Gene profiling was done by the SuperArray Biosciences service using the RT2-profiler PCR array Stress and Toxicity Pathway Finder (96 genes). The data were imported into an Excel database and analyzed using the comparative cycle threshold method with normalization of the raw data to β-actin. The results are presented as n-fold changes versus the values of untreated cells. The mean value was calculated from measurements of three independent biological samples.

KINOMEscan

The activity of Triflorcas was assessed on a panel of 98 kinases through binding assays using the KINOMEscan service. The vehicle alone was used as negative control. The KINOMEscan is based on a competition binding assay that quantitatively measures the ability of a compound to compete with an immobilized, active-site directed ligand.

Kinexus

Cells were treated with Triflorcas (3 µM) or vehicle for 72 hours. Protein extraction was performed as described by the manufacturer (Kinexus Bioinformatics, Vancouver, Canada). Samples were then analyzed by the KINEXUS service using the phospho-array KPSS-10.1.

FACS Analysis

Cells were treated with Triflorcas (3 µM), SU11274 (1 µM), or vehicle for 48 hours. Cells were then fixed with 70% ethanol and washed twice in PBS before treatment with RNase (100 µg/mL). After staining with propidium iodide (50 µg/ml), cells were analyzed by flow cytometry.

NCI60 Screening

Triflorcas was screened by the NCI against a panel of 60 cell lines. Cell sensitivity was assessed and results were expressed as TGI (Total Growth Inhibition), GI50 (50% Growth Inhibition), and LC50 (50% Lethal Concentration). Triflorcas action was evaluated against data from public resources including CellMiner and the COMPARE software. The COMPARE algorithm was used to search among compounds tested on the NCI60 panel for those with a sensitivity profile similar to that of Triflorcas. Luciferase constructs (Cignal Reporters; QIAGEN) were used for reporter assay studies and experiments were performed according to the manufacturer's instructions.

Bioinformatics Studies

To perform bioinformatics studies, we used three datasets downloaded from the NCI data resources to identify potential molecular signatures of NCI cancer cells sensitive to Triflorcas (last update August 2010, http://dtp.nci.nih.gov/mtargets/download.html). These databases included: ''Microarrays'', which contains 74,700 measured targets derived from large-scale experiments (Affymetrix U133 from Chiron, Affymetrix U95A from Novartis, Affymetrix HUM6000 from Millennium Pharmaceuticals, and cDNA array data from the Weinstein (NCI) and Brown & Botstein (Stanford) groups); ''all NCI dataset'', which contains 12,845 measured targets derived from all small-scale measurements such as protein, mRNA, miRNA, DNA methylation, mutations, SNPs, enzyme activity, and metabolites, but excluding Microarrays; and ''only Protein subset NCI'', which contains 333 measured targets derived from the NCI protein screening subset. The whole bioinformatics analysis involved three steps.
First, to compare data, we used the developmental therapeutics program mean graph for Triflorcas (five-dose average data at the GI50 endpoint) to extract the sensitivity level of each cancer cell line tested. We then extracted the expression level of the signals scored in each NCI cancer cell line. To allow comparison between Triflorcas cell sensitivity values and molecular target expression values, each value was normalized according to a standard normal distribution (standard score defined as: mean = 0; standard deviation = 1). Second, we scored the correlation existing between the Triflorcas sensitivity values and each molecular target value by using a statistical correlation coefficient algorithm. The correlation coefficient spans values between −1 and +1, and it permits the definition of five classes of correlation: strong negative correlation (−1 to −0.5), weak negative correlation (−0.5 to 0), weak positive correlation (0 to 0.5), strong positive correlation (0.5 to 1), and no correlation (0). Only positive correlations (weak or strong) were included in these studies in order to extrapolate mainly putative targets of Triflorcas. Third, results were displayed in order to verify the strength of the global (whole cell lines) or specific (individual cell line) correlation. For this purpose, we defined a specific color code to highlight differences and similarities between the sensitivity profile of Triflorcas and changes in molecular targets. The color code was assigned according to the standard score described above. All values above 0 represent sensitivity of a given cell line to Triflorcas and are indicated by green squares. All values below 0 stand for no sensitivity and are represented by red squares. The same approach was applied to encode the expression of a given molecular target in each cell line: values above 0 were represented by green squares; values below 0 were represented by red squares. As not all cell lines screened for Triflorcas were included in the analysis of some molecular targets, unavailable data were represented by gray squares and were not taken into account for statistical analysis. Finally, a web-based application was designed in order to: 1) automatically convert the sensitivity of Triflorcas and the changes in molecular targets into a compliant numeric format; 2) score the potential degree of correlation between the cells' response to Triflorcas and the large set of molecular targets, using a statistical correlation coefficient algorithm; 3) sort potential molecular targets according to their correlation score; and 4) display the result in a web browser to retrieve a short list of molecular targets highlighting possible molecular signatures.

Luciferase Reporter Assay

GTL-16 cells were transfected with luciferase reporter plasmids that enable measuring the activity of the p53, Smad2/3/4, AP-1 (as readout of the MAP2K2-JNK pathway), NFAT (as readout of the PKC-MARCKS pathway), and Elk1-SRF (as readout of ERK signaling) promoters (Cignal Reporters, Qiagen). Transfection was performed using Lipofectamine according to the manufacturer's protocol (Invitrogen). After 24 hours of transfection, cells were treated or not with Triflorcas (3, 10, and 30 µM) for 48 hours, and then luciferase activity was assessed. All luciferase assays were performed using a dual Luciferase assay kit according to the manufacturer's protocol (Promega).
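The three-step workflow under Bioinformatics Studies (z-score normalization, correlation scoring, and banding into correlation classes) maps directly onto a few lines of numpy. The sketch below is a simplified reconstruction under that description; the array shapes, variable names, and random inputs are assumptions for illustration, not the authors' web application.

```python
# Sketch of the three-step target-correlation analysis described above:
# 1) standardize sensitivity and target-expression values (mean 0, sd 1),
# 2) correlate drug sensitivity with each target across cell lines,
# 3) band the coefficient into the five classes used for display.
import numpy as np

def zscore(v):
    return (v - v.mean()) / v.std()

def correlation_class(r):
    if r == 0.0:
        return "no correlation"
    if r <= -0.5:
        return "strong negative"
    if r < 0.0:
        return "weak negative"
    if r < 0.5:
        return "weak positive"
    return "strong positive"

# Hypothetical inputs: sensitivity of n cell lines to the drug, and an
# (n_targets x n) matrix of molecular-target measurements per cell line.
rng = np.random.default_rng(0)
sensitivity = zscore(rng.normal(size=20))
targets = rng.normal(size=(5, 20))

for i, row in enumerate(targets):
    r = float(np.corrcoef(sensitivity, zscore(row))[0, 1])
    if r > 0:  # only positive correlations were retained in the study
        print(f"target {i}: r = {r:+.2f} ({correlation_class(r)})")
```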
2016-05-18T15:27:46.953Z
2012-10-05T00:00:00.000
{ "year": 2012, "sha1": "c0fc4c2de246ff7c0123aaa778750eefb1a20e35", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0046738&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4798fad6f68520fa7d5af1760f259242ef886264", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267406109
pes2o/s2orc
v3-fos-license
Fixator-assisted Percutaneous Plate Fixation of Complex Diaphyseal Tibial Fractures

ABSTRACT

Aim: The purpose of this study is to evaluate the results of indirect reduction and fixation of comminuted diaphyseal tibial fractures using a temporary simplified external fixator and plate osteosynthesis through a limited incision approach, with special consideration of the duration of surgery and the rate of complications. Materials and methods: In this prospective case series study, 41 cases of comminuted diaphyseal tibial fractures were included. Twenty-two were closed fractures, 15 were grade I open fractures, and four were grade II open fractures. Patients were evaluated clinically according to the lower extremity functional scale (LEFS). Results: Of the 41 cases, 38 were followed up for at least 1 year. Using the LEFS, final scores ranged from 67–80 (mean 75). Union was achieved in all cases except one, which united after bone grafting. The mean time to radiological healing was 12 weeks. Operative time from skin incision to closure ranged between 65 minutes and 100 minutes (mean of 80 minutes). There were four cases of superficial infection. Conclusion: Treatment of comminuted tibial fractures through the use of a simplified external fixator to aid and maintain the reduction, whilst limited incisions are then used for minimally-invasive plate osteosynthesis, is an effective and time-saving method with a low complication rate. How to cite this article: Nada AA, Romeih M, El-Rosasy M. Fixator-assisted Percutaneous Plate Fixation of Complex Diaphyseal Tibial Fractures. Strategies Trauma Limb Reconstr 2019;14(1):25–28.

Introduction

Comminuted fractures of the tibia are challenging injuries. The choice of fixation depends on multiple factors including the fracture pattern, proximity to the joints, bone quality, concomitant soft-tissue injury, the general condition of the patient and the available equipment. The usual methods of treatment for such fractures include conventional plate fixation, biological plate fixation, intramedullary nailing and external fixation. Intramedullary nailing for tibial shaft fractures is considered the standard of care, with satisfactory results in most cases.1 External fixation offers some benefits in terms of soft-tissue management and for severe comminution. However, concerns remain over the risks of pin tract infection, joint stiffness and the inconvenience of the bulky device. Biological fixation by minimally-invasive locking plate osteosynthesis has become an attractive option for treating comminuted tibial fractures. The basic principles of this technique include indirect closed reduction, extraperiosteal dissection, plate osteosynthesis through limited approaches, functional alignment and relative stability with controlled motion at the fracture site for secondary bone healing through callus formation.2,3

Materials and Methods

This is a retrospective review of 41 patients with comminuted tibial shaft fractures. Patients were referred to the emergency department of Tanta University Hospital between March 2012 and March 2016. On admission, a detailed history and clinical examination with the necessary laboratory investigations were performed. At least two X-ray views were taken of the affected leg. In cases where fracture extension into a joint was identified, additional CT scans were obtained (Table 1). Surgery was performed within 10 days of injury. A broad-spectrum antibiotic was given with the induction of anesthesia.
The procedure was carried out with the patient supine on a standard radiolucent orthopedic table. No tourniquet was used. Provisional reduction was obtained using a temporary monolateral external fixator. One Schanz pin (6 mm) was inserted in the proximal part of the tibia, parallel to the knee joint line and from medial to lateral. Another was inserted in the calcaneus or distal tibia (according to the fracture level), from lateral to medial and parallel to the ankle joint line. A side bar was attached using clamps to the proximal and distal half pins in a crossed fashion (Fig. 1) but not locked until reduction was achieved manually by axial traction and manipulation. The crossed configuration of the fixator allows axial distraction across the fracture site without applying a varus or valgus moment. In some cases, the fibula was reduced and fixed first by an intramedullary wire or flexible nail to facilitate tibial fracture reduction and aid the correction of valgus or varus angulation at the fracture site. Reduction was confirmed using an image intensifier (Fig. 2). The plate was inserted on the medial surface of the tibia (Fig. 1) unless a contraindication, e.g., unfavorable skin conditions or a thin patient, was identified. For proximal and midshaft fractures, antegrade plating was performed using locking proximal tibial plates. Two small incisions were made proximal and distal to the fracture site: a small medial or lateral oblique incision on the upper part of the tibia, and another small distal incision corresponding to the distal end of the plate. For distal tibia fractures, we used a curved anteromedial incision centered over the medial malleolus (Fig. 3). A long periosteal elevator or a long plate is inserted through the incision and manipulated in the subcutaneous, extraperiosteal tissues to create a tunnel. An image intensifier was used to check the position and length of the plate before insertion and fixation (Fig. 3). The appropriate positioning of such anatomical precontoured plates assured a good reduction. Two non-locked screws were inserted first, one on each side of the fracture, to drag the main bone fragments to the plate. Subsequently, locking screws were inserted, with a total of at least four screws on each side, using the near-near, far-far configuration for stability (as is the case when using external fixators), for the plate functions as an internal fixator. The aim was to keep no more than half of the plate holes filled with screws. The fixator was then removed and the wounds were irrigated and repaired in layers (Fig. 4). No drains were inserted. In the open fractures, 13 of 15 patients had wound debridement, irrigation, primary wound closure and application of a splint on the first day. Definitive fixation was performed within 1 week after the first operation. The remaining two open fractures, both grade I using the Gustilo and Anderson classification, had debridement and definitive fixation done at the same sitting within 24 hours of injury. This difference was related to the availability of the operating surgeon at the time of trauma, not to patient-related reasons. Active range of motion exercises were started for the knee and ankle joints once postoperative pain had subsided. Non-weight-bearing ambulation was initiated in the first week. The wound was inspected on the second postoperative day and sutures were removed 12–14 days postoperatively.
Partial weight-bearing ambulation was then started for 6 weeks, with full weight-bearing after 12 weeks or when at least 3 out of 4 cortices showed signs of bridging callus or bony continuity.

Results

Of the 41 patients included in this study, 38 were followed up for at least 1 year. The period of follow-up ranged from 1 year to 4 years, with an average of 15 months. The delay to surgery ranged from 1 day to 10 days, with a mean of 5 days. The mean age of the patients was 36.8 years. The average operative time was 80 minutes, with a range of 65–100 minutes. Intraoperative blood loss was recorded by measuring the amount of blood in the suction machine and an estimation from the degree of blood saturation of gauze swabs;4 this ranged from 20 mL to 120 mL, with an average of 53.6 mL. The results of treatment were analyzed clinically and radiologically. Clinical assessment was performed using the LEFS, a questionnaire containing 20 questions about the individual's ability to perform everyday tasks; the higher the score, the better the function.5 Using this scale, our results ranged from 67 to 80 (mean 75) (Figs 5 and 6). Radiological union was defined as when at least three out of four cortices of the tibia showed bony continuity or bridging callus. The mean time to radiological healing was 12 weeks; it ranged from 8 weeks to 36 weeks. There were four cases (9.7%) of superficial infection. Three of them were open fractures (two grade II and one grade I) and all infections were at the site of the open wound. Treatment was by surgical debridement followed by IV antibiotics. There was one case of aseptic nonunion which needed bone grafting at 6 months to achieve complete union. The nonunion was diagnosed after three follow-up reviews at 1-month intervals showed no radiological signs of bone formation or progress to healing.

Discussion

The management of comminuted tibial fractures is variable. Nonoperative treatment is used for stable fractures with minimal displacement, but malunion, shortening, and stiffness of the nearby joints are common.6,7 Open reduction of comminuted tibia fractures and internal fixation with a plate requires a large incision and significant soft tissue dissection to achieve anatomical reduction, with ensuing complications including infection (range 8.3–23%)8,9 and delayed union and nonunion (range 8.3–35%).10-12 A balance between anatomical reduction and soft tissue stripping is required in order to avoid complications. Clinical practice has shifted from the mechanical concept of absolute stability to the biological concept of functional reduction using indirect methods and minimally invasive plate osteosynthesis (MIPO) techniques with relative stability. However, minimally invasive techniques do not allow direct visualization of the fracture and, hence, intraoperative fluoroscopy is required to confirm the reduction. In our study, early surgical fixation was done for the majority of patients, with a mean time to surgery of 5 days. The average time to union was 11.5 weeks, which is comparable to other studies of percutaneous plating of tibial fractures.13,14 The average operative time of 80 minutes is also comparable to other studies of fixation of the tibia by interlocking nails or MIPO plating.15,16 The cases which needed fibular fixation took about 15 minutes longer, but this helped to restore the coronal alignment and the length of the leg and facilitated subsequent steps in the surgery.
The average intraoperative blood loss was 53.6 mL (range 20–120 mL), which is considered minimal blood loss, achieved without using a tourniquet. Notably, this is comparable to other studies in which a tourniquet was used.17 The complications in this study were limited to five cases. Superficial infection was noticed in four patients during early follow-up, appearing after 1 week as redness around the wound sutures. All infections resolved after surgical debridement and intravenous antibiotic administration. There was one case of aseptic nonunion. Here the patient was a 60-year-old smoker and hypertensive; the relatively old age and smoking may explain the occurrence of nonunion. Bone graft was added to achieve union.

Conclusion

Cases treated by biological plating methods show a low incidence of complications, including wound infection, nonunion and the need for additional procedures. The application of a temporary monolateral external fixator was found to be helpful in shortening the operation time by aiding and maintaining the reduction whilst plating was carried out. Fixation of the fibula with an intramedullary flexible nail can be used to facilitate bony realignment, especially in cases with simple fibular fractures.

Consent

A written, informed consent was obtained from all the patients authorizing the treatment and the radiological and photographic documentation. They were informed and consented that the data would be submitted for publication.
2020-01-09T09:07:15.598Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "78fde2e80077ff297efbcc09d1ed2f07e4b0c996", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5005/jp-journals-10080-1422", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cba48d00b97fa1babe5ce190b8aab2ce7824bf1a", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
259341713
pes2o/s2orc
v3-fos-license
A NEW TYPE OF BUBBLE SOLUTIONS FOR A CRITICAL FRACTIONAL SCHRÖDINGER EQUATION. We consider the following critical fractional Schrödinger equation.

Introduction and the main results

In this paper, we consider the following nonlinear elliptic problem (1.1), where $s \in (0,1)$, $N \geq 3$, $2^*_s = \frac{2N}{N-2s}$ is the fractional critical Sobolev exponent and $V(y)$ is a non-negative and bounded potential. For any $s \in (0,1)$, $(-\Delta)^s$ is the fractional Laplacian, a nonlocal operator defined as $(-\Delta)^s u(y) = c(N,s)\,\mathrm{P.V.}\int_{\mathbb{R}^N} \frac{u(y)-u(x)}{|y-x|^{N+2s}}\,dx$, where P.V. denotes the Cauchy principal value and $c(N,s) = \pi^{-(2s+\frac{N}{2})}\,\Gamma(\frac{N}{2}+s)/\Gamma(-s)$. This operator is well defined in $C^{1,1}_{loc}(\mathbb{R}^N) \cap L_s(\mathbb{R}^N)$, where $L_s(\mathbb{R}^N) = \{u : \int_{\mathbb{R}^N} \frac{|u(y)|}{1+|y|^{N+2s}}\,dy < \infty\}$. For more details on the fractional Laplace operator and fractional Sobolev spaces, we refer to [12,16] and the references therein.

In recent years, there has been a great deal of interest in the fractional Laplacian. One of its main advantages is its ability to model anomalous diffusion, such as in plasmas, flame propagation and chemical reactions in liquids. Additionally, the fractional Laplacian is used to model quasi-geostrophic flows, turbulence and water waves, molecular dynamics and the relativistic quantum mechanics of stars (see [5,7,32] and the references therein). In probability and finance, the fractional Laplacian plays an essential role in the theory of Lévy processes, and it can be understood as the infinitesimal generator of a stable Lévy diffusion process (see [3]). This connection with Lévy processes makes the fractional Laplacian a useful tool for modeling various financial products, such as American options (see [16]).

Solutions of problem (1.1) are related to the existence of standing wave solutions of the following fractional Schrödinger equation for all $t > 0$; that is, solutions of the form $\Psi(x, t) = e^{-ict}u(x)$, where $c$ is a constant. When $s = 1$, Chen, Wei and Yan [13] considered the following nonlinear elliptic equation (1.4). They proved that (1.4) has infinitely many non-radial solutions if $N \geq 5$, $V(y)$ is radially symmetric and $r^2 V(r)$ has a local maximum point, or a local minimum point $r_0 > 0$ with $V(r_0) > 0$. Later, Peng, Wang and Wei [28] constructed infinitely many solutions on a circle under a weak symmetry condition on $V(y)$, where they only required that $r^2 V(r, y'')$ has a stable critical point $(r_0, y_0'')$ with $r_0 > 0$ and $V(r_0, y_0'') > 0$. In [18], Duan, Musso and Wei constructed infinitely many solutions, where the bubbles were concentrated at points lying on the top and bottom circles of a cylinder. In [22], Guo, Liu and Nie proved that problem (1.1) has infinitely many solutions concentrated on a circle.

Before stating the main results, let us first introduce some notation. Denote by $D^s(\mathbb{R}^N)$ the completion of $C^\infty_0(\mathbb{R}^N)$ under the norm $\|\cdot\|_{D^s}$, where $\mathcal{F}u$ is the Fourier transform of $u$: $\mathcal{F}u(\xi) = \int_{\mathbb{R}^N} e^{-i\xi\cdot x}u(x)\,dx$. We will construct solutions in the space $H_s(\mathbb{R}^N) = \{u \in D^s(\mathbb{R}^N) : \int_{\mathbb{R}^N} V(y)u^2\,dy < +\infty\}$, with the corresponding norm. We define the functional $I$ on $H_s(\mathbb{R}^N)$, where $(u)_+ = \max(u, 0)$. Then solutions of problem (1.1) correspond to the critical points of the functional $I$.
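Several of the displayed formulas in the passage above were flattened during extraction (the $D^s$ norm, the $H_s$ norm, and the energy functional in particular). The LaTeX block below is a best-effort reconstruction based on the surviving fragments and the standard conventions for this equation; it should be checked against the published version.

```latex
% Best-effort reconstruction of the flattened displays (standard conventions).
\[
\|u\|_{D^s}^2 = \int_{\mathbb{R}^N} |\xi|^{2s}\,|\mathcal{F}u(\xi)|^2\,d\xi,
\qquad
H_s(\mathbb{R}^N) = \Big\{ u \in D^s(\mathbb{R}^N) :
  \int_{\mathbb{R}^N} V(y)\,u^2\,dy < +\infty \Big\},
\]
\[
\|u\|_{H_s}^2 = \|u\|_{D^s}^2 + \int_{\mathbb{R}^N} V(y)\,u^2\,dy,
\qquad
I(u) = \frac{1}{2}\|u\|_{H_s}^2
  - \frac{1}{2^*_s}\int_{\mathbb{R}^N} (u)_+^{2^*_s}\,dy,
\quad 2^*_s = \frac{2N}{N-2s}.
\]
```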
Let $x_j^+ = \big(r\sqrt{1-\bar h^2}\cos\frac{2(j-1)\pi}{k},\ r\sqrt{1-\bar h^2}\sin\frac{2(j-1)\pi}{k},\ r\bar h,\ \bar y''\big)$, $j = 1, \dots, k$, and $x_j^- = \big(r\sqrt{1-\bar h^2}\cos\frac{2(j-1)\pi}{k},\ r\sqrt{1-\bar h^2}\sin\frac{2(j-1)\pi}{k},\ -r\bar h,\ \bar y''\big)$, $j = 1, \dots, k$, where $\bar y''$ is a vector in $\mathbb{R}^{N-3}$, $\bar h \in (0,1)$ and $(r, \bar y'')$ is close to $(r_0, y_0'')$. In this paper, we consider the following three cases of $\bar h$ in the process of constructing solutions: (i) $\bar h$ goes to 1; (ii) $\bar h$ is separated from 0 and 1; (iii) $\bar h$ goes to 0. We would like to point out that in case (ii) the points $\{x_j^+\}_{j=1}^k$ and $\{x_j^-\}_{j=1}^k$ are located around the small circles $S^1_+$ and $S^1_-$, respectively, and the distance between the points $\{x_j^+\}_{j=1}^k$ (resp. $\{x_j^-\}_{j=1}^k$) is like that of the corresponding points in [22]. In case (i), $\{x_j^+\}_{j=1}^k$ and $\{x_j^-\}_{j=1}^k$ go to the North pole and the South pole of $S^2$ simultaneously. In case (iii), $\{x_j^+\}_{j=1}^k$ and $\{x_j^-\}_{j=1}^k$ go to the circle $S^1_0$ at the same time, so the points $\{x_j^+\}_{j=1}^k$ and $\{x_j^-\}_{j=1}^k$ may be very close to each other. Here $S^1_\pm$ denote the horizontal circles at heights $\pm r\bar h$, $S^1_0$ the equatorial circle, and $S^2$ the sphere of radius $r$ on which the points above lie.

The idea for constructing solutions is to glue copies of $U_{x_j^\pm,\lambda}$ together as an approximate solution. In order both to handle the slow decay of this function when $N$ is not big and to simplify some computations, we introduce a smooth cut-off function $\eta(y)$. As for case (i), we assume that $\alpha$ satisfies condition (1.8), where $\nu$, $\theta$ are small constants and $M_1$ is a positive constant.

Theorem 1.1. If $V(y) \geq 0$ is bounded and satisfies (V), then there exists a positive integer $k_0 > 0$, such that for any integer $k > k_0$, problem (1.1) has a solution $u_k$ of the bubbling form built from the points above, under conditions which will be needed in Lemma 2.5 and which guarantee the existence of a small constant $\nu > 0$. Moreover, it is easy to see that these solutions are axisymmetric with respect to the third coordinate, and the number of bubbles can be made arbitrarily large. This blowing-up phenomenon of clustering cannot occur in the case that the potential function $V(y)$ has only isolated critical points. In particular, when taking $y_0'' = 0 \in \mathbb{R}^{N-3}$, $u_k$ concentrates at a pair of points symmetric with respect to the region. Additionally, as for case (ii) and case (iii), we assume that $k > 0$ is a large integer and that condition (1.10) holds, where $a \in [0,1)$, $\theta > 0$ is a small constant, and $M_2$ is a positive constant.

Theorem 1.4. Suppose that $N \geq 3$ and $2N + 3 -$ . If $V(y) \geq 0$ is bounded and satisfies (V), then there exists a positive integer $k_0 > 0$, such that for any integer $k > k_0$, problem (1.1) has a solution $u_k$ of an analogous form.

Remark 1.5. We would like to point out that, since $\bar h$ enters the points $\{x_j^\pm\}$ and the exponent $\frac{N-4s}{N-2s}$ appears in Theorems 1.1 and 1.4 respectively, to obtain good enough estimates for the high-order terms we have to handle all the estimates more carefully, and we have to improve the accuracy of the estimates for the error term $\varphi$ (see Proposition 2.3).

Remark 1.6. We would like to point out that the solutions we obtain are different from those obtained in [22]. Meanwhile, they are also different from the ones obtained in [18], where only the case that $\bar h$ goes to 0 is considered. Now, we outline the main idea of the proof of Theorem 1.1 and discuss the main difficulties in proving the desired results above.
We will prove Theorem 1.1 by a modified finite-dimensional reduction method, combined with various local Pohozaev identities. The finite-dimensional reduction method has been extensively used to construct solutions for equations with critical growth; see [8,10,15,21,23,27,35] and the references therein. In [34], Wei and Yan used the number of bubbles of the solutions as the parameter to construct bubbling solutions for a class of prescribed scalar curvature problems. After that, a number of works focused on looking for infinitely many solutions of non-compact elliptic problems; see [13,15,23,24,28,29,33] and the references therein.

The main purpose of this paper is to construct a new type of bubble solutions, as in [18], where 2k-bubble solutions concentrating at points lying on the upper and lower circles of a cylinder were constructed. In order to relax the assumptions on the function $V(y)$, inspired by [22,29], we try to construct solutions by using various local Pohozaev identities to find the algebraic equations which determine the location of the bubbles. Hence, we can construct 2k-bubble solutions, symmetric with respect to the third coordinate, concentrating even at saddle points of $V(y)$, which enjoys only a weaker sense of symmetry. However, in the present paper, since the bubbles of our solutions may be rather close when $\bar h$ tends to 0 or 1, we need to carry out more delicate computations and obtain more precise estimates. We will discuss this in more detail later.

We would like to point out that we obtain $(r, \bar h, \bar y'')$ by using the reduction argument, rather than determining these parameters by computing the derivatives of the reduced function $F(r, \bar h, \bar y'', \lambda)$ with respect to $r$, $\bar h$ and $\bar y''_k$, $k = 4, \dots, N$, directly. Actually, we cannot impose the condition $\frac{\partial F}{\partial \bar h} = 0$ or the equivalent Pohozaev identity, since it would force $\bar h$ to go to 0, which conflicts with case (i) and case (ii) as $k$ goes to infinity. In order to surmount this obstacle, we require that the bubbling solutions be symmetric with respect to the third coordinate axis. Then we can relax the restriction on the decay rate of $\bar h$ by using a modified reduction method.

Noting that the maximum norm is not affected by the number of bubbles, we need to carry out the reduction procedure in a space with a weighted maximum norm; similar weighted maximum norms have been used in [14,22,28-30,34].

Our paper is organized as follows. In Section 2, we perform a finite-dimensional reduction to obtain a finite-dimensional setting. Then we prove some results for the finite-dimensional problems and prove Theorem 1.1 in Section 3. Theorem 1.4 is proved in Section 4. In Appendix A, we give some essential estimates. In Appendix B, we give the expansion of the energy for the approximate solutions, and in Appendix C, we give the proof of the local Pohozaev identity for the fractional Laplace operator.

The Finite-Dimensional Reduction

In this section, we perform a finite-dimensional reduction using $Z_{r,\bar h,\bar y'',\lambda}$ as an approximate solution. For later calculations, we divide $\mathbb{R}^N$ into $k$ parts $\Omega_j$, $j = 1, \dots, k$, where $\langle \cdot, \cdot \rangle_{\mathbb{R}^2}$ denotes the dot product in $\mathbb{R}^2$. Each $\Omega_j$ is further divided into two separate parts. We also define the constrained space $H$. Now, we consider the following linearized problem (2.1), for some real numbers $c_l$ ($l = 2, \dots, N$). In the remainder of this section, we assume that $(r, \bar y'')$ and $\bar h$ satisfy (1.8).

Lemma 2.1. Suppose that $\varphi_k$ solves (2.1) for $f = f_k$. If $\|f_k\|_{**}$ goes to zero as $k$ goes to infinity, so does $\|\varphi_k\|_*$.
Proof.We argue by contradiction.Suppose that there exist Without loss of generality, we may assume that ϕ k * = 1.For simplicity, we drop the subscript k. From equation (2.1), we have Using Lemma A.7, we can deduce that From the definitions of Z ± j,l , for j = 1, 2, ..., k, we have where β = α N −2s .Combining estimates (2.5) and Lemma A.2, we have where Next, we want to estimate c l , l = 2, 3, ..., N. Multiplying equation (2.1) by Z + 1,t (t = 2, ..., N) and integrating, we see that c l satisfies It is easy to check that for some constant c > 0. It follows from Lemma A.1 and (2.5) that where , and Similarly, we have where θ 0 > 0 is a small constant and we use for and On the other hand, direct calculation gives Hence, which, together with (2.7) and (2.8), yields that which, together with ϕ * = 1, yields that there is R > 0 such that for some j with and u is perpendicular to the kernel of (2. 19), according to the definition of H.As a consequence, u = 0, which is a contradiction to (2.18). From Lemma 2.1, using the same argument as in the proof of Proposition 4.1 in [14], we can prove the following result.Lemma 2.2.There exist k 0 > 0 and a constant C > 0, independent of k, such that for where n 2 = −β, n l = 1 for l = 3, ..., N. Next, we consider the following problem (2.21) First, we give the main result of this section. Proposition 2.3.There exists a positive large integer k 0 , such that for all k ≥ k 0 and λ ∈ where ǫ > 0 is a small constant. In order to use the contraction mapping theorem to prove Proposition 2.3, we need several lemmas.Rewrite (2.21) as where and In order to use the contraction mapping theorem to prove that (2.23) is uniquely solvable, we need to estimate F (ϕ) and l k respectively.Lemma 2.4.If N > 4s + 1 and ϕ * ≤ 1, then which, together with Hölder inequality, yields that where we use (2.10) and (2.11).Therefore, . As for the term J 32 , we divide it into the following three cases: Case 1: If |x − x + j | ≤ σ, together with the definition of the function η, then we have , where we use , using Lemma A.9, then there holds , where we use , where we use . Similarly, we can deduce that . So, we obtain As a result, we have proved that . Now, we are in a position to prove Proposition 2.3. (2.32) By Lemma 2.2, the existence and properties of the solution ϕ to problem (2.23) is simplified to find a fixed point for ) where L k is the linear bounded operator defined in Lemma 2.2. Next, we will prove that A is a contraction map from E to E. In fact, if ϕ ∈ L ∞ (R N ), then by Proposition 2.9 in [31], we can obtain ϕ ∈ C(R N ).For any ϕ ∈ E, by Lemma 2.2, Lemma 2.4 and Lemma 2.5, we have , since 4s N −2s τ < min s, (2 * s − 2)s .This shows that A maps from E to E. On the other hand, for all ϕ 1 , ϕ 2 ∈ E, we have (2.34) If 2 * s ≤ 3, using Hölder inequality like (2.26), then we have Therefore, A is a contraction map from E to E. The case 2 * s > 3 can be discussed in a similar way. Then, we can check that for i = 4, ..., N. Similarly we can compute that and for some constants a 1 > 0, a 2 > 0 and a 3 > 0. Similarly, from (3.1), we can obtain which, together with and Proof.Here, we only prove (3.23) since the proof of (3.24) is similar.We will deal with the terms in the right-hand side of (3.22) one by one. For the first term S 1 , noting that ũk = Zr, h,ȳ ′′ ,λ + φ, we have Next, we will estimate the terms in (3.25) one by one. 
Using Lemma A.10, we obtain …, since (N − 4s − α)/(N − 2s) < N − 4s − ǫ. By Lemma A.11, there holds … Following the proofs of (3.26) and (3.27), together with the Hölder inequality, we have … Similar to (3.28), we have … Hence, we have proved … By the same argument as for (3.30), we can prove …

Next, we estimate the term S_3. By Lemma A.10, we have … Similarly, we have … and … So we obtain …

Next, we estimate the terms S_4 and S_5. Since u_k|_{∂B_ρ} = ϕ, we deduce …, where we use k/λ^{2τ} < C. Similarly, we have … < C.

Finally, we estimate the term S_6. Note that … Similarly, we have …, which, together with Proposition 2.3, yields … Combining the above estimates, we find that (3.22) is equivalent to … Similar to (2.9), by direct computations we have … Next, we estimate H_2. Note that … If 2*_s ≤ 3, then we have … Similarly, if 2*_s > 3, we have … As a consequence, we have …, which, together with Lemma B.1 and Lemma A.5, yields (3.32). The proof is complete.

Proof of Theorem 1.4

In this part, we give a brief proof of Theorem 1.4. We assume that k > 0 is a large integer, λ lies in the corresponding interval determined by some constants L′_1 > L′_0 > 0, and (r, h, ȳ′′) satisfies (1.10). We define τ = (N − 4s)/(2(N − 2s)). Actually, the proof of Theorem 1.4 has the same reduction structure as that of Theorem 1.1 in Section 2. The main difference between the two proofs is how to deal with some problems arising from the distances between the points {x^+_j}_{j=1}^k and {x^−_j}_{j=1}^k, since the distance between x^+_i and x^+_j, for i ≠ j, becomes smaller in case (i), and the distance between x^+_i and x^−_i becomes smaller in case (iii). Therefore, in order to avoid some tedious steps, we just point out that the important estimates concerning the distances between the points {x^+_j}_{j=1}^k and {x^−_j}_{j=1}^k remain valid in this section under the assumption (1.10).

Proof of Theorem 1.4. We can verify that (2.13) and (2.14) remain valid provided that h satisfies (1.10) and (N − 4s)/(N − 2s) ≤ γ < 1. In fact, it follows from Lemma A.4 and (1.10) that … Similarly, (2.28) and (2.29) are valid for h satisfying (1.10) and γ > 1, that is, … and … If h satisfies (1.10), then together with (A.6) we have …, where β′ = (N − 4s)/(N − 2s). By steps similar to those in the proof of Theorem 1.1, we can check that (3.4), (3.5) and … hold.

Next, we discuss the main terms in (4.8). Case I: If λ ≫ 1 and h → a ∈ (0, 1) as λ → +∞, then we get … for some positive constant C_1 and large λ; then h = o(1) as λ → +∞. In fact, in this case, B_4 defined in (4.8) may not be a constant, but …

Proof. From (A.1), we have …, where … and ǫ_0 > 0 is a small constant.

Proof. The proof is similar to that of Lemma A.2 in [18].

Proof. The proof is similar to that of Lemma A.6 in [22].

Data Availability. There are no data in this paper.
2023-07-06T06:43:20.439Z
2023-07-05T00:00:00.000
{ "year": 2023, "sha1": "4133d5ae1f212bbec291969d43e2c8e1c1654888", "oa_license": null, "oa_url": "https://www.aimsciences.org/data/article/export-pdf?id=661ce55a150bdb3c4a1180dc", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "dfc2d46c039764159db8b99439f62cfccb8e9584", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
19147279
pes2o/s2orc
v3-fos-license
Camurati-Engelmann disease

The thick limb bones can lead to bone pain and muscle weakness in the arms and legs and cause individuals with Camurati-Engelmann disease to tire quickly. Bone pain ranges from mild to severe and can increase with stress, activity, or cold weather. Leg weakness can make it difficult to stand up from a seated position and some affected individuals develop a waddling or unsteady walk. Additional limb abnormalities include joint deformities (contractures), knock knees, and flat feet (pes planus). Swelling and redness (erythema) of the limbs and an abnormal curvature of the spine can also occur.

Individuals with Camurati-Engelmann disease may have an unusually thick skull, which can lead to an abnormally large head (macrocephaly) and lower jaw (mandible), a prominent forehead (frontal bossing), and bulging eyes with shallow eye sockets (ocular proptosis). These changes to the head and face become more prominent with age and are most noticeable in affected adults. In about a quarter of individuals with Camurati-Engelmann disease, the thickened skull increases pressure on the brain or compresses the spinal cord, which can cause a variety of neurological problems, including headaches, hearing loss, vision problems, dizziness (vertigo), ringing in the ears (tinnitus), and facial paralysis. The degree of hyperostosis varies among individuals with Camurati-Engelmann disease, as does the age at which they experience their first symptoms.

Other, rare features of Camurati-Engelmann disease include abnormally long limbs in proportion to height, a decrease in muscle mass and body fat, delayed teething (dentition), frequent cavities, delayed puberty, a shortage of red blood cells (anemia), an enlarged liver and spleen (hepatosplenomegaly), thinning of the skin, and excessively sweaty (hyperhidrotic) hands and feet.

Frequency

The prevalence of Camurati-Engelmann disease is unknown. More than 300 cases have been reported worldwide.

Causes

Mutations in the TGFB1 gene cause Camurati-Engelmann disease. The TGFB1 gene provides instructions for producing a protein called transforming growth factor beta-1 (TGFβ-1). The TGFβ-1 protein triggers chemical signals that regulate various cell activities, including the growth and division (proliferation) of cells, the maturation of cells to carry out specific functions (differentiation), cell movement (motility), and controlled cell death (apoptosis). The TGFβ-1 protein is found throughout the body but is particularly abundant in tissues that make up the skeleton, where it helps regulate the formation and growth of bone and cartilage, a tough, flexible tissue that makes up much of the skeleton during early development. TGFβ-1 is involved in different processes in other tissues. The TGFB1 gene mutations that cause Camurati-Engelmann disease result in the production of an overly active TGFβ-1 protein.
This abnormal TGFβ-1 protein activity causes an increase in signaling, which leads to more bone formation. As a result, the bones in the arms, legs, and skull are thicker than normal, contributing to the movement and neurological problems often experienced by individuals with Camurati-Engelmann disease. Some individuals with Camurati-Engelmann disease do not have an identified mutation in the TGFB1 gene. In these cases, the cause of the condition is unknown. Inheritance Pattern This condition is inherited in an autosomal dominant pattern, which means one copy of the altered gene in each cell is sufficient to cause the disorder. In some cases, an affected person inherits the mutation from one affected parent. Other cases result from new mutations in the gene and occur in people with no history of the disorder in their family. Some people who have the altered gene never develop the condition, a situation known as reduced penetrance.
2019-08-20T06:03:52.675Z
2010-09-27T00:00:00.000
{ "year": 2010, "sha1": "02d6905770f36b52916ec9f7b31f522c1525a4a6", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.53347/rid-10851", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f57fdc9fc4dd13ef71702106e5a63b25f1ab74aa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
54819929
pes2o/s2orc
v3-fos-license
Revenue Sharing in Mining: Insights from the Philippine Case *

Most mining operations in developing countries are de facto public-private partnerships, as the state typically owns the resources and partners with a company or consortium in extraction. Revenue sharing is a critically important element of such partnerships, and it is the starting point for any meaningful analysis of over-all costs and benefits from mining. As a contribution to the policy discussions on this topic, this paper tries to clarify issues in properly evaluating public sector revenues from mining, using data on the Philippines as a case. The main objective here is to illustrate the differences between macro-level and micro- (firm-) level data, and explain why such differences exist. We find evidence that macro-level revenue sharing indicators in the Philippines fail to capture a high degree of heterogeneity in micro- (firm-) level revenue sharing outcomes. For instance, using a sample of large-scale metallic mines, we find that this group's payment to the government (as a share of revenue) is much higher than the industry average and is roughly comparable to some foreign comparator firms. Clarifying and explaining these discrepancies could help determine broader net benefits from extractive industries, and thus establish whether and to what extent mining operations provide enough net gains to the country. Our analysis suggests that industry-level analysis of mining revenue sharing is inadequate in determining fairness and comparability to international standards. More complete simulation of tax revenues is necessary in accurately analyzing revenue sharing and in designing revenue-sharing policies.

Introduction

In most cases, mining operations are de facto public-private partnerships because the state usually owns the minerals while the company does the extraction. Revenue sharing is thus a critical component of this partnership and is the starting point of cost-benefit analysis for mining. Like other mining countries, the Philippines levies taxes, royalties and other fees on mining firms operating within its borders. Using macro-level and firm-level data, we did a first-pass analysis of the mining revenue sharing regime in the Philippines.

Government data show that, on average over the last four years, roughly 10% of revenues from the entire mining industry were paid to the government. This figure, however, should be interpreted with caution, as there is much heterogeneity in the cost and revenue structure across different types of mines. For instance, using a sample of two large-scale metallic mines with publicly available financial statements, we found that this group's payment to the government (as a share of revenue) is much higher than the industry average and is roughly comparable to some foreign comparator firms. This implies that tax payments (as a share of revenue) are widely different across firms. Although there are miners that pay taxes (as a share of revenue) that are similar to international comparators, some firms must be pulling down the figures to the current industry average.
There are several reasons for wide discrepancies in tax payment across firms. These firms operate in different contexts and on different minerals. Mines are also at varying stages in their life cycle. Mining companies also differ in their economic scope (i.e. large scale versus small scale), with possible implications on their technology that affect costs of operations. In addition, governance issues, and possible tax evasion, especially in the less regulated small-scale mining sector, could also be rampant, with direct consequences on over-all revenue figures. Financial conditions also affect tax payments: firms with negative profit pay smaller taxes.

All these suggest that industry-level analysis of mining revenue sharing is inadequate in determining fairness and comparability to international standards. More complete simulation of tax revenues across different types of mines is necessary in accurately analyzing revenue sharing and in designing revenue-sharing policies.

As a contribution to the policy discussions on this topic, this paper tries to clarify issues in properly evaluating public sector revenues from mining, with a focus on the Philippines. The main objective here is to illustrate the main differences between macro-level and micro- (firm-) level data. At the national level, data on government revenue from mining is an aggregation of all mining firms and therefore cannot take into account heterogeneity at the firm level. Hence, we turn to an analysis of firm-level data, by analyzing financial statements of selected mining companies with publicly available financial information. This offers a potential way forward to analyze from the bottom up the industry's contributions to government revenues, as this financial information is widely available as part of documents submitted annually to the Securities and Exchange Commission (SEC). Nevertheless, many financial statements are not disaggregated enough for this kind of analysis, therefore limiting our sample of mining firms. In addition, there are limitations in using financial statements as our main data source, as these documents do not indicate all the details needed. Hence, the analysis here should be considered an initial comparison, using data sources with fairly similar information and applying the same methodology to calculate the revenue share of government. Finally, we consider that the financial statements are truthful and do not reflect issues such as under-reporting of output value and over-reporting of expenses, which are a possible practice in areas with weaker corporate governance and regulatory oversight.

In the next sections, we analyze the components of public sector revenues from mining, followed by an analysis of the actual data on public sector revenues, turning to macro-level indicators as well as firm-level data. A final section outlines some of the main findings as well as directions for future research.
Components of Government Share on Mining Revenue

Government Receipts from Mining

The Philippine public sector obtains its share of mining revenues through taxes, fees and royalties both at the local and at the national levels. Table 1 summarizes the various payments mining firms have to remit to the government, as well as the specific government agency receiving each. Each of these items is briefly described here.

• Local Government Taxes and Fees. These are taxes and fees paid to the local governments with jurisdiction over the mine. These include business tax, real property tax, registration fee and occupation fee. The occupation fee on extraction is PhP50.00 per hectare or fraction thereof per year and is shared by the province (30%) and city/municipality (70%).

• Corporate Income Tax. Mining firms are subject to Corporate Income Tax (CIT) at the regular rate of 30% of total taxable income. The CIT is collected by the BIR.

• Other Taxes and Fees to the National Government. These taxes include Value Added Tax (VAT) and Customs duties paid on imported inputs, withholding taxes (WHT), the waste and tailings fee, and other fees charged by the MGB. VAT and WHT are collected by the BIR and Customs duties by the Bureau of Customs (BOC).

• Additional Government Share. This is applicable only to mines under the Financial or Technical Assistance Agreement (FTAA) scheme 1. After the mine's recovery period, the firm is required to pay an Additional Government Share (AGS). The AGS is computed as follows: first, the Basic Government Share (BGS), the sum of all taxes, fees and royalties paid by the firm to the national and local governments, is calculated. Then, the Net Mining Revenue (NMR) is computed. NMR is gross revenue from mining less operating expenses, interest expenses, mine development expenses, and royalty to land owners. If BGS is less than 50% of the NMR, the difference is paid to the government as the AGS. Therefore, for mining firms under FTAA, the total receipts of the government are 50% of the NMR.

• Mining Funds. Aside from the taxes, royalties and fees discussed above, mining firms are also required by law to maintain a Contingent Liability and Rehabilitation Fund (CLRF) in a government depository bank. Although the CLRF does not accrue directly to the government, the public stands to benefit from these funds, as they will be used in case of damages brought by the mines and to rehabilitate the site after minerals have been fully extracted. The CLRF has three components: the Mine Rehabilitation Fund (MRF), the Mine Waste and Tailings Reserve Fund (MWTRF) and the Final Mine Rehabilitation and Decommissioning Fund (FMRDF). The MRF is used for rehabilitation of areas affected by mining operations. The MWTRF is the fund generated by the accumulation of the mine wastes and tailings fee, and the FMRDF is used to rehabilitate the mine areas after the mine has been decommissioned [4].

• Incentives. Some of the various taxes, fees and royalties due to the government are offset by the incentives offered to mining firms. The incentives for mining firms are outlined in the Mining Act of 1995. These include Income Tax Carry Forward of net operating loss, Income Tax Accelerated Depreciation, and incentives for pollution control devices.
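To make the FTAA arithmetic concrete, the following is a minimal sketch of the AGS rule as described above. The function name and the illustrative peso amounts are hypothetical; only the formula itself (the AGS tops up the basic share to 50% of NMR) comes from the text.

```python
def additional_government_share(gross_revenue, operating_expenses,
                                interest_expenses, development_expenses,
                                royalty_to_landowners, taxes_fees_royalties_paid):
    """Illustrative AGS computation for an FTAA mine (all figures in PhP)."""
    # Net Mining Revenue: gross revenue less the deductible items.
    nmr = (gross_revenue - operating_expenses - interest_expenses
           - development_expenses - royalty_to_landowners)
    # Basic Government Share: sum of all taxes, fees and royalties
    # already paid to the national and local governments.
    bgs = sum(taxes_fees_royalties_paid)
    # AGS is the shortfall, if any, between BGS and 50% of NMR.
    ags = max(0.0, 0.5 * nmr - bgs)
    return nmr, bgs, ags

# Hypothetical example: PhP1.0bn gross revenue, PhP0.5bn total deductions,
# PhP0.2bn already paid in taxes, fees and royalties.
nmr, bgs, ags = additional_government_share(
    1_000_000_000, 350_000_000, 50_000_000, 80_000_000, 20_000_000,
    [150_000_000, 30_000_000, 20_000_000])
print(f"NMR = {nmr:,.0f}; BGS = {bgs:,.0f}; AGS = {ags:,.0f}")
# Whenever BGS < 0.5 * NMR, total government take is BGS + AGS = 50% of NMR.
```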
Mining Revenue Allocation Scheme

The different taxes and fees (as well as exemptions) are channeled through various agencies in government, with different implications on the amount of resources under the remit of each agency or level of government. Figure 1 gives a graphic illustration of how mining revenues are shared across the public sector. Gross mining revenue refers to the gross value of sales generated through mining activities. From this base amount, royalties, excise tax and local government business tax are computed. Corporate Income Tax is computed using total taxable income as base, which is computed by deducting expenses and other deductible items from revenues. For mines under the Financial or Technical Assistance Agreement (FTAA) scheme, the government also gets an Additional Government Share (AGS), which is the difference between 50% of the NMR and the basic government share. It has been noted by some analysts that the government would like to pursue more FTAA arrangements, instead of MPSA arrangements, which are claimed to yield less government revenue. The supposedly higher government share of mining revenues under FTAA is due to the AGS, which is absent in MPSA. The AGS is essentially used to tax resource rent, but it is not progressive, unlike the instruments used by other governments to tax excess profit [1]. However, pursuing more FTAAs is easier said than done, and this is highlighted by the disproportionately large number of MPSAs compared to FTAAs. There are currently 339 existing MPSAs as opposed to only six FTAAs. Analysts cite various reasons why mining firms choose MPSA over FTAA. One is the amount of capitalization: firms that want to apply for an FTAA are required USD4 million capitalization compared to PhP2.5 million for MPSA. Another is the longer application process for FTAA. FTAA requires the approval of the President of the Philippines, while MPSA is approved only by the DENR Secretary. FTAA is generally intended for foreign firms, as it allows up to 100% foreign ownership of the investing company.

Aside from royalty, income tax, excise tax and business tax, the government also receives other fees and taxes not based on income or revenue. These are VAT and duties on imported inputs, withholding taxes, fees imposed by the MGB, and local government fees and taxes. Strictly speaking, therefore, these items cannot be considered as government share in mining revenues. Nevertheless, these are still payments made by mining firms to the government, and these parts of the over-all payments to government are not unique to mining activities.
An Analysis of Data on Government Mining Revenues

In order to provide a clearer picture of the government share from mining, this section contains an analysis using both macro- (industry-level) and micro- (firm-level) data. One important caveat in the analysis of industry-level data is that it fails to take into account the heterogeneity among individual firms. Hence, we also turn to firm-level data, by analyzing financial statements of selected domestic mining companies with publicly available financial information. This offers a potential way forward to analyze from the bottom up the industry's contributions to government revenues, as this financial information is widely available as part of documents submitted annually to the Securities and Exchange Commission (SEC). Most large companies also post their financial statements on their websites. However, some financial statements are not disaggregated enough for this kind of analysis. Analyzing financial statements in order to calculate tax payments as a share of the firm's total mining revenue also faces some limitations, as these documents do not indicate all the details needed. Nevertheless, the analysis here presents a first-pass estimate of the revenue share of government. We implement this standard approach using financial statements of Philippine mining companies and selected foreign comparators in order to arrive at some initial comparison.

Macro Level Data

Table 2 shows the amount of government revenues derived from mining against the sum of all government revenues. The share of mining revenue in total government receipts averaged 0.87% from 2007 to 2010, although figures for the latter two years are much higher than for the previous two. The share of mining in total government revenue is significantly less than the industry's share in total Philippine GDP, as highlighted in Figure 2. Reference [8] pointed out that this is an indication of low revenue contribution from mining, and attributed it to the large share of small-scale mines (which pay small amounts of tax) in total production, old mines nearing the end of operations, and new mines that are still enjoying tax perks. Further, Table 3 shows the amounts disbursed by mining firms to the government, both at the national and local levels, disaggregated into the main tax instrument (or fee) categories. Table 4 presents the share of each category in the total.
Data show that Taxes Collected by National Government Agencies, mostly composed of income taxes, account for the largest share of disbursements made by mining firms to the government. A far second in 2010 was Excise Taxes Collected by BIR, with a 9.72% share, followed closely by Taxes and Fees Collected by LGUs with 8.01%. Fees, Charges and Royalties Collected by DENR-MGB come in last at 5.88%, although the latter three items' rankings frequently interchange over the last four years. A rather direct way of looking at the actual share of the government in mining revenues is to directly compare the total revenues earned by all mining firms with the total amount of taxes, royalties and fees they paid. As shown in Table 5, an average share of roughly 10% of all mining revenues goes to the government. A casual comparison might indicate that this is lower than the 15.3% calculated by the professional services firm PricewaterhouseCoopers (PWC) in a study of 22 mining firms from 20 countries in 2008 [9]. It must be noted, though, that the number of firms surveyed relative to the number of countries covered is small. The study therefore is not meant to be representative of each country included. Figure 3 shows a comparative illustration of the share of governments in mining revenue across country groups included in the PWC survey.

Firm Level Data

The aggregate tax indicators only paint a partial picture of the government share. These macro-level indicators do not capture the heterogeneity in tax and fee payments across mining companies, which are at varying points of the mining lifecycle at any one time. For instance, a newer mine may be paying less in the beginning due to tax incentives in the early stages of mining operations. An older mine could already be paying the peak of its tax payments, due to its extraction schedule.

A detailed financial statement with fully disaggregated data on revenues, taxes, fees and royalties is necessary in order to complete the firm-level snapshot. Corporations registered with the SEC are required to submit financial statements annually, and they often post these on their websites if the company has one. However, one important caveat is that there is no required disaggregation of data on revenues and expenses. Consequently, there are firms whose financial statements are not detailed enough to allow us to distinguish between different types of taxes and fees, as well as where revenues were derived from 3. Nevertheless, the financial statements of two 4 Philippine mining firms were sufficiently detailed for our analysis. The financial statements of these corporations have enough disaggregation to reasonably isolate taxes, fees and royalties from other payments and expenses. Their source of revenue is also limited mainly to mining activities, i.e. any other sources account for a minor share of revenues.

The mining companies analyzed were Nickel Asia Corporation and Philex Mining Corporation. These are large-scale mining firms with asset sizes of PhP26.4 billion and PhP32.5 billion, respectively, in 2011. Nickel Asia is the largest miner of nickel in the country today. The corporation was formally registered with the Securities and Exchange Commission in 2008, but its subsidiaries have been operating mines for several decades now. Its subsidiaries with current mining operations include (area and start year of operation in parentheses) Hinatuan Mining Corporation (Surigao del Norte, 1980), Cagdianao Mining Corporation (Dinagat Island, 1999), Taganito Mining Corporation (Surigao del Norte, 1987) and Rio Tuba Nickel Mining Corporation (Palawan, 1975). Three of Nickel Asia's four mines are nearing the end of their expected lives. Rio Tuba, Cagdianao and Hinatuan have expected mine lives of 28, 6 and 9 years, respectively 5. These sites are therefore operating well beyond their expected lives. Taganito is the only one operating within its expected life of 29 years. All four of these mines are under MPSA. Philex Mining Corporation was incorporated in 1955 and has since operated the Padcal Mine in Benguet. It produces copper, gold and silver. It also extracts petroleum and coal, although these account for only a small portion of sales. Padcal Mine is under MPSA and is expected to operate until 2020. From the start of operations until 2011, the mine produced 359.3 million tons of ore containing 2.1 billion pounds of copper, 5.6 million ounces of gold and 6.1 million ounces of silver 6.

To begin the firm-level revenue sharing analysis, the amounts of the different types of disbursements (i.e. taxes, royalties and fees) made by the two mining firms to the government are presented. These are then compared to the firms' revenues, and the share of each type of disbursement in the total is calculated.
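As a sketch of that calculation, the snippet below sums itemized payments to government and expresses them as a share of firm revenue, mirroring the structure of the tables and figure that follow. All line items and amounts here are hypothetical placeholders, not figures from either firm.

```python
# Hypothetical itemized payments to government, in PhP thousands.
payments = {
    "income tax": 900_000,
    "excise tax and royalties": 390_000,
    "other taxes and licenses": 120_000,
    "local taxes and fees": 90_000,
}
revenue = 7_700_000  # hypothetical firm revenue, PhP thousands

total_paid = sum(payments.values())
print(f"Government share of revenue: {total_paid / revenue:.1%}")
# Share of each payment type in total disbursements (cf. Figure 4).
for item, amount in payments.items():
    print(f"  {item}: {amount / total_paid:.1%} of total payments")
```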
The summary of payments made by the mining firms to the government is presented in Table 6, and the percentage shares for each type of payment are shown in Figure 4. It can be seen from the bar graph that income tax is the most dominant form of payment to the government for Philex and Nickel Asia. Royalties and excise tax account for the second largest share of the pie, followed by other taxes and licenses.

Next, Table 7 shows the actual amounts paid by the two sample firms to the government in comparison to their revenues. It also gives the amount of disbursements to the government expressed as a percent of total firm revenues. The firm-level revenue sharing (the percent share of government in total revenues) is close between Nickel Asia (18.8%) and Philex (20.0%). Recall that the taxes and fees expressed as a share of total industry revenue indicated earlier in Table 5 point to an industry-wide average figure of about 10%. These firm-specific figures drive home the point that macro-level indicators fail to reflect a considerable amount of variation across firms. The PWC survey found a 15.3% average government share in mining revenues in its sample of 22 large-scale mining companies in 20 countries. For our sample of two Philippine firms, the average share of government in mining revenues is 19.4% and thus is somewhat comparable to, and even higher than, those in the PWC survey.

Notes to Table 7. Source: Authors' computations based on firms' financial statements. a) Based on information shared by Nickel Asia on its 2010 taxes, the royalties it paid to the government were only PhP233,522,000 out of the PhP361,722,000 indicated in the income statement. The rest were paid to claim holders and indigenous people. Also, for taxes and licenses, the amount was PhP65,351,000 (instead of the PhP21,125,000 indicated in the income statement). The difference was due to the wharfage fees collected by the Philippine Ports Authority. These items cannot be extracted from the financial statements. If these are incorporated in the computations, total payments of the company to the government in 2010 would amount to PhP1,408,087,000 (instead of PhP1,492,061,000 if this information is not taken into account). This would not significantly change our calculations, although we note it down here to recognize the caveats of our analysis. b) Includes royalties paid to private enterprises.

A variety of factors could help explain the heterogeneity in tax payments across firms. These firms operate in different contexts, on different minerals, which suggests that the price dynamics for these different minerals, and thus tax payments, may also differ widely. Extracting different types of minerals and operating different types of mines entail different cost structures. Mining firms in the Philippines are also engaged at different stages of the mining lifecycle 7. For instance, the exploration stage typically does not yield any profit, and governments usually allow loss carry forward at this stage. The development phase also entails high costs for the firm, as it requires construction of the necessary infrastructure and purchase of equipment. It is in the utilization phase that mining firms are most profitable [15]. Mining firms that are still in the exploration and development phases may therefore have dragged down the average government share in mining revenues in the macro-level data.

Mining companies or operators could also differ widely in their economic scope (i.e. small-scale vs.
large-scale mining operations), with possible implications on their technology use and other factor inputs, which also affect costs of operations and net revenue calculations. Large-scale mines are more efficient than small-scale ones due to economies of scale and more modern equipment. Thus, large-scale mines are able to produce more at similar costs. Inadequate technical knowledge in mining operations, as well as inadequate access to financial and consultancy services, leads small-scale miners to inefficiency [16]. Inefficiencies lead to lower revenue and profit, which in turn lead to lower tax payments.

In addition, governance issues, and possible tax evasion, especially in the less regulated small-scale mining sector, could also be rampant, with direct consequences on over-all revenue figures. The Chamber of Mines of the Philippines has recently urged the government to regulate and collect taxes from small-scale miners. The organization asserts that there are many loopholes in the regulation of small-scale miners and that many of them do not pay taxes 8. Another source of heterogeneity in revenue sharing across firms is the financial condition of companies. Companies experiencing negative income do not pay as much tax as those that are profitable. Because income tax is the biggest component of payments to the government, a negative income will significantly drive down the government share. This is best exemplified by the case of Apex Mining. As we noted in a footnote earlier, Apex Mining's financial statements are viable for a reasonable revenue-sharing analysis. However, it was dropped from our analysis due to its outlying low tax figures (as a share of revenue) compared to Nickel Asia and Philex 9. Inspecting this firm's financial statement reveals that, in contrast to the two other firms, it posted losses 10 in the subject years. This sharply drove down its income tax. And since income tax is the largest source of government share in mining revenue (at least for firms with positive profit), this pulled down the amount of disbursements to the government as a share of mining revenues.

A casual scrutiny of the macro and firm-level data shows some similarities and differences between revenue sharing trends at the national and at the firm levels. The most glaring similarity is the large share of income tax in the total disbursements of Philex and Nickel Asia, and the large share of income tax in the total amount received by the government from the mining industry as a whole. The main difference lies in the share of government in total mining revenues. The average of the two firms is 19.4%, which is higher than the overall average for the entire mining industry of about 10.0% from 2007 to 2010.

7 The life cycle of a mine is composed of four stages: exploration, development, utilization/commercial operation, and decommissioning and rehabilitation. Exploration involves the search for mineral deposits. Development is the construction of the mine and other necessary infrastructure for mining operations. Utilization/commercial operation refers to the actual extraction of minerals. Decommissioning is the closure of the mine after the site's mineral supplies have been fully extracted, while rehabilitation is the restoration of the site and cleanup of mine wastes.

8 The Chamber of Mines was quoted in a newspaper article [17].

9 Apex Mining's taxes as a share of revenue are 7.4%. If this firm were included among the sample firms for the micro-level analysis, the average taxes as a share of revenue would drop from 19.4% to 15.4%, still higher than the industry-level figure.

10 Apex reported a PhP50 million profit for the first quarter of 2012, a reversal of the PhP50 million loss for the same period the previous year and of its losses for 2010. The company attributed this to higher gold prices and the "streamlining of company operations". Further exploration and development of the Maco mine in recent years also increased its gold potential by 90%, from 588,000 troy ounces in 2009 to 1.118 million troy ounces [18,19].

Firm Level Analysis of Foreign Mining Firms

To complement the firm-level analysis of mining benefit
sharing among local firms, we undertake a similar analysis of foreign mining firms to compare their revenue sharing behavior with that of Philippine mining companies. Five firms with headquarters in established mining countries and operating on various continents are included to serve as comparators. These are Barrick Gold Corporation, the Rio Tinto Group, Eurasian Natural Resources Corporation (ENRC), Norilsk Nickel and PT Vale Indonesia Tbk (formerly PT International Nickel Indonesia Tbk). All in all, these comparator firms have operations in at least 25 countries and produce at least 20 mine products. Similar to the analysis of local mining companies, we had to rely on publicly available financial statements of the foreign firms, which are available on company websites. A similar caveat holds in that financial statements should be disaggregated enough to be used for a reasonable analysis. Analyzing foreign financial statements can also be more difficult than analyzing local ones because the former follow the generally accepted accounting principles (GAAP) of their home countries. Reporting of expense and revenue items is thus different 11.

Barrick is the world's largest gold producer in terms of production, reserves and market capitalization. The company's headquarters is in Canada, but it operates 26 mines in Canada, the United States, Australia, Peru, Argentina, Chile, Zambia, Saudi Arabia, the Dominican Republic, Papua New Guinea, Pakistan and Tanzania. Although gold is its primary extracted mineral, it also produces copper. The company was founded in 1983 and has an asset size of USD48.9 billion as of 2011. The company's gold production for the same year was 7.7 million ounces, of which 44% was from North America, 25% from Australia and the Pacific, 24% from South America and 7% from Africa. As of 2011, it has proven and probable gold reserves of 139.9 million ounces. Barrick is also in the exploration phase of several potential mine sites across the globe 12.
Rio Tinto is another large mining firm with operations all over the world. Although its headquarters is located in the United Kingdom, the bulk of its operations is located abroad. It operates mines in Australia, Brazil, Guinea, Chile, Indonesia, the United States, South Africa, Canada, Zimbabwe and Namibia. It mines five major product groups: aluminum, copper and gold, diamonds, iron ore, and coal and uranium. Iron ore contributes the largest revenue among these product groups with a 49.6% share, followed by aluminum with 20.2%, copper and gold with 12.7%, coal and uranium with 12.2% and diamonds with 5.3%. Rio Tinto was founded in 1873 and has an asset size of USD119.5 billion. Rio Tinto is also exploring and developing several other mine sites in its countries of operation 13.

ENRC has its head office in London, but the corporation traces its roots to Kazakhstan, where the first investors bought mining assets from the Kazakh government during its privatization program in the 1990s. Since then, the company has expanded its operations to several countries, including Russia, China, Brazil, Mali, the Democratic Republic of Congo, Zambia, Zimbabwe, Mozambique and South Africa. Its mine products are iron ore, chromium, manganese, silicon and aluminum. As of 2011, it has an asset size of USD15.5 billion and employs 70,000 people 14.

On the other hand, Norilsk Nickel is the world's largest producer of its two major products, nickel and palladium. Its secondary products are platinum and copper, and it also produces cobalt, rhodium, silver, gold, iridium, ruthenium, selenium, tellurium and sulfur. The company's headquarters are located in Moscow, Russia, but operations are also located in Australia, Botswana, Finland and South Africa. The company started operating in 1939 and has grown to an asset size of USD18.9 billion in 2011. In the same year, its production of nickel stood at 295,000 tons, an 18% share of the world total. Palladium production was 2.8 million ounces, or 41% of the world total 15.

Established in 1968 and a 58%-owned subsidiary of Vale Canada, Vale Indonesia operates a 190,510-hectare nickel mine on the island of Sulawesi. In 2011, it produced 66,900 metric tons of nickel in matte and had 72.1 million metric tons of proven reserves and 37.3 million metric tons of probable reserves of nickel ore. As of 2011, it has an asset size of USD2.2 billion. Mining operations are projected to cease in 2035 16.

Unlike the four other comparator companies, Vale Indonesia operates solely in one country 17. It is also the most similar to Philippine mining companies in terms of its operational scheme. The company is limited to extracting nickel ore and processing it into nickel matte. This product is then exported abroad for refining and further processing. This is unlike most large multinational mining firms, which sometimes do refining and smelting of some of their ore extracts.

Aside from having publicly available financial statements that are disaggregated enough for a reasonable revenue sharing analysis, these five firms have particular attributes that make them good comparators.

13 Based on information from the Rio Tinto Website and 2011 Annual Report [21].
14 Based on information from the ENRC Website and 2011 Annual Report [22].
15 Based on information from Norilsk Nickel's Website and 2011 Annual Report [23].
16 Based on information from the Vale Indonesia 2011 Annual Report [24].
17 Although its parent company, Vale, operates all over the world.
Barrick Gold, Rio Tinto, ENRC and Norilsk Nickel are multinational corporations that operate mines in different countries at different stages of the mining life cycle. They also extract different types of minerals. Their tax figures thus level out differences in revenue sharing arising from differences in minerals extracted, stages in the mine life cycle, and revenue sharing policies in the host countries. On the other hand, Vale Indonesia is a good comparator because it operates in a country with economic and socio-political conditions similar to those of the Philippines. Its structure is also similar to many mines in the Philippines: partly or majority-owned by foreigners, with a production process limited to extraction and initial processing of ores before these are exported for refining, smelting and further processing.

Taxes paid by Rio Tinto are 128 times higher than the taxes paid by Philex Mining, and about 288 times those of Nickel Asia. Taxes paid by Barrick Gold, ENRC and Norilsk Nickel are also much larger than those of the two Philippine firms being studied. Indeed, Barrick Gold, Rio Tinto, ENRC and Norilsk Nickel are all included in the world's 100 largest mining firms based on market value [27]. The taxes paid by Vale Indonesia, the only Southeast Asian firm in the comparator group, are also larger than the taxes paid by the Philippine firms being studied, but by a relatively smaller degree.

Scaling tax payments by company revenues provides a more meaningful comparison. Table 8 presents the amount of taxes paid by the foreign firms scaled by their revenues. Taxes as a share of company revenue for Barrick Gold, Rio Tinto, ENRC, Norilsk Nickel and Vale Indonesia, at 17.9%, 13.6%, 16.8%, 17.1% and 13.3%, respectively, for an average of 15.7%, do not seem far from the 19.4% average of Nickel Asia and Philex Mining. Moreover, these figures are also very close to the 15.3% average government share found by the PWC survey mentioned above.

The next point of comparison is the share of each payment type in total disbursements to the government. The summary of payments made by the firms to the governments where they operate is shown in Table 9, and the percent share of each payment type for each firm is shown in Figure 5. Similar to the local firms, income tax is the largest component of payments to government for the five foreign companies. Income tax accounts for an average of 68.7% of all disbursements to governments. This is comparable to the share of income tax in total tax payments of the two Philippine firms in the sample, at 67.8%. The share of revenue-based taxes (royalties and excise tax) is, however, larger for the Philippine firms at 29.2%, against 16.4% for the foreign comparators.
Two things may be observed from the revenue sharing analysis of the domestic mining firms and the comparator foreign companies. First, the share of taxes in total revenue is comparable between the foreign and the Philippine firms analyzed in this study. As shown in Figure 6, the share for the Philippine firms in the sample is even higher by 3.7 percentage points. However, this must be interpreted with caution. As discussed earlier, the industry-wide average in the Philippines is lower than this, and the share of the mining industry in total government revenues is less than its share in total GDP. From 2007 to 2010, the average annual share of mining in total government revenue was less than half its share in total GDP (0.87% against 1.93%). This is a possible sign that the government is not getting enough from the mining industry as a whole [8]. Second, taxes indexed to income make up the bulk of payments to the government for both Philippine and foreign mining firms, and the share of income tax in total tax payments is comparable between the two groups. The difference lies in the share of revenue-based taxes (royalties and sales tax). The share of this tax component for the sample Philippine firms is 29%, and 16% for the foreign comparators. This is illustrated in Figure 7.

The data on income-based and revenue-based taxes are emphasized here because of the differing implications of charging income-based and revenue-based taxes. Presumably, a tax arrangement that is tied to company income also ensures that the government gains during natural resource booms 18. One question is whether the Philippines would like to explore slightly higher taxes on mining that would be indexed to income, yet applied over and above the corporate income tax, when there are supernormal profits. The present corporate income tax rate in the country is 30%, near the levels of other Asian economies such as Thailand (30%) 19, Malaysia (25%), Indonesia (25%), Viet Nam (25%), China (25%) and India (30%) 20.

The literature suggests that there are several advantages of using taxes tied to income over taxes tied to revenue. Royalties imposed on revenue introduce inefficiencies and affect the firm's production decision because they increase the marginal cost of production. In contrast, a tax on profit is more efficient because it does not affect the optimal level of output. The indexing of taxes also affects the sharing of risk between firm and government. A tax on income tends to distribute risk between the mining firms and the government, while a tax on revenue shifts risk to the former [15,28].

A tax on profit also better captures mining rent compared to royalties, notably when there are price booms. And while many countries use royalties to get hold of early revenue flows, these are often offset by lower income tax rates. Some countries also use variable income tax rates on mining firms. Tax rates could be higher in years when profitability is high and lower in years when profitability is low [1]. This is, however, an administrative challenge. Another disadvantage of a revenue-based tax is its regressive effect on the tax regime. With high royalties, the average effective tax rate is higher for less profitable firms and lower for more profitable mines [8].
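A small worked example may help illustrate the regressivity point made in [8]: with a royalty levied on revenue, the average effective tax rate rises as the profit margin falls. The 5% royalty rate below is hypothetical; the 30% rate is the corporate income tax rate cited above, and the royalty is assumed deductible from taxable income for simplicity.

```python
# Effective tax rate (total tax / pre-royalty profit) for mines with
# different profit margins, under a hypothetical 5% revenue royalty
# and the 30% corporate income tax cited in the text.
def effective_tax_rate(revenue, costs, royalty_rate=0.05, cit_rate=0.30):
    royalty = royalty_rate * revenue
    profit_before_tax = revenue - costs - royalty  # royalty assumed deductible
    income_tax = cit_rate * max(0.0, profit_before_tax)
    return (royalty + income_tax) / (revenue - costs)

for margin in (0.40, 0.20, 0.10):          # high-, mid-, low-margin mines
    revenue, costs = 100.0, 100.0 * (1 - margin)
    print(f"margin {margin:.0%}: effective tax rate "
          f"{effective_tax_rate(revenue, costs):.1%}")
# Output rises from roughly 39% at a 40% margin to 65% at a 10% margin,
# showing why a fixed revenue royalty weighs more heavily on less
# profitable mines.
```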
18 If tax is tied to revenue, collections will also increase during natural resource price booms, but only if the miner's selling price follows the world price. Some mining firms and the buyers of their mineral products engage in hedging: the price of future transactions is already specified in the contract. Thus, even if the market price increases by a large amount, revenue, and therefore taxes, do not.

19 Temporarily reduced to 23% for 2012 and 20% for 2013 and 2014.

20 Based on data from the PricewaterhouseCoopers Worldwide Tax Summaries database [29].

On the other hand, the advantage of a revenue-based tax is that it assures the government of some share in mining revenue even during years when mines post losses, aside from the guaranteed government share in early revenue flows mentioned earlier.

From the preceding analysis of available data, it is clear that macro-level revenue indicators should be interpreted with care. Much heterogeneity in firm-level information is averaged away by merely looking at industry-level indicators. Indeed, our preliminary calculations suggest that some firms' tax payments are much higher than these industry averages indicate. It must be emphasized that the data in Figure 2 (share of mining in government revenue and GDP) and Table 5 (government share in mining revenue) are for the entire mining industry, while the two Philippine firms and five foreign firms in the sample are large-scale metallic mines. Presumably, some types of mining, small-scale and/or non-metallic, are pulling the figures down 21.

An Analysis of Net Revenue Sharing

Another way of analyzing mining benefit sharing is by looking at net revenue rather than gross revenue. Using this method controls for differences in cost structures arising from differences in type of mine, age of mine and type of mineral extracted, among other factors. Net revenues, that is, gross revenues less costs, measure the actual returns that the firm and the economy receive from mining. For the purpose of this study, the terms net revenue and net benefit will be used interchangeably and will refer to the mining firms' profit before income tax.

Figure 8 shows taxes and other payments to the government as a share of both gross and net benefits for the two Philippine mining firms and the five foreign mining firms in the sample. The Philippine mining firms' average taxes as a share of net benefits stood at 40.22%, almost equal to the foreign firms' average of 40.37%. Hence, expressing the indicator in terms of net revenues does not really change the gist of our earlier analysis.
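The two denominators used in this section can be summarized in a few lines; the figures below are hypothetical and chosen only to show how the gross-revenue and net-benefit shares diverge.

```python
# Payments to government as a share of gross revenue versus as a share
# of net benefit (profit before income tax). All figures hypothetical,
# in PhP millions.
payments_to_government = 1_500
gross_revenue = 7_700
profit_before_income_tax = 3_700

print(f"share of gross revenue: {payments_to_government / gross_revenue:.1%}")
print(f"share of net benefit:   {payments_to_government / profit_before_income_tax:.1%}")
# The net-benefit share is larger because the denominator excludes costs,
# which is why the ~10-20% gross-revenue shares discussed earlier
# correspond to net-benefit shares of around 40%.
```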
Summary, Recommendations and Directions for Future Policy Research

Drawing on the analysis herein, there are at least three main messages for policymakers. First, we find signs that the mining industry as a whole may not be contributing enough to government revenue. Possible reasons for this include the large share of small-scale mining in total production and the presence of mines that only recently commenced operations and may still be enjoying tax perks. This highlights the need to examine whether these tax incentives are still needed and to what extent small-scale mines can contribute their fair share in tax payments. A word of caution for policymakers, though, is that increasing taxes, particularly for those who are already at par with international standards, may bring some trade-offs. The usual argument of unattractiveness to investors is one, but there can be other, less obvious consequences. If taxes are higher, mining firms may be incentivized to drive their costs down as low as possible. This might result in disincentives to invest in technologies that are cleaner but often more expensive. Policymakers need to consider that an increase in taxes collected may just be offset by additional cleanup or mitigation expenses. This does not necessarily mean that taxes should not be increased; it just implies that any planned increase in taxes should be studied carefully, with costs and benefits being weighed. Analysis should also be mineral-specific and mine-type-specific, as these groups have heterogeneous technologies and cost structures.

Second, analysts and researchers should be careful in interpreting macro-level data on revenue sharing due to the heterogeneity of firm-level data. The scale of the mine, its stage in the mining cycle, and even governance and the implementation of laws can affect the sharing of revenue between mining firms and the government. Nevertheless, based on the preliminary evidence we have here, at least two of the Philippine mines actually stack up well on tax payments when juxtaposed against the available international comparators. More disaggregated, yet still comprehensive, information is necessary to provide a fuller and fair picture of revenue sharing across the public and the private sectors. A complete simulation of tax payments for the entire mine life across different minerals and different mine types is essential in determining if we really are at par with other established mining countries in terms of taxing mining firms. Simulation will also guide policymakers in gauging the fairness of the revenue-sharing regime.

Future research on revenue sharing could be usefully expanded in at least two more directions. First, this paper has examined benefits using government revenues as a possible metric. Yet benefits derived by the public are not just reflected in tax revenues or mining royalties. These also include aspects such as job creation, community-related investments, and the corporate social responsibility (CSR) projects supported by the firms. The public sector is expected to try to represent the views of various stakeholders with potentially widely varying interests and objectives, spanning both national and local government, civil society, and other groups in society with a stake in natural resource wealth management (including present and future generations). This difficult aggregation of preferences often involves very rough and often difficult bargains across different interest groups. It would be useful to shed light on these different aspects in a more empirical way. Second, it is also clearly relevant to go beyond the concept of benefits and better reflect net benefits, or benefits net of costs related to mining, which creates a much more nuanced understanding of the net impact of this economic activity on the different stakeholders. Although this paper presented a brief overview of the government share in mining firms' profit, net benefit may be defined in ways other than profit, and this is worth studying further in future research in this area. For instance, if neither the mining company nor the government agencies (both local and national) provide resources for mine clean-up and environmental rehabilitation, the brunt of the environmental damage and its costs to human development will likely be borne by the community hosting the mine. Facing such costs, it is unlikely that they will get a net positive gain from mining. This is part of the reason why it is now considered international best practice for mining companies to contribute to a fund that would be dedicated to the future cost of clean-up and mine site rehabilitation once the mining operations cease [30]. The Philippines does not fall behind in this respect, as mining companies are required to maintain funds for the future cost of rehabilitation and clean-up. However, what the country lacks is a concrete scheme on how to use and distribute wealth derived from the mining industry. Table 10 shows a summary description of selected sovereign wealth funds derived from extractive industries in selected countries. These funds enable governments to better manage wealth derived from mining in promoting human development.

Table 10 (excerpt): The fund devoted to diversification, using money from its energy sector to invest in non-energy-related sectors; the QIA controls around $75 billion in assets. Source: [30]. Notes: a) Linaburg-Maduell Transparency Index.

Future research on the broader net gains from extractive industries should therefore involve a full accounting of all the benefits and gains, including the cost incidence for aspects like environmental clean-up and protection, in order to clarify the true net benefits of these industries for the present and future generations.

Figure 1. Philippine Mining Revenue Allocation Scheme. Notes: Illustration draws on information reported in [1,2,4]. a) Net Mining Revenue = Gross Sales − Operating Expenses − Interest Expenses − Development Expenses − Royalty to Land Owners; b) Basic Government Share = sum of all taxes, royalties and fees paid to the national and local governments; c) VAT and Customs duties on imported goods and services; d) Set by LGUs; e) PhP75 or PhP100 per hectare per annum, PhP5 per hectare per annum for exploration.

Figure 2. Share of mining in total national revenue and GDP, 2007 to 2010. Source: Data from NSCB, MGB and DOF; authors' computations.
Figure 4. Share of each payment type in total disbursements to the government. Note: 2010 and 2011 average. Source: Authors' computations based on firms' financial statements.

Figure 5. Share of each payment type in total disbursements to governments, 2010 and 2011 average. Source: Authors' computations based on firms' financial statements.

Figure 6. Revenue sharing between mining firms and government. Note: Average for 2010 and 2011. Source: Authors' computations based on firms' financial statements.

Figure 7. Share of each payment type in total disbursements to government. Note: Average for 2010 and 2011. Source: Authors' computations based on firms' financial statements.

Figure 8. Taxes as share of gross and net revenue. Note: Average for 2010 and 2011. Source: Authors' computations based on firms' financial statements.

Table 1. Taxes, royalties and fees in the Philippine mining industry.

Table 2. Government revenues, total and received from mining (in billions PhP), 2007 to 2010. Source: Data from DOF and MGB; authors' computations. Note: Figures may not add up due to rounding.

Table 3. Components of mining firms' payments to the government (in billions PhP), 2007 to 2010. Source: Data from MGB. Note: Figures may not add up due to rounding.

Table 4. Components of mining firms' payments to the government (% shares), 2007 to 2010. Source: Authors' computations based on data from MGB. Note: Figures may not add up due to rounding.

Table 5. Government share in mining revenues, 2007 to 2010, in billions PhP and percentage share.
Source: Data from MGB; authors' computations. Note: Mining Gross Production Value and Amount Paid to the Government are rounded; thus direct division may not give the Percent Share of Government shown.

Table 7. Revenue sharing between mining firms and government, average figures for 2010 and 2011. a) This will become 18.59% if the adjustments in footnote a of Table 6 are taken into account.

Table 9. Disbursements to governments by type of payment, 2010 and 2011 average, in millions USD. Source: Authors' computations based on firms' financial statements. Notes: a) Royalties may include royalties paid to private enterprises. This item may include other taxes subsumed under or reported with sales tax and royalties. b) "Tax Directly Attributable to Cost of Goods Sold" in Norilsk Nickel's financial statement.
2018-12-14T23:54:50.970Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "5073de60f4da9a6019979f7f485f3a47e419d14d", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=35675", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "5073de60f4da9a6019979f7f485f3a47e419d14d", "s2fieldsofstudy": [ "Economics", "Environmental Science" ], "extfieldsofstudy": [ "Economics" ] }
244495651
pes2o/s2orc
v3-fos-license
Modeling and Methods of Statistical Processing of a Vector Rhythmocardiosignal
Aims: We have developed a new approach to the study of human heart rate, which is based on the use of a vector rhythmocardiosignal that includes as its component the classical rhythmocardiosignal in the form of a sequence of heart cycle durations in an electrocardiogram. Background: Most modern automated heart rate analysis systems are based on a statistical analysis of the rhythmocardiogram, which is an ordered set of R-R interval durations in a recorded electrocardiogram. However, this approach is not very informative, since R-R intervals reflect only the change in the duration of cardiac cycles over time and not the entire set of time intervals between single-phase values of the electrocardiosignal for all its phases. Objective: The aim of this paper is to present a mathematical model, in the form of a vector of stationary and stationary-connected random sequences, of a rhythmocardiosignal with increased resolution for its processing problems. It shows how the vector rhythmocardiosignal is formed and processed in diagnostic systems. The structure of the probabilistic characteristics of this model is specified for statistical analysis of heart rate in modern cardiodiagnostic systems. Methods: Based on a new mathematical model of a vector rhythmocardiosignal in the form of a vector of stationary and stationary-connected random sequences, new methods for statistical estimation of spectral-correlation characteristics of heart rate with increased resolution have been developed. Results: The spectral power densities of the components of the vector rhythmocardiosignal are justified as new diagnostic features when performing rhythm analysis in modern cardiodiagnostic systems, complementing the known features and increasing the informative value of heart rate analysis. Conclusion: The structure of the probabilistic characteristics of the proposed mathematical model for heart rate analysis in modern cardiodiagnostic systems is studied. It is shown how the vector rhythmocardiosignal is formed and how its statistical processing is carried out on the basis of the proposed mathematical model and the developed methods.
INTRODUCTION
Heart rate analysis has long been an integral part not only of modern cardiology but also of many other areas of biomedicine. In addition, heart rate analysis is carried out for early diagnosis of the pathological condition of the fetus and of the state of the autonomic system in diabetic patients. Heart rate makes it possible to assess the risk of death in myocardial infarction, the degree of tension of the regulatory processes in the human body, etc. [1-12]. Special efficiency of heart rate analysis is achieved by using modern computerized diagnostic systems that make it possible to automate the assessment of diagnostic features and to make medical decisions about the human heart rate based on recorded cardiosignals, mainly electrocardiosignals. The accuracy, reliability, information content and speed of functioning of computerized cardiodiagnostic heart rate research systems significantly depend on the adequacy and constructiveness of the mathematical model of heart rate, as well as on the accuracy, reliability, information content and speed of the methods and algorithms of its analysis in these information systems.
Most methods for processing the classical rhythmocardiosignal within the stochastic approach are based on three of its probabilistic models: a random variable, a stationary random sequence, and a periodically correlated random sequence. These models rest on describing the heart rate as a sequence of R-R intervals, i.e. on the classical rhythmocardiogram, which imposes significant limitations on the informative value of heart rate analysis. The essence of this limitation is that the values of the R-R intervals, which are the corresponding values of the rhythmocardiogram, reflect only the change in the duration of cardiac cycles over time, and not the entire set of time intervals between single-phase values of the electrocardiosignal for all its phases, which does not make it possible to describe the heart rate with sufficient informativeness. That is why the approach based on the analysis of the classical rhythmocardiogram as a sequence of R-R intervals does not allow identifying more subtle and detailed features of the heart rate in modern computer systems of medical diagnostics. In the studies [13,14], a new approach to heart rate analysis based on a high-resolution rhythmocardiosignal was developed. As indicated in these works, the classical rhythmocardiogram is embedded in a rhythmocardiogram with increased resolution, which is the basis for increasing the level of the information content of heart rate analysis in modern computer systems for functional diagnostics of the human heart state. In this approach, the heart rate was represented by a high-resolution rhythmocardiosignal (other names: high-informative rhythmocardiosignal or vector rhythmocardiosignal), whose mathematical model was a vector of normally distributed random variables. This stochastic model can already take into account several phases of the cardiac cycle when analyzing the heart rate. However, it is a relatively simple mathematical model of a high-resolution rhythmocardiosignal, since it does not allow studying its temporal dynamics. To take the temporal dynamics of a rhythmocardiosignal with increased resolution into account, it is necessary to use the mathematical apparatus of the theory of random sequences, namely, to consider it a vector of discrete random sequences. In this paper, a mathematical model of a rhythmocardiosignal with increased resolution for its processing problems is described as a vector of stationary and stationary-connected random sequences. It is shown how the vector rhythmocardiosignal is formed, processed and modeled in diagnostic systems. The structure of the probabilistic characteristics of this model is specified for statistical analysis of heart rate in modern cardiodiagnostic systems.
Electrocardiosignal Mathematical Model in the Form of a Conditional Cyclic Random Process
Let us move on to constructing a mathematical model of a vector rhythmocardiosignal.
Since the rhythmocardiosignal is formed from an electrocardiosignal, the mathematical model of the vector rhythmocardiosignal is based on the corresponding model of the electrocardiosignal itself (ECS). According to [15], a mathematical model of an electrocardiosignal in the form of a conditional cyclic random process is a process ξ(ω, ω', t), which is given on the cartesian product of two stochastically independent probabilistic spaces with sets of elementary events Ω and Ω', on the set of real numbers R, and for which the following conditions are met: 1) there is a random function T(ω', t, n) such that, for each ω', the corresponding ω'-realisation T_ω'(t, n) of this function satisfies the conditions of the rhythm function, namely: T_ω'(t, n) > 0 if n > 0; T_ω'(t, n) = 0 if n = 0; T_ω'(t, n) < 0 if n < 0, n ∈ Z; and for any t_1 < t_2 and any n ∈ Z, T_ω'(t_1, n) + t_1 < T_ω'(t_2, n) + t_2; 2) for each ω' from Ω', the finite-dimensional (k-dimensional) sections of the process ξ(ω, ω', t) are stochastically equivalent in the broad sense for all k ∈ N; 3) for any different ω'_1, ω'_2 from Ω', the random processes ξ(ω, ω'_1, t) and ξ(ω, ω'_2, t) are cyclic random processes isomorphic with respect to order and values.
A Generalized Mathematical Model of a High-Resolution Rhythmocardiosignal
The conditional cyclic random process ξ(ω, ω', t) allows simultaneous consideration of both the stochasticity of the morphological structure of electrocardiosignals (which is important for their statistical morphological analysis) and the stochasticity of their rhythmic structure (which is important for heart rate analysis). Considering that, according to such a mathematical model of the electrocardiosignal, information about the heart rate is contained in the rhythm function T(ω', t, n) of the conditional cyclic random process ξ(ω, ω', t), and also taking into account the fact that the processing of electrocardiosignals is carried out in a digital system, the analysis of heart rate is reduced to the statistical analysis of the random rhythm function of the conditional cyclic random process of a discrete argument. The random rhythm function T(t_ml(ω'), n) is completely defined by the elements of the random domain D(ω') according to the formula:
T(t_ml(ω'), n) = t_{m+n,l}(ω') − t_{m,l}(ω'), n ∈ Z.   (1)
When n = 1, the rhythm function T(t_ml(ω'), 1) is calculated in the following way:
T(t_ml(ω'), 1) = t_{m+1,l}(ω') − t_{m,l}(ω').   (2)
For each ω'-realisation of the random domain of definition D_ω' = {t_ml(ω'), m ∈ Z, l = 1, …, L} of the conditional cyclic random process of a discrete argument, which is given on a probabilistic space (Ω', F', P'), the following ordering conditions apply: t_{m2,l2}(ω') < t_{m1,l1}(ω') whether m_2 < m_1, or whether m_2 = m_1 and l_2 < l_1; t_{m2,l2}(ω') ≥ t_{m1,l1}(ω') in the other cases. If the heart rate analysis is based on the random rhythm function T(t_ml(ω'), 1) of the conditional cyclic random process ξ(ω, t_ml(ω')), preserving a tight binding to the phase of the cardiac cycle and to the number of the cardiac cycle, the mathematical model of the rhythmocardiosignal with increased resolution is presented as a vector of random sequences:
T(ω', m) = {T_l(ω', m), l = 1, …, L}, m ∈ Z,   (3)
where each l-th component of the vector is a random sequence T_l(ω', m), whose values are equal to the values of the random rhythm function T(t_ml(ω'), 1) at the moments of time t_ml(ω') from a discrete set D_l(ω'). The set D_l(ω') is embedded into D(ω') and describes the time distances between the same-type l-phases of the studied electrocardiosignal in its two adjacent cycles, namely:
D_l(ω') = {t_ml(ω'), m ∈ Z} ⊂ D(ω').   (4)
The dimension (the number of components) L of the vector determines the resolution of the rhythmocardiosignal and is equal to the number of studied time intervals between preselected phases in the electrocardiosignal, which can be identified by segmentation and detection methods when solving the problem of automatic formation of the rhythmocardiosignal from the electrocardiosignal [16-28].
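A minimal sketch in Python of how the vector in (3) can be formed from detected fiducial times, assuming the segmentation and detection steps have already produced a matrix of same-type phase times per cycle; the function name and the numbers are hypothetical illustrations, not part of the cited works.

```python
import numpy as np

def form_vector_rhythmocardiosignal(fiducial_times):
    """Form the L components T_l(m) = t_{m+1,l} - t_{m,l} of a high-resolution
    rhythmocardiosignal from detected same-type phase (fiducial) times.

    fiducial_times: array of shape (M, L); row m holds the times (in seconds)
    of the L selected phases within cardiac cycle m.
    Returns an array of shape (M-1, L): column l is the sequence of time
    distances between the l-th phase in adjacent cycles.
    """
    t = np.asarray(fiducial_times, dtype=float)
    return t[1:, :] - t[:-1, :]

# Hypothetical example: 5 cycles, 3 phases (e.g. P, R, T peaks) per cycle.
t = np.array([[0.10, 0.25, 0.55],
              [0.95, 1.10, 1.41],
              [1.78, 1.94, 2.26],
              [2.63, 2.79, 3.10],
              [3.50, 3.65, 3.96]])
T = form_vector_rhythmocardiosignal(t)
print(T)  # each column is one component T_l(m) of the vector rhythmocardiosignal
```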
According to the block diagram shown in Fig. (1), the first block is the determination of the same-type phases corresponding to the boundaries of segments-zones of the ECS; this stage is implemented on the basis of methods for segmenting cyclic signals. Detection of the same-type phases within the determined zones is the next step in the formation of a vector rhythmocardiosignal. At this stage, information is obtained about the time points that correspond to the maximum or minimum of characteristic ECS segments, for example, R, P or T. The final stage in this structure is the formation of the vector rhythmocardiosignal based on the information obtained at the previous stages.
Updated Mathematical Model of a High-Resolution Rhythmocardiosignal and its Probabilistic Characteristics
Let us move on to substantiating the probabilistic characteristics of the vector of random sequences. One of the simplest stochastic models that can take into account the dynamics of changes in the rhythmocardiosignal with increased resolution is the vector T(ω', m) of stationary and stationary-connected random sequences. First of all, note that in the partial case when its components are stationary sequences with independent values, i.e. white noises given on the set of integers, the vector of stationary and stationary-connected random sequences reduces to the well-known model of a rhythmocardiosignal with increased resolution in the form of a vector of random variables, which was developed in [29,30]. However, in practice the hypothesis of independence or uncorrelatedness of the rhythmocardiosignal readings does not correspond to reality, which requires taking into account the stochastic relationship between the readings of the rhythmocardiosignal with increased resolution, and therefore the use of a more complex and more general mathematical model in the form of a vector of stationary and stationary-connected random sequences. The defining property of the vector of stationary and stationary-connected random sequences is the invariance of its family of distribution functions to time shifts by an arbitrary integer u. For any distribution function F of order p from the family of distribution functions of the vector of stationary and stationary-connected random sequences, the following equality holds:
F_p(x_1, …, x_p; m_1, …, m_p) = F_p(x_1, …, x_p; m_1 + u, …, m_p + u), ∀ u ∈ Z.
The distribution function in the case when l_1 = l_2 = …
= l_p = l is an auto-distribution function of order p of the l-th stationary component of the vector, that is, a distribution function which describes the time distances between single-phase readings of the electrocardiosignal for its l-phase. If p = 1, we have a one-dimensional auto-distribution function of the stationary random sequence T_l(ω', m). In the case when the equality l_1 = l_2 = … = l_p = l does not hold, the distribution function is a joint distribution function of several (at least two) stationary components of the vector, which describes the time distances between single-phase readings of the electrocardiosignal for its different phases. The family of distribution functions of the vector of stationary and stationary-connected sequences most fully describes its probabilistic structure, but methods for the statistical estimation of distribution functions are too bulky for practical use in computer systems for diagnosing the functional state of the cardiovascular system of the human body. So, if there is a mixed initial moment function of the stationary and stationary-connected random sequences, then the following equality holds for it:
M{T_{l1}(ω', m_1) · … · T_{lp}(ω', m_p)} = M{T_{l1}(ω', m_1 + u) · … · T_{lp}(ω', m_p + u)}, ∀ u ∈ Z,
where M is the operator of mathematical expectation. If there is a mixed central moment function of order p of the stationary and stationary-connected random sequences, then the following equality holds for it:
M{(T_{l1}(ω', m_1) − m_{T_{l1}}) · … · (T_{lp}(ω', m_p) − m_{T_{lp}})} = M{(T_{l1}(ω', m_1 + u) − m_{T_{l1}}) · … · (T_{lp}(ω', m_p + u) − m_{T_{lp}})}, ∀ u ∈ Z,
where {m_{T_l}, l = 1, …, L} is the set of first-order initial moments (mathematical expectations) of the stationary random sequences from the set {T_l(ω', m), l = 1, …, L}. In practice, to analyze a rhythmocardiosignal with increased resolution it is appropriate to use mixed moment functions of low orders, namely, mixed initial moment functions of the second order (covariance functions) and mixed central moment functions of the second order (correlation functions). In this case, the initial moment functions of the second order for the vector of stationary and stationary-connected random sequences are represented as a matrix of covariance functions, which can be written compactly as:
C(m_1, m_2) = [C_{T_{l1}T_{l2}}(m_1, m_2), l_1, l_2 = 1, …, L],   (9)
where each of its elements is a covariance function C_{T_{l1}T_{l2}}(m_1, m_2), defined as:
C_{T_{l1}T_{l2}}(m_1, m_2) = M{T_{l1}(ω', m_1) · T_{l2}(ω', m_2)}.
Since the components of the vector of random sequences are stationary and stationary-connected sequences, their covariance functions are functions of only one integer argument u, equal to u = m_1 − m_2. Therefore, the covariance matrix of this random vector can be represented as:
C(u) = [C_{T_{l1}T_{l2}}(u), l_1, l_2 = 1, …, L],
where each of its elements is a covariance function C_{T_{l1}T_{l2}}(u), which is equal to:
C_{T_{l1}T_{l2}}(u) = M{T_{l1}(ω', m) · T_{l2}(ω', m + u)}.
Provided that l_1 = l_2 = l, the covariance function is an autocovariance function of the l-th stationary component of the vector, which describes the time distances between single-phase readings of the electrocardiosignal for its l-phase. If l_1 ≠ l_2, the covariance function is a mutual covariance function of two stationary components of the vector, which describe the time distances between single-phase readings of the electrocardiosignal for the l_1 and l_2 phases. Mixed central moment functions of the second order for the vector of stationary and stationary-connected random sequences are represented as a matrix of correlation functions, which can be written compactly as:
R(m_1, m_2) = [R_{T_{l1}T_{l2}}(m_1, m_2), l_1, l_2 = 1, …, L],
where each of its elements is a correlation function R_{T_{l1}T_{l2}}(m_1, m_2), defined as:
R_{T_{l1}T_{l2}}(m_1, m_2) = M{(T_{l1}(ω', m_1) − m_{T_{l1}}) · (T_{l2}(ω', m_2) − m_{T_{l2}})}.
Since the components of the vector of random sequences are stationary and stationary-connected sequences, their correlation functions are functions of only one integer argument u, equal to u = m_1 − m_2.
Therefore, the correlation matrix of this random vector can be represented as:
R(u) = [R_{T_{l1}T_{l2}}(u), l_1, l_2 = 1, …, L],
where each of its elements is a correlation function R_{T_{l1}T_{l2}}(u), which is equal to:
R_{T_{l1}T_{l2}}(u) = M{(T_{l1}(ω', m) − m_{T_{l1}}) · (T_{l2}(ω', m + u) − m_{T_{l2}})}.
If l_1 = l_2 = l, the correlation function is an autocorrelation function of the l-th stationary component of the vector, which describes the time distances between single-phase readings of the electrocardiosignal for its l-phase. If l_1 ≠ l_2, the correlation function is a mutual correlation function of two stationary components of the vector, which describe the time distances between single-phase readings of the electrocardiosignal for the l_1 and l_2 phases.
Statistical Estimates of Probabilistic Characteristics of a High-Resolution Rhythmocardiosignal
Let us write down the formula expressions for calculating the implementations of statistical estimates of the probabilistic characteristics of a rhythmocardiosignal with increased resolution. The implementation of the statistical estimate of the initial moment of order s of a stationary random sequence T_l(ω', m), which describes the time distances between single-phase readings of the electrocardiosignal for its l-phase, is calculated as:
m̂^(s)_{T_l} = (1/M) Σ_{m=1}^{M} (T_{lω'}(m))^s,   (19)
where M is the number of recorded complete cycles of the electrocardiosignal from which the rhythmocardiosignal with increased resolution is formed. If in formula (19) s = 1, then we obtain the expression for calculating the implementation of the statistical estimate of the initial moment of the first order (mathematical expectation) of the stationary random sequence T_l(ω', m), namely:
m̂_{T_l} = (1/M) Σ_{m=1}^{M} T_{lω'}(m).   (20)
The statistical estimate of the correlation function of two stationary and stationary-connected random sequences T_{l1}(ω', m) and T_{l2}(ω', m), which describe the time distances between single-phase readings of the electrocardiosignal for its l_1 and l_2 phases, is given by formula (21). Since for stationary and stationary-connected random sequences the correlation functions are functions of only one integer argument u, equal to u = m_1 − m_2, their statistical estimates also depend on only one argument u. In this case, if we assume the ergodicity of the stationary components of the vector, formula (21) takes the form:
r̂_{T_{l1}T_{l2}}(u) = (1/(M − u)) Σ_{m=1}^{M−u} (T_{l1ω'}(m) − m̂_{T_{l1}}) · (T_{l2ω'}(m + u) − m̂_{T_{l2}}), u = 0, …, M_1 − 1,   (22)
where M_1 (M_1 << M) is the maximum value of the argument u, selected depending on the number of averagings in the implementation of the statistics so as to ensure the required level of accuracy and reliability of the statistical evaluation. If in formula (22) u = 0 and l_1 = l_2 = l, then we have the expression for calculating the implementation of the variance estimate of the stationary random sequence T_l(ω', m), namely:
d̂_{T_l} = (1/M) Σ_{m=1}^{M} (T_{lω'}(m) − m̂_{T_l})².   (23)
In order to reduce the number of diagnostic features for a high-resolution rhythmocardiosignal, it is necessary to take into account the symmetry of the estimated matrix of correlation functions, which makes it adequate to evaluate only those elements of the matrix that lie on its diagonal and above the diagonal. On the diagonal of this matrix, when l_1 = l_2, estimates of the autocorrelation functions are placed, and the elements of the matrix placed above its diagonal, namely when l_1 ≠ l_2, are estimates of the mutual (cross-) correlation functions. Therefore, the matrix, without losing its information content, can be replaced with a triangular matrix.
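A minimal numerical sketch of these estimators, assuming ergodic components as above; the function names and the simulated data are hypothetical, and the last function anticipates the spectral decompositions discussed in the next paragraph.

```python
import numpy as np

def correlation_estimate(x, y, max_lag):
    """Implementation of formula (22): r(u) = mean over m of
    (x(m) - mx)(y(m+u) - my), for u = 0..max_lag-1, assuming ergodicity."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()  # formula (20): mean estimates
    M = len(x)
    return np.array([np.mean((x[:M - u] - mx) * (y[u:] - my))
                     for u in range(max_lag)])

def triangular_correlation_matrix(T, max_lag):
    """Diagonal and above-diagonal elements only (the matrix is symmetric):
    autocorrelation estimates for l1 == l2, cross-correlations for l1 < l2."""
    _, L = T.shape
    return {(l1, l2): correlation_estimate(T[:, l1], T[:, l2], max_lag)
            for l1 in range(L) for l2 in range(l1, L)}

def psd_estimate(r):
    """Spectral power density estimate as the Fourier transform of a
    symmetrized correlation-function estimate (a correlogram-style sketch)."""
    r_sym = np.concatenate([r[::-1], r[1:]])
    return np.abs(np.fft.rfft(r_sym))

rng = np.random.default_rng(0)
T = 0.90 + 0.04 * rng.standard_normal((300, 3))  # hypothetical 3-component signal
R = triangular_correlation_matrix(T, max_lag=25)
print(R[(0, 0)][0])                 # u = 0, l1 = l2: variance estimate, formula (23)
print(psd_estimate(R[(0, 1)])[:5])  # spectral features of a cross-correlation
```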
Another way to reduce the number of diagnostic features in information systems for heart rate analysis based on rhythmocardiosignals with increased resolution is to use spectral decompositions of the elements of the triangular matrix, namely the sets {Ŝ²_{T_{l1}T_{l2}}(ν), ν = 0, …, M_1 − 1} of spectral power density estimates Ŝ²_{T_{l1}T_{l2}}(ν) obtained from the correlation function estimates r̂_{T_{l1}T_{l2}}(u).
RESULTS AND DISCUSSION
Based on the above mathematical model and methods of processing a high-resolution rhythmocardiosignal, a multifunctional software package for modeling and automated analysis of a wide class of cyclic heart signals for the needs of functional medical diagnostics has been upgraded. Namely, as a component of this software package, a system of computer programs has been developed for the automated formation and statistical analysis of heart rate based on a vector rhythmocardiosignal (a rhythmocardiosignal with increased resolution), which has expanded the functionality of the existing software package and made it possible to analyze the heart rate automatically with increased information content. A typical structural and functional diagram of the software for processing the ECS is shown in Fig. (2); a dashed line in this block diagram highlights the blocks that are emphasized in this article. The software package is implemented in the Object Pascal programming language. According to the blocks presented in the block diagram, ECS processing includes evaluating the segmental structure using segmentation methods, for example [29], and evaluating the rhythm function by interpolating the rhythmic structure (the discrete rhythm function), based on the method of [30]. Further, the processing of the ECS branches out into two stages (two problems are solved). The first stage performs morphological analysis, which, according to this structure, provides for statistical processing of the ECS, normalization of the statistical estimates and their decomposition in the Chebyshev basis, and decision-making based on the obtained morphological features; this stage is described in [31]. The second stage performs rhythm analysis and consists in forming the vector rhythmocardiosignal, statistical processing of the vector, and spectral analysis of the obtained statistical estimates. As an example, Fig. (3) shows a general view of the program interface for evaluating the autocorrelation function and the cross-correlation function of the components of a vector rhythmocardiosignal. Fig. (2) presents the structural and functional diagram of the software for heart rate analysis with increased information content. Analyzing the graphs of relative errors in the formation of a high-resolution rhythmocardiogram, which are presented in Fig. (5), it can be argued that the method of automatic formation of a high-resolution rhythmocardiogram based on Brodsky-Darkhovsky statistics has higher accuracy compared to a similar method based on the use of a first-order difference function. The implementation graph of the first component of the vector rhythmocardiosignal is shown in Fig. (6A), and the implementation graph of the second component is shown in Fig. (6B). Statistical hypotheses about the stationarity of the mathematical expectation and variance of the components of the vector rhythmocardiosignal have also been tested.
Namely, the statistical hypotheses about the invariance of the mathematical expectation and variance of the components of the vector rhythmocardiosignal have been tested by applying well-known statistical criteria for checking the equality of the mathematical expectations and variances of two random variables represented by their samples (as samples, two sections of each component of the vector rhythmocardiosignal were taken). The Student's criterion (for the mathematical expectation of a vector rhythmocardiosignal component) and Fisher's criterion (for the variance of a vector rhythmocardiosignal component) have been used as the statistical criteria for testing the hypotheses about stationarity. The results of 13 of the 15 tests performed, with a confidence level of 0.95, indicate the consistency of the hypothesis about the stationarity of the components of the vector rhythmocardiosignal, which can be considered a verification of the new mathematical model of the rhythmocardiosignal with increased resolution in the form of a vector of stationary and stationary-connected random sequences. To check the stationary components of the vector for normality, Fig. (7) shows histograms for the implementations T_{1ω'}(m), T_{2ω'}(m) of the corresponding stationary components of the vector. Testing the hypothesis of the normality of the distribution of the stationary components of the random vector according to the Pearson goodness-of-fit criterion has shown that these results do not contradict the hypothesis of the normality of its distribution. Normality of the vector distribution is the basis for substantiating diagnostic features in systems for heart rate analysis using a high-resolution rhythmocardiogram within the framework of the spectral-correlation theory, which significantly reduces the computational complexity of such an analysis. In this case, to estimate the probabilistic structure of the vector of stationary and stationary-connected random sequences, it is sufficient to perform a statistical estimation of only the vector of its mathematical expectations according to formula (20) and of the matrix of correlation functions according to formula (22). The above implementations of statistical estimates of the investigated probabilistic characteristics significantly complement the known informative features in heart rate analysis systems. Namely, new diagnostic features are being introduced into practice, such as the matrix of correlation functions and the matrix of spectral power densities of the stationary components of the rhythmocardiosignal with increased resolution, which, by reflecting the stochastic temporal dynamics of the heart rate, make it possible to increase the level of information content of heart rate analysis in modern cardiodiagnostic systems.
CONCLUSION
The paper presents a new mathematical model of a rhythmocardiosignal with increased resolution in the form of a vector of stationary and stationary-connected random sequences, which, in comparison with the known mathematical models of heart rate, allows an increase in the level of information content of automated heart rate analysis and is logically consistent with
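A rough Python sketch of the verification described in the Results above, assuming each component is split into two halves for the Student and Fisher tests and checked against a fitted Gaussian with a Pearson chi-square test; the data, binning, and thresholds are hypothetical choices, not those of the cited software package.

```python
import numpy as np
from scipy import stats

def check_component(x, alpha=0.05, bins=8):
    """Split one vector component into two halves; Student's t for equal
    means, Fisher's F for equal variances, and a Pearson chi-square check
    of normality against a fitted Gaussian (rough verification only)."""
    x = np.asarray(x, float)
    a, b = np.array_split(x, 2)
    _, t_p = stats.ttest_ind(a, b, equal_var=True)   # mean stationarity
    f = np.var(a, ddof=1) / np.var(b, ddof=1)        # variance ratio
    d1, d2 = len(a) - 1, len(b) - 1
    f_p = 2 * min(stats.f.cdf(f, d1, d2), stats.f.sf(f, d1, d2))
    # Pearson goodness-of-fit: observed vs expected counts under N(mean, std).
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    observed, _ = np.histogram(x, bins=edges)
    cdf = stats.norm.cdf(edges, loc=x.mean(), scale=x.std(ddof=1))
    expected = len(x) * np.diff(cdf)
    expected *= observed.sum() / expected.sum()      # match totals for the test
    chi2_p = stats.chisquare(observed, expected, ddof=2).pvalue
    return {"mean_stationary": t_p > alpha,
            "var_stationary": f_p > alpha,
            "normal": chi2_p > alpha}

rng = np.random.default_rng(1)
print(check_component(0.92 + 0.05 * rng.standard_normal(300)))
```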
2021-11-24T16:40:17.302Z
2021-11-19T00:00:00.000
{ "year": 2021, "sha1": "36c3824941418a174119830b06d4dd235caddb48", "oa_license": "CCBY", "oa_url": "https://openbioinformaticsjournal.com/VOLUME/14/PAGE/73/PDF/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e305fe3765d3bd51de0645fbd9645be7f1c5f91e", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
91743362
pes2o/s2orc
v3-fos-license
Influence of aqueous extracts of black angico on Pratylenchus brachyurus in cotton plants
The root lesion nematode (Pratylenchus brachyurus) is one of the main phytosanitary problems of cotton plants in Brazil. In the search for alternatives that minimize the damage to the crop, several methods are employed to manage it, among them the use of plant extracts. In this sense, the aim of this study was to evaluate the potential of black angico extract (Anadenanthera macrocarpa) in the management of P. brachyurus in the cotton crop. The experiment was conducted in a greenhouse at the Phytopathology Laboratory of the Federal University of Piauí in Bom Jesus-PI. The experimental design was completely randomized, in a factorial scheme (2×6), composed of two sources of extracts (leaf and bark) of black angico at six concentrations (0, 20, 40, 60, 80 and 100 g L-1), with five replications per treatment. The plants were inoculated with 1900 specimens (juveniles and eggs) 96 h after transplanting. Sixty days after the application of the extracts, agronomic variables of the cotton and variables of P. brachyurus parasitism were evaluated. Root volume and fresh root mass showed considerable gains at all concentrations of the leaf extract. Plant height was negatively influenced by concentrations above 60.83 g L-1 for both extracts. Regarding parasitism, all extract concentrations, regardless of the source (leaf or bark), showed suppressiveness to P. brachyurus. Therefore, the aqueous extracts of black angico present nematicidal action and favor the development of cotton plants.
INTRODUCTION
The cotton crop (Gossypium hirsutum L.) represents one of the most important activities of Brazilian agribusiness, in a vigorous expansion process and with exceptional technical and economic results from the use of its seed and mainly its fiber (Ribeiro et al., 2012). In the 2015/2016 season, the area planted with cotton decreased by 2% compared to the previous harvest, to 956.2 thousand hectares (CONAB, 2016). Cotton is grown in more than 60 countries. Among them, China, India and the United States are the largest producers, and together they account for 64% of world production. Despite having a larger planted area, India produces a volume of fibers almost equal to that of the United States due to the low yield of its crops. The list of the top five producers is completed by Pakistan and Brazil. In recent years, Brazil has improved its ranking among producing countries and is currently the fifth largest producer in the world (Abrapa, 2015). Among the barriers to cotton crop management, phytosanitary problems caused by fungi, bacteria, viruses and mainly nematodes are often associated with reduced crop yield (Ribeiro et al., 2012). Among the key nematodes of this crop, there are approximately five species responsible for causing severe damage worldwide. Three of them are considered to cause significant damage to Brazilian cotton production: Meloidogyne incognita, Rotylenchulus reniformis and Pratylenchus brachyurus (Starr et al., 2007; Jones et al., 2013). Currently, the nematode P.
brachyurus, which causes root lesions, has been considered the most frequent in Brazil and is widespread in the main agricultural regions of the country (Severino et al., 2010). Because it is a polyphagous species that is extremely common in regions of tropical climate (Arieira et al., 2009), it has become a concern to cotton producers in the Northeast region. It is the third most important nematode in terms of the global economic impact caused to crops, being exceeded only by the root-knot and cyst nematodes (Heterodera and Globodera) (Jones et al., 2013). The symptoms associated with this species in cotton include darkened lesions on the roots, causing atrophy, which may even compromise the absorption of water and nutrients (Dinardo-Miranda et al., 2003) and, consequently, reduce shoot development with a sharp drop in production (Ribeiro et al., 2012). Considering the great importance of phytonematode management in commercial production areas, chemical control has always stood out because of its fast and efficient results (Oliveira et al., 2005). However, numerous problems are encountered due to its high toxicity, risk of environmental contamination, high cost, or low control effectiveness after repeated applications (Dong and Hang, 2006). In an attempt to reduce these effects, different control methods such as genetic control, biological control, crop rotation and alternative control have been studied. Within this context, plant extracts represent a viable alternative to alleviate the economic and social conditions of most farmers. In addition, their use reduces or replaces chemical application (Ferraz et al., 2010). Several studies have demonstrated the nematicidal effect of extracts of different plants on different species of phytonematodes when applied directly to the soil or by air (Cetintas and Yarba, 2010). Among these species, black angico (Anadenanthera macrocarpa) is worth mentioning because it has the potential to manage several diseases in the human, animal and plant areas, with emphasis on phytonematodes. Black angico is a tree that can reach 13 to 20 m in height, with a trunk 40 to 60 cm in diameter when adult, occurring from Maranhão and the Brazilian Northeast to São Paulo, Minas Gerais and Mato Grosso (Gonçalves et al., 2012). Thus, the aim of this study was to evaluate the potential of plant extracts based on black angico (A. macrocarpa) in the management of P. brachyurus in cotton.
Location of the experimental area and soil treatment
The experiment was performed under greenhouse conditions at the Phytopathology Laboratory of the Universidade Federal do Piauí, Prof. Cinobelina Elvas campus, Bom Jesus city, from October to December 2014. To evaluate the treatments, the substrate was composed of soil-sand-manure in a proportion of 3:2:1, respectively. It was autoclaved at a temperature of 120°C and a pressure of 1.05 kg/cm2 for 2 h. The substrate was fertilized according to a previous analysis and distributed into plastic containers with a capacity of 4 dm3.
Origin and multiplication of inoculum
The inoculum was obtained from a population of P.
brachyurus from soybean crops in Bom Jesus-PI. The extraction was carried out by liquefaction and centrifugation in a sucrose solution with kaolin, according to the methodology of Coolen and D'Herde (1972). Soon after, the specimens were isolated and inoculated on plants of the corn hybrid Pioneer 30F53 grown in pots and kept in a greenhouse for 30 days for multiplication. The pre-identification of the specimens was done with semi-permanent slides in formalin, examined under an optical microscope, comparing the characteristics observed with the literature (Handoo and Golden, 1989).
Experimental procedures
The experimental design was completely randomized, in a factorial scheme (2×6), composed of two sources of extracts (leaf and bark) of black angico at six concentrations (0, 20, 40, 60, 80 and 100 g L-1), with five replications per treatment. The seedlings were prepared in trays of expanded polystyrene with 128 cells, with a substrate consisting of sand, manure and earthworm humus (in the same ratio), sterilized by autoclaving at a temperature of 120°C and a pressure of 1.05 kg/cm2 for 2 h. Transplanting of the seedlings was done on the thirteenth day after emergence, and two seedlings were maintained per pot. Thinning was done 28 days after transplanting, keeping a single plant, which corresponded to the experimental unit. Subsequently, at 4 days after transplanting, a suspension with 2000 specimens (juveniles and eggs) of the inoculum was used for inoculation with the aid of a pipette and was distributed in three holes 5.0 cm deep, spaced 2.0 cm from the hypocotyl of the cotton plants, to facilitate the action of the nematodes in the soil. The botanical material (leaves and bark) of the black angico species was collected in the region of Bom Jesus-PI. The dehydration process was carried out in the laboratory at room temperature for 5 days; the material was then pulverized in a mechanical mill, reduced to powder, and stored in a 1000 ml beaker until the preparation of the fractionated aqueous extracts. One day prior to the application of the treatments, the bark and leaf powders of angico at the concentrations of 0, 20, 40, 60, 80 and 100 g/L were subjected to cold extraction with distilled water for 24 h to obtain the maximum extraction of the chemical constituents. The resulting extractive solution was filtered and then applied to the treatments through the soil. A solution of 100 ml was applied to each pot, divided into 4 aliquots of 25 ml each, at intervals of 15 days. The concentrations used throughout the intervals were prepared only 24 h before the applications.
Analyzed variables
The evaluations were performed sixty days after the application of the extracts. The following agronomic variables of the cotton plant were evaluated: plant height and root length, using a graduated ruler; fresh shoot mass and fresh root mass, obtained with the aid of a semi-analytical balance. Root volume was measured using a 1000 ml test tube, considering a fixed volume of 800 ml and immersing the root in this volume, calculating the difference to obtain the final volume. The parasitism variables were the estimate of the number of specimens in the soil of each treatment, extracted from 100 cm3 of soil by centrifugation and flotation (Jenkins, 1964), and the estimate of the number of nematodes in the roots (Coolen and D'Herde, 1972).
Statistical analysis
Data on the agronomic and parasitism variables were checked with the Shapiro-Wilk test and submitted to analysis of variance (ANOVA) by the F test (p < 0.05), using the statistical program "R", version 3.1.2. When significant, the means were fitted to regression equations using the software SigmaPlot 10.0.
Influence of aqueous extracts of black angico on cotton plants
The analysis of variance showed an interaction between sources and concentrations of the black angico extract, with a significant effect only for root volume (P < 0.01) and fresh root mass (P < 0.05). At the same time, only plant height (P < 0.01) was affected by the individual effect of the extract concentrations. The height of the cotton plants was positively influenced by the extracts of black angico. Regardless of the source tested (bark or leaf), there were quadratic responses as a function of the concentrations applied (Figure 1A). Thus, the plants showed the greatest heights when they received 60.83 g L-1 of the extract, reaching an increase of 25.80% (Figure 1A). However, the plants showed a reduction in growth at concentrations above 60.83 g L-1. The harmful effect of the black angico extract on the plant at high concentrations could be related to the presence of tannin in this species, which is considered an allelopathic agent due to its ability to act directly on cytological characteristics, phytohormones, membranes, mineral absorption, respiration and enzymatic activity (King and Ambika, 2002). Root volume and fresh root mass of the cotton plants presented positive gains after the application of the extracts, and the means were fitted to the quadratic polynomial regression model (Figure 1B and C). The highest averages of these variables were observed with a leaf extract concentration of 60 g L-1, reaching respective maximum increases of 30.61 and 15.01% at the highest concentration tested (100 g L-1). The increase in the root system with the leaf extract can be attributed to the reduction of the parasitism of P. brachyurus (Figure 2), which is also associated with other factors such as the presence of some allelochemicals stimulating root development, as well as an allelopathic effect (Carvalho et al., 2002). Abreu (1997) reported the possible presence of an allelochemical in the aqueous extract of red angico (Anadenanthera peregrina (L.) Speg) acting as a phytohormone in root development. For the bark extract, the best results for root volume and fresh root mass occurred at the low concentrations of 37.38 and 30 g L-1, respectively. At concentrations above these, however, there was a reduction of root development due to the allelopathic effects of the bark extract (Figure 1B and C). As previously mentioned, tannin is the main chemical constituent present in the leaves and bark of black angico and may interfere with the physiological activity of plants, harming or stimulating growth and development. Thus, the divergent results between leaf and bark extracts may be related to the pronounced presence of tannin in the bark (around 15 to 20%) (Lorenzi and Matos, 2002).
Influence of aqueous extracts of black angico on P. brachyurus parasitism
For the variables of P. brachyurus parasitism, there was no significant interaction (P > 0.05) between sources and concentrations of the black angico extracts (Table 1). However, there was a significant effect (P < 0.01) of the extract concentrations on the variables juveniles in the root and juveniles in the soil.
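Before turning to the parasitism results below, the quadratic fits reported above for the growth variables (Figure 1) can be sketched as follows. The data points are placeholders, not the study's measurements, and the vertex of the fitted parabola plays the role of the optimum concentration (60.83 g L-1 in the paper).

```python
import numpy as np

# Hypothetical means per concentration; the paper's raw data are not reproduced.
conc = np.array([0, 20, 40, 60, 80, 100], dtype=float)   # g/L
height = np.array([62.0, 70.5, 75.8, 78.0, 74.2, 66.9])  # cm, placeholders

# Fit height = a*conc^2 + b*conc + c, as in the quadratic polynomial model.
a, b, c = np.polyfit(conc, height, deg=2)
optimum = -b / (2 * a)  # vertex of the parabola: concentration of maximum height
print(f"fit: y = {a:.4f}x^2 + {b:.3f}x + {c:.2f}")
print(f"estimated optimal concentration: {optimum:.2f} g/L")
```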
The extracts of black angico negatively influenced the number of juveniles of P. brachyurus in the root, with an exponential reduction as a function of the tested concentrations (Figure 2A). These results demonstrate that the lowest applied concentration (20 g L-1), regardless of the source (bark or leaf), was efficient in reducing root nematodes by 71.43%. These results of P. brachyurus control may be related to the presence of tannins in black angico bark (Lorenzi and Matos, 2002), which may present antimicrobial activity (Djipa et al., 2000). Tannins act on the cell membranes of microorganisms and modify their metabolism (Scalbert, 1991). Wilted black angico leaves are popularly regarded as toxic and can be used as natural defenses (Silva Filho, 2007). Leaf and bark extracts of black angico also reduced the number of juveniles of P. brachyurus in the soil, with an exponential reduction as a function of the applied concentrations (Figure 2B). The lowest number of nematodes in the soil was observed at the lowest extract concentration applied (20 g L-1), with a reduction of 74.83%. The nematicidal action of black angico is attributed to compounds involved in chemical defense, which include lectins, protease and amylase inhibitors, toxins, and low-molecular-mass secondary metabolites (Xavier-Filho, 1993). As the extracts in this research were applied directly to the soil, the compounds present in the leaves and bark of the black angico possibly acted directly, by contact, on the nematodes, promoting a population decrease. Maistrello et al. (2010) demonstrated the nematicidal action of tannin in preventing the hatching and development of phytonematodes of the genus Meloidogyne. Several natural substances of different plant species have been isolated and chemically characterized, and some are promising for field application. Martinez (2002) demonstrated the nematicidal effect of neem on several species of phytonematodes, such as Pratylenchus species, R. reniformis and M. incognita. Franzener et al. (2007) verified the nematicidal effect of the aqueous extract at 0.05 g ml-1 of Tagetes patula flowers when applied to the soil, observing reductions of 62.2, 61.5 and 52.8% in the number of galls, the number of juveniles in the soil and the number of eggs of M. incognita in tomato roots, respectively. Aqueous extracts obtained from crotalaria leaves (Crotalaria mucronata L.), at a concentration of 0.2 g ml-1, when applied via soil to tomato plants, reduced the number of galls caused by Meloidogyne javanica by 33% compared to the control, in which only water was applied (Gardiano et al., 2010). Thus, the use of secondary metabolites of plants with nematicidal properties represents an economically viable option, since it presents a lower risk of environmental contamination due to their biodegradable characteristics. However, it is necessary to carry out new studies to characterize the active ingredients in the extracts so as to pinpoint their mode of action on cotton plants; in addition, this could pave the way for the possible synthesis of botanical nematicides based on the extracts.
Conclusions
The aqueous extracts of black angico present nematicidal potential and promote plant growth and development. The leaf aqueous extract contributed to an increase in the root volume and fresh root mass of the cotton plants. Leaf and bark extracts of black angico negatively influenced plant height at concentrations higher than 60.83 g L-1.
Root volume and root fresh mass decreased when exposed to concentrations above 37.38 and 30 g L-1, respectively, of bark extract. All concentrations of leaf and bark extracts showed some nematicidal action, mainly at the lower concentrations (20 and 40 g L-1).
Figure 1. Plant height (A), root volume (B) and root fresh mass (C) of cotton plants according to the concentrations of aqueous extract of black angico. **Significant at 1%; *Significant at 5%. Extract 1 - individual effect of extract concentrations, independently of the source tested.
Figure 2. Juveniles in the root (A) and juveniles in the soil (B) of P. brachyurus according to the concentrations of black angico extract. **Significant at 1%. Extract 1 - individual effect of extract concentrations, independently of the source tested.
Table 1. Summary of variance analysis for cotton agronomic variables: plant height (PH), shoot fresh mass (SFM), root length (RL), root volume (RV) and root fresh mass (RFM), and for the variables of P. brachyurus parasitism: juveniles in the root (JR) and juveniles in the soil (JS).
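The exponential reductions reported for the parasitism variables (Figure 2) can likewise be sketched with a simple decay fit; the counts below are hypothetical placeholders, and the model form y = y0·exp(−k·x) is one plausible reading of the "exponential reduction" described above, not the authors' published equation.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(x, y0, k):
    """Exponential reduction model y = y0 * exp(-k*x) for juvenile counts."""
    return y0 * np.exp(-k * x)

conc = np.array([0, 20, 40, 60, 80, 100], dtype=float)        # g/L
juveniles = np.array([420, 120, 95, 70, 55, 40], dtype=float)  # placeholders

(y0, k), _ = curve_fit(exp_decay, conc, juveniles, p0=(400.0, 0.05))
reduction_20 = 100 * (1 - exp_decay(20, y0, k) / y0)  # % reduction at 20 g/L
print(f"fitted y0 = {y0:.0f}, k = {k:.3f}; reduction at 20 g/L: {reduction_20:.1f}%")
```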
2019-04-03T13:09:52.141Z
2019-02-14T00:00:00.000
{ "year": 2019, "sha1": "a6d918158626644cab035cd2143a7c412ecf794b", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJMR/article-full-text-pdf/B9CCABA60156.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a6d918158626644cab035cd2143a7c412ecf794b", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
221701145
pes2o/s2orc
v3-fos-license
Associations between Activity Pacing, Fatigue, and Physical Activity in Adults with Multiple Sclerosis: A Cross-Sectional Study
Fatigue is common in people with multiple sclerosis (MS). Activity pacing is a behavioral way to cope with fatigue and limited energy resources. However, little is known about how people with MS naturally pace activities to manage their fatigue and optimize daily activities. This study explored how activity pacing relates to fatigue and physical activity in people with MS. Participants were 80 individuals (60 females, 20 males) with a diagnosis of MS. The participants filled in questionnaires on their activity pacing, fatigue, physical activity, and health-related quality of life 3-6 weeks before discharge from rehabilitation. The relationships between the variables were examined using hierarchical regression. After controlling for demographics, health-related quality of life, and perceived risk of overactivity, no associations were found between activity pacing and fatigue (β = 0.20; t = 1.43, p = 0.16) or between activity pacing and physical activity (β = −0.24; t = −1.61, p = 0.12). The lack of significant associations between activity pacing and fatigue or physical activity suggests that, without interventions, there appears to be no clear strategy amongst people with MS to manage fatigue and improve physical activity. People with MS may benefit from interventions to manage fatigue and optimize engagement in physical activity.
Introduction
Symptoms of fatigue are among the most frequently reported symptoms and strongest predictors of functional disability in people with multiple sclerosis (MS) [1-3]. The experience of fatigue and perceived fatigability (changes in the sensations that regulate effort and endurance) drives behavioral adaptations, such as limiting engagement in activities, resulting in underactivity, or a lifestyle characterized by periods of overactivity followed by long extensive rest periods [4-7]. However, both underactivity and overactivity are linked with disability [8]. Despite growing efforts to manage fatigue through exercise interventions in people with MS, studies investigating the effect of exercise interventions report a high number of dropouts and have identified that participants struggle to continue engaging in physical activity post-intervention [9,10]. This warrants the need to explore ways to enable long-term adoption of a physically active lifestyle. Activity pacing is a self-management strategy that can help alter often-occurring inefficient activity patterns (underactivity and overactivity) and stimulate long-term engagement in an active lifestyle [11]. It involves dividing one's daily activities into smaller, manageable pieces to manage fatigue and maintain a steady activity pace, whilst reducing relapses [12,13]. However, the current literature on how people naturally pace activities in daily life is limited and inconclusive [11-16]; some studies show that activity pacing is associated with higher levels of fatigue and lower physical activity [16,17], while others show the opposite or no association [8,18], and no clear strategies are available in rehabilitation treatment to optimize activity pacing to improve engagement in physical activity [14]. Similarly, quality of life has been proposed to impact activity pacing [8]. It is notable that most of the above studies aimed to explore issues in a range of chronic disabling conditions and did not focus on MS specifically.
Thus, while findings from these studies [8,13-18] contribute to our understanding of activity pacing, their broader focus with regard to multiple health behaviors and mixed populations may have failed to elicit key issues specific to engagement in physical activity for people with MS. Currently, no study has explored how people with MS naturally approach activity pacing, or how pacing relates to fatigue and physical activity. Understanding these associations can help guide and tailor rehabilitation treatment efforts for people with MS and promote an active lifestyle in this population. The aim of this study was to examine reported engagement in pacing and how it relates to fatigue and physical activity in people with MS just before discharge from rehabilitation, controlling for demographics, health-related quality of life, and perceived risk of overactivity. Based on the expectation that activity pacing would be an adaptive strategy to manage fatigue and optimize daily activities [14,17], we hypothesized that reported engagement in pacing would be associated with lower fatigue and higher physical activity.
Design
This study was part of a multicenter longitudinal study (Rehabilitation, Sports, and Active lifestyle; ReSpAct) to evaluate the nationwide implementation of an active lifestyle program (Rehabilitation, Sports, and Exercise; RSE) among people with a wide range of chronic diseases and/or physical disabilities in Dutch rehabilitation [19,20]. Participants received either inpatient or outpatient rehabilitation at rehabilitation centers and departments of rehabilitation in hospitals because of MS. The current study uses a cross-sectional design based on the baseline measurement (3-6 weeks before discharge from rehabilitation) of activity pacing behaviors, fatigue, physical activity, and health-related quality of life of people with MS, selected from the ReSpAct dataset. The study procedures were approved by the ethics committee of the Center for Human Movement Sciences of the University Medical Center Groningen, University of Groningen (reference: ECB/2013.02.28_1) and at the participating institutions.
Participants
Participants were recruited upon referral to the participating rehabilitation institutions across the Netherlands. Potential participants received information on the study rationale and procedures, had their questions answered, and were checked against the inclusion criteria. Participants were included in this study if they were 18 years or older, had a diagnosis of MS, had received rehabilitation care or treatment based on medical consultation within one of the participating rehabilitation institutions, and participated in the RSE program. Participants were excluded from the study if they were not able to complete the questionnaires, even with help, or if they participated in another physical activity stimulation program. Eligible participants who volunteered signed an informed consent form.
Procedure
Enrolled participants were assessed through a standardized baseline measurement, which consisted of filling out a set of questionnaires on paper or digitally [19,21-23]. As part of the full questionnaire and procedure, participants first indicated which physical activities they perform in the context of the rehabilitation treatment and on their own initiative by filling out an adapted version of the Short Questionnaire to Assess Health-Enhancing Physical Activity (SQUASH) [21].
Secondly, participants filled out short questionnaires on their perceived engagement in pacing, risk of overactivity, and fatigue [19,22]. Lastly, participants filled out a questionnaire on their health-related quality of life [23].
Primary Measures
Fatigue severity was measured using the Fatigue Severity Scale (FSS) [22], a valid and reliable questionnaire to determine the impact of fatigue in people with MS [24]. The participants scored the nine items of the questionnaire on a scale of 1-7 (1, completely disagree; 7, completely agree). The mean fatigue score, based on the average of the nine items, was used; it ranges from 1 to 7. A mean FSS score ≥ 4 was adopted as the cut-off for clinically significant fatigue [25]. Physical activity was assessed using an adapted version of the Short Questionnaire to Assess Health-Enhancing Physical Activity [21]. The questionnaire is a self-reported recall measure to assess daily physical activity based on an average week in the past month. The original questionnaire has demonstrated good test-retest reliability and internal consistency, and moderate concurrent validity in ordering participants according to their level of physical activity [21,26,27]. Some minor changes were made to make the SQUASH applicable to people with a chronic disease or physical disability. Specifically, within the domains 'commuting activities', 'leisure-time' and 'sports activities', the items 'wheelchair riding' and 'hand cycling' were added, and 'tennis' was modified to '(wheelchair) tennis'. Total minutes of physical activity per week were calculated by multiplying frequency (days/week) and duration (minutes/day) for each activity. Reported engagement in pacing was assessed with the 'engagement in pacing' subscale of the Activity Pacing and Risk of Overactivity Questionnaire [19]. This questionnaire was developed for use in the ReSpAct study [19]. The engagement in pacing subscale reflected reported engagement in pacing within daily routines and was the primary outcome in the current study. Participants scored the five items of the subscale on a scale of 1-5 (1, never; 2, rarely; 3, sometimes; 4, often; 5, very often). The mean subscale score ranged from 1 to 5, with a higher score indicating higher engagement in pacing. Appendix A shows the preliminary validation metrics of the questionnaire. In summary, the sampling adequacy tested with the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity showed that the questionnaire had a KMO of 0.722, and Bartlett's test was significant (p < 0.05), supporting a principal component analysis (PCA). Results of the PCA showed that there were two factors with an eigenvalue > 1.00; therefore, based on Kaiser's criterion, two components were chosen. Factor loadings were used to assign the items to the two components. The two components explained 60.50% of the total variance, and there was a negative correlation of −0.115 between the two components.
Background Measures and Confounders
Background demographics included age, gender, and body mass index, which was calculated from self-reported body mass and height (body mass (kg)/height² (m²)). To assess health-related quality of life, the RAND 12-Item Health Survey (RAND-12) [23] was used. The RAND-12 assesses seven health domains: general health, physical functioning, role limitations due to physical health problems, bodily pain, role limitations due to emotional problems, vitality/mental health, and social functioning.
The RAND-12 was scored using the recommended scoring algorithm for calculating general health [28], a composite score of a person's health-related quality of life. Scores ranged from 18 to 62, with a high score indicating better health-related quality of life. The RAND-12 has been shown to be a valid and reliable measure of health-related quality of life [29]. The 'risk of overactivity' subscale of the Activity Pacing and Risk of Overactivity Questionnaire [19] was used to measure the perceived risk of overactivity within daily routines. Participants scored the two items of the subscale on a scale of 1-5 (1, never; 2, rarely; 3, sometimes; 4, often; 5, very often). The mean score ranged from 1 to 5, with a higher score indicating a higher perceived risk of overactivity.
Data Analysis
Data were analyzed using the IBM Statistical Package for the Social Sciences, version 23.0 [30]. Based on descriptive statistics and visual inspection of frequency distributions, the data were normally distributed. All values were reported using descriptive statistics of means, standard deviations, and interquartile ranges to summarize the characteristics of participants. To ensure there was no multicollinearity, bivariate Pearson correlations were conducted to examine basic between-person associations among the demographic and study variables prior to testing the study hypotheses (variables were not highly correlated with each other, r < 0.8). Hierarchical linear regression was used to test the study hypotheses. This statistical approach was optimal for adjustment for confounders, as we wanted to determine whether there were relationships between engagement in pacing and fatigue, and between engagement in pacing and physical activity, after controlling for demographics, health-related quality of life, and perceived risk of overactivity. To examine how engagement in pacing was related to fatigue and physical activity, two hierarchical regression analyses were conducted with fatigue or physical activity as the dependent variable and engagement in pacing as the independent variable. Age, gender, body mass index, health-related quality of life, and perceived risk of overactivity were confounders. These demographics and confounders were included in the models because they are general demographic variables of interest in studies on physical activity behavior and fatigue experience, and because of known associations with perceived fatigability and physical activity behavior [18,31]. We chose to analyze our data using these models based on the literature and our expectation that activity pacing may be a positive strategy to manage fatigue and optimize daily activities [14,15]. In both models, at the first step, gender, age, and body mass index were entered. At the second step, health-related quality of life and perceived risk of overactivity were entered, and at the third step, engagement in pacing was entered. In both models, the variance inflation factors (VIFs) were examined for multicollinearity.
Results
Of the 89 participants included in the study, nine had incomplete data and were therefore excluded from the analysis. Characteristics of the sample (N = 80) are shown in Table 1. Of the sample, 75% were female (n = 60), and the mean age was 44 ± 11 years. The majority of the sample (n = 73, 91.3%) were scored as having clinically significant fatigue on the FSS (FSS score ≥ 4). We found that 85.61% (n = 69) of the participants lived independently and 33.6% had a university education.
The sample was, on average, overweight according to the World Health Organization standards (body mass index ≥ 25.0 kg/m²). Bivariate Pearson correlations (Table 2) showed that the variables were not strongly correlated with each other, providing support for the decision to include them in the primary analyses. Fatigue and health-related quality of life had the highest modest correlation (r = −0.41). The next modest correlations were between engagement in pacing and health-related quality of life (r = −0.27), and between engagement in pacing and fatigue (r = 0.27). These were followed by the correlations between engagement in pacing and physical activity (r = −0.25), and between engagement in pacing and age (r = 0.24). All other bivariate correlations were of modest magnitude (|r| ≤ 0.22).

Relationship between Engagement in Pacing and Fatigue

Results of the relationship between engagement in pacing and fatigue, controlling for demographics and confounders (Table 3), showed no association between engagement in pacing and fatigue (β = 0.198; t = 1.43, p = 0.16). Among the confounders, health-related quality of life was negatively related to fatigue (β = −0.341; t = −2.57, p = 0.03).

Relationship between Engagement in Pacing and Physical Activity

Results of the relationship between engagement in pacing and physical activity, controlling for demographics and confounders (Table 4), revealed no association between engagement in pacing and physical activity (β = −0.242; t = −1.61, p = 0.12). None of the demographics or confounders was related to physical activity (p ≥ 0.05). For all analyses, the VIFs were low, showing that there was no problem of multicollinearity (range: 1.04-1.30).

Discussion

This study explored relations of reported engagement in pacing with fatigue and physical activity, while controlling for demographics, health-related quality of life, and perceived risk of overactivity in adults with MS, and found no associations between engagement in pacing and fatigue or physical activity. These findings were similar to the findings of Murphy et al. [18] but did not support our hypothesis that engagement in pacing would be associated with low fatigue and high physical activity. Regarding the confounders, health-related quality of life was negatively related to fatigue. Descriptive statistics showed that people with MS demonstrated clinically significant fatigue complaints, which was similar to studies evaluating fatigue in the MS population [32], high engagement in pacing, and a high perceived risk of overactivity. The total minutes of physical activity reported by participants in our study is consistent with previous research involving people with MS [6,33]. The FSS score (5.43 ± 1.11) and percentage of participants reporting clinically significant fatigue (91.3%) in our study were comparable with those reported in other studies involving people with MS [1,34,35]. In their studies, Weiland et al. [34] and Hadgkiss et al. [35] reported a median FSS score of 4.9 (IQR 3.2-6.1), with 65.6% of the sample reporting clinically significant fatigue. Similarly, Merkelbach et al. [1] reported a mean FSS score of 4.4 ± 1.6, with 58.75% of the sample reporting clinically significant fatigue. Bivariate correlation analysis conducted prior to the primary analyses revealed a moderate negative association between fatigue and health-related quality of life, indicating that high fatigue was associated with low health-related quality of life.
Furthermore, there was a weak negative association between engagement in pacing and health-related quality of life, suggesting that high engagement in pacing was associated with low health-related quality of life. Together, these findings suggest that without interventions, there appears to be no clear strategy for using physical activity to ameliorate fatigue symptoms and improve quality of life amongst people with MS. This underscores the need to explore the potential of guiding and advising people with MS regarding optimal pacing behaviour and to develop therapeutic interventions.

A possible explanation for the lack of associations between reported engagement in pacing and fatigue or physical activity after controlling for demographic and confounding variables, coupled with the clinically significant fatigue found in this study, may be multiplicity in persons' attitudes towards physical activity in relation to fatigue symptoms. People with MS who experience more disruption through fatigue in daily life may be consciously limiting their activities to prevent fatigue worsening, or exhibiting all-or-nothing behaviour: a lifestyle characterized by periods of overactivity (when feeling good), feeling overtly fatigued as a consequence, and then taking long, extensive rest periods to recover from residual symptoms or prevent symptoms re-occurring. For those consciously limiting their activities to prevent fatigue worsening, more engagement in pacing will most likely result in less physical activity, while for those exhibiting a lifestyle characterized by periods of overactivity and prolonged inactivity, more active engagement in pacing will most likely result in more physical activity; when both attitudes are present in the subject population, no relations between activity pacing and physical activity may be found. This further highlights the importance of exploring the natural use of activity pacing in relation to what we know from the literature, to help guide treatment efforts for people with MS. Tailored advice and goal-directed interventions on how to approach activity effectively, such as guidance on optimal use of pacing, might be beneficial for people with MS. For example, people who avoid physical activity in anticipation of fatigue might score high on engagement in pacing but may need advice to engage more in physical activity; they could be provided with a graded, consistent program of physical activity to increase their health, as well as information and strategies to help change beliefs such as "I should do less if I am tired" or "symptoms are always a sign that I am damaging myself." Similarly, people who have developed an all-or-nothing behaviour style might need advice to be more aware of anticipatory ways of engaging in pacing, to develop a consistent pattern of paced activity and rest.

To our knowledge, this is the first study to tap into the experiences of people with MS during their daily routines and explore the associations between engagement in pacing, fatigue, and physical activity. Adequate management of fatigue might be essential to improve health and wellbeing in people with MS, based on the findings of this study and previous literature revealing that most people with MS experience high levels of fatigue throughout the day [31].
Though the sample size in this study was substantial for this population (N = 80), it would be useful to replicate these analyses in a larger sample to obtain more precise estimates of the model parameters while controlling for confounders. Furthermore, the adapted SQUASH and the Activity Pacing and Risk of Overactivity Questionnaire used in this study are recent and have undergone limited validity and sensitivity testing, which may have influenced the study findings. Currently, further studies on the validity of these measurements and their usage for the current purposes are being conducted. Although self-report measures are more feasible in population studies, they are susceptible to biases, as they involve recalling activities (over days, weeks, or months), which could lead to under- or overreporting. Using an objective device would allow examination of more macro levels of activity and is warranted in future studies. To optimize generalizability within the population of people living with MS, this study was conducted solely in people with MS. Generalizability to other populations might therefore be limited, as findings may vary per condition [14]. Unfortunately, there was a lack of information on participants' MS type and MS disability in this study, which limits the ability to draw firm conclusions; these variables could influence the study findings. Lastly, the weak bivariate correlations between reported engagement in pacing and fatigue, and between reported engagement in pacing and physical activity, may account for the lack of associations after controlling for demographics, health-related quality of life, and perceived risk of overactivity. It is worth noting that although participants received rehabilitation treatment as part of the larger multicenter study, a structured activity pacing program was not included, and we do not think this has influenced the findings of this study. Future studies should further explore how engagement in pacing and perceived risk of overactivity relate to performance of activities of daily living, to allow for firm conclusions and help advise people with MS on how to engage in an active lifestyle. Additionally, exploratory studies on how activity pacing behaviour might affect physical activity, fatigue, and health-related quality of life over a longer period of time are warranted. Such studies should compare higher versus lower fatigue groups, defined by the clinical fatigue cut-off point (FSS ≥ 4) or a median split, to help better understand the associations.

Conclusions

This study examined the relationships between reported engagement in pacing and fatigue and physical activity in people with MS, while controlling for demographics, perceived risk of overactivity, and health-related quality of life. No associations were found between reported engagement in pacing and fatigue, or between reported engagement in pacing and physical activity. We found that low health-related quality of life was associated with high fatigue. People with MS might benefit from targeted interventions to better manage their fatigue and improve their health and wellbeing. Ascertaining engagement in pacing may be important to help tailor advice on optimal pacing behaviour for people with MS. There is a need to explore the potential of guiding and advising people with MS on activity pacing and to develop therapeutic interventions.
2020-06-18T09:09:00.956Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "dc3f0c2a6aafec03410f34f6d79abb646001cf87", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2411-5142/5/2/43/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "802cf3b4f71888a9a6579e938135657e642b4e9e", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211171614
pes2o/s2orc
v3-fos-license
Gravitoelectromagnetism, Solar System Test and Weak-Field Solutions in $f(T,B)$ Gravity with Observational Constraints

Gravitomagnetism characterizes phenomena in the weak-field limit within the context of rotating systems. These are mainly manifested in the geodetic and Lense-Thirring effects. The geodetic effect describes the precession of the spin of a gyroscope in orbit about a massive static central object, while the Lense-Thirring effect expresses the analogous effect for the precession of the orbit about a rotating source. In this work, we explore these effects in the framework of Teleparallel Gravity and investigate how these effects may impact recent and future missions. We find that teleparallel theories of gravity may have an important impact on these effects which may constrain potential models within these theories.

I. INTRODUCTION

General relativity (GR) has passed numerous observational tests since its inception just over a century ago, confirming its predictive power. The detection of gravitational waves in 2015 [1] agreed with the strong-field predictions of GR, as does its solar system behaviour [2]. However, GR requires a large portion of dark matter to explain the dynamics of galaxies [3,4] and even greater contributions from dark energy to produce current observations of cosmology [5]. Given the lack of a concrete theoretical explanation of these phenomena, we are motivated to explore the possibility of modifying gravity within the observational context. There are a myriad of ways in which to consider modified theories of gravity [6-8], and to constrain them [9-11]. These range from extensions to the standard gravity of GR to more exotic directions. One interesting framework that has gained attention in recent years is that of Teleparallel Gravity (TG). TG is formed by first considering a connection that is not curvature-full, i.e. we consider a connection that is distinct from the regular Levi-Civita connection (which forms the Christoffel symbols). In this way, the gravitational contributions to the metric tensor become a source of torsion rather than curvature. This is achieved by replacing the Levi-Civita connection with its Weitzenböck analog. The Weitzenböck connection is torsion-full while being curvature-less and satisfying the metricity condition [12]. Thus, we can construct theories of gravity which express gravitation through torsion rather than curvature. One such theory is the teleparallel equivalent of general relativity (TEGR), which produces the same dynamical equations as GR while being sourced by a different gravitational action, i.e. one that is based on torsion rather than curvature. TEGR and GR differ in their Lagrangians by a boundary term that plays an important role in the extensions of these theories [13-15]. The boundary term naturally appears in GR due to the appearance of second-order derivatives in the Lagrangian [16,17], which is the core difference between GR and TEGR at the level of the Action. In fact, this boundary term is the source of the fourth-order contributions to f(R) theories of gravity. For this reason, TG features a weakened Lovelock theorem [18-20], which as a direct result means that many more theories of gravity can be constructed that are generally second-order in their field equation derivatives.
It is for this reason that TG is very interesting, because it organically avoids Gauss-Ostrogradsky ghosts in so many contexts. The TEGR Lagrangian can be immediately generalized to produce f(T) gravity [21-25], in the same way that the Einstein-Hilbert action leads to f(R) gravity. A number of f(T) gravity models have shown promising results in the solar system regime [26-29], as well as in the galactic [30] and cosmological regimes [13,31,32]. Of particular interest is its effect on weak lensing in galaxy-galaxy surveys [33]. However, to fully incorporate f(R) gravity, we must consider f(T,B) gravity, where B represents a boundary term that appears as the difference between the Ricci scalar and the torsion scalar (and will be discussed in more detail in §.II).

Gravitomagnetic tests offer an ideal vehicle to probe the rotational behaviour of theories of gravity in their weak-field limits [34-39]. In fact, gravitomagnetic effects are the result of mass currents appearing in the weak-field limit of GR, where the Einstein field equations take on a form reminiscent of Maxwell's equations [40] (and do not involve actual electromagnetic effects). These effects emerge as a result of a rotating source or observer in a system, which both give independent contributions to the overall observational effect. For the case where an orbiting observer is moving about a stationary source, geodetic effects emerge [40], where a vector will exhibit precession due to the background spacetime being curved. This is the general relativistic analog of the well-known Thomas precession exhibited in special relativity [41]. Another closely related relativistic precession phenomenon is that of the Lense-Thirring effect (or frame-dragging effect) [42], where the neighbourhood of a large rotating source causes precession in nearby gyroscopes. While independent, these effects are often observed as a combined observable phenomenon, such as in the Earth-Moon system about the Sun [43], where the precession of the Moon's perigee is caused by this phenomenon [44-46]. Motivated by the Gravity Probe B experiment [47], there have been a number of investigations into the behaviour and predictions of modified theories of gravity [48-51]. However, the accuracy of this experiment is not enough to adequately differentiate between competing models of gravity. There have also been other experimental efforts, such as LAGEOS [52,53], which aimed to perform laser tests while in orbit about the Earth. The MGS spacecraft [54,55] tested gravitomagnetic effects about Mars, while there have also been tests about the Sun [56]. For this reason, a number of ambitious proposals have been put forward in recent years to further test this relativistic effect and to increase the experimental precision of the observations [57-60].

In this work, we explore the gravitomagnetic effects of TG through f(T,B) gravity, as well as the classical solar system tests within this context. We do this by first expanding into the weak-field limit of the theory and exploring both the geodetic and Lense-Thirring effects separately. We then compare their combined results against the recent observations. The manuscript is divided as follows: in §.II we briefly review and introduce TG and its f(T,B) gravity extension. In §.III, we explore the weak-field regime of f(T,B) gravity and discuss some important properties of the theory in this limit.
Perturbations about a static spherically symmetric metric are considered in §.IV. The core results associated with gravitomagnetism and the classical solar system tests are then determined in §.V, while a comparison with observational values is presented in §.VI. Finally, we conclude in §.VII with some remarks and a discussion. Throughout the manuscript, the speed of light is not set to unity for comparison purposes in the electrodynamics analysis in §.III.

II. TELEPARALLEL GRAVITY AND ITS EXTENSION TO f(T,B) GRAVITY

Teleparallel Gravity represents a paradigm shift in the way that gravity is expressed, where curvature is replaced by torsion through an exchange of the Levi-Civita connection, $\dot{\Gamma}^\sigma{}_{\mu\nu}$, with its Weitzenböck analog, $\Gamma^\sigma{}_{\mu\nu}$ (we use over-dots to represent quantities determined using the Levi-Civita connection) [17]. GR expresses curvature through the Levi-Civita connection, which is torsion-less, while the Weitzenböck connection is curvature-less and also satisfies the metricity condition [61]. In theories based on the Levi-Civita connection, curvature is given a meaningful measure by means of the Riemann tensor on Riemannian manifolds [40]. This formulation of gravity is retained in most popular modified theories of gravity, where gravitation continues to be expressed in terms of curvature of a background geometry. However, in TG, irrespective of the form of the metric tensor, the Riemann tensor must vanish since the Weitzenböck connection is curvature-less [62]. It is for this reason that TG necessitates a fundamental reformulation of gravitation in order to construct realistic models of gravity.

GR and its variants utilize the metric, $g_{\mu\nu}$, as their fundamental dynamical object, but TG treats this as a derived quantity which emerges from the tetrad, $e^a{}_\mu$. The tetrad acts as a soldering agent between the general manifold (Greek indices) and its tangent space (Latin indices) [63]. Through this action, the tetrad (and its inverse $e_a{}^\mu$) can be used to transform indices between these manifolds, with the metric recovered through $g_{\mu\nu} = \eta_{ab}\, e^a{}_\mu e^b{}_\nu$. Moreover, these tetrads observe orthogonality conditions for internal consistency. The Weitzenböck connection is then defined using the tetrad as [12-14,62] $\Gamma^\sigma{}_{\nu\mu} := e_a{}^\sigma\left(\partial_\mu e^a{}_\nu + \omega^a{}_{b\mu} e^b{}_\nu\right)$, where $\omega^a{}_{b\mu}$ is the inertial spin connection. The Weitzenböck connection is the most general linear affine connection that is both curvature-less and satisfies the metricity condition [63]. The appearance of the spin connection is there to retain the covariance of the resulting field equations [64]. This is an issue in TG due to the freedom in the choice of the components of the tetrads, that is, there is an infinite number of tetrads that produce the same metric tensor through the relation above. These different tetrads are related by local Lorentz transformations (LLTs). As a result, the spin connection components take on values to account for the LLT invariance of the underlying theory. Thus, there is a particular choice of frames in which the spin connection components are allowed to be zero [13]. In GR, this issue is hidden in the internal structure of the theory [40]. Considering the full breadth of LLTs (boosts and rotations), $\Lambda^a{}_b$, the spin connection can be represented as $\omega^a{}_{b\mu} = \Lambda^a{}_c \partial_\mu \Lambda^c{}_b$ [14]. Thus, it is the combination of a tetrad and an associated spin connection that forms the covariance of TG. Given a Riemann tensor that measures curvature, we must define a so-called torsion tensor that gives a meaningful measure of torsion, defined as [13] $T^\sigma{}_{\mu\nu} := 2\Gamma^\sigma{}_{[\nu\mu]}$, where the square brackets represent the anti-symmetry operator.
The torsion tensor represents the field strength of TG, and transforms covariantly under both diffeomorphisms and LLTs [63]. To formulate a gravitational action, we must define two other quantities. Firstly, consider the contorsion tensor, which is effectively the difference between the Levi-Civita and Weitzenböck connections, defined as [17,65] $K^\sigma{}_{\mu\nu} := \dot{\Gamma}^\sigma{}_{\mu\nu} - \Gamma^\sigma{}_{\mu\nu}$, which plays a crucial role in relating TG results with Levi-Civita connection based theories. Secondly, we also need the superpotential, which is defined as [63,66] $S_a{}^{\mu\nu} := K^{\mu\nu}{}_a - e_a{}^\nu T^{\alpha\mu}{}_\alpha + e_a{}^\mu T^{\alpha\nu}{}_\alpha$. This has been shown to potentially relate TG to a gauge current representation of the energy-momentum tensor for gravitation [67,68]. Then, by contracting the torsion and superpotential tensors, the torsion scalar can be defined as $T := S_a{}^{\mu\nu} T^a{}_{\mu\nu}$, which is entirely determined by the Weitzenböck connection, in the same vein as the Ricci scalar being determined completely by the Levi-Civita connection. Naturally, the Ricci scalar calculated with the Weitzenböck connection will vanish since it is a measure of curvature. This property, in conjunction with the use of the contorsion tensors, allows for a relation between the regular Ricci scalar and the torsion scalar through [13,14,63] $R = \dot{R} + T - \frac{2}{e}\partial_\mu\left(e\, T^{\sigma\mu}{}_\sigma\right) = 0$, where $R$ is the Ricci scalar calculated using the Weitzenböck connection, $\dot{R}$ is the standard gravity Ricci scalar determined using the regular Levi-Civita connection, and $e = \det e^a{}_\mu = \sqrt{-g}$ is the determinant of the tetrad. Thus, the standard Ricci and torsion scalars turn out to be equivalent up to a total divergence term, $\dot{R} = -T + B$, where $B = 2\dot{\nabla}_\mu\left(T^{\sigma\mu}{}_\sigma\right)$ is a total divergence term. This relation guarantees that the ensuing equations of motion will be equivalent. Thus, the TEGR action can be written as [13,63] $\mathcal{S}_{\rm TEGR} = \frac{1}{2\kappa^2}\int \mathrm{d}^4x\, e\, T + \int \mathrm{d}^4x\, e\, \mathcal{L}_m$, where $\kappa^2 = 8\pi G/c^4$ and $\mathcal{L}_m$ is the matter Lagrangian. This action leads to dynamical equations equivalent to those of the Einstein-Hilbert action, but the difference in their Lagrangians means that the fourth-order boundary terms are not necessary to form a covariant theory within the TG context. While this does not affect the TEGR limit, it will influence the possible theories that can be formed in the modified gravity scenario.

Considering the same reasoning that led to f(R) gravity [6,7], the Lagrangian of TEGR can be immediately generalized to f(T) gravity [21-25]. The f(T) gravity setting produces generally second-order field equations in terms of derivatives of the tetrads [13]. This feature is only possible due to a weakening of Lovelock's theorem in the TG setting [18-20]. This fact alone guarantees that f(T) gravity will not exhibit Gauss-Ostrogradsky ghosts since it remains second-order. f(T) gravity also shares other properties with TEGR, such as its GW polarization signature [69,70]. However, to fully encompass the breadth of f(R) gravity, we must consider the generalization to f(T,B) gravity, which contains as a subset the limit f(R) = f(−T + B). Thus, f(T,B) gravity is a further generalization of f(R) in which the second- and fourth-order contributions to the theory are decoupled [71]. In this work, we investigate the gravitomagnetic effects of f(T,B) gravity and its effect on observational constraints of the theory for particular models of this setting [69,71-76].
To do this, we need the field equations of the theory, which are determined by a variation of the f(T,B) gravitational Lagrangian density, $e f(T,B)$, to give the field equations [13,14,77], where subscripts denote derivatives, and $\Theta^\nu{}_\rho$ is the regular energy-momentum tensor. The spin connection is taken to be zero [69,71-73], since this will be a demand in the work that follows. We will revisit this statement at various stages of the analysis to confirm the consistency of the work. Using the contorsion tensor relations, the f(T,B) gravity field equations can also be represented in terms of $\dot{G}_{\mu\nu}$, the regular Einstein tensor calculated with the Levi-Civita connection. In this setting, the spin connection depends on the choice of tetrad components and so does not produce independent field equations. However, works exist in the literature that consider this scenario, such as Refs. [14,78], where a Palatini approach is considered so that a second set of field equations is produced for the spin connection.

A. The Field Equations

Linearised gravity offers a relatively simple procedure to examine the weak-field metric for a given source. As the gravitational field is assumed to be weak, the metric can be expressed as a Minkowski background plus a small (first-order) correction, $h_{\mu\nu}$. In other words, the metric tensor can be expanded as $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$, with $|h_{\mu\nu}| \ll 1$. By extension, a similar consideration can be applied for the linearised expansion of the tetrad: a background value $\gamma^{(0)a}{}_\mu$, which yields the Minkowski metric, plus some small correction $\gamma^{(1)a}{}_\mu$, with $|\gamma^{(1)a}{}_\mu| \ll 1$. Following the methodology considered in Ref. [69], the resulting perturbed torsional quantities and field equations can be derived. Through the relation between the metric and the tetrad, the perturbed quantities are interlinked, with $h_{\mu\nu}$ given by the symmetrized combination of the tetrad perturbations. Given that the equations are constructed in the Weitzenböck gauge ($\omega_{ab\mu} = 0$), this imposes a constraint on the behaviour of $\gamma$ which, when compared to its LLT form, reveals that the background tetrad corresponds to the Lorentz matrices. This is expected, as this background tetrad represents a trivial frame, one which constructs the Minkowski metric [63]. As the spin connection is zero here, the background tetrad reduces to a constant, i.e. to the class of constant Lorentz matrices. For simplicity, the background tetrad can be chosen to be $\gamma^{(0)a}{}_\mu = \delta^a_\mu$ [79]. Under these considerations, the torsion tensor turns out to be a first-order quantity in the perturbations. Consequently, as both the contorsion and superpotential tensors are linearly dependent on the torsion tensor, these are also of at least first order. Ultimately, this implies that the torsion scalar is of at least second order. Observe that this result holds true even if the Weitzenböck gauge is not imposed [69]. On the other hand, the boundary term is first order. This is also consistent with the relation $\dot{R} = -T + B$, as the Ricci scalar is of at least first order. Indeed, the Ricci tensor and Ricci scalar are given by their standard linearised expressions, where $h := h^\mu{}_\mu$ represents the trace. It is remarked that, from here onwards, indices are raised and lowered with respect to the Minkowski (background) metric. Moreover, the d'Alembert operator reduces to $\Box = \partial^\mu\partial_\mu$. The next step would be to extract the perturbed field equations.
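For reference, the standard linearised expressions alluded to above are the textbook results of linearised gravity; they are reconstructed here from the general literature rather than quoted from this paper's own equations:

```latex
\dot{R}^{(1)}_{\mu\nu} = \tfrac{1}{2}\left(
    \partial_\sigma \partial_\mu h^{\sigma}{}_{\nu}
  + \partial_\sigma \partial_\nu h^{\sigma}{}_{\mu}
  - \Box h_{\mu\nu}
  - \partial_\mu \partial_\nu h \right),
\qquad
\dot{R}^{(1)} = \partial_\mu \partial_\nu h^{\mu\nu} - \Box h ,
```

with $h = h^\mu{}_\mu$ and $\Box = \partial^\mu\partial_\mu$. These make explicit that $\dot{R}$ is first order in $h_{\mu\nu}$, consistent with $\dot{R} = -T + B$ when $T$ is second order and $B$ is first order.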
For simplicity, as both T and B are null at the background level, the gravitational Lagrangian f(T,B) is assumed to be Taylor expandable about these latter values, namely $f(T,B) \simeq f(0,0) + f_T(0,0)\,T + f_B(0,0)\,B + \frac{1}{2}\left[f_{TT}(0,0)\,T^2 + 2 f_{TB}(0,0)\,TB + f_{BB}(0,0)\,B^2\right] + \dots$. Observe that the coefficient $f_T(0,0) \neq 0$, as this corresponds to the effective Newtonian gravitational constant, as evident from the field equations (see, for instance, Refs. [80,81] for detailed discussions in the case of f(T) gravity). Under this assumption, the zeroth- and first-order field equations of f(T,B) gravity follow, where the result $\dot{R} = B$ (which is valid up to this order) has been used, a property which shall be useful in order to simplify the forthcoming equations. The zeroth-order equation confirms the absence of a cosmological constant $2\Lambda \equiv f(0,0)$, maintaining consistency with the linearisation regime, as the background geometry is Minkowski spacetime. As mentioned previously, f(R) gravity is a sub-case of f(T,B) gravity. In fact, the resulting perturbed equations are practically identical in form to those found in f(R) gravity, with the only difference being in the form of the coefficients [7,82-90]. Motivated by this, the same procedure as presented in Ref. [70] shall be followed. First, the trace-shifted quantity $\bar{h}_{\mu\nu}$ is introduced, with $\bar{h} := \bar{h}^\mu{}_\mu$. As shown in Refs. [88,91], the Lorenz gauge $\partial_\mu \bar{h}^{\mu\nu} = 0$ can be imposed. In this way, the field equations take a relatively simple wave-equation form. The next step is to obtain the form of the perturbed Ricci scalar. Taking the trace of the field equations yields a relation for $\dot{R}$ which is of the same form as the Klein-Gordon equation, with an effective mass $\mu$. Depending on the form of the source (and hence of the stress-energy tensor), Eqs. (25) and (26) allow for a full determination of the weak-field metric Eq. (24). Observe that, in vacuum, these equations give rise to gravitational waves whose polarisation states have already been investigated in detail [69].

B. Solving the Field Equations

In general, the solutions for $\bar{h}_{\mu\nu}$ and $\dot{R}$ can be obtained by making use of a Green's function G(x, x′), where r = |x − x′| and the Green's function $G_R$ is defined as in Refs. [85,92]. Within the practical application of the weak-field approximation, it is sufficient to consider a slowly rotating source while keeping all terms up to the order of $c^{-3}$. Thus, the spatial stress components of the energy-momentum tensor would be negligible within this context. In other words, the stress-energy tensor takes the form of a pressureless source [59,93], where ρ is the density of the source and $v^i$ is the velocity vector. Alternatively, the off-diagonal components can be simply expressed in terms of the mass current vector $j^i := \rho v^i$. In this way, we therefore find solutions in terms of Φ and A, the scalar and vector potentials respectively. This yields the weak-field metric, where $d\Sigma^2 = dx^2 + dy^2 + dz^2$. As the main aim of this work is to match with Gravity Probe B and Solar System observations, it is imperative to treat the source as a slowly rotating, spherically symmetric, static source having a constant mass M, radius $R_S$ and angular momentum J, with a constant density profile ρ. Under these assumptions, for distances sufficiently far away from the source (as the field is weak), the integrals can be solved through the Legendre polynomial expansion, where L = |x|, L′ = |x′| and Θ is the angle between the two position vectors x and x′. This yields the solutions for the potentials, where J represents the angular momentum vector.
Therefore, the weak-field metric takes a simple closed form, written in terms of a function defined for convenience.

C. Analogy with GEM

From the resulting weak-field metric, we can make a direct analogy with gravitoelectromagnetism (GEM) to generate the corresponding gravitoelectric and gravitomagnetic fields. Whilst these fields remain effectively unchanged in form, the Lorentz force is affected by the scalar $\dot{R}$ mode, similar to what is encountered in f(R) gravity. Following the steps dictated in Ref. [92], the GEM equations and the Lorentz force equation are obtained as follows. Starting from the Lorenz gauge condition $\partial_\mu \bar{h}^{\mu\nu} = 0$, we obtain a constraint on the potentials, while the remaining equations $\partial_\mu \bar{h}^{\mu i} = 0$ are of order $\mathcal{O}(c^{-4})$ and therefore neglected. The gravitomagnetic field B and the gravitoelectric field E are then defined in terms of the potentials. It can then be easily shown, using Eqs. (43) and (25), that the GEM equations result. On the other hand, the Lorentz force for a test particle of mass m can be obtained starting from its Lagrangian $L = -mc\,ds/dt$, using the weak-field metric solution Eq. (36) and expanding up to first order in the potentials. This yields a Lagrangian in which γ is the Lorentz factor and $\mathbf{v} = d\mathbf{x}/dt$ is the velocity vector. From the equations of motion $\frac{d}{dt}\frac{\partial L}{\partial \mathbf{v}} = \frac{\partial L}{\partial \mathbf{x}}$, assuming that the vector potential A is stationary, it can be shown that, up to first order in $v^2/c^2$, the force $\mathbf{F} \equiv d\mathbf{p}/dt$, where $\mathbf{p} = m\gamma\mathbf{v}$ is the relativistic momentum vector, obeys a Lorentz-like force law. Similar to Ref. [92], one obtains the first two terms which are found in GR (except for a gravitational constant rescaling from $f_T(0,0)$), with a new contribution arising from the scalar mode. However, if the scalar mode is absent (i.e. $\mu^2 \to \infty$), the Lorentz force reduces to its GR form.

D. Comparison with a Spherically Symmetric Metric: The Schwarzschild Solution

In the absence of rotation, the resulting weak-field metric Eq. (41) cannot be directly correlated with the Schwarzschild solution due to the preferred choice of coordinates set by the Lorenz gauge. However, the metric can be transformed into a spherically symmetric form which can then be associated with such known solutions, and this shall be notably important when discussing the geodetic effect. Here, we follow the procedure shown in Ref. [85]. The aim is to express the weak-field metric in the spherically symmetric form, with A(r) and B(r) representing some scalar functions and $d\Omega^2$ representing the polar symmetry. The necessary coordinate transformation is dictated by a condition on the radial coordinate, where the last equality only holds for a weak field. In particular, for a spherically symmetric static source, we obtain the transformed radial coordinate $\tilde{r}$. In this way, we obtain A(r) up to first order in M/r. Observe that the exponential, similar to f(R) gravity, retains the r dependence. On the other hand, B(r) is found analogously. Evidently, when $\mu \to \infty$ (i.e. in the limit of GR, or when $f(T,B) \to f(T)$), the metric reduces to its Schwarzschild form.

IV. PERTURBATIONS ON A STATIC SPHERICALLY SYMMETRIC METRIC: f(T) GRAVITY

In the previous section, we initially assumed that the gravitational field is weak, for which the relevant weak-field metric for an arbitrary f(T,B) function was obtained. In what follows, a different approach is considered, particularly in the context of f(T) gravity. Originally considered in Ref. [94] and further pursued in Ref. [95], the idea is to assume a static spherically symmetric geometry arising due to a spherically symmetric static source of mass M. Then, one solves the field equations to obtain the corresponding metric.
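For orientation, the GR-limit relations that §III C refers to take the following standard GEM form. This is a reconstruction from the general GEM literature (Ref. [40]-style treatments), not a quote of this paper's Eqs. (43)-(48), which additionally carry the $f_T(0,0)$ rescaling and the scalar-mode term; factor conventions for the vector potential also differ between references:

```latex
\mathbf{E} = -\nabla \Phi - \frac{1}{2c}\,\frac{\partial \mathbf{A}}{\partial t},
\qquad
\mathbf{B} = \nabla \times \mathbf{A},
\qquad
\mathbf{F} \simeq -m\,\mathbf{E} - \frac{2m}{c}\,\mathbf{v} \times \mathbf{B}.
```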
Since no exact solutions have been obtained following this approach (although exact solutions can be found by assuming, for instance, that the Lagrangian exhibits a Noether symmetry [96,97]), a perturbative approach is employed, where it is assumed that the f(T) Lagrangian takes the form $f(T) = T + \epsilon F(T)$, where $\epsilon \ll 1$ represents a small, fiducial order parameter, which will be omitted once the perturbations are solved. The role of the latter is to represent the small correction to the TEGR Lagrangian. In this way, the scalar functions A(r) and B(r) of the metric Eq. (49) are expected to be in the form of a background solution plus a small correction. This will allow for corrections which were previously omitted in the weak-field regime. For simplicity, the source shall be assumed to be non-rotating, as no perturbed solutions to the Kerr metric have yet been obtained. In this formulation, the TEGR term gives rise to the exact Schwarzschild solution, while the small correction sourced by F(T) yields the first-order correction to the solution. Since the main interest lies in the Gravity Probe B results, an alternative approach is to assume the gravitational field to be weak, meaning the metric can be approximated by a Minkowski spacetime background plus a small correction. Both approaches shall be presented, and we show that the same results are ultimately recovered, whilst offering a more detailed analysis of the effect of the f(T) Lagrangian on the geodetic effect.

A. Perturbations on the Schwarzschild Solution

The Schwarzschild correction can be obtained by taking the scalar potentials to be the Schwarzschild values plus small corrections, for some correction functions $\mathcal{A}$ and $\mathcal{B}$. To solve for the corrections, the field equations are perturbed up to first order in $\epsilon$. For simplicity, the power-law ansatz $F(T) = \alpha T^p$, for some constants α and p, is considered. Furthermore, unless otherwise stated, $GM/c^2 \to M$. The solutions for the scalar functions can be obtained from the perturbed differential equations. Although a general solution is not recovered, some special cases are considered. For p = 2, the scalar functions take the forms derived in Refs. [94,95]. The solutions are well behaved in the sense that, in the absence of a source, they reduce to Minkowski space as expected, i.e. when M → 0, A(r), B(r) → 1. If the gravitational field is weak, the solutions follow an order expansion which agrees with the weak-field metric Eq. (41) in the limit of $f(T,B) \to f(T) = T + \alpha T^p$, up to first order in M/r. Observe that the α contributions do not appear in the latter, as they are higher-order contributions. Solutions for other values of p are considered in Ref. [95]. For the purpose of the analysis which follows, the solution for p = 3 is listed in Appendix A.

B. An Alternative Approach for a Weak-Field Limit

If the field is assumed to be weak, the scalar functions can be expanded around a Minkowski background according to

$A(r) = 1 + \epsilon A_1(r) + \epsilon^2 A_2(r) + \epsilon^3 A_3(r) + \dots$, (63)
$B(r) = 1 + \epsilon B_1(r) + \epsilon^2 B_2(r) + \epsilon^3 B_3(r) + \dots$. (64)

Once again, assuming that the Lagrangian f(T) is Taylor expandable about T = 0, and solving the field equations order by order, yields solutions in which $c_{1,\dots,6}$ are integration constants. To determine these constants, we impose the following constraints. As r → ∞ (i.e. far away from the source), the metric must reduce to Minkowski spacetime, and thus $c_{2,4,6} = 0$.
On the other hand, according to the solution obtained in §.III, namely Eq. (41), we find that $c_1 = 2M/f_T(0)$ (alternatively, it can be reasoned that in the limit of TEGR, the metric must reduce to the Schwarzschild metric). Finally, the constants $c_{3,5}$ have to be zero, otherwise the solution does not reduce to its TEGR limit for f(T) = T. Therefore, the final solution follows. Taking $f(T) = T + \alpha T^2$ recovers the previously obtained weak-field limit solution, as expected. Observe that $f(T) = T + \alpha T^n$ with n > 2 (n integer) does not reveal any contributions at this order, meaning their effects are even smaller. On the other hand, this approach is not applicable for functions which are not expandable about T = 0, for instance $f(T) = T + \alpha T^n$ with n < 0, and even some cosmologically viable ones, such as the Linder model $f(T) = T + \alpha T_0\left(1 - e^{-p\sqrt{T/T_0}}\right)$ for some constant p. However, there exist cosmological model Lagrangians which may be further investigated for such weak-field observational tests, such as $f(T) = T + \alpha T_0\left(1 - e^{-pT/T_0}\right)$ and $f(T) = T + \alpha T^n \tanh(T/T_0)$ for appropriate values of p and n [31]. Observe that the result is in agreement with the parameterised post-Newtonian (PPN) approximation investigated in Ref. [98], since there is no deviation up to second-order expansion. The first modification appears at third order, when the $f_{TT}$ term contributes to the behaviour.

V. OBSERVATIONAL CONSTRAINTS

As we have now closely discussed the theoretical foundations to obtain the necessary metrics, in what follows we apply those results to observations obtained by Gravity Probe B and from classical Solar System test observations. In particular, we shall focus on the geodetic effect (de Sitter precession), the Lense-Thirring effect, Shapiro time delay, light bending and perihelion precession.

A. Geodetic Effect

The geodetic effect describes the precession of a gyroscope due to its orbit around a massive central body. Here, we obtain the precession rate following Rindler's approach [99]. Starting from a spherically symmetric metric, we consider the system to be rotating at an angular frequency ω. By assuming the gyroscope to lie in a circular polar orbit (at an angle θ = π/2), we can rewrite the metric in a canonical form, where $e^{2\Psi} \equiv A - r^2\omega^2$. As shown in Ref. [99], the angular frequency of the gyroscope is given in terms of the spatial 3-metric $k_{ij}$ and $\omega_i \equiv e^{-2\Psi}\omega r^2 \delta^3_i$, and simplifies accordingly. The angle after one full revolution is then given by $\alpha' = \Omega\,\Delta\tau$, where Δτ represents the proper time of the gyroscope, which can be obtained directly from the metric. Thus, the precession over one orbit is $\alpha = 2\pi - \alpha'$, which implies the precession rate per year.

B. Lense-Thirring Precession

It is well known that the Lense-Thirring precession in GR can be simply derived by assuming a freely falling gyroscope, initially at rest, with an angular spin vector $S^\mu$. Taking $u^\mu$ to represent the gyroscope's rest-frame velocity, we have that $S^\mu u_\mu = 0$. Then, the Lense-Thirring precession rate would then be obtained using the geodesic equations. In the context of teleparallel gravity, the gyroscope instead moves according to force-like equations. Despite this apparent difference, the above is mathematically equivalent to the geodesic equation, due to the fact that $K^\sigma{}_{\mu\nu} = \dot{\Gamma}^\sigma{}_{\mu\nu} - \Gamma^\sigma{}_{\mu\nu}$. Nonetheless, the force-like equations offer a different interpretation, as discussed, for instance, in Refs.
[100,101], as the teleparallel force equations allow for a separation between gravitation and inertia, which has important implications for the weak equivalence principle (WEP); this lies beyond the scope of this manuscript (see, for instance, Ref. [67] for further discussions on the topic). Within the assumption that the WEP holds, one can follow the same steps encountered in GR. Alternatively, one can work directly with the torsion and contorsion tensors to obtain the same result. If the field is weak, the field equations reduce to a precession equation in which $\Omega_k \equiv -\frac{1}{2}\epsilon_{kmn}\partial_m h_{0n}$ defines the angular velocity precession vector of the gyroscope. Following the results obtained in the f(T,B) weak-field solution Eq. (41), we find that the Lense-Thirring precession rate $\Omega_{\rm LT}$ remains unaffected except for a Newtonian rescaling, which is expected, as the gravitomagnetic field is identical in form to that found in GR. However, this result is only valid within the context of weak fields, and thus remains to be investigated in the case of strong gravitational fields.

C. Shapiro Time Delay

The effect of Shapiro time delay [102] can be derived following the steps listed in Ref. [103]. Here, we focus on deriving the α-dependent correction for the f(T) power-law model. For the given spherically symmetric metric Eq. (49), the time delay of a radio signal, as it travels from the Earth to Mercury and back and passes through the closest point of approach $R \simeq R_\odot$ to the Sun, is expressed in terms of the Earth and Mercury orbital radii, $r_\oplus$ and r_☿ respectively, and a function t(r, R). Using the fact that, generally, the orbital radii satisfy the condition $r \gg R$, together with the weak-field metric solutions Eqs. (59), (60), (A3) and (A4), we find the form that the α contribution takes in each case.

D. Light Bending

The total deflection angle of light is derived following the method used in Ref. [26]. A photon is assumed to be emitted from some faraway source at an angle φ = −π/2. It travels and reaches a point of closest approach $r = r_\star$ with respect to some spherical massive source at φ = 0, and then continues to travel away from the source, approaching an angle of φ = π/2. To account for the deflection angle due to the gravitational attraction of the source, we start from the spherically symmetric metric Eq. (49) within the equatorial plane θ = π/2, to obtain that the path of the photon obeys a second-order differential equation in u = 1/r, where $u_R = 1/R$ is the inverse impact parameter, with boundary conditions $u(0) = u_\star = 1/r_\star$ and $u(\pm\pi/2) = 0$. Since the differential equation cannot be solved in general, even for weak-field sources, the perturbative iterative method considered in Ref. [104] is applied. The approach aims to obtain a perturbative solution by taking the mass M as the perturbation parameter, i.e. we let $u = u_0 + u_1 + u_2 + \dots$ (82), where $u_i$ represents the solution up to $\mathcal{O}(M^i)$. As an illustrative example, we derive the perturbative solution for the power-law model with p = 2. In this case, the differential equation up to $\mathcal{O}(M^3)$ follows, where primes denote derivatives with respect to φ. This yields an ordered system of differential equations. Once the solution for u is obtained, following Rindler and Ishak's approach [105], the total deflection angle can be computed; for the quadratic f(T) Lagrangian this yields a closed-form solution, while a similar analysis for the cubic f(T) Lagrangian reveals the total deflection angle given in Eq. (87).
Observe that, in both cases, the GR second-order mass correction found in Refs. [104,106,107] is recovered. In general, for a Taylor-expandable f(T) model within the regime of weak gravitational fields, the first deviation from GR appears at $\mathcal{O}(M^3)$. Naturally, the quadratic weak-field result is recovered, while the cubic case requires the higher-order contributions.

E. Perihelion Precession

The effect of α for the power-law ansatz Lagrangian on perihelion precession has been investigated in great detail in Refs. [94,95]. Here, we shall only quote the results [108]: for p = 2, $\Delta\phi = 16\pi\alpha M^2/r_c^4$ (89), where $r_c$ represents the circular radius of the orbit. For the p = 2 case, the detailed analysis in Ref. [94] leads to a bound of $\alpha \lesssim 10^{20}\ \mathrm{km}^2$.

VI. NUMERICAL RESULTS

In this section, we make use of the weak-field solutions listed in §.III and §.IV against observations, in order to constrain the free model parameters of the Lagrangian, depending on the model considered. It is important, however, to comment on the results for an arbitrary f(T,B) model in the case when µ is finite. Although the weak-field metric has been obtained in its spherically symmetric form, the scalar functions A(r) and B(r) are not truly expressed in terms of $\tilde{r}$, since the relation between r and $\tilde{r}$ is not invertible. This leaves two unknown parameters, the isotropic radial coordinate r and µ. However, r is not measured, and hence one must instead opt to impose specific values of µ to determine whether the results would then be consistent. Since the goal is to constrain the Lagrangian parameters (and hence constrain µ through observations), this option is not investigated in detail. Nonetheless, if µ is sufficiently large, the contributions would be small enough that deviations from observations (and hence from GR) are expected to be effectively negligible. On the other hand, a more thorough investigation can be carried out in the case of f(T) gravity, using the results obtained in §.IV. In particular, we shall make use of the results for the two power-law ansatz values considered, namely p = 2 and 3, which eventually lead to observational constraints on the constant α.

A. Geodetic Effect

In April 2004, Gravity Probe B was launched, starting its year-and-a-half flight mission with the purpose of accurately measuring the geodetic and frame-dragging precession rates while in orbit about the Earth. A geodetic precession rate of −6601.8 ± 18.3 mas/yr was measured while in a polar orbit at around 642 km [47]. Through the use of Eq. (74), the α constraints are obtained as listed in Table I. The table also illustrates the α constraint which has to be obeyed for the weak-field approximation to hold (a direct consequence of the assumption that the perturbation F(T) ≪ T). Based on the expressions listed in Table I, the corresponding numerical constraints are then obtained as shown in Table II. Evidently, the constraints obtained from observations are well within the expected bounds of the weak-field condition, which supports the consistency of the weak-field approach.

B. Classical Solar System Constraints

For Shapiro time delay and light deflection, the PPN formulation, together with observations from the Cassini spacecraft, poses a viable opportunity to obtain constraints. As illustrated, for instance, in Refs. [103,109], the γ PPN parameter appears in the former tests as follows.
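As a quick sanity check on the scale of these numbers, the GR-limit predictions for Gravity Probe B can be computed from the standard geodetic and orbit-averaged Lense-Thirring formulas. The Python sketch below uses approximate Earth parameters and textbook GR expressions, not this paper's Eqs. (74) and (77), which additionally carry the $f_T(0,0)$ rescaling and the α corrections:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M = 5.972e24             # Earth mass, kg
J = 5.86e33              # Earth angular momentum, kg m^2/s (approximate)
r = (6371 + 642) * 1e3   # GP-B orbital radius, m (642 km altitude)

MAS_PER_RAD = math.degrees(1) * 3600e3   # milliarcseconds per radian
SEC_PER_YEAR = 3.156e7

# Geodetic (de Sitter) rate for a circular orbit: (3/2) (GM / c^2 r) * omega_orbit
omega_orb = math.sqrt(G * M / r**3)
geodetic = 1.5 * (G * M / (c**2 * r)) * omega_orb

# Orbit-averaged Lense-Thirring rate for a polar orbit: G J / (2 c^2 r^3)
lense_thirring = G * J / (2 * c**2 * r**3)

print(f"geodetic       ~ {geodetic * SEC_PER_YEAR * MAS_PER_RAD:.0f} mas/yr")        # ~6600
print(f"frame dragging ~ {lense_thirring * SEC_PER_YEAR * MAS_PER_RAD:.0f} mas/yr")  # ~40
```

Running this gives roughly 6600 mas/yr and 40 mas/yr, matching the order of the measured −6601.8 ± 18.3 mas/yr geodetic rate quoted above.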
VII. CONCLUSION

The main result of this work is that both the classical Solar System tests and the gravitomagnetic constraints from Gravity Probe B result in a constraint on the coupling parameter of $|\alpha| \lesssim 10^{16}\ \mathrm{km}^2$ for p = 2 and $|\alpha| \lesssim 10^{43}\ \mathrm{km}^3$ for p = 3. This forms one of the strongest constraints on this parameter (to the best of our knowledge).

Gravitomagnetic effects are imperative for understanding the weak-field limit of modified gravity in the context of rotation. In this work, we have explored these related effects in the TG framework. TG offers a novel possibility of constructing gravitational theories in which the background manifold is torsionful rather than curvatureful. While this is dynamically equivalent to GR in the TEGR limit, modifications of the TEGR action produce theories which may be distinct from what can be constructed in regular curvature-based theories of gravity. This allows for the possibility of totally new models of gravity that may have important consequences for meeting the observational challenges of the coming years.

The main crux of the weak-field analysis stems from the analysis in §.III, where we take an order-by-order expansion of a general f(T,B) gravity Lagrangian. In Eq. (26), this is found to potentially behave as a massive theory, with a mass that is mainly dependent on whether a boundary-term contribution is present or not. This approximation is then set into the field equations with a slowly rotating source to find metric solutions in Eqs. (32)-(35). In §.III C we go into the details of how this analogy tallies with the well-known GEM effects to produce a Lorentz force-like effect in Eq. (48). Finally, we compare this with the Schwarzschild solution to determine the relation to the effective mass of the general f(T,B) model.

Limiting ourselves to f(T) gravity, we explore the possibility of perturbative solutions in §.IV, where exact solutions are found up to perturbative order in the spherically symmetric setting. These were also investigated in the literature [94,95] and remain an interesting avenue of research in the TG context. In this part of the work, we investigate two possible routes to the perturbative analysis, which both agree in their PPN limit.

The traditional gravitomagnetic effects of the geodetic and Lense-Thirring phenomena are determined in §.V. The geodetic effect naturally emerges for a static system with a rotating observer. This is achieved by a coordinate transformation, as prescribed in Eq. (69). This eventually produces Eq. (74), which is our result for the geodetic precession rate and the main result of that subsection. The Lense-Thirring effect is then determined for this TG case, where the main result is shown in Eq. (77), which is comparable to the Gravity Probe B mission result. In fact, in §.VI we use the results of this mission to constrain our parameters for the various potential models under investigation. Gravitomagnetic effects have the potential to have an important impact on understanding which modified theories of gravity are viable and may play an important role in the coming years for developing realistic modified theories of gravity.
2020-02-20T02:01:10.779Z
2020-02-18T00:00:00.000
{ "year": 2020, "sha1": "f0025d6f62fb0b178d8ba1fd003175b8392aae7a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1997/6/2/34/pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "f0025d6f62fb0b178d8ba1fd003175b8392aae7a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258674391
pes2o/s2orc
v3-fos-license
Schiff Bases Functionalized with T-Butyl Groups as Adequate Ligands to Extended Assembly of Cu(II) Helicates

The study of the inherent factors that influence the isolation of one type of metallosupramolecular architecture over another is one of the main objectives in the field of Metallosupramolecular Chemistry. In this work, we report two new neutral copper(II) helicates, [Cu2(L1)2]·4CH3CN and [Cu2(L2)2]·CH3CN, obtained by means of an electrochemical methodology and derived from two Schiff-based strands functionalized with ortho and para-t-butyl groups on the aromatic surface. These small modifications let us explore the relationship between the ligand design and the structure of the extended metallosupramolecular architecture. The magnetic properties of the Cu(II) helicates were explored by Electron Paramagnetic Resonance (EPR) spectroscopy and Direct Current (DC) magnetic susceptibility measurements.

Introduction

The search for new routes to obtain new metallosupramolecular architectures and the study of their potential applications is a field of great interest in Metallosupramolecular Chemistry. The knowledge of the different factors that influence the self-assembly process is essential to control the obtainment of a specific type of compound, so it is necessary to deepen our understanding by designing new systems. Among all factors, the ligand design directly influences the structure of the final metallosupramolecular architecture and thus its properties and applications [1].

The term "helicate" was introduced by Jean-Marie Lehn in 1987 to describe a class of copper(I) compounds exhibiting a helicoidal architecture with similar characteristics to the DNA double helix [2]. A helicate consists of one or more organic ligands that wrap helically around a series of metal ions that define the helix axis [3]. To obtain helicoidal architectures, the precursor ligand should contain two or more binding domains separated by a flexible spacer to allow helical coiling, but it should also be rigid enough to prevent multiple binding domains coordinating to the same metal ion, giving rise to mononuclear species [4-6]. Moreover, it was demonstrated that the isolation of helicoidal architectures over other possible arrangements can be controlled by the intra- and intermolecular interactions established by the ligand units [7,8]. In the literature there is a large variety of examples of helicate-type extended architectures whose formation is favored and determined by the existence of weak non-covalent π-π or CH···π interactions [4,9]. Currently, the research on metal helicates is mainly directed towards the search for their potential applications [10-13]. Among these properties there is special interest in those helicates exhibiting relevant magnetic behaviors that could be used as new magnetic materials [14-16]. However, the factors that selectively lead to a particular type of metallosupramolecular compound, and to helicates in particular, continue to be of interest and deserve to be further investigated.

Schiff base ligands have been extensively used in Coordination Chemistry [17-19] and more particularly in Metallosupramolecular Chemistry to obtain helicates [9,11], with some of them showing relevant biomedical [20] or photophysical [11] properties. In this context, these types of ligands were employed by our research group to obtain the first example of a network assembled from Cu(II) helicates through intermolecular π-π interactions showing antiferromagnetic behavior [21]. In this primary work the antiferromagnetic character was attributed to the establishment of weak π-π interactions between neighboring helicate units. With this precedent in mind, in an attempt to explore the relationship between the ligand design, the extended helical structure and the magnetic properties, we approach the obtainment of helicates combining Schiff base ligands, copper(II) ions and an electrochemical methodology. Herein, we report two novel copper(II) helicates derived from two Schiff base ligands substituted with t-butyl groups, together with their crystal structures. We studied their magnetic properties by EPR spectroscopy and DC magnetic susceptibility.

Synthesis and Characterization of the Ligands H2L1 and H2L2

In the present work, we approach the obtainment of extended helicates using dianiline-derived Schiff base ligands. It should be highlighted that the long and semi-flexible dianiline-type spacers have been widely used by Hannon and co-workers, proving to be an effective unit for obtaining a wide variety of helicoidal architectures using different metal ions [22-24]. For this purpose, we designed two new Schiff base ligands containing the dianiline spacer and two terminal hydroxybenzaldehyde rings decorated with ortho and para-tert-butyl groups (H2L1 and H2L2, Scheme 1).
The ligands H2L1 and H2L2 are potentially dianionic, with two bidentate [NO] domains separated by a semi-flexible aromatic spacer, factors that should favor the isolation of helical-type complexes. The main objective is to find out whether the position of the tert-butyl group influences the final discrete and extended architecture and the magnetic properties of the final compounds. Both ligands were synthesized by a reaction between the corresponding hydroxybenzaldehyde functionalized with tert-butyl groups [25] and 4,4′-methylenedianiline in a 2:1 ratio, using absolute ethanol as a solvent (Scheme 2). H2L1 and H2L2 were fully characterized by melting point determination, elemental analysis, infrared spectroscopy, mass spectrometry and 1H NMR spectroscopy techniques (Figures S1 and S2, Supplementary Material).
Synthesis and Characterization of the Copper Complexes

Two neutral copper complexes were isolated from H2L1 and H2L2 using an electrochemical methodology (see details in the experimental section and in references [26-28]). The electrochemical synthesis of the neutral metal complexes was carried out by oxidation of a copper plate in a conductive solution of the corresponding ligand in acetonitrile. The efficiency values calculated for the electrochemical synthesis of both complexes are around 0.5 mol·F−1, so the proposed mechanism would involve the loss of two electrons per metal atom, as shown below:

Cathode: 2 H2L1|2 + 4 e− → 2 (L1|2)2− + 2 H2(g)
Anode: 2 Cu → 2 Cu2+ + 4 e−
Global: 2 (L1|2)2− + 2 Cu2+ → Cu2(L1|2)2
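As a rough numerical illustration of what an efficiency of 0.5 mol·F−1 implies, the short script below relates the charge passed to the amount of copper dissolved. It is only a sketch: the current, time and ligand loading are the values quoted in the experimental section below, while the two-electron oxidation and the 1:1 Cu:ligand ratio of the Cu2L2 stoichiometry are the hypotheses being checked, not independent measurements.

```python
# Consistency check for the electrochemical efficiency (illustrative only).
# Experimental values quoted in Materials and Methods: 10 mA, 31 min,
# 0.10 mmol of H2L1 charged into the cell.
F = 96485.0                      # Faraday constant, C mol^-1

charge = 0.010 * 31 * 60         # total charge passed, C (~18.6 C)
faradays = charge / F            # moles of electrons (~1.93e-4 mol)

# An efficiency of 0.5 mol per Faraday corresponds to a two-electron
# oxidation, Cu -> Cu(II) + 2 e-:
efficiency = 0.5                 # mol of Cu dissolved per mole of electrons
mol_cu = efficiency * faradays   # ~9.6e-5 mol = 0.096 mmol

print(f"Charge passed: {charge:.1f} C")
print(f"Cu dissolved:  {mol_cu * 1e3:.3f} mmol")
# ~0.096 mmol of Cu, matching the 0.10 mmol of ligand needed for Cu2(L)2
```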
The resulting brown solid complexes were characterized by melting point determination, elemental analysis, infrared spectroscopy, X-ray diffraction and mass spectrometry (Figures S3-S6, Supplementary Material). Both the analytical and the spectroscopic data allow us to propose dinuclear stoichiometries of the type [Cu2(L1|2)2], with the ligands coordinated to the copper centers in their dianionic form [L1|2]2−.

The infrared spectra of both complexes exhibit a slight shift of the characteristic bands of the ligand skeletons to lower wavenumbers due to the coordination of the metal ions. In more detail, a variation in the ν(C=N) band is observed, indicating that the ligand is bound to the metal via the imine nitrogen atoms. In addition, the increase in intensity and the shift of the ν(C-O) vibration band suggest the coordination of the copper(II) ions through the phenolic oxygen atoms of the ligand. Similarly, both the ν(C-HAr) and ν(CH2) vibrational bands increase in intensity as an effect of the coordination. The formation of the copper(II) complexes derived from the Schiff base ligands H2L1 and H2L2 was also confirmed by MALDI-TOF (+) mass spectrometry (Figure S4), as peaks corresponding to the dinuclear fragments [Cu2L2 + H]+ are observed in the mass spectra of both complexes.

X-ray Structures

Slow evaporation of the mother liquors from the synthesis of the [Cu2(L1)2] and [Cu2(L2)2]·CH3CN complexes allowed us to obtain good-quality crystals for X-ray diffraction studies. The crystal structures of the complexes [Cu2(L1)2]·4CH3CN and [Cu2(L2)2]·CH3CN are depicted in Figures 1 and 2. Table S1 contains the main crystallographic data for these complexes, whereas Tables S2-S5 summarize the most relevant distances and angles.

The discrete crystal structures of both compounds are similar, so a joint discussion is presented here, highlighting the differences. Both structures show neutral dinuclear helicate-type architectures formed by two strands of the bideprotonated ligand [L1|2]2− that cross each other when coordinating the two Cu(II) ions (Figures 1 and 2). The ligands act in such a way that each of their bidentate [NO] branches coordinates to a different metal ion, giving rise to a distorted tetrahedral geometry (angles ≠ 109.5°) for the Cu(II) ions. The O-M-N bond angles clearly show the distortion of the tetrahedral geometry (Tables S2 and S3).

The main Cu-O and Cu-N bond distances are in the expected ranges for Cu(II) complexes derived from Schiff base ligands with phenol groups [29], with the Cu-O bond distance being slightly smaller than Cu-N (see Tables S2 and S3). The intermetallic Cu···Cu distances (11.76 Å for [Cu2(L1)2]·4CH3CN and 11.87 Å for [Cu2(L2)2]·CH3CN) are of the order of those found for other Cu(II) helicates with dianiline-type spacers and do not deserve further comment [30].

Each helicate molecule displays eight aromatic rings, which makes possible the establishment of aromatic π-π or CH···π stacking interactions. Thus, both copper(II) helicates display weak π-π interactions between the aromatic rings of the two aniline spacers that contribute to the stabilization of the helicoidal structure (distance between centroids: 3.890 Å for [Cu2(L1)2]·4CH3CN; 3.92 Å and 3.86 Å for [Cu2(L2)2]·CH3CN, Figure 3).

It should be noted that the only interaction observed in the crystal lattice of the [Cu2(L2)2]·CH3CN helicate involves one of the phenyl rings of the spacer and the benzene ring of a linker domain (centroid-centroid distance 3.79 Å), an important difference compared with the [Cu2(L1)2]·4CH3CN helicate (Figure S5). In addition, the copper(II) helicate [Cu2(L2)2]·CH3CN, which incorporates the tert-butyl groups adjacent to the phenolic groups, establishes hydrogen-bond interactions between the CH3 of the tert-butyl groups and the phenolic oxygen atoms (Figure 5) [31].

It is remarkable that, for the two helicates described in this work, the distance between the Cu(II) ions of the closest stacked helicates (the intermolecular metal distance) is notably smaller than the distance between the two metal atoms within the molecule, in the same way as in the copper(II) helicate reported by us in 2003 [21] and the cobalt(II) helicate reported later by Andruh and co-workers [32]. This interesting structural arrangement could affect the magnetic properties of the two helicates, as discussed below. It is also worth mentioning that the intermolecular distance between metal ions is smaller in the [Cu2(L1)2]·4CH3CN helicate (~5.6 Å, Figure 6), which bears the tert-butyl substituent in the para position with respect to the phenolic oxygen, than in the [Cu2(L2)2]·CH3CN helicate, which incorporates the tert-butyl substituent in the ortho position (~7.1 Å, Figure 7).
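For readers less familiar with how metrics such as the centroid-centroid separations quoted above are extracted, the fragment below shows the underlying computation: each centroid is the mean of the six ring-atom coordinates, and the separation is the Euclidean norm between the two means. The coordinates here are an idealized placeholder geometry, not values taken from the deposited crystallographic data.

```python
# Sketch: centroid-centroid distance between two stacked aromatic rings.
# The hexagon geometry below is hypothetical; a real analysis would read
# refined atomic coordinates from the CIF files.
import numpy as np

def hexagon(center, radius=1.39):
    """Idealized six-membered ring in a plane parallel to xy."""
    ang = np.arange(6) * np.pi / 3.0
    ring = np.stack([radius * np.cos(ang),
                     radius * np.sin(ang),
                     np.zeros(6)], axis=1)
    return ring + np.asarray(center, dtype=float)

ring_a = hexagon([0.0, 0.0, 0.0])
ring_b = hexagon([0.0, 0.0, 3.89])   # stacked ~3.9 A above ring_a

d = np.linalg.norm(ring_a.mean(axis=0) - ring_b.mean(axis=0))
print(f"centroid-centroid distance: {d:.2f} A")   # -> 3.89
```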
All this information confirms that the highly aromatic Schiff base ligands H2L1 and H2L2 are suitable for obtaining extended helicate structures through weak intermolecular interactions. In addition, the position of the bulky t-butyl groups influences the microarchitecture of the extended structure, as demonstrated by the shorter intermolecular Cu···Cu distance displayed when the ligand bears the tert-butyl groups far away from the binding sites (para position).

Magnetic Properties of the Helicates

It is well known that in coordination compounds metal ions can interact with each other when the distance between them is small [33]. Additionally, in the literature there are examples of helicoidal supramolecular architectures with large intramolecular M-M distances showing relevant magnetic behavior, for which interesting nanotechnological applications have been proposed. As mentioned above, the origin of this magnetic behavior could be that the interhelicoidal M-M distance is fairly small and, therefore, interaction between metal ions of adjacent molecules takes place [21,32,34]. Thus, taking this background into account, the magnetic properties of crystalline samples of both copper(II) helicates, [Cu2(L1)2]·4CH3CN and [Cu2(L2)2]·CH3CN, were studied by DC magnetic susceptibility and EPR spectroscopy.

The temperature dependence of the magnetic susceptibility, χ, is shown in Figure 8. At first sight both compounds show Curie-like behavior, without any hint of magnetic ordering down to 5 K. The two copper complexes show χMT ≈ 0.7 emu K mol−1 at low temperature, close to the χMT ≈ 0.75 emu K mol−1 expected for a molecule with two independent Cu2+ ions with spin-only contribution (µ = 1.73 µB).
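The quoted spin-only values follow from the standard Curie-law expressions; this is a textbook result, reproduced here only to show where the 0.75 emu K mol−1 and 1.73 µB figures come from:

\[
\chi_M T \;=\; n\,\frac{N_A\,g^2\,\mu_B^2\,S(S+1)}{3k_B}
\;\approx\; n \times 0.125\,g^2\,S(S+1)\ \text{emu K mol}^{-1},
\]

which for $n = 2$ Cu$^{2+}$ ions with $g = 2$ and $S = 1/2$ gives $2 \times 0.375 = 0.75$ emu K mol$^{-1}$, while

\[
\mu_{\text{eff}} \;=\; g\sqrt{S(S+1)}\,\mu_B \;=\; \sqrt{3}\,\mu_B \;\approx\; 1.73\,\mu_B .
\]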
However, increasing temperature enhances χMT for the [Cu2(L1)2]·4CH3CN helicate. This behavior is similar to that previously observed by us for a network assembled from Cu(II) helicates [21]. On the other hand, χMT decreases slightly with increasing temperature in [Cu2(L2)2]·CH3CN. Considering the total orbital contribution to the magnetic moment of the Cu2+ ions would result in µ = 3.54 µB per Cu, and hence χMT ≈ 3.5 emu K mol−1 for a lattice with two Cu sites. In the tetrahedral d9 configuration, the unpaired electron can occupy the dxz or dyz orbitals, so that the complex acquires an orbital angular momentum. The observed increase in χMT in [Cu2(L1)2]·4CH3CN therefore suggests a substantial orbital contribution from a partially distorted octahedral configuration (attributed to acetonitrile coordination), whose orbital occupation changes with temperature.

The differences in the local coordination of copper in the two complexes are further demonstrated by the differences observed in the EPR spectra of the Cu2+ species, shown in Figure 9. [Cu2(L2)2]·CH3CN shows the typical EPR spectrum of an axial Cu2+ complex (S = 1/2) with g∥ (≈2.26) > g⊥ (≈2.08). These values are in the range reported for copper tetracoordinated by two oxygen and two nitrogen atoms [N2O2] [35,36]. The hyperfine coupling with the copper nucleus (I = 3/2) is not resolved at g∥, which could be due to broadening by dipole-dipole interactions. On the other hand, [Cu2(L1)2]·4CH3CN shows a more complex spectrum, consistent with a distorted structure, which could justify the larger and temperature-dependent orbital contribution to χMT discussed above. A contribution from two copper sites cannot be discarded, and a complete elucidation of the EPR spectrum of this helicate requires further investigation.

The differences observed in the magnetic behavior of the two reported helicates show that the position of the tert-butyl group, para (H2L1 ligand) or ortho (H2L2 ligand) with respect to the phenol group, affects the magnetic behavior of the compounds and, therefore, that the magnetic properties of the helicates can be modulated by small structural changes in the ligands.

Materials and Methods

All solvents, 4,4′-methylenedianiline, 3-tert-butyl-2-hydroxybenzaldehyde, 5-tert-butyl-2-hydroxybenzaldehyde and the copper plates were purchased from commercial sources and used without further purification. Melting points were determined using a BUCHI 560 instrument. Elemental analysis of the compounds (C, N and H) was carried out on a FISONS EA model 1108 analyzer. Infrared spectra were recorded from 4000 to 500 cm−1 on a BRUKER FT-MIR VERTEX 70V spectrophotometer in the solid state using KBr pellets. Mass spectra were obtained using Bruker Microtof spectrometers for the ESI+ technique (electrospray ionization in positive mode) and a Bruker Autoflex for the MALDI technique (matrix-assisted laser desorption/ionization), both coupled to time-of-flight (TOF) analyzers. A Varian Inova 400 spectrometer was employed to record the 1H NMR spectra at room temperature using acetone-d6 as the deuterated solvent. Chemical shifts are reported as δ (in ppm).

Synthesis and Characterization of the Neutral Copper(II) Dihelicates

The neutral copper(II) helicates were obtained by electrochemical synthesis using acetonitrile as solvent, applying a current intensity of 10 mA and potential values in the interval of 10-15 V. As an example, we describe below the electrochemical synthesis of the [Cu2(L1)2] helicate. The electrochemical cell can be denoted as Pt(−)|H2L1 + CH3CN|Cu(+).
The H2L1 ligand (0.05 g, 0.10 mmol) was previously dissolved in acetonitrile (80 mL) and a small amount of tetraethylammonium perchlorate was added to act as a conducting electrolyte. The electrolytic reaction was carried out under an N2(g) atmosphere at 10 mA and 13.0 V for 31 min. The resulting solution was concentrated, giving rise to a brown solid that was filtered off and dried in vacuo. Caution! Although perchlorate salts were used in very small quantities in these reactions, they are potentially explosive and should be handled with care. The main analytical and characterization data of both copper(II) complexes are given below. [...]

In all cases, an absorption correction (SADABS) [37] was applied to the measured reflections. Structures were solved with SHELXT2018/2 [38] and refined using SHELXL2018/3 [39]. The hydrogen atoms were included in the model in geometrically calculated and refined positions. The images included in this work were prepared using Mercury [39]. CCDC nos. 2257783 and 2257784 contain the supplementary crystallographic data for the [Cu2(L1)2]·4CH3CN and [Cu2(L2)2]·CH3CN dihelicates.

Magnetic Susceptibility Measurements

DC magnetic susceptibility measurements on the microcrystalline copper(II) helicates were performed at different fields in an MPMS SQUID magnetometer from Quantum Design, from 5 to 300 K.

Conclusions

Two novel neutral dinuclear Cu(II) helicates were isolated using an electrochemical methodology and precursor Schiff base ligands functionalized with bulky tert-butyl groups in ortho and para positions. The discrete crystal structures of both copper(II) compounds, [Cu2(L1)2]·4CH3CN and [Cu2(L2)2]·CH3CN, confirm their helicoidal dinuclear nature. These structures are extended through the establishment of weak π-π or CH···π stacking interactions, with the intermolecular metal distance being smaller than the distance between the metal ions within the molecule, especially in the case of [Cu2(L1)2]·4CH3CN, with the external tert-butyl groups located far away from the binding domains, thus confirming the influence of the location of the bulky group. This structural fact also influences the magnetic properties of the helicates in terms of the local environments of the Cu(II) ions, but this finding will require further studies.
Multiple conformations in the ligand-binding site of the yeast nuclear pore-targeting domain of Nup116p. The yeast nucleoporin Nup116p plays an important role in mRNA export and protein transport. We have determined the solution structure of the C-terminal 147 residues of this protein, the region responsible for targeting the protein to the nuclear pore complex (NPC). The structure of Nup116p-C consists of a large beta-sheet sandwiched against a smaller one, flanked on both sides by alpha-helical stretches, similar to the structure of its human homolog, NUP98. In unliganded form, Nup116p-C exhibits evidence of exchange among multiple conformations, raising the intriguing possibility that it may adopt distinct conformations when bound to different partners in the NPC. We have additionally shown that a peptide from the N terminus of the nucleoporin Nup145p-C binds Nup116p-C. This previously unknown interaction may explain the unusual asymmetric localization pattern of Nup116p in the NPC. Strikingly, the exchange phenomenon observed in the unbound state is greatly reduced in the corresponding spectra of peptide-bound Nup116p-C, suggesting that the binding interaction stabilizes the domain conformation. This study offers a high resolution view of a yeast nucleoporin structural domain and may provide insights into NPC architecture and function.

Carefully controlled nucleocytoplasmic transport is critical for the eukaryotic cell, playing a role in key functions such as gene expression and cell division. This transport is mediated by nuclear pore complexes (NPCs), large proteinaceous assemblies embedded in the nuclear envelope. Each NPC consists of ~30 distinct proteins termed nucleoporins, each present in at least eight copies, reflecting the octagonal symmetry of the complex (1,2). The core region of the NPC additionally possesses bilateral symmetry in the plane of the membrane, whereas peripheral structures (the cytoplasmic filaments and the nuclear basket) distinguish the two faces. Although NPCs have been studied extensively via genetics, biochemistry, and electron microscopy, there currently exist only limited atomic resolution data on the structures of NPC components. Thus, an important goal in improving our understanding of NPC function is the structural characterization of its constituent parts. Here, we focus on a structural domain from the yeast nucleoporin Nup116p.

Nup116p is involved in mRNA export and protein transport (3-5). It is a member of the GLFG class of nucleoporins, containing a large number of Gly-Leu-Phe-Gly sequence motifs that interact with soluble transport receptors (5). The Nup116p deletion mutant is temperature-sensitive, forming a double membrane seal over the cytoplasmic face of NPCs and shutting down nucleocytoplasmic transport when shifted to the non-permissive temperature of 37 °C (6). Nup116p has an unusual localization pattern within the NPC: it is found on both faces of the pore, but the majority is localized at the cytoplasmic face (1). This suggests that Nup116p possesses at least two distinct binding partners within the pore. One known binding partner, Nup82p, is found exclusively on the cytoplasmic face of the pore (7,8), whereas the nuclear binding partner is not known. Nup82p is required for poly(A)+ RNA export (9); and thus, Nup116p may contribute to the formation of a subcomplex at the cytoplasmic face of the pore that is responsible for a terminal step in mRNA export (10). Nup116p consists of three functional domains (Fig.
1A): an N-terminal Gle2p-binding site, which allows it to form a stable complex with Gle2p, an important nucleoporin component of the mRNA export machinery; a large GLFG repeat region spanning the N-terminal two-thirds of the sequence; and an ~150-residue C-terminal domain responsible for NPC targeting. This latter domain is predicted to have a high degree of secondary structure, in contrast to much of the rest of the sequence, and may act as a tether to attach the long flexible N-terminal GLFG repeat regions and the Gle2p-binding site to the NPC (4,7,8).

Given the economy of composition of the yeast NPC (1), it is intriguing that Nup116p is homologous to two other nucleoporins, Nup100p and Nup145p-N (Fig. 1A), most likely due to gene duplication events (11). This apparent redundancy could play a biological role in ensuring continued NPC function in the event that one of the components is mutated or absent (12); however, in the higher eukaryotic NPC, a single nucleoporin (NUP98) fulfills the role of the three yeast proteins (13). The Nup116p homolog Nup145p possesses autoproteolytic activity that is unique among yeast nucleoporins (14). This protein is expressed as a single polypeptide chain, cleaving itself post-translationally near the middle of the sequence to produce two species, Nup145p-N and Nup145p-C (15,16). The human homolog, which originates as an ~190-kDa polyprotein, has a similar activity, cleaving itself into the N-terminal NUP98 and the C-terminal NUP96 (17). Although the biological purpose of this cleavage is not clear, it is essential in each case for proper localization of the cleaved species, suggesting an important role for this activity (14). The autoproteolytic sites of Nup145p and NUP98/NUP96 each occur immediately C-terminal to the pore-targeting domain. Thus, although this domain occurs in the middle of each precursor polypeptide, it winds up at the C-terminal end of the N-terminal cleavage product. The resulting N-terminal products (Nup145p-N in yeast and NUP98 in humans) each bear a strong similarity to the non-cleavable Nup116p and Nup100p. In a multiple sequence alignment of the pore-targeting domains (Fig. 1B), the overall homology of these regions is evident, as well as the fact that Nup116p and Nup100p terminate only 3-4 residues downstream of the residues corresponding to the autoproteolytic sites of Nup145p and NUP98/NUP96.

The structure of the pore-targeting domain of NUP98 has been recently determined by x-ray crystallography (18). This structure, the first pore-interacting domain of any nucleoporin to be characterized in atomic detail, offers insight into the mechanism of autoproteolysis.
The peptide immediately downstream of the autoproteolytic site binds along a groove in the structure; the binding interactions between the pore-targeting domain and the covalently linked partner induce sufficient strain in the scissile peptide bond to facilitate cleavage (18). The authors of this previous work were unable to determine a high quality structure of free NUP98, however; and thus, a comparison of bound and unbound structures could not be carried out.

To better understand the structure-function relationship for this class of pore-targeting domains, we have used NMR to determine the solution structure of the free 147-residue C-terminal domain of Nup116p (Nup116p-C). In addition, we show that Nup116p-C is able to bind a peptide from the N terminus of Nup145p-C in a manner analogous to the protein-peptide interaction seen in the NUP98 crystal structure. This binding interaction appears to stabilize the conformation of Nup116p-C. We discuss the implications of our structure and binding studies for NPC function.

MATERIALS AND METHODS

Cloning and Expression: Cloning was performed using the Gateway® recombination system (Invitrogen). For the GST-Nup116p-C construct, two rounds of PCR were conducted. In the first round, primers MAR116 (5′-CTGGAAGTTCTGTTCCAGGGGCCCAATGAGAACTACTATATCTCACC-3′) and MAR117 (5′-GGGGACCACTTTGTACAAGAAAGCTGGGTCTCAGGTCTGCTCTGCAGCG-3′) were used with yeast genomic DNA as a template; and in the second round, primers MAR114 (5′-GGGGACAAGTTTGTACAAAAAAGCAGGCTTCCTGGAAGTTCTGTTCCAGGG-3′) and MAR117 were used with the first-round product as a template. The resulting PCR product was recombined into the destination vector pDEST15 (Invitrogen), which encodes an N-terminal GST tag. Recombinant fusion proteins were expressed in Escherichia coli BL21-CodonPlus®-RIL (Stratagene). Cells were grown at 37 °C to mid-log phase in media containing 16 g of Tryptone, 10 g of yeast extract, 5 g of NaCl, and 100 mg of ampicillin per liter (Qbiogene, Inc.), chilled on ice for 5 min, induced with 0.5 mM isopropyl β-D-thiogalactopyranoside, and grown at 25 °C for an additional 6-8 h. To express uniformly isotopically labeled protein, cells were grown to mid-log phase at 37 °C in M9 minimal medium with the appropriate labeled reagent substituted into the medium. The following reagents were used to achieve the corresponding isotope labeling: 98+% [15N]NH4Cl (1 g/liter) and 99+% [13C]glucose (2 g/liter). Cells were induced as described above and grown at 25 °C for 24 h post-induction. Selective labeling of a particular amino acid was achieved by adding the following for each liter of M9 minimal medium: 125 mg of labeled amino acid; 200 mg each of the other 19 unlabeled amino acids; and 10 mg each of adenine, guanine, cytosine, and thymine.

Nup116p-C Purification: All purification steps were conducted at 4 °C. Frozen cell pellets containing overexpressed GST-Nup116p-C were resuspended in 50 ml of lysis buffer (50 mM Tris (pH 7.0), 500 mM NaCl, 10 mM dithiothreitol (DTT), and one tablet of Complete protease inhibitors (Roche Applied Science)) per liter of original culture. Cells were lysed by passage through an EmulsiFlex-C5 cell disrupter (Avestin Inc.) and centrifuged for 30 min at 17,000 rpm in a Beckman JA-20 rotor. The lysate was passed two to three times over glutathione resin (Amersham Biosciences) equilibrated with wash buffer (50 mM Tris (pH 7.0), 500 mM NaCl, 1 mM DTT, and 1 mM EDTA).
Glutathione resin was washed with 15 column volumes of wash buffer and then incubated for 30 min in elution buffer (50 mM Tris (pH 8.0), 150 mM NaCl, 1 mM DTT, and 20 mM glutathione) and eluted. Fusion protein was mixed with PreScission™ protease (Amersham Biosciences) and dialyzed overnight using 3500-Da molecular mass cutoff dialysis cassettes (Pierce) against dialysis buffer (50 mM Tris (pH 7.0), 150 mM NaCl, 1 mM DTT, and 1 mM EDTA) to allow buffer exchange and protease cleavage. The dialyzed sample was concentrated to 2-3 ml using Vivaspin 20 concentrators with a 5-kDa molecular mass cutoff (Sartorius AG), filtered through Spin-X filters (Costar), and applied to a HiPrep 26/60 Sephacryl™ S-100 HR gel filtration column (Amersham Biosciences) equilibrated with wash buffer. The purified samples were analyzed by mass spectrometry (Molecular Biology Core Facilities, Dana-Farber Cancer Institute) to confirm protein mass and sample purity.

Peptide Preparation: Unlabeled Nup145p-C peptide corresponding to the N-terminal 15 amino acids of Nup145p-C (Ser-Ile-Trp-Gly-Leu-Val-Asn-Glu-Glu-Asp-Ala-Glu-Ile-Asp-Glu) was ordered from Invitrogen and synthesized on a 10-mg scale. Because the peptide did not dissolve directly in neutral aqueous buffer, lyophilized powder was resuspended in a minimal volume of 10 mN NaOH sufficient to dissolve the peptide, and the pH was neutralized by adding an equal amount of HCl. To produce isotopically labeled peptide, the GST-TEV-Nup145p-C peptide fusion protein was purified as described above for GST-Nup116p-C. Following glutathione elution, the fusion protein was concentrated to <2 ml. To this sample were added 20 µl of 0.1 M DTT, 40 µl of 0.5 M EDTA, and 1 µl of AcTEV™ protease (Invitrogen) per mg of fusion protein, plus gel filtration buffer sufficient to bring the sample to 2 ml. The sample was incubated overnight at room temperature to allow protease cleavage and then filtered and applied to a HiPrep 16/60 Sephacryl™ S-100 HR gel filtration column (Amersham Biosciences) equilibrated with wash buffer. Peptide desalting was accomplished using a 3-ml Oasis hydrophilic-lipophilic balance cartridge (Waters Corp.). The purified peptide was analyzed by mass spectrometry to confirm peptide mass and purity.

NMR Samples: Purified isotopically labeled Nup116p-C was dialyzed into NMR buffer (100 mM sodium/potassium phosphate (pH 6.5), 50 mM NaCl, and 1 mM DTT) and concentrated to 0.5-1.0 mM using Vivaspin 20 concentrators with a 5-kDa molecular mass cutoff. In some cases, the peptide was mixed with the concentrated protein sample to form a complex. 25 µl of D2O (99.8+%; Cambridge Isotope Laboratories, Inc., and Aldrich) was added to 250 µl of the protein or complex mixture to provide a lock signal. A set of spectra of Nup116p-C was acquired for use in backbone assignment, and an automated assignment software package (24) was used to generate an initial assignment. Additionally, assignment was aided using 15N HSQC spectra of samples selectively 15N-labeled at the following residues: alanine, arginine, isoleucine, leucine, lysine, tyrosine, and valine. The 15N NOESY-HSQC spectrum was used to confirm assignments. HC(CO)NH, HCCH-TOCSY, D2O-TOCSY, and 13C HSQC methyl spectra were acquired for use in side chain assignment. Side chain resonances were assigned using the above spectra as well as 15N TOCSY-HSQC and 15N NOESY-HSQC spectra. 13C NOESY-HSQC and D2O-NOESY (mixing times of 100 ms) spectra were acquired to generate distance constraints.
Based upon these spectra as well as the 15N NOESY-HSQC spectrum (mixing time of 80 ms), NOE resonances were assigned and tabulated. A constant-time 13C HSQC spectrum acquired from a 10% 13C-labeled Nup116p-C sample was used for stereospecific methyl assignments. Distance and angle constraints were used to determine the structure of Nup116p-C. The intensity of each assigned NOE cross-peak was categorized as strong, medium, weak, or very weak and converted into a distance upper limit of 3.0, 4.2, 5.2, or 6.0 Å, respectively, using NMRView. For non-stereospecifically assigned diastereotopic protons and aromatic and methyl homotopic protons, the ambiguous distance constraint method was used (25). In each case, the lower distance bound was set to 1.8 Å. TALOS (torsion angle likelihood obtained from shift and sequence similarity) (26) was used to generate φ and ψ angle constraints based on comparing observed chemical shifts with database values. Structures were calculated using the CYANA (27) and CNS (28) software packages. For the reported structure, 100 structure calculations were performed using CNS, and the 15 structures with the lowest energies were chosen for the structure ensemble. MOLMOL (29) and MolScript (30) were used to visualize the structures and to make figures. The quality of the final structure ensemble was evaluated using PROCHECK-NMR (31).

Isothermal Titration Calorimetry (ITC): Purified Nup116p-C was dialyzed extensively into ITC buffer (25 mM sodium/potassium phosphate (pH 6.5)) and brought to a concentration of 10 µM. Dissolved peptide stock was diluted 20-fold with ITC buffer. The final peptide concentration was adjusted to 200 µM with ITC buffer. A VP-ITC calorimeter (MicroCal, LLC), under the control of Origin 5.0 software, was used to perform ITC on protein-peptide complexes. The sample chamber was filled to capacity with protein and equilibrated at 25 °C. The peptide was then titrated into the sample chamber in 10-µl increments (30 in total), with 210 s between injections and a mixing speed of 270 rpm. For base-line determination, identical experiments were run with peptide titrated into blank ITC buffer as well as with blank ITC buffer titrated into protein sample. The heats of dilution determined from these experiments were subtracted from the main experimental data using the Origin software. The data were fit to a single binding site model, from which thermodynamic parameters were determined.

RESULTS

Structure Determination of Nup116p-C: The solution structure of Nup116p-C was determined using standard methods of NMR spectroscopy. This domain consists of the final 147 residues of Nup116p as well as two extraneous residues (Gly-Pro) from the protease cleavage site at the N terminus of the domain. An ensemble of 15 calculated structures (Fig. 2A and TABLE ONE) was chosen based on the criteria described under "Materials and Methods." Consistent with the crystal structure of the human NUP98 homolog (18), Nup116p-C adopts a predominantly β-strand structure (Fig. 2B). The molecule consists of a six-stranded β-sheet sandwiched against a two-stranded β-sheet and flanked by α-helical regions. The N-terminal helical region consists of two short helices (residues 12-16 and 21-24), whereas the stretch on the opposite side of the molecule consists of a single, longer helix.

Comparison with the Crystal Structure of NUP98: Superposition of a representative solution structure of Nup116p-C and the crystal structure of NUP98 (Fig.
2C) makes it clear that the structures share a common architecture. However, the domains differ functionally in that NUP98 is an autoproteolytic protein, cleaving itself into N- and C-terminal molecules (32), whereas Nup116p does not possess this activity. The NUP98 crystal structure includes this autoproteolytic juncture, and the first several residues of the C-terminal cleavage product (NUP96) are visible in the electron density. In contrast, the Nup116p-C structure does not include a bound peptide. Thus, it is not surprising that the NUP98 binding site and the corresponding region of Nup116p-C exhibit notable differences. In particular, helix α3 in the Nup116p-C structure is shifted up to 5 Å away from the corresponding helix and binding site in NUP98, creating an expanded cleft between helix α3 and strand β5. In the crystal structure, this cleft is occupied by the NUP96 peptide, with which the long helix forms key hydrogen bonds and hydrophobic contacts. These contacts are clearly absent in Nup116p-C. In the ensemble of calculated structures (Fig. 2A), helix α3 exhibits variation mainly along the direction of the helical axis from structure to structure, with less variation toward or away from the small β-sheet. Thus, the substantial observed shift of the long helix in this latter dimension, compared with NUP98, is presumed to be a real difference between the two structures, rather than an artifact of the uncertainty in the NMR structure determination.

As mentioned above, the conformations of the loop regions of Nup116p-C, particularly loop 3, vary substantially among calculated models, presumably due in part to flexibility and consequently fewer NOE constraints per residue in these regions. It is therefore difficult to make direct comparisons with the loop conformations in NUP98. Nevertheless, it is worth noting that the sequences in loop 3 and, to a lesser extent, in loop 2 are fairly well conserved (Fig. 1B). In NUP98, loop 3, which has several basic residues, may contribute to peptide binding by contacting the acidic portion of the peptide that is not visible in the electron density, whereas loop 2 plays an important role in autoproteolysis (18). The sequence conservation suggests that, although Nup116p-C does not autoproteolyze, it may be capable of binding a partner peptide in a functionally similar manner to NUP98.

Nup116p-C Interacts with the Nup145p-C Peptide: Of the three yeast homologs of NUP98, only Nup145p possesses autoproteolytic activity, post-translationally cleaving itself into Nup145p-N and Nup145p-C. It has been shown that only a minimal region of 10 residues or less, C-terminal to the cleavage site, is required to maintain autoproteolytic activity in Nup145p (14) and NUP98/NUP96 (32), and autoproteolytic activity is thought to depend strongly on correct peptide orientation in the binding pocket (18). Additionally, only the first 7 peptide residues have readily interpretable electron density in the x-ray structure of NUP98, suggesting that only a short peptide is required for the interaction. Because the amino acids that make up the NUP98 binding site are surprisingly well conserved across the human and yeast homologs (18), we hypothesized that Nup116p-C may bind a peptide in a similar fashion to NUP98. A plausible binding partner could be Nup145p-C, the yeast homolog of NUP96. To test this idea, we synthesized a short peptide corresponding to the first 15 residues of Nup145p-C.
Residues 3-5 of this peptide (Trp-Gly-Leu) are very similar to NUP96 residues 3-5 (Tyr-Gly-Leu), which are the most highly ordered of the peptide residues in the crystal structure and which participate in the bulk of the observed protein-peptide interactions. The Nup145p-C peptide also includes a substantial stretch of acidic residues in its C-terminal half, a conserved feature that may be important in binding.

ITC was used to assess whether the Nup145p-C peptide binds to Nup116p-C. The observed injection trace of rate of heat evolved versus time (Fig. 3A) demonstrated that protein-peptide binding was occurring. By fitting the integrated values of enthalpic change for each injection point to a best fit curve, thermodynamic parameters could be determined. The dissociation constant was found to be 2.32 ± 0.05 µM, with a stoichiometry of one molecule of peptide per molecule of Nup116p-C. Values for ΔH and TΔS were found to be −5.94 ± 0.04 and +1.75 ± 0.06 kcal/mol, respectively. Both the negative change in enthalpy and the positive change in entropy contributed favorably to a spontaneous binding reaction. The quality of the ITC data and the close match to the best fit curve, especially in comparison with ITC binding data for Nup116p-C and another likely ligand (see supplemental data), provide a strong argument in favor of the significance of the interaction between Nup116p-C and the Nup145p-C peptide.
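The "single binding site model" mentioned above is the standard 1:1 isotherm, in which the concentration of complex follows from the quadratic form of the mass-action law. The sketch below illustrates it using the fitted parameters reported here; it is not the exact fitting routine implemented in the Origin software.

```python
# Minimal sketch of the 1:1 binding isotherm underlying the ITC fit.
# Kd and dH are the fitted values reported in the text; the cell and
# titrant concentrations follow Materials and Methods.
import numpy as np

Kd = 2.32e-6                            # M, fitted dissociation constant
dH = -5.94e3                            # cal mol^-1, fitted binding enthalpy
P_tot = 10e-6                           # M, Nup116p-C in the cell
L_tot = np.linspace(1e-8, 3e-5, 200)    # cumulative peptide concentration, M

# [PL] from Kd = (P_tot - PL)(L_tot - PL)/PL, taking the physical root:
b = P_tot + L_tot + Kd
PL = (b - np.sqrt(b**2 - 4.0 * P_tot * L_tot)) / 2.0

# Cumulative heat per liter of cell volume; the instrument records its
# derivative with respect to injected ligand (the injection peaks).
Q = dH * PL
print(f"heat at saturation: {Q[-1]:.3f} cal per liter of cell volume")
```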
With peptide binding established by ITC, the protein-peptide interaction was examined by NMR to determine whether peptide binding occurs at the expected site. Excess unlabeled peptide was added to a 15N-labeled sample of Nup116p-C, and a 15N HSQC spectrum was acquired. The resulting peptide-bound spectrum clearly differs from the unbound spectrum, with a strikingly large percentage of resonances noticeably shifted, although the overall pattern is similar (Fig. 3B). This result not only confirmed that Nup116p-C binds the Nup145p-C peptide but, because of the large number of shifted resonances, also suggested that a significant conformational change in Nup116p-C may occur upon binding.

We next sought to identify the residues whose peaks had shifted most dramatically. An overall value for the chemical shift change was calculated for each residue, and residues with the largest changes were plotted on a ribbon representation of the protein (Fig. 3C). Because this is a map of backbone amide chemical shift changes, rather than a map of residues physically contacted by the peptide, residues outside the binding pocket were expected to be affected through indirect perturbations of their chemical microenvironments. Still, the immediate vicinity of the hypothesized binding site is the region most affected by chemical shift changes. In particular, residues along strand β5 of the small β-sheet are shifted substantially, as high as 0.6 ppm in the case of Thr66. In the NUP98 crystal structure, Val782, which corresponds to Cys67 in Nup116p-C, forms two backbone-backbone hydrogen bonds with the peptide residue Tyr3. In that structure, the peptide effectively functions as a third strand in the small β-sheet, interacting extensively with its neighboring strand. The results described here support the idea that the Nup145p-C peptide interacts in a similar manner with strand β5 of Nup116p-C. The other region exhibiting the largest chemical shift changes is the C-terminal tail of Nup116p-C, residues 146-149. Although it is unclear what role this short stretch may play in binding, it is not surprising to see these residues shifted to such a large extent. In NUP98 and its autoproteolytic homologs, this corresponds to the region that is cleaved, forming the bound peptide visible in the structure. Upon Nup116p binding of the Nup145p-C peptide, the environment of the tail region is likely to be substantially altered, as the chemical shift data show. Notably, although a few residues in the long α3 helix exhibit significant backbone amide chemical shift changes, most do not. In NUP98, this helix forms hydrophobic side chain interactions with Leu5 of the peptide, and the final residue of the helix, Gln842, makes two side chain hydrogen bonds with the peptide backbone at Gly4. However, the backbone amide groups are principally involved in helical backbone hydrogen bonds; thus, although the overall position and environment of the helix likely change significantly upon peptide binding (Fig. 2C), this would not necessarily be reflected in the 15N HSQC spectrum.

Peptide Binding Stabilizes the Conformation of Nup116p-C: In addition to the chemical shift changes described above, peptide addition also caused new resonances to appear in the 15N HSQC spectrum. Intriguingly, during the process of backbone assignment of peptide-bound Nup116p-C, we observed that the peaks from Ile69 and Tyr70 were easily assigned, unlike in the case of the unbound protein, for which these residues had no identifiable backbone amide resonances. These residues immediately follow strand β5 of the small β-sheet and, in fact, may form a continuation of this strand in the presence of the peptide. Indeed, in the NUP98 structure, the homologous residues Val784 and Tyr785 form the last 2 residues of the corresponding β-strand. Although these residues do not interact directly with the peptide in that structure, they do form multiple important interactions that stabilize the small β-sheet and its position relative to the rest of the structure. The lack of resonances for Ile69 and Tyr70 in the unbound form of Nup116p-C suggested that conformational exchange occurs in the intermediate exchange regime. This exchange hypothesis is consistent with the observation that the adjacent residues, Ile68 and Ala71, exhibit the shortest 15N T2 relaxation times of all measured residues, 23.2 and 21.3 ms, respectively. These are each less than half of the average T2 value of 49.0 ms (data not shown). In contrast, the presence of discrete backbone amide resonances for Ile69 and Tyr70 in the new spectra indicated that these residues may lock into a fixed conformation in the context of peptide binding.

To confirm the appearance of resonances for Ile69 and Tyr70 upon peptide addition, the spectra of selectively labeled Nup116p-C samples were acquired in the absence and presence of the peptide. In the peptide-bound [15N]Ile HSQC spectrum (Fig. 4A), a new resonance is clearly visible, the location of which agrees with our sequential assignment of peptide-bound Ile69. Significant amide chemical shift changes in isoleucines 59, 68, and 89 are also evident. Ile59 corresponds to Ile774 in the NUP98 structure, with which Tyr785 forms a hydrogen bond; Ile68 is a neighboring residue on strand β5; and Ile89 lies on strand β6, with the side chain facing into the peptide-binding site. Each of these residues could play a role in stabilizing the small β-sheet or interacting with the peptide.
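To put the relaxation times quoted above on a common scale, converting them to rates shows the size of the apparent exchange contribution. This is a back-of-the-envelope estimate that treats the average T2 as an exchange-free baseline, which is an assumption rather than a measured reference:

\[
R_2 = \frac{1}{T_2}: \qquad
\frac{1}{23.2\ \text{ms}} \approx 43\ \text{s}^{-1}, \qquad
\frac{1}{21.3\ \text{ms}} \approx 47\ \text{s}^{-1}, \qquad
\frac{1}{49.0\ \text{ms}} \approx 20\ \text{s}^{-1},
\]

\[
R_{\text{ex}} \;\approx\; 43 - 20 \;\approx\; 23\text{--}27\ \text{s}^{-1}
\]

for Ile68 and Ala71, consistent with a substantial exchange-broadening term at these positions.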
A new resonance can also be seen in the peptide-bound [ 15 N]Tyr HSQC spectrum (Fig. 4B). Because there are fewer resonances and no degeneracy in these spectra, the appearance of a new resonance upon peptide binding is even clearer than in the isoleucine spectra. As expected, the new resonance is consistent with the previous assignment of Tyr 70 . In a similar fashion, the tail residues 146 -148 exhibit evidence of intermediate exchange in the unbound state, with the Glu 147 amide peak unassigned and the Ala 146 and Gln 148 amide peaks almost too weak to detect. This raises the suggestion that the tail alternates among discrete conformations, possibly flipping in and out of the binding pocket. Upon peptide binding, a clear Glu 147 peak appears, and the Ala 146 and Gln 148 peaks are strengthened significantly, suggesting a stabilized conformation. Because the experiments described above give information only on backbone amide changes, we also examined the effect of peptide binding on Nup116p-C side chains. In particular, unbound and bound 13 C HSQC methyl spectra were compared (Fig. 4C). Strikingly, the number of observed peaks dropped dramatically, from Ͼ140 in the unbound spectrum to ϳ85 in the bound spectrum. The number of expected resonances would ordinarily equal the number of distinct methyl groups in the protein side chains, which add to 84 (one methyl group each from 6 alanines and 8 threonines and two methyl groups each from 12 isoleucines, 13 leucines, and 10 valines). Thus, peptide addition brought the number of peaks in the spectrum into almost exact agreement with the number of methyl groups. As a key example, Leu 120 , which is located toward the C-terminal end of helix ␣3, exhibits multiple resonances in the unbound state for each methyl group. H-␦1 gives rise to multiple weak resonances, whereas H-␦2 is split into two clear, distinct peaks. Upon peptide addition, each set of peaks collapses into a single, discrete peak (Fig. 4C). These observations suggest that, in the unbound state, the Leu 120 side chain exists in multiple conformational states undergoing slow exchange, whereas in the peptide-bound state, a single conformation is stabilized, likely involving hydrophobic contacts between the leucine side chain and the peptide. Similarly, two unbound Ala 145 methyl peaks merge into a single peak upon peptide addition, suggesting that the conformation of this tail residue is stabilized by the presence of the peptide. Binding Studies with Isotopically Labeled Nup145p-C Peptide-To better characterize the protein-peptide complex, isotopically labeled Nup145p-C peptide was produced and analyzed by acquisition of 15 N HSQC spectra in the absence and presence of excess unlabeled Nup116p-C (Fig. 4D). The spectrum of the free peptide reveals 14 strong peaks, corresponding to backbone amides and the side chain of Trp 3 . In the spectrum of the complexed peptide, nine strong peaks corresponding to Asn 7 -Glu 15 are visible; a weaker Val 6 peak is evident; and four more extremely weak peaks can be seen corresponding to Trp 3 -Leu 5 and the Trp 3 side chain. By comparing free and bound spec- Large shifts are indicated with red arrows, and peak assignments are shown. A resonance for Ile 69 appears upon peptide binding. B, the conditions were the same as described for A, but a selectively labeled [ 15 N]tyrosine sample was used. A resonance for Tyr 70 appears upon peptide binding. C, shown is an overlay of 13 C HSQC methyl spectra in the absence and presence of the unlabeled peptide. 
By comparing free and bound spectra, it is evident that the most shifted and most intensity-changed resonances belong to the N-terminal putative binding region of the peptide. Backbone peaks for Gly4 and Leu5 are particularly affected, each shifted by >0.9 ppm. Both backbone and side chain peaks for Trp3 are also shifted substantially, as is the peak for Val6. The other peaks are shifted to a much lesser degree. Because bound peptide tumbles significantly more slowly in solution than free peptide, it was anticipated that bound peptide resonances would weaken. Indeed, although all the free peptide resonances have approximately equal intensities, the complexed peptide resonances exhibit a clear pattern of steadily weakening intensities, moving from the C terminus to the N terminus (Fig. 4D, inset). The N-terminal residues Trp3-Leu5 have particularly weak resonances. Despite the high sensitivity of the 15N NOESY-HSQC peptide spectrum, the peaks in this spectrum corresponding to Trp3-Leu5 are still quite weak. However, relatively strong NOEs are observed between the backbone amide of Leu5 and the two Gly4 α-protons as well as between the backbone amide of Val6 and the Leu5 α-proton (data not shown). These dαN NOEs occur in the β-strand region of the spectrum and suggest that the peptide may adopt this form of secondary structure when bound to Nup116p-C. Further structural characterization of the complex was hampered by a low signal-to-noise ratio in the NOESY spectra, possibly due to exchange phenomena (see "Discussion"). Taken together, the above data support a model in which the Nup145p-C peptide binds to Nup116p-C using several N-terminal residues, possibly adding a third β-strand to the small β4-β5 β-sheet of Nup116p-C. This model corresponds closely to what is seen in the NUP98/NUP96 crystal structure, in which the first several residues of the peptide form a β-strand, and the remainder of the peptide is disordered (18). The fact that the bound Nup145p-C peptide exhibits weakened N-terminal peak intensities, relative to the free peptide, is consistent with a model in which the peptide is tethered at its N-terminal end.

DISCUSSION

In this work, we have used NMR to determine the solution structure of the pore-targeting domain of Nup116p. As expected, Nup116p-C has a fold similar to that of its human homolog, NUP98, whose structure was determined while these studies were in progress. HSQC spectra indicate that numerous residues of Nup116p-C in the unbound state appear to exist in an ensemble of discrete conformations undergoing slow to intermediate conformational exchange. By analogy with the human homolog, we reasoned that Nup116p-C may bind the N-terminal portion of Nup145p-C, and we were able to demonstrate an interaction using ITC and NMR.
Intriguingly, the conformational exchange observed in spectra of unliganded Nup116p-C disappears or is greatly reduced upon peptide binding, suggesting that the presence of a binding partner stabilizes the domain conformation.

Conformational Diversity of Unliganded Nup116p-C

The strongest evidence of conformational exchange comes from the 15N HSQC and 13C HSQC spectra (Fig. 4). The 15N HSQC spectra reveal that no backbone amide peaks are present for several residues, including Ile69 and Tyr70, the residues immediately following strand β5. The absence of peaks for these residues, demonstrated unambiguously with selectively labeled spectra, as well as the very weak intensities of resonances for nearby residues, suggests that these amide groups exist in two or more discrete conformations, alternating among them on an intermediate time scale. T2 relaxation data support this idea, as the adjacent residues, Ile68 and Ala71, have extremely short transverse relaxation times, ruling out the possibility that this region of the protein is highly flexible in the manner of a disordered loop region. The 13C HSQC spectrum of unbound Nup116p-C also provides evidence of conformational exchange, with many more peaks than side chain methyl groups in the spectrum. Interestingly, the time regimes of the backbone and side chain dynamics differ: the backbone amides of Ile69 and Tyr70 exhibit intermediate exchange, as evidenced by the absence of expected peaks, whereas the side chain methyl groups of Leu120 and Ala145 exhibit slow exchange, as shown by the presence of additional peaks. Based on the ensemble of calculated structures, loop 3 in particular exhibits a dramatic degree of conformational diversity, and the lateral position of helix α3 varies appreciably across structures relative to the highly fixed six-stranded β-sheet. Consistent with the lack of backbone amide resonances for Ile69 and Tyr70, the conformations of these residues vary substantially from one calculated structure to the next. These conformationally variable regions all map to the putative ligand-binding site. The presence of multiple conformations may explain why extensive efforts to crystallize unliganded Nup116p-C were unsuccessful. Conformational changes from one molecule to the next or changes within a single molecule over time could easily disrupt the regular network of subunit-subunit interactions that must form for a crystal to grow. In the case of the crystal structure of the human homolog, NUP98, only the liganded form of the pore-targeting domain could be solved. Although crystals of unliganded NUP98 were able to be grown, the crude structural model derived from these crystals could not be properly refined, suggesting substantial disorder in the structure (18). Thus, unliganded NUP98 may exhibit multiple conformations in a similar fashion to Nup116p-C. Conformational diversity in Nup116p-C could exist for numerous reasons. One possibility is that the observed binding site plasticity enables Nup116p-C to bind to multiple targets within the NPC. Previous work has shown that Nup116p-C interacts with the nucleoporin Nup82p; furthermore, studies with human homologs show that NUP98 utilizes the same binding site to interact with NUP96 (its C-terminal autoproteolytic partner) and NUP88 (the homolog of Nup82p) (13). Thus, the Nup116p-C binding site described in this work is likely involved in interactions with at least two nucleoporins (see supplemental data) and possibly others as well.
To accommodate distinct ligands, Nup116p-C may need to exist in an ensemble of conformations in the unbound state. Another possibility is that Nup116p is a shuttling protein, much like its human homolog, NUP98 (33, 34). Although immunogold electron microscopy localization data show that Nup116p is found on both faces of the NPC, with an asymmetrical distribution skewed toward the cytoplasmic face (1), it is unclear whether this is a static or dynamic distribution. In the case of NUP98, the mobility of the protein has been hypothesized to link RNA transcription and export (33). If Nup116p behaves in a similar fashion, the conformational diversity of its C-terminal domain may result in lower affinity binding and faster dissociation rates. It is possible that the "tail" of Nup116p-C (residues 146-149) contributes to this dissociation by flipping in and out of the binding site in the absence of ligand and acting as a covalently attached competitive inhibitor in the presence of ligand. This idea is supported by the substantial chemical shift changes between unbound and Nup145p-C peptide-bound states seen for these tail residues as well as the strengthened amide resonances in the bound state. Further work, including in vivo experiments, is necessary to definitively establish whether Nup116p is in fact a mobile nucleoporin as well as the role of the tail residues in binding and dissociation.

Interaction with the Nup145p-C Peptide

The crystal structure of NUP98, which reveals a binding interaction with the first several N-terminal residues of NUP96, led to the hypothesis that Nup116p-C may bind Nup145p-C in a similar fashion. Using ITC and NMR, we demonstrated an interaction using a 15-mer peptide from Nup145p-C. It should be noted that this interaction provides a convenient explanation for the unusual cytoplasmically biased localization pattern of Nup116p in the NPC. Nup145p-C is a symmetric nucleoporin, whereas Nup82p, the other known binding partner of Nup116p, is exclusively found on the cytoplasmic face of the NPC; combining these localizations results in a cytoplasmically biased distribution, consistent with the observed pattern (Fig. 5). The relatively weak binding interaction observed between Nup116p-C and the Nup145p-C peptide gives further support to the notion discussed above that Nup116p may be a dynamic component of the pore. In the presence of the Nup145p-C peptide, the effects of conformational exchange observed in the unbound spectra of Nup116p-C are greatly reduced, suggesting that a single bound conformation may be present. This is underscored in the selectively labeled [15N]Ile and [15N]Tyr HSQC spectra, in which peaks that are entirely absent in the unbound case appear as strong, single peaks in the complex. Furthermore, in the 13C HSQC methyl spectra, many more peaks than expected are present in the unbound case, whereas the number of bound peaks corresponds almost precisely to the number of side chain methyl groups. The two methyl groups of Leu120 provide a particularly clear example of this, with several unbound peaks coalescing into a single bound peak for each methyl group. Likewise, the two unbound peaks corresponding to the methyl group of Ala145 coalesce into one bound peak. Interestingly, our studies of isotopically labeled peptide revealed evidence of peptide conformational exchange upon binding Nup116p-C.
The resonances for the first several N-terminal peptide residues are considerably weaker than expected for a complex consisting of a 16.8-kDa protein and a 1.7-kDa peptide. As an extreme example, the Gly4 resonance is fully 25 times weaker in normalized intensity than the corresponding unbound peptide resonance (Fig. 4D, inset). Indeed, due to the weakness of the peptide resonances directly involved in binding, further structural analysis of the protein-peptide complex could not be performed. The observed weak peaks likely reflect exchange between a single bound state and a large ensemble of disordered conformations in the unbound state. This emergence of exchange upon binding is an initially unexpected result given the reduction of exchange phenomena in the spectra of labeled Nup116p-C bound to the unlabeled peptide. However, we attribute the peptide exchange to rapid association with and dissociation from Nup116p-C, with a dissociation rate in approximately the same millisecond time regime as the NMR experiments, consistent with the relatively weak micromolar-scale binding constant of the interaction. The much larger Nup116p-C protein appears to remain in its observed bound conformation even as the peptide associates and dissociates rapidly, likely because the protein responds more slowly than the peptide to changes in its environment.

Pre-existing Equilibrium Model

In general terms, Nup116p-C appears to be an instance of a growing class of protein structures that support the "pre-existing equilibrium" model (35-37) as a mechanism of protein-protein interaction. In this model, proteins exist in an ensemble of conformations in an unbound state, and the presence of a particular ligand alters this distribution to favor one or more bound states. This model challenges the dogma that sequence uniquely specifies structure and that structure in turn uniquely specifies function. Instead, a given sequence may fold into a variety of related structures, each of which may have distinct binding or other activities (38). The notion of pre-existing equilibrium, at least as applied to Nup116p-C, goes beyond the induced fit principle of plasticity in the binding site or in the structure as a whole, suggesting that multiple discrete conformations, which in some cases may differ markedly from one another, exist for the same protein sequence. An equilibrium exists among these conformations in the unbound state, all typically at similar free energies, but only one of which may possess a particular activity (39). A ligand "selects" for its binding-competent conformation by binding only to that particular unbound state when it exists, thus biasing the conformational distribution (35). The NMR results for Nup116p-C support this model, as clear evidence is observed for multiple discrete conformations in slow or intermediate exchange in the unbound state and a single conformation (at least in the case of individual residues or chemical groups) in the bound state.

Conclusion

We have presented structural and binding studies of a nucleoporin domain that may reveal important principles of NPC architecture. The NPC as a whole is a remarkably dynamic structure, with pronounced variations in overall shape and diameter having been observed (40); thus, it is fascinating to observe conformational diversity at the level of its constituent parts.
Because so little high-resolution structural information about nucleoporins is currently available, further studies will be required to determine whether other nucleoporins exhibit the type of conformational diversity seen for Nup116p-C. The NPC has provided a daunting challenge to structural biologists, given its enormous size, large-scale conformational changes, and lack of ordered structure in many regions of its constituent nucleoporins. It is anticipated that the approach taken here could be successfully extended to other nucleoporin domains and nucleoporin-nucleoporin interfaces, with the goal of deepening our understanding of the structural underpinnings of nucleocytoplasmic transport.
fNIRS-based brain functional response to robot-assisted training for upper-limb in stroke patients with hemiplegia

Background: Robot-assisted therapy (RAT) has received considerable attention in stroke motor rehabilitation. Characteristics of brain functional response associated with RAT would provide a theoretical basis for choosing the appropriate protocol for a patient. However, the cortical response induced by RAT remains to be fully elucidated due to the lack of dynamic brain functional assessment tools.

Objective: To guide the implementation of clinical therapy, this study focused on the brain functional responses induced by RAT in patients with different degrees of motor impairment.

Methods: A total of 32 stroke patients were classified into a low score group (severe impairment, n = 16) and a high score group (moderate impairment, n = 16) according to the motor function of the upper limb and then underwent RAT training in assistive mode with simultaneous cerebral haemodynamic measurement by functional near-infrared spectroscopy (fNIRS). Functional connectivity (FC) and the hemisphere autonomy index (HAI) were calculated based on the wavelet phase coherence among fNIRS signals covering bilateral prefrontal, motor and occipital areas.

Results: Specific cortical network response related to RAT was observed in patients with unilateral moderate-to-severe motor deficits in the subacute stage. Compared with patients with moderate dysfunction, patients with severe impairment showed a wide range of significant FC responses in the bilateral hemispheres induced by RAT with the assistive mode, especially task-related involvement of ipsilesional supplementary motor areas.

Conclusion: Under assisted mode, RAT-related extensive cortical response in patients with severe dysfunction might contribute to brain functional organization during motor performance, which is considered the basic neural substrate of motor-related processes. In contrast, the limited cortical response related to RAT in patients with moderate dysfunction may indicate that the training intensity needs to be adjusted in time according to the brain functional state. fNIRS-based assessment of brain functional response assumes great importance for the customization of an appropriate protocol training in the clinical practice.
Introduction

The recovery of upper-limb motor function is still limited in stroke survivors, which significantly impacts their independence in daily living (Lawrence et al., 2001; Stoykov et al., 2009). Recently, robot-assisted therapy (RAT) for the upper limb has emerged as a popular rehabilitation intervention for stroke. Several studies have verified the clinical effectiveness of RAT based on clinical assessment (Sale et al., 2014) and biomechanical parameters, including kinematic and kinetic parameters (Mazzoleni et al., 2011, 2013). Besides, recent reviews have reported heterogeneous outcomes of RAT among stroke patients (Veerbeek et al., 2017; Mehrholz et al., 2018), mainly because the severity of hemiparesis was not considered an important feature when choosing an appropriate pattern of RAT for a patient. There is a lack of clinically effective assessment tools to help the therapist deliver appropriate therapeutic interventions according to the specific needs of each patient, especially for patients with moderate to severe hemiplegia. Plastic reorganization of the brain is essential for functional recovery after stroke (Cirillo et al., 2020). Regarding this issue, it is necessary to evaluate the specific functional response patterns associated with RAT in stroke patients with different degrees of motor impairment. Real-time characterization of the brain functional responses to specific interventions assumes great importance for the customization of an appropriate training protocol to reach substantial improvement in clinical practice. The real-time monitoring of cortical responses during motor intervention still remains challenging due to the low tolerance of motion artifacts of some imaging techniques, such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Functional near-infrared spectroscopy (fNIRS) is an emerging noninvasive method that monitors cortical haemodynamics with advantages including safety, portability, and motion artifact tolerance (Murata et al., 2002, 2006). In recent years, the significant progress made in fNIRS technology has attracted considerable attention in various research and clinical settings (Hong and Yaqub, 2019). Brain functional features collected based on fNIRS can be used as a biomarker of mild cognitive impairment (MCI) (Yang et al., 2020). fNIRS can also be used to evaluate the characteristics of the brain functional response induced by cognitive interventions such as acupuncture therapy in patients with MCI (Ghafoor et al., 2019).
In addition, several fNIRS studies have demonstrated the feasibility of fNIRS for brain-computer interfaces (BCIs), which can be widely used in the fields of neurorehabilitation, motor rehabilitation and entertainment (Naseer and Hong, 2015). The combination of fNIRS, EEG, and other technologies can improve the classification accuracy of BCI systems by decoding brain activities under multimodal neuroimaging modalities (Hong and Khan, 2017). Based on the principle of optical imaging, fNIRS can be utilized in combination with electromagnetic neuromodulation technologies (such as transcranial electrical stimulation) to monitor the cortical response and provide real-time feedback for these interventions (Yang et al., 2021). The application of fNIRS can not only provide insight into the mechanisms underlying neuromodulation in neurorehabilitation but also enable targeted neuromodulation based on closed-loop regulation to achieve personalized therapy for brain disorders (Hong et al., 2022). Additionally, a comprehensive review described fNIRS-related applications for stroke and suggested fNIRS as a promising technology for detecting brain functional responses during specific rehabilitation interventions (Saitou et al., 2000). Using fNIRS, we previously reported the involvement of the prefrontal, motor and occipital areas during limb-linkage training and unilateral/bilateral upper limb training in stroke patients. The prefrontal cortex (PFC) contributes to attention, planning, decision-making and the synthesis of diverse information needed for goal-directed behaviour (D'Esposito et al., 2000; Miller and Cohen, 2001). Motor-related areas are mainly involved in the coordination and execution of sensory and motor functions for complex movements (Moran and Desimone, 1985; Peterka and Loughlin, 2004). The occipital lobe (OL) is crucial for the conscious perception of body parts and can be modulated by visual stimuli, visually guided attention and motor action (Miller et al., 1993; Astafiev et al., 2004). Functional connectivity (FC) analysis is commonly used to explore the neural interactions within the brain functional network, providing insight into the understanding of cortical reorganization and behavioural deficits after stroke (Friston, 2011; Corbetta et al., 2018). It is speculated that these regions may be involved in a specific coordinated pattern in response to motor therapy for stroke rehabilitation. As such, the primary aim of this study was to evaluate the specific cortical network response patterns to RAT based on fNIRS in combination with FC analysis in subacute stroke patients with unilateral moderate-to-severe motor deficits, testing the hypothesis that different functional response patterns associated with RAT depend on the degree of impairment. Real-time assessment of brain functional response can provide a theoretical basis for choosing the appropriate protocol for a patient to support the clinical decision.

Materials and methods

Participants

The flow-chart through this study is shown in Figure 1. Thirty-two first-ever stroke patients with hemiplegia participated in this study. All subjects were right-handed according to the Chinese edition of the Handedness Inventory (Oldfield, 1971). Inclusion criteria: (1) unilateral lesions; (2) moderate to severe motor impairment of the hemiplegic upper limb; (3) ability to understand and follow experimental tasks; (4) age between 18 and 80 years.
Exclusion criteria: (1) clinically unstable medical disorders; (2) severe cognitive impairment. The baseline characteristics (age, sex, time post stroke, lesion location) and clinical assessments, including the National Institutes of Health Stroke Scale (NIHSS), Mini-Mental State Examination (MMSE) and Fugl-Meyer assessment for upper-extremity (FMA-UE), were assessed for each patient (Table 1).

FIGURE 1 Flow-chart through this study.

Of note, patients were classified into 2 groups (those with a low score reflecting severe impairment and those with a high score reflecting moderate impairment) by the median FMA-UE score (median: 18) to investigate the effects of motor impairment on the brain functional response to RAT. Experiments were conducted with the understanding and written consent of each patient or the family members. The experimental study (Trial Registration: ChiCTR2100048433) was approved by the Medical Ethics Committee of Qilu Hospital and carried out according to the ethical standards defined by the Helsinki Declaration in 1975 (revised in 2008).

RAT task and fNIRS data acquisition

During the experiment, patients were asked to undergo data acquisition in the sitting position in both the resting state (10 min) and task state (10 min). During the resting state, patients were asked to remain still and relax with their eyes closed but stay awake. The robotic system (Arm Motus™, Shanghai Fourier Intelligence Technology Co., Ltd., China), designed for clinical rehabilitation applications, was used for this study. During the task state, the hemiparetic forearm of each patient was positioned on the robot-assisted upper-limb training instrument (end-effector type) with an arm bracket secured to the forearm, and a handle was fixed to the affected hand with bandages. The robotic system provides goal-directed, planar reaching movements of the shoulder and elbow through an "assisted as needed" control strategy at a constant speed (5.0 cm/s) and range of motion (medium: Y-axis = 20 cm, X-axis = 30 cm) around a centre target. The movement trajectory and space are shown in Figure 1. During the task state, the patients were requested to avoid any movements other than those needed for the motor tasks. A professional therapist was involved in the whole experiment to ensure the safety of the participants. During each session, cerebral haemodynamics were continuously monitored using a continuous-wave fNIRS device (Nirsmart, Danyang Huichuang Medical Equipment Co., Ltd., China) with 23 sources and 14 detectors at a sampling rate of 10 Hz. The differential path-length factors (DPFs) were set to 6. A total of 40 channels were positioned over the left and right PFC (symmetric with FpZ as a reference), motor cortex (corresponding to areas C3 and C4) and OL (symmetric with OZ as a reference) according to the international 10-10 system of electrode placement (Figure 2A). The interoptode distance was 30 mm. As shown in Figure 2B, the regions of interest (ROIs), including the bilateral PFC, primary motor cortex (M1), primary somatosensory cortex (PSC), premotor and supplementary motor area (PSMA), and OL, were defined based on the fNIRS channel locations recorded by 3D digitization. For patients with lesions on the right, the lesion side was uniformly set to the left hemisphere by flipping the fNIRS channels from right to left about the midsagittal line.
In this study, the ROIs in the ipsilesional and contralesional hemispheres were denoted i-PFC, i-PSC, i-PSMA, i-M1, i-OL, and c-PFC, c-PSC, c-PSMA, c-M1, c-OL, respectively.

fNIRS data preprocessing

For the fNIRS data, the absorbance signals recorded by fNIRS were first bandpass filtered at 0.0095-2 Hz (zero-phase, fifth-order Butterworth filter) to reduce uncorrelated noise components and low-frequency baseline drift. Then, fluctuations in the concentration of oxygenated haemoglobin (ΔHbO2) were calculated from the filtered light density according to the modified Beer-Lambert law (Cope and Delpy, 1988). The first 1 min of ΔHbO2 data was excluded to reach a steady state, leaving 5,400 time points for each patient. We then applied principal and independent component analysis to reduce physiological interference in the fNIRS measurements and extract the functional response of the brain (Santosa et al., 2013). The components of interest were visually identified according to the criterion that the relevant time course has a pronounced low-frequency spectrum (0.01-0.08 Hz) characteristic of functional haemodynamic responses (Cordes et al., 2001). This study focuses on the ΔHbO2 signal for subsequent analysis, mainly because ΔHbO2 data have a better signal-to-noise ratio and a stronger correlation with the blood-oxygenation-level-dependent signal measured by fMRI (Anwar et al., 2013; Visani et al., 2015). Data preprocessing, including motion artifact removal, was described in our previous study.

FC analysis based on wavelet phase coherence

Wavelet transforms have the ability to decouple signal components and provide localized phase information. With the complex Morlet wavelet, the wavelet coefficients are complex numbers and can define the instantaneous relative phase information for each frequency and time. Wavelet transforms can be used to examine the relationship among oscillations (Li et al., 2014). FC can be calculated based on the wavelet phase coherence (WPCO) index to describe the statistical interdependencies between two haemoglobin oscillatory components by examining how phase differences align within a specific frequency range (Bandrivskyy et al., 2004; Bernjak et al., 2012). The amplitude-adjusted Fourier transform (AAFT) surrogate test was used to confirm whether the detected coherence parameters were genuine or spurious (Stankovski et al., 2017). Tan et al. described the calculation procedure for the WPCO and AAFT tests in detail (Tan et al., 2015). In this study, oscillators of ΔHbO2 signals in the 0.01-0.08 Hz band were identified using the wavelet transform. Based on the AAFT test, significant channel-wise FC was obtained for each channel pair among the fNIRS oscillations for each condition.

FIGURE 2 Multichannel fNIRS configurations in the international 10-10 system (A) and corresponding brain regions of interest (B).

Interregional and intraregional FC analysis

Based on the significant channel-wise FC matrix for each condition, we calculated the interregional and intraregional FC of the ROIs to analyse task-related changes in the large-scale network. The interregional FC among ROIs was calculated by averaging the WPCO values across all involved channel-wise connection edges based on the fNIRS channel distribution, generating a 10 × 10 region-wise FC matrix. The intraregional FC was calculated by averaging the WPCO values of the involved channel-wise connection edges within each of the ROIs, generating 10 intraregional FC values.
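To make the WPCO-based FC computation concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the authors' pipeline: the complex Morlet parameterization ('cmor1.5-1.0' in PyWavelets), the 20-point scale grid, and the number of surrogates are choices not specified in the paper, and the input signals are random placeholders.

```python
# Sketch: functional connectivity between two fNIRS channels via wavelet phase
# coherence (WPCO) in 0.01-0.08 Hz, with an AAFT surrogate significance test.
# Wavelet name, scale grid, and surrogate count are illustrative assumptions.
import numpy as np
import pywt

FS = 10.0  # fNIRS sampling rate in Hz, as reported above

def wpco(x, y, fmin=0.01, fmax=0.08, wavelet="cmor1.5-1.0"):
    """WPCO = |time average of exp(i * phase difference)|, averaged over scales."""
    freqs = np.linspace(fmin, fmax, 20)
    scales = pywt.central_frequency(wavelet) * FS / freqs
    cx, _ = pywt.cwt(x, scales, wavelet, sampling_period=1.0 / FS)
    cy, _ = pywt.cwt(y, scales, wavelet, sampling_period=1.0 / FS)
    dphi = np.angle(cx) - np.angle(cy)
    return np.abs(np.exp(1j * dphi).mean(axis=1)).mean()

def aaft_surrogate(x, rng):
    """Amplitude-adjusted Fourier transform surrogate of x."""
    ranks = np.argsort(np.argsort(x))
    gauss = np.sort(rng.standard_normal(x.size))[ranks]   # rank-matched noise
    spec = np.fft.rfft(gauss)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    shuffled = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)
    return np.sort(x)[np.argsort(np.argsort(shuffled))]   # restore amplitudes

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5400), rng.standard_normal(5400)  # placeholder data
observed = wpco(x, y)
null = np.array([wpco(aaft_surrogate(x, rng), y) for _ in range(19)])
print(f"WPCO = {observed:.3f}, "
      f"{'significant' if observed > np.percentile(null, 95) else 'n.s.'}")
```

Repeating this for every channel pair and keeping only coherences exceeding the surrogate threshold yields the significant channel-wise FC matrix; the region-wise matrices then follow by averaging WPCO values over the edges belonging to each ROI pair.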
Correlation analysis of task-related FC changes and clinical variables

To identify the relationship between the brain functional response related to RAT and the clinical functional status of the upper limb, partial correlations were employed to assess the correlation between task-induced changes in FC (ΔFC = FC_task − FC_rest) and the FMA-UE score, with age and time poststroke as nuisance regressors, for each group of stroke patients.

Brain lateralization analysis based on the hemisphere autonomy index

The connection-based hemispheric autonomy index (HAI) was calculated for each significant FC matrix to further describe the functional network architecture of specific states for stroke patients. Based on the significant channel-wise FC matrix, the HAI was calculated according to the definition

HAI_m = Ni_m / Ti_m − Nc_m / Tc_m,

where m represents any fNIRS channel (m = 1, 2, ..., 40); Ni_m and Nc_m are the numbers of channels connected to channel m within the (ipsilateral) hemisphere and between hemispheres (contralateral), respectively; and Ti_m and Tc_m represent the total numbers of channels connected with channel m in the ipsilateral and contralateral hemispheres, respectively. The HAI is calculated for each channel as an index describing brain lateralization based on the difference between the intrahemispheric and interhemispheric connectivity of that channel. This approach yielded HAI values that ranged between −1 and 1. A higher HAI value indicated more intrahemispheric connectivities than interhemispheric connectivities.

Statistical analysis

In this study, we used G*Power (v3.1.9.2; Franz Faul, University of Kiel, Kiel, Germany) to calculate the sample size based on a previous fNIRS study that investigated the functional network patterns of stroke patients related to rehabilitation training (Lu et al., 2019). We set the effect size to 0.52, the α-error to 0.05 and β to 0.20 (power level of 0.80). According to this analysis, at least 32 patients were needed to make an adequate group size, thus a sample size of 16 per group. The Kolmogorov-Smirnov test was used to determine whether values for the assessments were normally distributed. Demographic data, including sex and type of stroke, were compared between groups using a chi-square test. Age, duration of stroke, and functional assessments (MMSE, NIHSS, BI, and FMA-UE) were compared using one-way ANOVA. Based on this group classification, significant within-group and between-group differences in connection-related indices (channel-wise FC, interregional FC, and intraregional FC) were evaluated using repeated-measures ANOVA and post hoc t-tests with false discovery rate (FDR) correction for multiple comparisons. The association between upper-limb functional status and cortical response was examined by correlating FMA-UE with FC changes related to RAT, with age and time poststroke as nuisance regressors. The Mann-Whitney U test was used to analyse the within-group and between-group differences in the HAI obtained for each condition. Statistical significance was set at p < 0.05.

Results

Demographic information

All 32 participants completed the study. Patients were classified as having moderate (high score group, n = 16, FMA-UE: 33.31 ± 9.04) or severe (low score group, n = 16, FMA-UE: 12.56 ± 4.16) upper-limb motor impairment according to the median of the FMA-UE. No significant between-group differences were noted in the characteristics of the patients, including age, sex, time poststroke, stroke type and MMSE (p > 0.05, see Table 2).
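As a concrete reading of the HAI definition above, the sketch below computes a per-channel index from a binary matrix of significant connections. Two loud caveats: the exact published expression was lost from the extracted text, so HAI_m = Ni_m/Ti_m − Nc_m/Tc_m is a reconstruction consistent with the stated definitions and the −1 to 1 range, and the channel-to-hemisphere assignment is a hypothetical placeholder.

```python
# Sketch: per-channel hemisphere autonomy index (HAI) from a 40 x 40 boolean
# matrix of significant FC. The formula Ni/Ti - Nc/Tc is a reconstruction (the
# published expression was lost in extraction); the hemisphere split below is
# a hypothetical placeholder, not the study's actual channel layout.
import numpy as np

N_CH = 40
hemi = np.arange(N_CH) < 20          # assume channels 0-19 lie on one side

def hai(sig_fc: np.ndarray) -> np.ndarray:
    out = np.zeros(N_CH)
    for m in range(N_CH):
        ipsi = hemi == hemi[m]
        ipsi[m] = False              # a channel is not connected to itself
        contra = hemi != hemi[m]
        ni = sig_fc[m, ipsi].sum()   # significant intrahemispheric links
        nc = sig_fc[m, contra].sum() # significant interhemispheric links
        out[m] = ni / ipsi.sum() - nc / contra.sum()
    return out

rng = np.random.default_rng(1)
fc = np.triu(rng.random((N_CH, N_CH)) > 0.7, k=1)
fc = fc | fc.T                       # symmetric matrix, empty diagonal
print(np.round(hai(fc)[:5], 2))      # HAI for the first five channels
```

With this normalization, a value near 1 means nearly all of a channel's significant links are intrahemispheric, and a value near −1 means they are almost all interhemispheric.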
One-way ANOVA showed that the NIHSS score of patients in the low score group was significantly higher than that of patients in the high score group (p = 0.006).

Effects of motor impairment on RAT-related changes in FC

For the within-group statistical results of channel-wise FC, the low score group showed significantly decreased WPCO values in the RAT state compared to the resting state (Figure 3A), mainly distributed between the prefrontal and motor areas, between the prefrontal and occipital brain areas, and between the bilateral motor-related brain areas. For the high score group (Figure 3B), only one channel-wise WPCO value (between Ch. 33 and Ch. 16) was significantly decreased in the task state compared with the resting state, and this difference was significant at a strict FDR-corrected threshold. For the large-scale interregional and intraregional FC, task-state FC correlation strengths were consistently lower than resting-state FC strengths among cortical regions in both groups of patients. The results showed that in the low score group, significantly decreased interregional FC values were observed in the network during the task state compared with the resting state (Figure 3C). For the high score group (Figure 3D), the interregional FC of the connectivities between the i-M1 and the i-PFC (t = 3.697; p = 0.002), c-PSC (t = 3.583; p = 0.003), and c-M1 (t = 2.980; p = 0.009) was significantly decreased in the task state compared with that in the resting state. There were no significant differences between groups after FDR correction.

FIGURE 3 Changes in channel-wise FC (A, B) and region-wise FC (C, D) in response to RAT for the low score and high score groups. In the first row, the upper triangle represents the t-value of channel-wise FC between the two states, while the blue dot on the bottom triangle represents the statistically significant difference of channel-wise FC after FDR correction between the two states in each stroke group. The second row displays the t-values of region-wise FC between the two states. The * represents the statistically significant difference of region-wise FC after FDR correction between the two states in each stroke group. The size of a node indicates how many significant edges are connected to this region. Different node colours represent different brain regions. *FDR-corrected p < 0.05.

As shown in Figure 4A, a significant decrease in intraregional FC values was observed in the low score group in ROIs in the i-PFC (t = 5.444, p < 0.001), i-PSMA (t = 3.739, p = 0.002), i-PSC (t = 2.671, p = 0.017), i-M1 (t = 2.671, p = 0.017), c-PSMA (t = 3.836, p = 0.002) and c-OL (t = 3.018, p = 0.009). For the high score group, the intraregional FC of the i-M1 (t = 3.823, p = 0.002) was significantly decreased in the task state compared with the resting state. The results of the correlation analysis show a significant negative correlation between FMA-UE scores and task-evoked decreases in the intraregional FC of the i-PSC (r = −0.564, p = 0.045) in the low score group and the i-M1 (r = −0.612, p = 0.026) in the high score group, as shown in Figure 4B.

RAT-related changes in HAI values in patients with different degrees of motor impairment

Figure 5 shows the connection-based HAI values in the resting state and RAT state for the two groups. The results show that compared with the resting state, the HAI values of channel 15 (Z = −2.497, p = 0.012) in the low score group and channel 20 (Z = −2.497, p = 0.005) in the high score group were significantly increased in the RAT state. There were no significant differences between groups after correction.
In addition, a significant difference between hemispheres, between channel 5 and channel 10 (Z = −2.844, p = 0.007), was observed in the resting state of the high score group.

FIGURE 4 Alterations in intraregional FC in response to RAT in the two groups (A) and the relationship between the delta FC and FMA-UE score (B). *FDR-corrected p < 0.05.

FIGURE 5 Connection-based HAI values in the resting state and RAT state in the high score group (A) and low score group (B). *Denotes that the within-group difference is statistically significant; # denotes that the difference between hemispheres is statistically significant.

Discussion

The goal of this study was to investigate the differences in RAT-related brain functional responses in stroke patients with different degrees of upper-limb motor impairment. Specifically, we analysed the task-related changes in interregional and intraregional FC and the brain lateralization index of the functional network based on fNIRS. The main findings were that patients with severe impairment showed a wide range of significant FC responses induced by RAT with the same assistive mode as the moderate impairment group, involving the interregional and intraregional FC among bilateral prefrontal, motor and occipital areas. The significant task-related intraregional FC response of the i-PSC was significantly correlated with FMA-UE in the low score group. Additionally, the HAI value of channel 15, located in the i-PSMA area, was significantly increased in the RAT state compared with the resting state. RAT-related extensive cortical response in patients with severe dysfunction might contribute to brain functional organization during motor performance, which is considered the basic neural substrate of motor-related processes. In contrast, the limited cortical response related to RAT in patients with moderate dysfunction might imply that the RAT task with assisted mode failed to induce a wide range of brain functional responses and that the training intensity needs to be adjusted in time according to the brain functional state for patients with moderate motor impairment. All the above evidence indicates that different functional response patterns associated with RAT depend on the degree of impairment. Real-time characterization of the brain functional responses to specific training tasks is important for the assessment of the functional status of stroke patients and provides guidance for the customization of an effective rehabilitation training protocol. Functional recovery after stroke is widely considered to be a consequence of central nervous system reorganization (Ward, 2004). In this study, we found that RAT substantially affected the functional networks of stroke patients by decreasing intrinsic network FC. This was evidenced in both the interregional and the intraregional FC. Task-related changes in FC play an important role in dynamically reshaping brain network organization and strongly contribute to brain activations during task performance (Cole et al., 2021). Compared with the resting state, there were significant changes in the interregional and intraregional FC values among the bilateral prefrontal, motor-related and occipital areas in the task state in the low score group.
Performing complex motor tasks assisted by the robot system may require a higher level of attention and sensorimotor processing to integrate visual, proprioceptive, and somatosensory feedback information associated with motor output (Betti et al., 2013; Spadone et al., 2015; Kim et al., 2018). However, only the connectivities between the i-M1 and the i-PFC, between the i-M1 and the c-PSC, and between the i-M1 and the c-M1 showed significant task-related changes in the patients in the high score group. This result suggests that patients in the low score group need to recruit a wider range of brain regions to complete the same motor task than patients in the high score group. Task-related network reconfigurations might facilitate the propagation of task-related activations, which are commonly considered the primary neural substrate of motor execution processes (Cole et al., 2021). For patients with severe motor impairment, the significant alterations in the interregional and intraregional FC involving the bilateral hemispheres might be responsible for processing and integrating the central and peripheral information related to the task demands (Siegel et al., 2016). It has been confirmed that movement of the affected hand is related to increased neural activity not only in the ipsilesional but also in the contralesional hemisphere (Chollet et al., 1991). Additionally, it has been suggested that when the task becomes more demanding, motor performance depends more on bilateral motor areas (Verstynen and Ivry, 2011). Motor recovery has been demonstrated to be accompanied by increased regional cerebral blood flow in the bihemispheric sensorimotor cortex (Chollet et al., 1991). All this evidence suggests that the contralesional motor areas might play a supportive role during motor rehabilitation for patients with severe motor impairment. In addition, correlation analysis showed that the task-induced changes in the functional network of the ipsilesional sensory area were significantly correlated with the upper-limb motor function status of patients with severe motor impairment. Brain lateralization analysis showed that the HAI values of fNIRS channels (ch13-ch17) covering the ipsilesional premotor and supplementary motor area (SMA) were increased in the RAT state compared with the resting state, with a significant increase at channel 15. More specifically, channel 15 is located over the ipsilesional SMA. Hemisphere lateralization is a property of the human brain that facilitates efficient and rapid information processing. The HAI can reflect cortical functional lateralization based on the imbalance of intrahemispheric and interhemispheric connectivity (Wang et al., 2014). This result might indicate increased involvement of the ipsilesional SMA in the brain functional network during RAT of the affected upper limb in patients with severe motor impairment. It has been suggested that motor improvement in stroke is associated with cortical functional and structural reorganization involving the lesion and its surrounding tissue (Buch et al., 2016). A previous study showed that functional improvement from constraint-induced movement therapy of the affected upper limb after stroke is associated with an increased motor map area in the ipsilesional hemisphere (Sawaki et al., 2008).
Taken together, all this evidence indicates that in patients with severe motor dysfunction, RAT in assisted mode significantly induced involvement of the contralesional hemisphere and of the sensorimotor and supplementary motor areas on the ipsilesional side. This finding is in accord with previous research showing that high-intensity upper limb training in the early stage of rehabilitation can increase the activation of the motor areas in the ipsilesional hemisphere and enhance neuroplasticity (Zhang et al., 2017). The upper limb rehabilitation robot can control the expansion of the contralateral (the opposite side of the hemiplegic limb) cortical motor area and the recruitment of the ipsilateral (the same side as the hemiplegic limb) cortex through task-directed training, and promote functional reorganization of the cortex to promote functional recovery of the upper limb (Singh et al., 2021). Patients in the high score group showed limited task-related significant changes in the functional network, mainly related to the M1 region on the affected side. There was a significant correlation between the FMA-UE and the intraregional FC of the i-M1 in the high score group. In addition, the HAI value of channel 20, located over the ipsilesional M1, was significantly increased in the RAT task state compared with the resting state in the high score group. In conclusion, the brain functional responses induced by the RAT task mainly focused on the ipsilesional M1 area for patients with moderate impairment. The above points and our results might imply that, compared with the outcome for the low score group, the RAT task with assisted mode failed to induce a wide range of brain functional responses for patients with moderate motor impairment. These results might indicate that the mode of motor rehabilitation training needs to be adjusted in real time according to the functional status of patients to ensure that an adequate brain functional reorganization response can be induced.

Limitations

Several limitations should be acknowledged in this study. First, stroke patients with moderate to severe upper extremity impairment in the subacute stage were recruited in the current experiment. The lack of a subgroup of stroke patients with mild motor impairment and of a control group based on healthy subjects or on RAT with the unaffected upper limb of stroke patients is a potential limitation of this study. Future experimental designs should recruit more stroke patients with different degrees of motor impairment (including mild, moderate and severe dysfunction) and healthy controls to fully describe the characteristics of the brain functional response related to specific rehabilitation tasks. Second, due to the difference in cortical activation patterns related to active and passive upper limb movements in stroke patients (Xia et al., 2022), the influence of active participation on the brain functional response during motor training should be considered. In this study, the "assisted as needed" pattern of RAT was set uniformly for patients in the subacute stage with severe-to-moderate upper limb motor impairment. Under this training protocol, the robotic device might provide more assistance to patients with severe motor impairment than to patients with moderate motor impairment.
However, due to the lack of kinematic variables in this study, it is difficult to estimate the influence of the degree of active participation of stroke patients with different degrees of dysfunction on the cortical response during assisted training. Thus, further studies are warranted to clarify the influence of active participation on the cortical response induced by RAT by collecting kinematic data simultaneously.

Conclusion

In conclusion, this study showed RAT-related decreases in intrinsic network FC in the brain functional networks of stroke patients, with evidence in both the interregional and the intraregional FC. Different functional responses related to RAT were observed in patients with different degrees of dysfunction. Patients with severe motor impairment showed a significant task-related FC response involving extensive areas in the bilateral hemispheres, especially the PSC and SMA on the affected side. The brain functional responses induced by the RAT task mainly focused on the ipsilesional M1 area for patients with moderate impairment. The limited cortical response related to RAT in patients with moderate dysfunction might imply that the RAT task with assisted mode failed to induce a wide range of brain functional responses and that the training intensity needs to be adjusted in time according to the brain functional state for patients with moderate motor impairment. Taken together, fNIRS-based real-time assessment of the effects of RAT on the brain functional network provides new insights into the mechanisms of neuroplasticity associated with treatment and provides theoretical guidance for stroke rehabilitation intervention protocols.

Data availability statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Ethics statement

The studies involving human participants were reviewed and approved by the Medical Ethics Committee of Qilu Hospital. The patients/participants provided their written informed consent to participate in this study.

Author contributions

CH, ZS, XL, HX, and YS: data collection and investigation. CH, HX, and GX: data analysis. CH: manuscript draft. ZS: manuscript revision. YW and ZL: supervision. All authors contributed to the article and approved the submitted version.
Mechanism of Exosomes Involved in Osteoimmunity Promoting Osseointegration Around Titanium Implants With Small-Scale Topography

Exosomes are nanoscale extracellular vesicles. Several studies have shown that exosomes participate in intercellular communication and play a key role in osseointegration. However, it is unclear whether exosomes and their contents participate in the communication between the immune and skeletal systems in the process of osseointegration. In this study, we obtained smooth titanium disks by polishing and small-scale topography titanium disks by sandblasted large-grit acid-etching (SLA) technology combined with an alkali thermal reaction. After stimulating mouse RAW264.7 cells with these two kinds of titanium disks, we co-cultured MC3T3-E1 cells and the RAW264.7 cells, obtained and identified the exosomes derived from the RAW264.7 cells, and studied the effect of the osteoimmune microenvironment and the exosomes on the osseointegration of mouse MC3T3-E1 cells. Cell Counting Kit-8 (CCK-8) assays, real-time quantitative PCR, western blotting, alizarin red staining, and quantitative and confocal fluorescence microscopy were used to study the effects of exosomes on MC3T3-E1 cells; RNA sequencing and correlation analysis were also performed. We found that the osteoimmune microenvironment could promote the osseointegration of MC3T3-E1 cells. We successfully isolated exosomes and found that RAW264.7 cell-derived exosomes can promote osteogenic differentiation and mineralization of MC3T3-E1 cells. Through RNA sequencing and gene analysis, we found differentially expressed microRNAs targeting potentially related signaling pathways, such as mTOR, AMPK, and Wnt, thus providing a reference for the mechanism by which osteoimmunity regulates implant osseointegration. The study further elucidates the mechanism of implant osseointegration, provides new insights into the effect of exosomes on implant osseointegration, and offers a reference for clinically improving implant osseointegration and the implant success rate.

INTRODUCTION

In recent years, implant dentures have gradually become an important treatment option for missing teeth. Titanium and titanium alloys have good biocompatibility and mechanical properties and are among the most widely used implant materials in clinics (Xue et al., 2020). However, because titanium is an inert metal with no biological activity, it can easily cause a host inflammatory reaction that can even progress to chronic inflammation, delaying implant osseointegration (Dar et al., 2018). Therefore, many techniques have been applied for titanium surface modification (Dohan Ehrenfest et al., 2010), such as sandblasted large-grit acid etching (SLA) and anodization (Loi et al., 2016; Miron and Bosshardt, 2016). In our previous study, we combined SLA technology with an alkali thermal reaction to construct titanium implants with small-scale topography. It was found that small-scale topography promoted MC3T3-E1 cell proliferation and osteogenic differentiation better than the polished smooth surface, the micro-surface obtained by SLA technology and the nano-surface obtained by the alkali thermal reaction. The process of osseointegration of implants involves the coordinated operation of the immune and skeletal systems, namely osteoimmunity (Dar et al., 2018). Macrophages, which can differentiate from resident cells or myeloid precursor cells (mainly monocytes), reside in the bone.
The interaction between macrophages and osteocytes is crucial for bone formation and repair (Pieters et al., 2019). Some studies have found that different implant surface morphologies can induce macrophages to polarize to the pro-inflammatory M1 phenotype or the anti-inflammatory M2 phenotype (Luu et al., 2015; Quan et al., 2020). In our previous study, we found that small-scale topography can stimulate RAW264.7 cells to polarize to the anti-inflammatory M2 phenotype and shift the osteoimmune microenvironment toward an anti-inflammatory state, which is more conducive to implant osseointegration. Exosomes are nano-sized vesicles that are secreted by most cells. They were first found in reticulocytes in 1983 and named exosomes in 1987 (Pan and Johnstone, 1983). The diameter of exosomes is in the range of 30-150 nm, with a lipid bilayer structure (Mathieu et al., 2019); exosomes can be directly absorbed by target cells and affect the phenotype of recipient cells (Rani et al., 2015; Yang et al., 2017). Therefore, they play an important role in cell communication and have attracted increasing attention. Wei et al. (2019) found that the expression of alkaline phosphatase (ALP) and BMP-2, markers of early osteoblast differentiation, was significantly increased by using BMP-2/macrophage-derived exosomes to modify titanium nanotube implants, which confirmed that the combination of titanium nanotubes and BMP-2/macrophage-derived exosomes could promote bone formation. Xiong et al. (2020) found that M2 macrophage-derived exosomal microRNA-5106 could induce the osteogenic differentiation of bone marrow mesenchymal stem cells. Exosomes act on target cells mainly through intercellular communication and the delivery of key bioactive factors. However, it is not clear whether exosomes participate in osteoimmunity and influence osseointegration around titanium implants with small-scale topography. Furthermore, the gene content and function of macrophage-derived exosomes have not been fully clarified. Therefore, we studied the effect of exosomes derived from macrophages stimulated by the small-scale topography of titanium implants on MC3T3-E1 cells and screened key microRNAs in these exosomes, to further explore the mechanism by which the exosomal contents of macrophages stimulated by small-scale topography titanium disks act on MC3T3-E1 cells, and to provide a reference for exploring the effect of osteoimmunity on osseointegration.

MATERIALS AND METHODS

Preparation and Characterization of Titanium Disk

Ti6Al4V disks with a diameter of 19.5 mm and a thickness of 1 mm (Taizhou Yutai Metal Materials Co., Ltd., Jiangsu, China) were used and polished to an average roughness of 0.2 mm and a thickness of 0.01 mm. The polishing process of the disks is listed in Supplementary File 1. The 60 mesh alumina particles (Gongyi Baolai Water Treatment Material Factory, Henan, China) were sprayed onto the polished titanium disk surface at a spray angle of 90° with a spray distance of no more than 5 cm. When the surface of the titanium disk was uniformly gray, it was removed and immersed in 0.5% hydrofluoric acid solution for 15 min at 25 °C. For the alkali thermal treatment, the titanium disk was immersed in a 10 mol/L sodium hydroxide solution and treated at 80 °C for 24 h. All the titanium disks were placed in 5% concentrated cleaning solution (Micro-90, International Products Company, New York, United States), anhydrous ethanol, and distilled water with ultrasonic vibration cleaning for 5 min each, and air dried at room temperature until use.
For the surface characterization of the titanium disks, the surface morphology was observed using a cold field emission scanning electron microscope (Carl Zeiss, Germany) and a 3D laser scanning microscope (VK-X200K, Japan). The surface contact angle was measured by the sessile drop method with 2 µL of artificial saliva, and the average surface roughness (Ra) of the titanium disk was measured using a Wyko NT9300 optical profiler (Veeco, United States). The surface composition of the titanium disks was analyzed with an X-ray photoelectron spectrometer (Thermo Fisher Scientific, United States).

Cell Culture
RAW264.7 cells, a widely used mouse-derived macrophage cell line, were provided by the Shandong Key Laboratory of Oral Tissue. Mouse embryonic osteoblast precursor cells (MC3T3-E1 cells) were purchased from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). The complete medium was α-minimum essential medium (α-MEM; Hyclone, United States) supplemented with 10% fetal bovine serum (FBS; Hyclone, United States). The culture medium also contained antibiotics (100 IU/mL penicillin G and 100 µg/mL streptomycin; Solebo, China). The cells were cultured at 37 °C in a 5% CO2 incubator, and the medium was changed on alternate days. Osteogenic induction medium (OM) consisted of complete medium supplemented with 50 mg/L ascorbic acid (Sigma Aldrich), 10 mmol/L β-glycerophosphate (Sigma Aldrich), and 10 nmol/L dexamethasone (Sigma Aldrich). The supernatant of RAW264.7 cell culture medium was centrifuged at 1000 × g for 5 min, and the conditioned medium (CM) was then prepared by combining it with osteogenic induction medium containing 20% FBS.

Exosome Isolation
The supernatant of RAW264.7 cell culture medium was centrifuged at 1000 × g for 5 min, and 20 mL from each group was then added to a 50 mL centrifuge tube. PBS was added to bring the volume in each tube to 40 mL, and the tubes were centrifuged at 300 × g at 4 °C for 10 min. The supernatant was transferred to a new centrifuge tube (40 mL) compatible with the ultracentrifuge and centrifuged at 10,000 × g at 4 °C for 30 min. The supernatant was then transferred to a fresh ultracentrifuge tube (40 mL) and centrifuged at 100,000 × g at 4 °C for 70 min. The pellet from the final spin was collected as the exosome fraction, and the exosomes were identified by transmission electron microscopy, nanoparticle tracking analysis, and western blotting.
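To make the isolation workflow above easier to follow, the differential-centrifugation steps can be written out as a small table-like structure; this is only an illustrative sketch in Python (the g-forces, times, and temperatures are those stated above, everything else is hypothetical):

```python
# Differential centrifugation steps for exosome isolation, as described above.
# Each entry: (relative centrifugal force in xg, minutes, temperature in C,
# fraction carried forward to the next step).
EXOSOME_ISOLATION_STEPS = [
    (300,     10, 4, "supernatant"),  # clear residual cells
    (10_000,  30, 4, "supernatant"),  # remove debris and larger vesicles
    (100_000, 70, 4, "pellet"),       # pellet the exosome fraction
]

for n, (g, minutes, temp, keep) in enumerate(EXOSOME_ISOLATION_STEPS, start=1):
    print(f"Step {n}: {g:,} x g, {minutes} min, {temp} C -> keep the {keep}")
```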
Nanoparticle Tracking Analysis (NTA)
The sample pool was cleaned with deionized water, the instrument was calibrated with polystyrene microspheres (110 nm), and the sample pool was then rinsed with 1× PBS buffer. The exosome solution was diluted with 1× PBS buffer and injected for detection. Each sample was measured three times, and the data were processed and plotted using Origin software.

CCK-8 Detection
Cell proliferation was examined using the Cell Counting Kit-8 (CCK-8; Dojindo, Tokyo, Japan). MC3T3-E1 cells were treated with osteogenic induction medium or CM. On days 1, 3, 5, and 7, the original medium was replaced with CCK-8 reagent and complete medium at a ratio of 1:10. The MC3T3-E1 cells were incubated at 37 °C for 1 h, and the absorbance was measured at 450 nm.

Alizarin Red S Staining and Cetylpyridinium Chloride Determination
MC3T3-E1 cells were fixed with 4% paraformaldehyde for 30 min and then incubated with alizarin red S (Sigma Aldrich) for 10 min. After washing with deionized water, calcium deposition was observed using an optical microscope. Cetylpyridinium chloride (10%; Sigma Aldrich) was used for quantification, and the absorbance was determined at 562 nm.

Real Time Fluorescent Quantitative PCR (qRT-PCR)
Total RNA was extracted using TRIzol reagent (Invitrogen, NY, United States). cDNA was synthesized using the PrimeScript RT Master Mix Kit (Takara Biotechnology, China). GAPDH was selected as the internal reference gene, and relative gene expression was calculated by the 2^-ΔΔCt method. The primer sequences are listed in Table 1.
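The 2^-ΔΔCt calculation used above is a few lines of arithmetic; a minimal sketch (the Ct values below are hypothetical, with GAPDH as the reference gene as in the text):

```python
def relative_expression(ct_target_treated, ct_gapdh_treated,
                        ct_target_control, ct_gapdh_control):
    """Fold change by the 2^-ddCt (Livak) method."""
    d_ct_treated = ct_target_treated - ct_gapdh_treated
    d_ct_control = ct_target_control - ct_gapdh_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for an osteogenic marker in treated vs. control cells:
fold = relative_expression(24.1, 17.9, 26.3, 18.0)
print(f"fold change = {fold:.2f}")  # > 1 indicates up-regulation vs. control
```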
Western Blot
Cells were lysed with RIPA buffer (Wanleibio, China) containing 10 mM protease inhibitor (PMSF; Wanleibio, China). The lysate was centrifuged at 12,000 rpm and 4 °C for 10 min. The supernatant was separated, and the protein concentration was determined using a BCA protein concentration assay kit (Wanleibio, China). Equivalent amounts of protein (40 µg) were loaded onto 8-15% SDS-PAGE gels and then transferred to a PVDF membrane (Millipore, Billerica, United States). The membrane was blocked in 5% skim milk solution for 1 h and then incubated with the following primary antibodies at 4 °C overnight: runt-related transcription factor 2 (Runx2, wl03358; Wanleibio, China), Collagen I (wl0088; Wanleibio, China), β-actin (wl01845; Wanleibio, China), CD9 (ab92726; Abcam, United Kingdom), CD63 (ab216130; Abcam, United Kingdom), and TSG101 (ab125011; Abcam, United Kingdom). After washing with TBST four times, the membrane was incubated with sheep anti-rabbit IgG-HRP (wla023; Wanleibio, China) at room temperature for 45 min. An ECL detection kit (Wanleibio, China) was used to visualize the protein bands. Proteins on the same membrane were compared quantitatively by determining the optical density of the target band using a gel image processing system (Gel-Pro-Analyzer software; Media Cybernetics, United States).

Exosome RNA Sequencing and Gene Analysis
According to the standard procedure provided by Illumina (San Diego, United States), the miRNA sequencing library was prepared using the TruSeq Small RNA Sample Prep Kit (Illumina, San Diego, United States). After library preparation, the constructed library was sequenced on an Illumina HiSeq2000/2500 with a read length of 1 × 50 bp. Clean reads were obtained from the raw data after quality control. The 3′ adapter was removed from the clean reads, and the trimmed reads were screened by length; sequences with a base length of 18-25 nt (plant) or 18-26 nt (animal) were retained. The remaining sequences were then aligned against various RNA database sequences (excluding miRNA), such as an mRNA database, the Rfam database (including rRNA, tRNA, snRNA, snoRNA, etc.), and the RepBase database (a repetitive-sequence database), and filtered; the reads that remained were retained as valid data. The microRNA data analysis software ACGT101-miR (LC Sciences, Houston, Texas, United States) was used to analyze the differentially expressed miRNAs.

Bioinformatics Analysis
For the target genes of the differentially expressed miRNAs, the number of genes annotated to each Gene Ontology (GO) function or Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway was counted. A hypergeometric test was then applied, comparing these counts against the background of all genes with functional annotation (or all miRNA target genes with functional annotation) in the databases, and significantly enriched functions or pathways were selected at a threshold of p ≤ 0.05. The p-value was calculated with the standard hypergeometric formula:

P = 1 - \sum_{i=0}^{m-1} \binom{M}{i} \binom{N-M}{n-i} / \binom{N}{n}

where N is the total number of genes with functional annotation, n is the number of miRNA target genes among them, M is the number of genes annotated to a given function or pathway, and m is the number of target genes annotated to that function or pathway.
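This hypergeometric enrichment test is available directly in SciPy; a minimal sketch with made-up gene counts (the variable names mirror the formula above):

```python
from scipy.stats import hypergeom

# Hypothetical counts: N annotated genes in total, M of them annotated to one
# KEGG pathway, n predicted miRNA target genes, m targets in that pathway.
N, M, n, m = 20_000, 150, 800, 18

# P(X >= m) under the hypergeometric null -- the same quantity as the formula
# above, computed as a survival function.
p_value = hypergeom.sf(m - 1, N, M, n)
print(f"enrichment p = {p_value:.3g}")  # enriched if p <= 0.05
```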
Statistical Analysis
Quantitative results are expressed as mean ± standard deviation (SD). Each experiment was repeated independently at least three times. Univariate analysis of variance and multiple t-tests were used for statistical evaluation, and p < 0.05 was set as the threshold for statistical significance.

RESULTS
Figure 1A and Supplementary File 2 show that the surface of the smooth titanium disk is flat and without scratches. Under low magnification, the surface of the small-scale topography titanium disk is rough, with no residual alumina particles, and a large number of pits can be seen. The pits form a multilevel continuous superposition: secondary depressions with a diameter of 2-8 µm are superimposed on first-level depressions with a diameter of 10-50 µm, and nanopores with a diameter of approximately 50-200 nm can be seen under high magnification. Figure 1B shows the surface droplet morphology and average surface hydrophilicity of the two types of titanium disks. Using 2 µL of artificial saliva as the wetting agent, the contact angle of the small-scale topography titanium disk was significantly lower than that of the smooth titanium disk (p < 0.05). The surface 3D microstructure and roughness are shown in Figure 1C. The average surface roughness, Ra, of the two kinds of titanium disks was further observed and compared using an optical profiler. In comparison with the smooth titanium disk, the average surface roughness of the small-scale topography titanium disk increased significantly (p < 0.05; Figure 1C, Supplementary File 3), consistent with the scanning electron microscopy results. The analysis of the surface composition of the titanium disks is shown in Figure 1D and Supplementary Files 4, 5. Together, these results demonstrate that the titanium disk obtained by SLA technology combined with alkali heat treatment had a small-scale topography, which increased the surface roughness of the titanium disk and is more conducive to osseointegration.

Small-Scale Topography Titanium Disk Induces Macrophages to Polarize to Anti-inflammatory M2 Type
After RAW264.7 cells were cultured on the surface of the smooth and small-scale topography titanium disks, the expression of inflammation-related genes was detected (Figure 2). Compared to the smooth titanium disks, the expression level of the anti-inflammatory gene IL-10 on the surface of the small-scale topography titanium disk was increased and the expression level of the pro-inflammatory gene IL-1β was decreased, with statistical significance. This is consistent with our previous research and demonstrates that the small-scale topography titanium disk can stimulate the differentiation of RAW264.7 cells to the M2 type, forming an anti-inflammatory immune microenvironment.

M2 Type RAW264.7 Cells Can Induce Osteogenic Differentiation of MC3T3-E1 Cells, and Exosomes May Play a Key Role in It
To study the effect of the osteoimmune microenvironment formed by M2 type RAW264.7 cells on MC3T3-E1 cells, we co-cultured the two cell types. The CCK-8 assay showed that CM promoted MC3T3-E1 cell proliferation better than OM (Figure 3D). The qRT-PCR and western blot results showed that, compared with OM, CM enhanced the expression of Collagen I and Runx2 in MC3T3-E1 cells (Figures 3E,F), and alizarin red staining and semi-quantitative analysis showed that mineral deposition was increased in CM-treated cells (Figure 3G). We speculated that exosomes played an important role in the process by which osteoimmunity promoted the osteogenic differentiation of MC3T3-E1 cells. Therefore, the exosomes derived from RAW264.7 cells stimulated by the two kinds of titanium disks were isolated and studied.

First, we characterized the exosomes derived from RAW264.7 cells. Transmission electron microscopy (TEM) showed that the separated vesicles had a round, bilayer membrane structure (Figure 3A). NTA showed that the peak diameter of the vesicles was 63.3 nm, accounting for 64.9% of the total area; the average diameter was 117.1 nm, with a size distribution in the range of 30-200 nm (Figure 3B). The expression of the exosome surface proteins CD9, CD63, and TSG101 was detected by western blot (Figure 3C). Together, these results demonstrated that the vesicles isolated from RAW264.7 cells were exosomes.

Next, we studied the effect of the exosomes on the osteogenic differentiation of MC3T3-E1 cells. The CCK-8 assay was used to assess effects on cell proliferation (Figure 3D). In comparison with the control group, exosomes promoted the proliferation of MC3T3-E1 cells, although the effect was not as pronounced as that of CM. qRT-PCR and western blotting were used to detect the expression of osteogenesis-related marker genes and proteins (Figures 3E,F). The results showed that, compared to OM and CM, the expression of Collagen I and Runx2 was significantly increased in exosome-stimulated MC3T3-E1 cells. Alizarin red staining and semi-quantitative analysis showed that the effect of exosomes on promoting extracellular mineralization was similar to that of CM on MC3T3-E1 cells (Figure 3G). These findings demonstrated that exosomes play a key role in the process by which osteoimmunity promotes osseointegration.

Differentially Expressed MicroRNAs Detected in RAW264.7 Cell-Derived Exosomes Stimulated by Smooth and Small-Scale Topography Titanium Disks
By RNA sequencing, we analyzed the differentially expressed microRNAs in RAW264.7 cell-derived exosomes stimulated by the two kinds of titanium disks. In the heat map, the abscissa represents the sample cluster and the ordinate represents the gene cluster (Figure 4A). A total of 260 mature miRNAs and 20 specific miRNAs were expressed in RAW264.7 cell-derived exosomes (Figure 4B). Exosomal microRNAs can regulate related signaling pathways by targeting downstream target genes, thereby affecting differentiation. According to further GO and KEGG pathway analyses, the differentially expressed miRNAs are involved in most biological processes and mainly act in the Hippo, Wnt, and mTOR signaling pathways (Figures 5A,B). Table 2 summarizes the 11 up- and downregulated miRNAs, and Figure 6 presents the regulatory network of these 11 miRNAs. Several genes involved in cell proliferation and differentiation appear in this network, which helps to further clarify the mechanism by which differentially expressed microRNAs regulate osteoimmunity and promote osseointegration.
DISCUSSION
In this study, we used titanium disks to stimulate RAW264.7 cells to polarize into the anti-inflammatory M2 phenotype and isolated exosomes from the RAW264.7 cells. We found that the exosomes induced the osteogenic differentiation and mineralization of MC3T3-E1 cells. We also catalogued the microRNA expression of RAW264.7 cell-derived exosomes, which could provide a basis for exploring the mechanism by which RAW264.7 cell-derived exosomes participate in the osteogenic differentiation of MC3T3-E1 cells.

Titanium and titanium alloys are the most widely used dental implant materials in clinics. The surface morphology of the implant has an important influence on osseointegration. A large number of studies have shown that a small-scale topography on the implant surface is more conducive to the biological behavior of osteoblasts than a simple micro- or nano-morphology (Wang et al., 2012; Ren et al., 2018). Micro-morphology can improve the mechanical properties of the implant by increasing its surface area and enhancing the mechanical interlocking between the implant and bone (Dohan Ehrenfest et al., 2010). Nano-morphology can regulate cell behavior by modulating information transmission between cells (Chen et al., 2018). Compared to traditional SLA technology, small-scale topography has a stronger ability to form hydroxyapatite in vitro, which can better promote the adhesion and spreading of bone cells and promote osteogenic cell proliferation and differentiation (Wang et al., 2012; Ren et al., 2018; Yang et al., 2020).

During osseointegration, there is a close relationship between the immune system and the skeletal system. The immune system plays a key role in tissue repair and regeneration (Forbes and Rosenthal, 2014); macrophages and heterogeneous myeloid immune cells play an important role in the process of osseointegration. They can polarize to the pro-inflammatory M1 phenotype or the anti-inflammatory M2 phenotype according to different stimuli (Gordon and Taylor, 2005; Gordon and Martinez, 2010; Michalski and McCauley, 2017). Many studies have shown that immune cells can be affected by various features of the implant, including surface morphology (Fink et al., 2008; Spiller et al., 2009). In our previous study, we found that the small-scale topography titanium disk can stimulate RAW264.7 cells to polarize to the M2 phenotype, secrete IL-10 and VEGF, regulate the immune environment, and promote the osteogenic differentiation of MC3T3-E1 cells.

Exosome biogenesis is a dynamic but highly ordered process involving two invaginations of the plasma membrane and the formation of intraluminal vesicles (ILVs) and intracellular multivesicular bodies (MVBs) (Kalluri and LeBleu, 2020). The MVBs then either fuse with lysosomes and are degraded, or fuse with the plasma membrane to release the ILVs into the extracellular environment as exosomes (van Niel et al., 2018; Mathieu et al., 2019), which then actively participate in the functional changes of many cells. The nature and content of exosomes are cell type-specific (Kalra et al., 2016) and are usually influenced by the physiological or pathological state of the donor cells, the stimulation, and the molecular mechanism of biogenesis (Minciacchi et al., 2015). Studies have shown that many cell types and molecular mechanisms contribute to the coupling between bone resorption and bone formation (Sims and Martin, 2014). Exosomes from mononuclear phagocytes are most likely to play a role in maintaining bone homeostasis (Silva et al., 2017).
Runx2, a member of the Runx family expressed in osteoblasts, is responsible for activating osteoblast differentiation marker genes and plays a key role in osteogenic differentiation (Vimalraj et al., 2015). Ekström et al. (2013) showed that LPS-stimulated monocytes can communicate with mesenchymal stem cells through exosomes, which can increase the expression of Runx2 and BMP-2 in mesenchymal stem cells; this is considered an intercellular signal transduction mode extending from the stages of injury and inflammation through bone regeneration. Qin et al. (2016) confirmed that the secretome of human bone marrow mesenchymal stem cells can effectively promote the proliferation and differentiation of rat bone marrow mesenchymal stem cells. Qi et al. (2016) combined human induced pluripotent stem cell-derived exosomes with β-tricalcium phosphate (β-TCP) scaffolds to repair rat skull defects; the repair effect of the exosome-composite scaffolds was significantly better than that of the β-TCP scaffolds alone, and the exosomes promoted the osteogenic differentiation of bone marrow mesenchymal stem cells by activating the PI3K/Akt signaling pathway. In this study, exosomes derived from M2 RAW264.7 cells induced upregulation of Runx2 and Collagen I expression in MC3T3-E1 cells, and the effect was significantly better than that in the macrophage co-culture group, indicating that exosomes play an important role in the process of osteoimmunity-promoted osseointegration. At the same time, we found that three miRNAs were upregulated and eight miRNAs were downregulated in M2 RAW264.7 cell-derived exosomes, and the corresponding target genes are involved in the regulation of multiple signaling pathways, such as the mTOR, AMPK, and Wnt signaling pathways, which play an important role in the process of osseointegration. After RNA sequencing of exosomes derived from osteogenically induced hMSCs, Zhai et al. (2020) found that exosomes induce osteogenic differentiation through microRNAs, among which hsa-miR-146a-5p, hsa-miR-503-5p, and hsa-miR-483-3p contribute to bone formation, and the upregulation of hsa-miR-32-5p, hsa-miR-133a-3p, and hsa-miR-204-5p activated the PI3K/Akt and MAPK signaling pathways related to osteogenesis.

CONCLUSION
In conclusion, we successfully isolated exosomes from RAW264.7 cells, which were induced to polarize to the M2 phenotype by the prepared small-scale topography titanium disks. After co-culture with MC3T3-E1 cells, we found that the exosomes significantly promoted the osteogenic differentiation and mineralization of MC3T3-E1 cells. Through RNA sequencing and gene analysis, we identified differentially expressed microRNAs that target potentially related signaling pathways, such as mTOR, AMPK, and Wnt, thus providing a reference for the mechanism of osteoimmune regulation of implant osseointegration. A limitation of this study is that the selection of RAW264.7 and MC3T3-E1 cell lines carries certain constraints, and the related research is still preliminary. In future work, we will focus on exosomal microRNAs and their downstream key factors, and further study the molecular mechanism by which osteoimmunity affects the osseointegration of small-scale topography implants through exosomes.

DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: GSE175428, https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE175428.
AUTHOR CONTRIBUTIONS
HS, PY, and TZ conceived the project and designed experiments. TZ, MJ, and XY performed the experiments. TZ and MJ analyzed the data. TZ wrote the manuscript. All authors commented on the manuscript.
Anaphylaxis to Topically Applied Sodium Fusidate

Fusidic acid is a bacteriostatic antibiotic that is effective primarily on gram-positive bacteria, such as Staphylococcus and Corynebacterium species. It is often topically applied to the skin, but is also given systemically as a tablet or injection. Allergic contact dermatitis, or urticaria, has been reported as a side effect of fusidic acid treatment, whereas anaphylaxis to topically administered fusidic acid has not been reported previously. A 16-year-old boy visited an outpatient clinic for further evaluation of anaphylaxis. He suffered abrasions on his arms during exercise, which were treated with a topical ointment containing sodium fusidate. Within 30 minutes, he developed urticaria and eyelid swelling, followed by a cough and respiratory difficulty. His symptoms were relieved by emergency treatment in a nearby hospital. To investigate the etiology, oral provocation with fusidate was performed. After 125 mg (1/2 tablet) of sodium fusidate was administered, he developed a cough and itching of the throat within 30 minutes, which was followed by chest discomfort and urticaria. Forced expiratory volume in 1 second (FEV1) dropped from 4.09 L at baseline to 3.50 L after the challenge, although wheezing was not heard in his chest. After management with an inhaled bronchodilator using a nebulizer, the chest discomfort was relieved and FEV1 rose to 3.86 L. The patient was directed not to use fusidate, especially on abrasions. Here we report the first case of anaphylaxis resulting from topical fusidic acid application to abrasions.

INTRODUCTION
Fusidic acid is a bacteriostatic antibiotic obtained from the fungus Fusidium coccineum that is primarily effective against gram-positive bacteria. It has a steroid structure and displays very high activity against Staphylococcus aureus, including methicillin-resistant strains, and Staphylococcus epidermidis; it is also active against some Corynebacterium species. It can be administered via oral, intravenous, and topical routes. Fusidic acid has commonly been used for the treatment of mild to moderate skin and soft-tissue infections for more than 30 years. 1,2 Despite its wide use, fusidic acid presents few serious side effects. Occasional allergic reactions associated with allergic contact dermatitis have been reported, 3,4 and most of these cases had underlying stasis dermatitis or atopic dermatitis. 5 However, anaphylaxis to topically administered fusidic acid has not been reported previously. Here, we report the case of a 16-year-old boy who showed anaphylactic reactions after topical administration of fusidic acid ointment to abrasions on his arms.

CASE REPORT
A 16-year-old boy visited an outpatient clinic for further evaluation of anaphylaxis. He had presented to the school nurse after acquiring abrasions on his arms during exercise. The nurse applied a topical ointment containing sodium fusidate to the abrasions. Within 30 minutes, the patient began experiencing eyelid swelling and urticaria with pruritus of his whole body. He then began coughing and had difficulty breathing. He was taken to a nearby hospital and received emergency treatment. Two years earlier, he had been brought to the emergency department with symptoms of urticaria, eyelid edema, a cough, and dyspnea after taking medications containing acetaminophen, tiropramide, domperidone, and trimebutine. However, drug allergy was not confirmed at that time.
No past medical history of asthma or allergic rhinitis, or family history of allergic diseases, was found.

We performed an oral provocation test with 125 mg (1/2 tablet) of fusidic acid. Thirty minutes after the first dose, the patient presented with a cough and an itching sensation on his neck, followed by chest discomfort and urticaria on the forehead and right arm. Forced expiratory volume in 1 second (FEV1) dropped from 4.09 L at baseline to 3.50 L after the challenge, although wheezing was not heard in his chest. The provocation test was terminated and an inhaled bronchodilator was given using a nebulizer. The chest discomfort was relieved and FEV1 rose to 3.86 L following management.
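As a quick arithmetic check of the spirometry reported above, the challenge produced roughly a 14% fall in FEV1 from baseline, and about 61% of the lost volume was regained after the bronchodilator:

\[
\frac{4.09 - 3.50}{4.09} \approx 14.4\%, \qquad \frac{3.86 - 3.50}{4.09 - 3.50} \approx 61\%.
\]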
His skin lesions improved with oral administration of an antihistamine. Oral provocation tests with acetaminophen, amoxicillin, and cefadroxil were all negative. However, he showed conjunctival injection and an itchy throat 30 minutes after a challenge test with 100 mg of tiropramide. His final diagnosis was drug allergy to fusidic acid and tiropramide. We recommended that these drugs not be administered systemically and that topical fusidic acid not be used on abrasions.

DISCUSSION
Fusidic acid is metabolized mainly in the liver. Adverse reactions to fusidic acid are associated with intravenous administration and have mainly involved the gastrointestinal tract and liver. Oral fusidic acid has also been shown to cause adverse reactions, which were classified as gastrointestinal (58%), constitutional (6.1%), neurologic (3.3%), allergic (4.6%), and other (27%). 6 These occurred most commonly within 6-10 days. With regard to adverse reactions, 29 cases involving allergic contact dermatitis to topical fusidic acid have been reported. 7 However, the incidence of hypersensitivity to topical fusidic acid is very low in most studies. In a study that performed patch tests with 26 commercially available antiseptic, antibacterial, and antifungal ointments, 45 out of 200 subjects (22%) showed one or more positive tests, but none was sensitive to fusidic acid. 8 A study investigating the comparative frequency of patch test reactions to topical antibiotics found a low incidence of positive reactions to fusidic acid (0.3%), as compared with 3.6% for neomycin and 0.7% for clioquinol. 9 It was also reported that there has been no increase in the frequency of allergic reactions to fusidic acid since the 1980s, despite its increasing use. The reason fusidic acid is a poor contact allergen may be its relatively large molecular weight (>500 Da) and its unique structure, which differs from that of other antibiotics.

To support the diagnosis of allergy to topically applied fusidic acid in our patient, we needed to exclude allergic reactions to the other components of the ointment. The components of fusidic acid ointment are 2% sodium fusidate, lanolin, liquid paraffin, Vaseline, and cetyl alcohol. A study in the UK revealed that many fusidic-acid-allergic patients (52%) were also allergic to lanolin, one of the constituents of Fucidin® ointment. 9 However, our patient also manifested anaphylactic reactions in the provocation test with the fusidic acid tablet, which did not contain any of the additives present in the Fucidin® ointment. This indicates that the anaphylaxis was triggered by fusidic acid itself and not by any of the additives, including lanolin.

In our study, the patient displayed anaphylactic reactions, including a cough and chest discomfort, following the application of fusidic acid ointment. When trying to establish the cause of anaphylactic reactions, materials that were previously injected or ingested are generally considered, but agents applied to the skin may be easily overlooked. However, it should be considered that systemic absorption of topically applied substances is possible, especially through a defective skin barrier. 10-12 Because we suspected anaphylaxis caused by systemic absorption of fusidic acid through the abrasions, an oral provocation test was performed using a fusidic acid tablet. Although rare, anaphylactic reactions have been reported after application of bacitracin ointment, and the presence of specific IgE antibodies to bacitracin has been suggested. 10,13,14 Unfortunately, we did not examine the presence of IgE antibodies specific to fusidic acid in our patient.

In conclusion, we report the first case of anaphylaxis following topical administration of fusidic acid, applied to abrasions on the arms of a 16-year-old boy. This rare, life-threatening adverse event is clearly worth the attention of practitioners.
Acetylcholinesterase of the sand fly, Phlebotomus papatasi (Scopoli): construction, expression and biochemical properties of the G119S orthologous mutant

Background: Phlebotomus papatasi vectors zoonotic cutaneous leishmaniasis. Previous expression of recombinant P. papatasi acetylcholinesterase (PpAChE1) revealed 85% amino acid sequence identity to mosquito AChE and identified synthetic carbamates that effectively inhibited PpAChE1 with improved specificity for arthropod AChEs compared to mammalian AChEs. We hypothesized that the G119S mutation causing high-level resistance to organophosphate insecticides in mosquitoes may occur in PpAChE1 and may reduce sensitivity to inhibition. We report the construction, expression, and biochemical properties of rPpAChE1 containing the G119S orthologous mutation.

Methods: Targeted mutagenesis introduced the G119S orthologous substitution in PpAChE1 cDNA. Recombinant PpAChE1 enzymes containing or lacking the G119S mutation were expressed in the baculoviral system. Biochemical assays were conducted to determine the altered catalytic properties and inhibitor sensitivity resulting from the G119S substitution. A molecular homology model was constructed to examine the modeled structural interference with docking of inhibitors of different classes. Genetic tests were conducted to determine if the G119S orthologous codon existed in polymorphic form in a laboratory colony of P. papatasi.

Results: Recombinant PpAChE1 containing the G119S substitution exhibited altered biochemical properties and reduced inhibition by compounds that bind to the acylation site on the enzyme (with the exception of eserine). Less resistance was directed against bivalent or peripheral site inhibitors, in good agreement with modeled inhibitor docking. Eserine appeared to be a special case capable of inhibition in the absence of covalent binding at the acylation site. Genetic tests did not detect the G119S mutation in a laboratory colony of P. papatasi but did reveal that the G119S codon existed in polymorphic form (GGA + GGC).

Conclusions: The finding of G119S codon polymorphism in a laboratory colony of P. papatasi suggests that a single nucleotide substitution (GGC → AGC) may readily occur, causing rapid development of resistance to organophosphate and phenyl-substituted carbamate insecticides under strong selection. Careful management of pesticide use in IPM programs is important to prevent or mitigate development and fixation of the G119S mutation in susceptible pest populations. Availability of recombinant AChEs enables identification of novel inhibitory ligands with improved efficacy and specificity for AChEs of arthropod pests.

Background
Leishmaniasis is a widespread, debilitating, and neglected disease of intertropical and temperate regions affecting millions of people throughout the world. The most common form is cutaneous leishmaniasis, with an estimated 0.7 to 1.3 million new cases annually, caused by flagellated protozoans in the genus Leishmania transmitted by the bite of several sand fly species [1-3]. Leishmania major is the predominant pathogen of zoonotic cutaneous leishmaniasis, which is vectored (transmitted) in the Middle East, Asia, Africa, and Southern Europe by Phlebotomus papatasi (Scopoli) [4-6]. The vector of cutaneous leishmaniasis, P. papatasi, impacted U.S. military readiness and operations in Iraq and Afghanistan [7-10], and the ability to control P. papatasi is important to millions of people in endemic areas of the world.
The primary means to control zoonotic leishmaniasis transmission is through reduction of rodent habitat or rodent treatment to reduce local sand fly populations, and the use of chemical insecticides and insecticide-treated bednets to reduce human bites by sand flies [2,11-17]. Organophosphate and carbamate insecticides may be used for control of insect vectors of infectious disease, acting through the inhibition of acetylcholinesterase in the central nervous system. We previously reported genetic and biochemical properties of recombinant acetylcholinesterase (AChE) of P. papatasi (rPpAChE1), and noted that PpAChE1 had 85% amino acid sequence identity to AChEs of the Culex pipiens and Aedes aegypti mosquito species [18]. Point mutations resulting in production of an altered, insensitive AChE comprise a major mechanism of resistance to organophosphate and carbamate insecticides [19-21], and preliminary evidence of organophosphate resistance has been reported in sand flies [22-24]. It was previously hypothesized that the major mutation responsible for high-level resistance to organophosphate inhibition in mosquito AChE (G119S, Torpedo AChE nomenclature [25]) [26-28] may occur in P. papatasi [18]. Here, we report the construction, baculoviral expression, and biochemical properties of recombinant PpAChE1 (rPpAChE1) containing the G119S orthologous mutation.

Methods

Sand flies, RNA, cDNA synthesis, and agarose gel electrophoresis
Sand flies used in this study were from a laboratory colony of P. papatasi maintained at the USDA-ARS Knipling-Bushland U.S. Livestock Insects Research Laboratory in Kerrville, Texas. Sand fly colony derivation, maintenance, preparation of RNA, cDNA synthesis, and agarose gel electrophoresis were as previously described [18].

Biochemical characterization and inhibition assays
In this study, three categories of AChE inhibitors were chosen to define the pharmacological profiles of the wild type and G119S rPpAChEs. They included catalytic site inhibitors (organophosphates, carbamates, tacrine, and eserine), peripheral site inhibitors (tubocurarine and ethidium bromide), and bivalent inhibitors (bis(8)-tacrine, bis(12)-tacrine, and donepezil). Note that tacrine differs from the other catalytic site inhibitors in that it is reversible and does not covalently bind the catalytic serine. Tacrine binds in the choline-binding site and does not extend into the oxyanion hole or acyl pocket [32]. The compounds were made into stock solutions by dissolving in DMSO, and all enzyme assays were run in a constant 0.1% DMSO carrier. Inhibition of rPpAChE by these inhibitors was determined using the Ellman assay in a 96-well plate configuration [33]. The rPpAChE cell lysates were pre-incubated with at least six concentrations of inhibitors for 30 minutes at room temperature prior to adding 300 μM 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB) and 400 μM acetylthiocholine enzyme substrate (AcSCh), which were both dissolved in 0.1 M sodium phosphate buffer, pH 7.0. The kinetic reading of absorbance at 405 nm was started immediately after adding DTNB and AcSCh with a Dynex Triad multimode plate reader (Dynex Technologies, Chantilly, VA, USA). Inhibitor concentration-response curves and inhibition parameters were constructed by nonlinear regression to a four-parameter logistic equation using GraphPad Prism 4.0c software (GraphPad Software, San Diego, CA, USA).

[Figure 1. Chemical structures and names of experimental anticholinesterases used in this study. Bold numbers beside the names denote the compounds as presented in the text. For the bis(n)-tacrines, "n" refers to the number of methylene groups in the linker. Each compound was assigned to an inhibitor class as given in Table 1.]
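The four-parameter logistic fit described above (performed in GraphPad Prism by the authors) can be reproduced with open tools; a minimal sketch in Python with hypothetical concentration-response data:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, log_ic50, hill):
    """Four-parameter logistic: residual activity vs. log10(inhibitor, M)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_conc - log_ic50) * hill))

# Hypothetical residual AChE activity (%) at six inhibitor concentrations.
log_conc = np.array([-9.0, -8.0, -7.0, -6.0, -5.0, -4.0])
activity = np.array([98.0, 91.0, 62.0, 24.0, 7.0, 3.0])

(bottom, top, log_ic50, hill), _ = curve_fit(
    four_pl, log_conc, activity, p0=[0.0, 100.0, -6.5, 1.0]
)
print(f"IC50 ~ {10.0 ** log_ic50:.2e} M, Hill slope ~ {hill:.2f}")
```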
Construction of a ligand docking molecular homology model of PpAChE1
A molecular homology model of P. papatasi AChE1 (wild type) was built in ICM [34] by homology [35] based on a 2.6 Å resolution mouse AChE X-ray structure, Protein Data Bank code 4B84 [36]. The template enzyme has 48% overall identity with the target sequence; local homology in the active site region was significantly stronger. Seven tightly bound water molecules in the vicinity of the active site in the template structure were transferred into the model and their positions were refined by energy optimization (water molecules number 46, 49, 52, 55, 71, 72, and 146). The G119S mutation (position 256 in the PpAChE1 sequence, GenBank: AFP20868.1) was next introduced into the model (in ICM). After optimization of the side-chain conformation within the otherwise rigid protein, residual clashes of S256 with F425 and Y258 (P. papatasi numbering) were detected. The F425 clash was relieved by relaxation of its side chain, while the Y258 clash could not be relieved by side-chain relaxation alone but was resolved after backbone relaxation within the G255-S259 residue window (i.e., a loop including ±1 residue around S256 and Y258 each). Relaxation resulted in 1.1 Å/0.6 Å RMSD displacement of, respectively, all heavy atoms/only backbone atoms within this region. Docking of representative ligands was performed in the ICM Docking module [37,38]. For ligands with a covalent inhibition mechanism (carbamates), the tetrahedral transition state on the reaction pathway between the non-covalently bound inhibitor and the acylated enzyme was modeled, using the 'covalent docking' protocol in ICM [34]. Because observed AChE ligand-bound conformations often vary in the side-chain conformation of residue F/Y330 (T. californica numbering) in the active site gorge, a multiple receptor conformation '4D docking' approach [39] was applied to sample two rotamers of Y465 (P. papatasi numbering). The three lowest-scoring conformations were retained in each docking simulation, visually inspected, and compared to available X-ray structures of the same or similar ligands bound to AChE (of other species, such as mouse and T. californica). The final models chosen were either the lowest- or second-lowest-scoring conformation (the latter was selected if it was in significantly better agreement with experimentally observed interaction modes). To identify potentially adverse interactions caused by the G119S (T. californica numbering) mutation, docked ligand/PpAChE (wt) complexes were superimposed with the PpAChE1-G119S model, and the superimposed structures were analyzed for ligand/PpAChE-G119S clashes.
Test for G119S codon sequence in P. papatasi laboratory colony PpAChE1
The PCR-RFLP assay of Weill et al. [28] was modified to test for the presence of the G119S orthologous mutation in our laboratory colony of P. papatasi. A segment of P. papatasi genomic DNA or cDNA was amplified by PCR using primers PpAChE-793U17 (5′-CCACGTCCCAAAAACTC-3′) and PpAChE-842L23 (5′-GAGTGTGGATGTTCCTGAGTAGA-3′), and the 72 bp amplicon was tested for the presence of the G119S orthologous codon by incubation with Alu I restriction endonuclease (New England BioLabs) followed by gel electrophoresis. Positive (G119S orthologous rPpAChE1, this report) and negative (wild type rPpAChE1, [18]) control templates were used to validate the assay. If the G119S orthologous codon was present in the template, Alu I digestion resulted in cleavage of the DNA amplicon into 25 bp and 47 bp segments. A similar PCR-RFLP test was used to test for sequence polymorphisms (GGA vs GGC) in the G119S orthologous codon, using PCR primers PpAChE-814U26AluC (5′-GTTATGCTATGGATCTTCGGTGGTAG-3′) and PpAChE-854L22 (5′-TCGTACACATCGAGTGTGGATG-3′). Alu I digestion of the 54 bp amplicon produced 28 bp + 36 bp fragments if position 839 [GenBank: JQ922267] was the C nucleotide. Positive and negative control templates were used to validate the assay.
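The expected digestion patterns above are easy to sanity-check programmatically; a minimal sketch (the sequence below is a made-up 72 bp stand-in carrying one Alu I site, not the actual amplicon):

```python
def alu_i_fragments(seq: str) -> list[int]:
    """Predict Alu I fragment lengths; Alu I recognizes AG^CT (cut after the G)."""
    cuts = [i + 2 for i in range(len(seq) - 3) if seq[i:i + 4] == "AGCT"]
    bounds = [0, *cuts, len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# Hypothetical 72 bp amplicon with one Alu I site placed to mimic the 25 + 47 bp
# pattern expected when the G119S orthologous codon is present.
amplicon = "A" * 23 + "AGCT" + "T" * 45
print(alu_i_fragments(amplicon))  # -> [25, 47]
```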
Results

Biochemical characterization and inhibition assays
As shown in Figure 3, paraoxon was a potent inhibitor of the wild type enzyme (rPpAChE1), but not of rPpAChE1-G119S. The other anticholinesterases (Figure 1) demonstrated a wide range of potencies as well as resistance ratios for the inhibition of both strains of rPpAChE (Table 1). The calculated IC50 values and confidence limits had correlation coefficients, R², of at least 0.95, except for those curves with very wide confidence limits due to the high resistance of the G119S rPpAChE to OPs and carbamates. For wild type rPpAChE, all of the catalytic site inhibitors and bivalent inhibitors showed moderate to high potencies for inhibiting enzyme activity, with IC50 values from mid-nanomolar (e.g., propoxur and paraoxon) to sub-nanomolar concentrations (compound 7), although most compounds fell in the range of 3-76 nM (Table 1). On the other hand, the two peripheral site inhibitors had low potencies for rPpAChE inhibition of 17 μM (ethidium bromide) and 143 μM (tubocurarine), analogous to the similarly low affinity of the peripheral site inhibitor propidium for mammalian AChE [40]. In contrast, the G119S rPpAChE showed strong resistance to the organophosphates (paraoxon and malaoxon) and all phenyl-substituted methylcarbamates (compounds 1, 2), with resistance ratios over 450. Interestingly, a group of alkyl-substituted pyrazole carbamates (compounds 3, 4, and 5), which have a smaller ring than the phenyl methylcarbamates, had much lower resistance ratios (18-64 fold) compared to the phenyl methylcarbamates (Table 1). All other peripheral site inhibitors, bivalent inhibitors, and the catalytic site inhibitor tacrine showed the lowest resistance ratios, which were ≤7. An exception was eserine, which, despite having a large pyrroloindole ring system, displayed much less cross-resistance than the phenylcarbamates, though somewhat more than tacrine and the bivalent inhibitors (Table 1). Current data with wild type rPpAChE showed good correlation with that previously published for 11 compounds (eserine, propoxur, carbofuran, tacrine, d-tubocurarine, ethidium bromide, donepezil, 1, 2, 6, and 7), which differed only in that a shorter 10 min preincubation with inhibitor was used [29]. The data sets collected in both studies for rPpAChE were not normally distributed (D'Agostino & Pearson omnibus normality test), but were highly correlated, with nonparametric Spearman r = 0.884 (0.59-0.97; 95% CL) and two-tailed P < 0.0006.

Inhibitor docking in a molecular homology model of PpAChE1
A molecular homology model of PpAChE1 (Figure 4) was constructed based on murine AChE. Selected inhibitors were docked into the model, which was then adjusted for the G119S (T. californica numbering) mutation at PpAChE1 position 256. As shown in Figure 4, propoxur (Figure 4a) docked into the molecular homology model exhibits a fairly large region of Van der Waals overlap, suggesting that the G119S mutation (S256 in the model) results in a large interference with propoxur docking, in agreement with the results presented in Table 1 (resistance ratio 19,213). Eserine (Figure 4b) appears to exhibit a similarly large region of Van der Waals overlap, suggesting that it should also exhibit a significantly high resistance ratio in the G119S mutant; however, the experimentally measured resistance ratio (Table 1) is only 27. Compound 4 (Figure 4c) exhibits a significantly reduced Van der Waals overlap, in relative agreement with the measured resistance ratio of only 64. Tacrine (Figure 4d) is not directly impacted by the G119S substitution, but may be somewhat affected by desolvation of the catalytic serine (S336), exhibiting a resistance ratio of only 5.8. Donepezil (Figure 4e) also shows no direct impact from the G119S substitution and has a minimal resistance ratio of only 5. Similarly, ethidium (Figure 4f) exhibits no interaction with the G119S substitution (S256) and exhibits a resistance ratio of 0.4.

[Figure 4. Representative inhibitors (from Table 1) docked into the PpAChE1 homology model, panels (a-f).]

Test for G119S codon sequence in P. papatasi laboratory colony PpAChE1
The PCR-RFLP assay adapted from Weill et al. [28] failed to demonstrate the presence of the G119S orthologous mutation in our laboratory colony of P. papatasi; however, direct sequencing of a small percentage of cDNA clones that included the codon corresponding to the G119S orthologous site in PpAChE1, and a PCR-RFLP assay designed to detect the presence of a GGC codon at nucleotide positions 837-839 [GenBank: JQ922267], both indicated the presence of a polymorphic GGC/GGA sequence at the codon position orthologous to the G119S mutation in mosquitoes (Figure 5). Preliminary data suggest that the GGC codon at this locus is present in our laboratory flies at an estimated frequency between 10-20%.

Discussion
The G119S mutation of rPpAChE has significant effects on the catalytic properties and inhibitor sensitivity of the enzyme. The four-fold increase seen in Km is similar to the two-fold increase in Km seen in the G119S mutant of Anopheles gambiae AChE [30]. Furthermore, high enzyme resistance ratios are seen for aryl methylcarbamates (e.g., propoxur, carbofuran), as was seen for AgAChE-G119S [30,41]. High resistance ratios are also seen for paraoxon and malaoxon. Like the aryl methylcarbamates, these compounds acylate the active site serine (acylation site inhibitors) and extend into the oxyanion hole, where G119 is located. In contrast, the pyrazol-4-yl methylcarbamates (Table 1, compounds 3-5) possess significantly smaller insensitivity ratios, as we previously observed for AgAChE-G119S [30]. The smaller volume of the pyrazol-4-yl core inhibitors (Figure 1, compounds 3-5) relative to aryl methylcarbamates presumably allows them to effectively enter the crowded active sites of G119S mutant Anopheles gambiae AChE and rPpAChE1-G119S. Tacrine is also a catalytic site inhibitor but, unlike carbamates and organophosphates, binds in the choline-binding site rather than the oxyanion hole. Thus, tacrine inhibition is largely unaffected by the G119S mutation, and the resistance ratio is only 5.8 (Table 1). Similarly low resistance ratios are seen for bivalent inhibitors (compounds 6, 7, and donepezil) and peripheral site inhibitors.
Since neither class of inhibitor binds AChE near G119S, the mutation does not affect inhibition by these compounds. The molecular homology model docking of selected inhibitors (Figure 4) is in good general agreement with their measured resistance ratios. The G119S mutation (S256) in the PpAChE1 model (Figure 4) interferes with positioning of phenylcarbamates for the acylation transition state, while for non-covalent inhibitors there are no direct steric issues; desolvation of the serine OH might explain the small residual resistance. The exception is the docking model for eserine (Figure 4b), which is a special case among the carbamates because, as a bulky cationic lipophilic moiety, it may function as a non-covalent inhibitor even if acylation (i.e., the covalent inhibition mechanism) is impaired by the mutation. For the three carbamates, docking was done assuming a covalent inhibition mechanism (the actual models are of the acylation transition state). If eserine is modeled non-covalently, it could still have hydrophobic and cation-pi interactions at least as extensive as tacrine; therefore this discrepancy may have less to do with homology model accuracy than with more complex mechanistic issues.

In summary, the results indicate that the single amino acid substitution orthologous to the G119S mutation responsible for high-level resistance to organophosphate and carbamate insecticides in mosquitoes can also generate high-level resistance to inhibition by acylation site inhibitors in recombinant P. papatasi AChE1. Recent reports of aryl methylcarbamates shown to have improved targeting of pest AChEs relative to mammalian AChEs [41,42] suggest that use of recombinant enzymes with various amino acid substitutions may offer platforms for SAR modeling and in vitro screening to design and identify novel inhibitors that specifically target insecticide-insensitive AChEs while exhibiting an improved mammalian safety profile. Further studies are planned or underway to evaluate the effects of additional mutations in PpAChE1, to evaluate the presence of G119S orthologous codon polymorphism in natural populations of P. papatasi, to evaluate additional synthetic ligands and assess their efficacy against wild type and "mutant" forms of rPpAChE1, and to utilize molecular modeling and structure-activity relationships (SARs) to improve the construction and selection of inhibitory lead chemical structures.

[Figure 5. PCR-RFLP assay for polymorphism in laboratory P. papatasi. Template DNA (as indicated for each lane) was amplified by PCR, then subjected to digestion with Alu I and electrophoretically separated on a 4% Metaphor agarose gel. Lane: Std, DNA size standards; 1, no-template negative control; 2, wild type PpAChE1 plasmid; 3, PpAChE1-G119S plasmid; 4-6, genomic DNA extracted from individual P. papatasi colony females fed sugar water only, no blood.]

In mosquitoes, the G119S substitution produces high-level organophosphate and carbamate insecticide resistance but also a high fitness cost (in the absence of insecticide) when homozygous [43-45], presumably due to a 30-fold reduction in turnover number for substrate and an approximately 70% decrease in cholinergic activity [46]. Reduction in G119S allele frequency was reported in Lebanon over a 3-4 year period, presumably resulting from switching to pyrethroids for mosquito control and loss of the G119S allele due to its fitness cost in the absence of inhibitor selection pressure [47].
In spite of the fitness cost, the G119S-containing ace-1 allele is widespread throughout the world [48], and the fitness cost may be reduced in the presence of kdr resistance to pyrethroids [49] or by duplication of the ace-1 allele to permit maintenance of a heterozygous state, essentially fixing it in the population [50-52]. Agricultural pesticide use and mosquito control efforts have largely resulted in the spread of the ace-1 duplication in West Africa [53]. Together, these findings provide strong warnings about the need for careful use of insecticides that impose strong selection for resistance to organophosphates and carbamates. Once the G119S substitution occurs, pyrethroid use may allow reduction of the frequency of the G119S allele [47], or selection for kdr-based resistance to pyrethroids may result in multiple-resistant pest populations by reducing the fitness cost of the G119S allele [49]. The finding of G119S orthologous codon polymorphism in a laboratory colony of P. papatasi strongly suggests that a single nucleotide substitution (GGC → AGC) might readily occur, causing relatively rapid development of resistance to organophosphate insecticides if subjected to strong selection. Careful management of pesticide use in IPM programs is important to prevent or mitigate development and fixation of the G119S mutation in susceptible pest populations. Availability of the recombinant AChEs may enable identification of novel inhibitory ligands with improved efficacy and specificity for AChEs of arthropod pests.

Conclusions
We demonstrated that the G119S orthologous substitution in PpAChE1 produces high levels of resistance to OP and carbamate inhibitors, suggesting a strong likelihood of resistance development if the subject codon is polymorphic (GGA + GGC) in natural populations of P. papatasi. PCR and sequencing tests indicate that the G119S orthologous codon is polymorphic (GGA or GGC) in our laboratory P. papatasi colony. We are currently seeking P. papatasi specimens from natural populations worldwide to determine if the G119S orthologous codon is polymorphic in natural populations. As noted by Weill et al., "The development of new insecticides that can specifically inhibit the G119S mutant form of acetylcholinesterase-1 will be crucial in overcoming the spread of resistance" [26]. Use of the recombinant P. papatasi AChE1 and revised molecular models may facilitate rapid screening in silico and in vitro to identify novel PpAChE1 inhibitor ligands, as well as comparative studies on the biochemical kinetics of inhibition. Construction and expression of mutant forms of PpAChE1 will facilitate the development of rapid molecular assays and other tools to screen and characterize mutations giving rise to organophosphate-insensitive PpAChE1. New molecular data on PpAChE1 may also be used in modeling studies to predict in vivo insecticidal activity for novel inhibitors, as described by Naik et al. [54]. Availability of the recombinant PpAChE1 will enable the creation of mechanism-based screens to discover more effective inhibitors that may be developed into safer vector control technologies.

Endnotes
a. This article reports the results of research only. Mention of a proprietary product does not constitute an endorsement by the USDA for its use.
b. USDA is an equal opportunity provider and employer.
c. Copyright statement: Copyright protection is not available for any work of the United States Government.
Long-Term Oncologic Outcomes of Off-Clamp Robotic Partial Nephrectomy for Cystic Renal Tumors: A Propensity Score Matched-Pair Comparison of Cystic versus Pure Clear Cell Carcinoma

Few data are available on the survival outcomes of partial nephrectomy performed for cystic renal tumors. We present the first long-term oncological outcomes of cystic (cystRCC) versus pure clear cell renal cell carcinoma (ccRCC) in a propensity score-matched (PSM) analysis. Our prospectively maintained "renal cancer" database was queried for "cystRCC" or "ccRCC" and "off-clamp robotic partial nephrectomy" (off-C RPN). The two groups were compared for age, gender, tumor size, pT stage, and Fuhrman grade. A 1:3 PSM analysis was applied to reduce covariate imbalance to <10%, and two homogeneous populations were generated. Student t- and Chi-square tests were used for continuous and categorical variables, respectively. Ten-year oncological outcomes were compared between the two cohorts using the log-rank test. Univariable Cox regression analysis was used to identify predictors of disease progression after RPN. Out of 859 off-C RPNs included, 85 cases were cystRCC and 774 were ccRCC at histologic evaluation. After applying the PSM analysis, two cohorts were selected, including 64 cystRCC and 170 ccRCC. Comparable 10-year cancer-specific survival probability (95.3% versus 100%, p = 0.146) was found between the two cohorts. Conversely, 10-year disease-free survival (DFS) probability was less favorable for pure ccRCC than for cystRCC (66.69% versus 90.1%, p = 0.035). At univariable regression analysis, ccRCC histology was the only independent predictor of DFS probability (HR 2.96, 95% CI 1.03–8.47, p = 0.044). At the 10-year evaluation, cystRCC showed favorable oncological outcomes after off-C RPN. Pure clear cell variant histology displayed a higher rate of disease recurrence than cystic lesions.

Introduction
A recent update to the Bosniak classification subdivided cystic lesions into five categories based on CT or MRI diagnostic criteria. Renal cysts rated as IIF, III, and IV are malignant in approximately 0-38%, 50%, and 100% of surgically treated cases, respectively [2]. Being considered a good predictor of malignancy, the classification guides the clinical management for follow-up and treatment of cystic lesions.

The most common histological subtype for Bosniak III cysts is ccRCC, with pseudocystic changes and low malignant potential [1]. Other histological entities, such as tubulocystic RCC or cystic nephroma/mixed epithelial and stromal tumors, also known as renal epithelial and stromal tumors (REST), appear as Bosniak type II/IV. Acquired cystic disease-associated RCC is associated with end-stage renal disease (ESRD). According to the WHO 2022 classification, multilocular cystic renal neoplasm (mcRCC) of low malignant potential is a new subtype of ccRCC, since multiple publications reported no recurrence or metastasis in patients with mcRCC [1,3]. Tubulocystic RCC, acquired cystic disease-associated RCC, and eosinophilic cystic RCC were considered other independent tumor entities [4].
These variant histologies have a low malignant potential, and nephron-sparing surgery is the standard approach when it is technically feasible [1]. Therefore, a minimally invasive partial nephrectomy is achievable according to surgeon skills and has been widely reported [5-9]. There are few studies showing oncological outcomes of cystRCC compared to "pure" ccRCC, defined as a tumor without any cystic differentiation at pathologic examination. Additionally, there are limited data available with a long-term follow-up [9,10]. All series reported in the literature, except the one by Novara et al. [8], are single-center, small, and retrospective experiences [7,9,10]. As a result, comprehensive and conclusive information concerning the behavior of cystRCC versus solid pure ccRCC, as well as their long-term survival outcomes, remains relatively insufficient. We report 10-year oncological results of cystRCC compared to pure ccRCC treated with robotic partial nephrectomy in a propensity score-matched (PSM) analysis.

Study Design and Patient Selection
Between January 2013 and January 2022, our internal IRB-approved and prospectively maintained "renal cancer" database was queried for "off-clamp robotic partial nephrectomy" (off-C RPN) and "cystic" or "pure clear cell" variant histology at pathologic examination. A total of 859 patients who received robotic partial nephrectomy were included in the study, with 774 ccRCC and 85 cystRCC. After 1:3 PSM analysis, 170 patients with ccRCC and 64 patients with cystRCC were selected (Figure 1). Exclusion criteria were gross hematuria or tumor infiltration of the urinary tract observed at conventional imaging. After obtaining informed written consent before the procedure, peri-operative data were collected. Baseline imaging was reviewed for all cases to properly assess tumor characteristics.
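For readers who want to see the shape of such a matching step, the following is a minimal sketch of 1:3 nearest-neighbor matching on a logistic-regression propensity score. It is an illustration only, not the authors' statistical code; the dataframe layout, column names, and the greedy matching-without-replacement strategy are all assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative 1:3 propensity score matching (not the authors' code).
# `df` is assumed to hold one row per patient, with the covariates the
# paper lists and a binary `cystic` flag (1 = cystRCC, 0 = ccRCC).
def match_1_to_3(df, covariates, treatment="cystic", ratio=3):
    X = pd.get_dummies(df[covariates], drop_first=True).to_numpy(dtype=float)
    y = df[treatment].to_numpy()

    # Propensity score: probability of belonging to the cystic group.
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

    treated = list(df.index[y == 1])
    controls = list(df.index[y == 0])
    matched_controls = []
    for t in treated:
        # Greedy nearest-neighbor match on the propensity score,
        # without replacement, `ratio` controls per treated case.
        ranked = sorted(controls,
                        key=lambda c: abs(ps[df.index.get_loc(c)]
                                          - ps[df.index.get_loc(t)]))
        picks = ranked[:ratio]
        matched_controls += picks
        controls = [c for c in controls if c not in picks]
    return df.loc[treated + matched_controls]
```

After matching, covariate balance would be re-checked (the paper targets a residual imbalance below 10%) before any outcome comparison.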
Surgical Technique
All patients received RPN with the pure enucleation technique, regardless of the cystic or solid macroscopic aspect of the renal masses; enucleo-resection was performed only when necessary, sometimes to manage complex cystic lesions in order to avoid rupture. Off-clamp has always been the standard approach for RPN in our institution. All cases were performed by two skilled surgeons. A 30-degree scope was utilized for visualization, and a total of two robotic and two laparoscopic ports were placed. The assistant surgeon's two 12 mm ports were positioned between the camera and the robotic ports, forming a "U" shape targeting the tumor. The colon was medialized, and the Gerota fascia was carefully opened. Subsequently, dissection proceeded through the renal capsule to expose the tumor margins and to start enucleation. Monopolar scissors were used to mark the tumor margins circumferentially. The pneumoperitoneum pressure was increased to 20 mmHg, and the enucleation plane was gradually developed employing blunt dissection. To ensure bloodless field visualization, two suction devices were employed concurrently, which allowed for both irrigation and suction. The specimen, referring to the excised tissue, was placed in an endocatch bag and extracted. The resection bed was carefully examined, and any small arterial feeders that were not initially controlled during the development of the enucleation plane were managed using monopolar pinpoint coagulation.
When needed, a hemostatic agent such as Tabotamp or absorbable fibrillar was applied to improve hemostasis and fill the parenchymal defect. A sliding-clip renorrhaphy was performed in case of a suspicious opening of the caliceal system. The renal capsule was sealed to reduce the risk of fluid leakage into the peritoneal space, and a drain was left in place, which was usually removed on the first post-operative day. Moreover, we previously reported the feasibility of off-C RPN even in challenging cases, such as large tumors (cT2 renal masses) [11,12], purely hilar lesions, and totally endophytic lesions, with the help of indocyanine green technology when indicated [13-15].

Follow-Up Schedule
Follow-up visits encompassed complete biochemical blood tests (including serum creatinine, electrolyte levels, urea nitrogen, and uric acid), an accurate physical examination, an abdominal ultrasonography, and alternately a chest X-ray or CT scan. Follow-up was scheduled every six months for the first two years and yearly thereafter.

Statistical Analysis
The main clinical features were compared between the two groups. A 1:3 PSM analysis was used to obtain two populations homogeneous for age, gender, tumor size, RENAL nephrometry score [16], pT stage, and Fuhrman grade, decreasing covariate imbalance to <10%. Continuous data were presented as medians and interquartile ranges (IQR). Mann-Whitney and Chi-square tests were employed to compare continuous and categorical variables, respectively. A two-sided p-value < 0.05 was considered statistically significant. Ten-year disease-free survival (DFS), cancer-specific survival (CSS), renal recurrence-free survival (RRFS, defined as the time to evidence of tumor recurrence in the kidney or perirenal field), and metastasis-free survival (MFS) were compared between the two study cohorts using the log-rank test. Age, gender, Fuhrman grade, pT stage, tumor size, necrosis, and variant histology were included in a univariable Cox regression analysis to find predictors of disease progression following partial nephrectomy. Statistical analysis was performed using the Statistical Package for Social Science (SPSS) v. 24.0 (IBM, Somers, NY, USA).
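As a sketch of how these survival comparisons look in code (the authors used SPSS; the Python lifelines library and the column names below are assumptions chosen for illustration):

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Illustrative DFS comparison between the two matched cohorts (a sketch,
# not the authors' SPSS workflow). `df` is assumed to have `months`
# (follow-up time), `progressed` (1 = disease progression), and
# `cystic` (1 = cystRCC, 0 = pure ccRCC) columns.
def compare_dfs(df: pd.DataFrame) -> None:
    cyst, solid = df[df.cystic == 1], df[df.cystic == 0]

    # Log-rank test between the two survival curves.
    lr = logrank_test(cyst.months, solid.months,
                      event_observed_A=cyst.progressed,
                      event_observed_B=solid.progressed)
    print(f"log-rank p = {lr.p_value:.3f}")

    # Univariable Cox regression with histology as the only covariate,
    # reported as a hazard ratio with a 95% confidence interval.
    cph = CoxPHFitter().fit(df[["months", "progressed", "cystic"]],
                            duration_col="months", event_col="progressed")
    cph.print_summary()
```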
Results
All clinical and pathological features of the study cohorts are reported in Table 1. Overall, 85 cases were cystRCC and 774 were pure ccRCC at pathologic examination. Of the 85 cystRCC, 23 were pure cystic tumors and 62 were papillary or clear cell RCC mixed with cystic components. Figure 2 shows the radiologic and macroscopic features of Bosniak IV cystic lesions, which were found to be cystic papillary type 2 and cystic clear cell RCC, respectively, at pathologic examination. The two groups were not homogeneous for age, tumor size, RENAL score, pT stage, or Fuhrman grade (p < 0.001; p = 0.001; p = 0.014; p < 0.001; p < 0.001, respectively). After applying the PSM analysis, two cohorts were selected, including 64 cystRCC and 170 pure ccRCC, which were homogeneous for all tested variables. The median follow-up was 46.5 months (IQR 21-80.75 months). Conversion to an open approach or from partial to radical nephrectomy never occurred in either cohort. The high-grade (3-5) Clavien complication rate was comparable between the ccRCC and cystRCC groups (2.2% versus 2.5%, p = 0.65, respectively). After PSM analysis, the mean tumor size was 3.82 cm versus 3.85 cm for ccRCC and cystRCC, respectively (p = 0.65), and consequently, the most prevalent pathological tumor stage was T1a for both the solid and the cystic group (67.1% vs. 67.2%; p = 0.86). Fuhrman grade 2 was the most frequently detected for both cohorts (66.3% vs. 50.6% for the solid and cystic group, respectively; p < 0.001).

Discussion
The indication for surgical treatment of renal cystic lesions arose from the Bosniak classification [1,2]. While surveillance is a viable option for selected Bosniak III cases, a surgical approach should be performed in Bosniak IV patients [1]. Partial nephrectomy (PN) represents the standard treatment for renal masses whenever surgically feasible, including cystRCC [1]. Nevertheless, although most studies have focused on PN for renal masses in general, the feasibility and safety of this treatment for cystic lesions have been poorly investigated [7-10]. The absence of positive surgical margins is one of the crucial trifecta endpoints for PN. An accurate resection avoiding rupture during enucleation of a cystic renal lesion is expected in order to prevent possible tumor seeding. However, there are no reliable data showing differences in DFS probabilities in the case of intraoperative cyst rupture. Pradere et al. reported a comparable estimated 5-year RFS between patients with versus without intraoperative cystic rupture (100% vs. 92.7%, respectively, p = 0.20) [19]. In the present study, a specific focus on cystic rupture is missing; however, we reported a more favorable 10-year oncological outcome of cystRCC versus pure ccRCC treated with off-C RPN. To the best of our knowledge, this is the first manuscript comparing these two variant histologies of renal tumors without relying on cross-sectional imaging and providing such a long-term follow-up. In a single-center retrospective experience comparing solid versus complex cystic renal lesions treated with robotic on-clamp PN, Raheem et al. reported, at a median follow-up of 58 months, a recurrence rate and cancer-specific mortality of 4.1% and 2%, respectively, for solid tumors, while cystic lesions displayed no recurrence and a 100% OS probability. Five-year DFS, CSS, and OS probabilities were comparable between the two groups [9]. Similar oncological outcomes were found by Zennami et al.
in a single-center retrospective study of robotic on-clamp PN for cystic and solid tumors, with comparable DFS, CSS, and OS (log-rank p = 0.18, p = 0.55, and p = 0.18, respectively) [10]. At 41 months of follow-up, the recurrence rate (defined as tumor relapse in the operative field or the presence of lymph node or distant metastasis) was 3.7% in the solid group, while no recurrence was detected for cystic tumors (p = 0.368) [10]. These data, drawn from retrospective and non-homogeneous series, weaken the reliability of the studies. Cystic lesions, representing approximately up to 18% of solid kidney neoplasms, exhibit discrepancies between radiological and pathological evaluations [1]. The updated Bosniak Classification of Cystic Renal Masses provides a framework for categorizing cystic lesions based on imaging characteristics [2]. In a large cohort of patients with radiographically confirmed cystic renal masses managed through active surveillance or intervention, Lee et al. revealed a notable discordance between radiographic and pathological designations: over 80% of radiographically Bosniak cystic lesions were not described as "cystic" on pathology reports [20]. Since critical questions were raised about the reliability of imaging-based classifications in predicting pathological characteristics, we reported the surgical and oncological outcomes of pathologically confirmed cystRCC treated with off-C RPN. Moreover, in a retrospective analysis exploring the 2019 version of the Bosniak Classification, Tse et al. underscored the challenge of risk stratification, specifically for class III and IV cystic masses. The prevalence of malignancy ranged from 56% to 61% for class III and was 83% for class IV, with subclassifications demonstrating varying malignancy rates [21]. In view of the reported excellent outcomes of these patients, a surveillance approach could be an alternative to surgical treatment in Bosniak III cysts, avoiding the risk of overtreating the 49% of tumors that are lesions with a low malignant potential [1,22,23]. Interestingly, the use of machine-learning algorithms to predict ccRCC growth rate classes based on MRI, or to identify malignant cystic lesions via a radiomic CT-based model, could be useful for the individualized management of renal tumors [24,25]. The present study is not devoid of limitations. The single-center experience, the small sample size, and the low reproducibility of the off-clamp approach outside a tertiary referral center are considered the main drawbacks. Although our experience has shown excellent long-term results, as mentioned previously [11-15,17], we acknowledge that the reproducibility of this technique may vary across different institutions. Further prospective, multicenter studies would be useful to define the best management of cystic renal tumors.

Conclusions
Off-clamp RPN is a suitable treatment option for cystic renal tumors. At a 10-year evaluation, cystic variant histology showed favorable oncological outcomes, while pure ccRCC displayed a higher rate of renal recurrence than cystic tumors. These findings

Table 1. Clinical and pathologic data of the whole cohort and after the propensity score (PS)-matched analysis. SD: standard deviation; M: male; F: female; N: number. * Student t- and Chi-square tests were used for continuous and categorical variables, respectively.
2024-05-29T15:14:51.091Z
2024-05-27T00:00:00.000
{ "year": 2024, "sha1": "728544feb480f83b27daf2bded598a9e76c7cab6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1718-7729/31/6/227/pdf?version=1716787800", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ccd59ec2a3dfbd7783a92e8f7c9d86936e138fda", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
244773003
pes2o/s2orc
v3-fos-license
Bumblebee: A Path Towards Fully Autonomous Robotic Vine Pruning

Dormant season grapevine pruning requires skilled seasonal workers during the winter season, who are becoming less available. As workers hasten to prune more vines in less time amid the short-term seasonal hiring culture and low wages, vines are often pruned inconsistently, leading to imbalanced grapevines. In addition, currently existing mechanical methods cannot selectively prune grapevines, and manual follow-up operations are often required that further increase production cost. In this paper, we present the design and field evaluation of a rugged and fully autonomous robot for end-to-end pruning of dormant season grapevines. The proposed design incorporates novel camera systems, a kinematically redundant manipulator, a ground robot, and novel algorithms in the perception system. The presented research prototype robot system was able to spur prune a row of vines from both sides completely in 213 sec/vine with a total pruning accuracy of 87%. Initial field tests of the autonomous system in a commercial vineyard have shown significant variability reduction in dormant season pruning when compared to mechanical pre-pruning trials. The design approach, system components, lessons learned, future enhancements, as well as a brief economic analysis are described in the manuscript.

Introduction
Pruning is a primary tool used by grape growers to manipulate vine size and shape, which helps to regulate crop load and maintain vine balance. Dormant season grapevine pruning involves removal of plant tissues in the form of spurs and excess one-year-old canes from the previous year's growth. It is a highly labor-intensive task that requires skilled workers during the winter season, who are becoming less available. As workers are paid per vine to prune, the short-term seasonal hiring culture often leads to workers rushing to prune more vines in less time. This leads to inconsistent pruning of vines that often results in over/under cropping and could take several years of careful mitigation to recover from and remain profitable (Bates and Morris, 2009). Different pruning strategies have been extensively studied in various grape growing regions and grape varieties to achieve sustainable vine vegetative and reproductive growth, often referred to as vine or vineyard balance (Howell, 2001). Some of the contemporary vineyard mechanization systems during the dormant season include (in sequence) mechanical pre-pruning, manual pruning follow-up, and mechanical shoot or fruit thinning to maintain vineyard balance (Bates, 2014). Grapes are the leading fruit by production volume in the United States (National Agricultural Statistics Service, 2019). The U.S. grape industry mainly consists of wine, table, juice, and raisin varieties that combined produced around 7.4 million tons in 2017 (National Agricultural Statistics Service, 2019), (Economic Research Service, 2016) and is currently valued at U.S. $6.6 billion. Despite its impressive growth in the last decade, the grape industry continues to rely on hand labor for many operations. Among the most labor-intensive and costly tasks in grape production are harvesting, pruning, cluster thinning, and equipment operation. Pruning is often labeled as one of the top three costly tasks and can take up a quarter of labor costs in the fruit production cycle (Johnson and CourtneyRoss, 2016).
According to the University of California Cost and Return Studies in 2017 (Fidelibus et al., 2018; Alston et al., 2018), table grape growers can annually incur operating costs of up to $18,000 per acre to generate income of about $30,000 per acre (Fidelibus et al., 2018). This amounts to approximately 45 percent of costs going to labor alone. The labor issue is projected to become more critical, both in terms of uncertainty in availability and increasing costs (Fennimore and Doohan, 2008; Calvin and Martin, 2010). These concerns about labor supply have promoted renewed focus and enhanced interest in mechanization and the use of advanced technologies to secure the long-term sustainability of the grape and fresh fruit industry in general. To reduce labor cost, vineyard mechanization research has played an important role in the grape processing industry in the U.S. The invention and adoption of the mechanical grape harvester in the early 1970s eliminated hand harvest as a labor issue in the grape juice industry. Research and development of mechanical pruning has continued since the mid-1970s (Morris, 2007), and it alone has further reduced labor costs. However, the lack of specificity in retained nodes causes the vines to be over-cropped (out of balance) with poor fruit quality (Bates, 2008), (Bates and Morris, 2009). This lack of selective pruning capability provides only a partial solution, as additional follow-up operations are often required to complete the task, further increasing production cost. Grapevines are perennial plants with indeterminate growth habits, so canopy structures are highly vigorous, and the entanglement of canes quickly leads to canopies that are too complex to analyze even for trained human eyes, let alone for computer vision algorithms. Thus, a robotic pruner as a follow-up operation after mechanical pre-pruning could be a pragmatic solution. This work presents a systematic approach to integrate robotic systems to fully automate hand follow-up operations after mechanical pre-pruning. Further, the profit margins for commercial vine production in general are low, and the quantity and quality of manual labor are declining while the costs of fuel and fertilizers are ever-increasing (Uzes and Skinkis, 2016). The development of automated robotic pruning as the mechanical pruning follow-up operation would further reduce labor costs and increase specificity in retained node quantity and quality. The growing region considered is located in western New York, one of the largest juice grape producers in the U.S. Our long-term goal is to develop a commercially viable and fully autonomous pruning system to reduce dependency on seasonal semi-skilled workers while improving productivity. The overall objective is to investigate robotic technology to significantly improve and stabilize the balance between vegetative and reproductive growth, which would yield better fruit quality and a predictable crop load. Our approach deviates significantly from the established paradigm in agricultural robotics in two major ways. Firstly, it is recognized that a grapevine training system that facilitates robotic technology in vineyards is key to successful implementation of autonomous and selective pruning of vines. Thus, the commercial vineyard in our study was specifically designed and is constantly modified to facilitate automation.
Second, the design of the proposed robot is multi-functional, with the capability to perform other tasks such as autonomous multi-sensor data collection throughout the growing seasons while remaining compatible with different varieties and canopy architectures of vines, which adds more novelty to existing systems and potential for commercialization. The remainder of this paper is organized as follows: Section 2 discusses prior work in this field and how the outcomes of previous research motivated some design selections. In Section 3, we describe the work environment modifications and basic viticultural terminologies for context. A key requirement for accurate perception for vine modeling and manipulation in complex outdoor environments was a robust, illumination-invariant camera system. The inclusion of such a camera system to measure thin vine structures in this systems integration work is based on our previous work. Similarly, the 3D computer vision pipelines to generate and process vine models are based on shortcomings of 2D methods reported in previous work by (Botterill et al., 2017a). The camera design considerations, along with the navigation and manipulation pipelines for robotic pruning, are detailed in Section 4. Section 5 reports the results and lessons learned from field-testing of the pruner in a commercial vineyard. Finally, some concluding thoughts and discussion on further improving the robustness of the current design are in Section 6.

Relevant literature
In the past several decades, the development and use of robotic systems for various agricultural tasks has been widely studied by the scientific community. Automated solutions for sowing, seeding, monitoring, or pest detection are widely documented in the literature (Gollakota and Srinivas, 2011; Diago and Tardaguila, 2015; Ebrahimi et al., 2017; Li et al., 2009). These are complex systems designed to work in unstructured environments and changing lighting conditions (Bac et al., 2014; Gongal et al., 2015). One of the most targeted applications of robotics in agriculture is harvesting of fruits and vegetables. In a recent work, (Bac et al., 2014) reviewed 36 different robotic projects completed between the years 1985 and 2012. All reviewed projects were developed for fruit or vegetable harvesting. Historically, the limiting factor in perception has been robustly detecting fruit under occlusion and uncontrolled natural illumination (De-An et al., 2011; Li et al., 2011), while removing fruit without damage and achieving picking speeds comparable to human pickers has been the major bottleneck on the manipulation side (Botterill et al., 2017b). Despite the obvious advantages of automated pruning and the underlying commercial benefit, automated pruning has not received much attention compared to harvesting. The lack of research interest and progress could be attributed to the complexity of the task itself. For harvesting applications, the target fruits are generally easy to reach, and simple point-to-point paths are enough without the need for collision avoidance (Botterill et al., 2017b). Pruning, on the other hand, presents significant challenges, as the system not only has to detect the canopy structure but also measure topological parameters such as the location and orientation of the cutting points on the branches (He and Schupp, 2018; Tabb and Medeiros, 2017a). As vine structures become more vigorous, the entanglement of multiple canes could easily become too complex to solve.
In the past, very limited attempts have been made to design and evaluate a full-scale robotic system for pruning vines. A robot system to spur prune grapevines was designed by Vision Robotics Corporation in 2015 (Vision Robotics Corp., 2020). This commercial prototype used a stereo camera to identify and localize cut-points on canes and an industrial robot arm to prune highly manicured vine structures. However, the performance characteristics and the design details on perception and manipulation are unknown and not publicly shared. A recent full-scale vine pruning prototype consisting of a robot arm, a multi-camera system, and an over-the-row supportive structure for controlled lighting was proposed by (Botterill et al., 2017b). This system generated models of vines for collision-free manipulation and autonomously pruned a row of a vineyard. Robotic systems designed to interact with plants, such as for pruning of dormant vines, require robust perception capabilities for motion planning and manipulation in unstructured environments. Before such interaction happens in robotic pruning, locating the pruning points is a necessary step, which is itself a challenging problem given that vines lack consistent structures in their natural form. To automate the process of pruning point detection in vines, (Corbett-Davies et al., 2012) presented an AI-based expert system. It was based on rules defined by a viticulturist and used 3D topological features of the tree such as length, curvature, angle, etc. in deciding whether to keep or prune the branches. Similarly, (Katyara et al., 2021) used a combination of mean predictive histograms of gradients and statistical pattern recognition with the K-means algorithm to classify pruning locations. These recent efforts used some form of optimization-based approach to identify pruning locations in vines. In pruning, the answer to where to make cuts is dictated by the pruning rules set by viticulturists. However, regardless of the pruning rule, the number of buds retained plays an important role, as the new parts of the vine (both vegetative and reproductive) emerge from the retained buds. To our best knowledge, only our work physically detects and associates buds individually to each cane for the pruning decision, not only to closely resemble manual pruning but also to prevent accidental over-pruning. Furthermore, to identify pruning locations in complex vine structures accurately, additional semantic understanding of the scene is required: for example, the segmentation of the canes from the vine structure and the precise measurement of important topological parameters such as bud distribution and cane lengths. Getting a detailed semantic map of plants in real time and consistently in the outdoors has always been a bottleneck in pruning, and in perception-based agricultural robotics in general (Kazmi et al., 2014; Houle et al., 2010). One of the major factors affecting consistency is the changing outdoor lighting conditions that affect image quality. Historically, to limit effects from changing outdoor illumination, researchers have relied on external structures with controlled lighting, as in (Marin et al., 2015; Botterill et al., 2013), (Vision Robotics Corp., 2020), where the background and illumination challenges were addressed by employing a wheeled platform with controlled lighting that completely covered the vines during imaging. The large platform had to be pulled along the rows at low speeds, resulting in a complex and slow application for pruning.
Similarly, (Kicherer et al., 2017) presented two different approaches to avoid uncontrolled lighting conditions and the presence of vines from another row in the background: first, manual segmentation of images using an artificial white background, and second, the use of a multi-camera system for depth reconstruction. A robotic system to measure tree traits by 3D reconstruction of a fruit tree in field settings was also presented by (Tabb and Medeiros, 2017b). They measured parameters such as branching structure, branch diameter, length, and angle with low mean square error, but required extensive computation time (more than 5 minutes per tree), making the method unsuitable for real-time in-field applications. In a similar application, (Tabb and Medeiros, 2018) presented a super-pixel based image segmentation method for semantic segmentation in field environments for tree reconstruction and apple flower detection. It also involved using a mobile background unit and capturing hundreds of images per tree. Furthermore, image-based cane segmentation with Gibbs sampling was used by (Marin et al., 2015) to recover the 2D structure of a dormant season grape plant from images. They also presented a quantitative comparison of their method with previous work on 2D cane structure extraction (Botterill et al., 2013). Although their method performed well in detecting cane segments, it suffers from low precision due to its inability to detect branching points, ending up with disjoint cane segments. Their system also relied on a customized background screen in the field to perform foreground-background segmentation. In a similar study, (Millan et al., 2019) presented an image-based cane segmentation method to assess pruning weight in a vineyard. They overcame the background segmentation challenge in the outdoor environment by using a white background to avoid the presence of vines from another row in the scene, and also by taking images at night without any background. Their research was focused more on background-cordon-trunk-cane segmentation for pruning weight assessment than on pruning point identification. To achieve consistent image exposure in any lighting condition, (Pothen and Nuske, 2016) used a high-resolution stereo sensor with flash to predict yield in vineyards from image-based counting of grapes. The use of flash imagery in this study generated images with uniform white balance that had minimal effects from natural illumination. Our design of the camera system in this paper is motivated by this work. In a follow-up study, we show that consistency in images not only facilitates classical computer vision algorithms but also tends to reduce the amount of data required to train deep-learning networks. Another aspect that makes a robotic system especially valuable in agricultural applications is its capability to navigate around the environment. Agricultural fields normally have off-road terrain where any vehicle has to drive in a safe, socially predictable, and in some cases energy-efficient manner. Challenges including noise in the sensors, loss of traction, and space constraints, among others, make this task especially complicated. Depending on the application and the type of crops, various strategies using perception, planning, and control have been studied to develop autonomous or semi-autonomous systems to drive in these scenarios (Mousazadeh, 2013; Bechar and Vigneault, 2016).
The perception subsystem normally uses data from cameras, laser range finders, inertial sensors (IMU), or GPS receivers to obtain information about the environment and localize the robot within a map, or uses SLAM algorithms (Abouzahir et al., 2018). With the continuous improvement in computation capabilities, machine learning approaches have become popular for this task, mainly using visual sensors (Chen et al., 2020). Additionally, a diverse group of planners and controllers has been designed and used to guide and command robots navigating in Ag settings (Papadakis, 2013; Ding et al., 2018). For example, in (Chen et al., 2020) a local planner was combined with a custom control law for an Ackerman vehicle driving in a hazelnut orchard. In this case, both the planner and the controller were designed to account for the kinematic constraints of the vehicle as well as the space restrictions that limited maneuverability. Beyond custom control laws, the stability, accuracy, and smoothness in navigation that predictive approaches provide make them especially suitable for agricultural applications (Ding et al., 2018). Furthermore, when the characteristics of the terrain strongly constrain the vehicle movement, predictive traction control strategies have arisen as a suitable solution (Sunusi et al., 2020). The mentioned perception, planning, and control approaches have been successfully implemented mainly for supervision and sensing tasks (Fountas et al., 2020). However, little work has been reported on the integration of an autonomous navigation system working alongside specific complex agricultural activities such as harvesting and pruning. In fact, the design and evaluation of a methodology for an integrated autonomous system capable of performing these tasks remains a gap in field applications. In summary, because of the very complex requirements in perception and actuation, extremely limited work and success have been seen in robotic pruning of grapevines, and in pruning in general. The existing prototypes rely on an external physical structure (an over-the-row platform) for acquiring images in the outdoors. This makes the robot's ability to turn, enter rows, and follow rows in agricultural terrain extremely challenging and less pragmatic. Most importantly, the rigid frame designs further limit compatibility with different varieties and canopy architectures, which could potentially limit commercial adoption. We believe that our rugged and modular robot, equipped with an illumination-invariant camera system and a novel approach to perception and manipulation, will lead to a pragmatic and economical solution for automated pruning.

3 Field environment and workspace modifications
The vineyard used for this study was located at the Cornell Lake Erie Research and Extension Laboratory in Portland, New York. Concord (Vitis labruscana, Baily) grapevines were own-rooted and planted in Chenango gravel-loam soil in 2012 at 2.6 m row by 2.4 m vine spacing and trained in a bi-lateral cordon architecture with an average cordon height of 1.8 m. This variety of grapevine has indeterminate growth habits that result in vigorous canopy structures with a high degree of cane entanglement, creating a work environment where even manual pruning becomes a cumbersome task. A standard way the industry eases labor intensity is by mechanically pre-pruning the vines. Mechanical pre-pruners such as the VMech pre-pruning head comb grape canes up or down (white fingers in Fig.
1, right) into reciprocating cutter bars (red vertical, Fig. 1, right) that have an adjustable mechanism to retain longer or shorter canes. This mechanical pre-processing step, although a non-selective process, greatly minimized the complexity of the work environment. Following this industrial standard, in our experiments we mechanically pre-pruned the vines with an OXBO VMech 1210 Tool Arm and Sprawl pre-pruner (Vmech LLC, Fresno, CA). The mechanical pre-pruner was attached to a tractor, manually driven along the rows, and calibrated to remove canes greater than five nodes long. The result of mechanical pre-pruning and the pre-pruning machine are shown in Fig. 1. Additionally, during our latest field trip (which was pushed towards the end of the pruning season because of the COVID-19 global pandemic), some of the vines that had started to show vegetative growth were trimmed to retain their original dormant shape by removing the new shoots. For context, the following brief definitions describe the vine canopy and commonly used terminologies in viticulture (see Fig. 2).
• Bud: A growing point that develops in the leaf axils, often regarded as a compressed shoot.
• Shoot: New green growth developing from a bud.
• Cane: A matured long, woody shoot after leaf fall.
• Node: The bulged part of a cane where buds are attached.
• Cordon: The main lateral expansion of the trunk that supports shoots, canes, and fruits.
• Pruning rule: A set of rules that define a systematic way to remove older canes from grapevines.

Methods
This section describes all components of Bumblebee. First, we describe the mechanical design of the robot, which includes a prismatic base to increase the reachable workspace, and then the design of the end-effector to prune vines. Second, we detail the perception pipeline, covering the camera system, the 3D reconstruction of vines from multiple views, and a novel 3D cane segmentation algorithm. The rest of the section elaborates on the motion planning, navigation, and systems integration components.

Manipulator
An unrestrained rigid body in 3D space has 6 Degrees of Freedom (DoF), described by the three translations and the rotation angles about the three independent axes (Donald, 1984). In theory, a robot arm with at least 6 DoF is required to achieve any pose in the workspace. In practice, this capability is severely limited by factors such as singularities, self-collisions, and collision models of the environment, to name a few. However, in kinematically redundant mechanisms, the desired motion of the tool-end or end-effector can be accomplished in an unlimited number of ways. In the design of the robotic manipulator for pruning dormant season vines, we extend the 6 DoF of a UR5 robot arm to 7 DoF by adding a prismatic joint to the base, as shown in Fig. 3, left. Since the DoF of the manipulator is greater than the Degrees of Constraint (DoC), our proposed design is kinematically redundant and offers several advantages. First, the kinematic redundancy physically allows the end-effector to achieve any combination of orientations required to reach pruning locations in a complex and unstructured work environment. Second, the motion of the robot arm is restricted by multiple constraints, such as joint limits and end-effector poses. These restrictions further narrow the convergence of the Inverse Kinematics (IK) and motion planning algorithms.
Having redundancy greatly increases the odds of finding possible solutions and improves the convergence time and accuracy of these algorithms. Lastly, the span of the prismatic base drastically increases the reachable work envelope of the arm, as Fig. 3, right shows. This feature was specifically designed to give the system the ability to reach the entirety of the vine from a single stationary position, without having to move the mobile base to new locations to work on the same vine. Thus, the possibility of inducing errors in both the manipulation and perception pipelines, caused by the motion of the ground robot and the repetitive sensing of the same environment, is greatly reduced. The design of the redundant P6R open-chain robot arm is shown in Fig. 3, left. Although posing joint restrictions limits the full range of motion of each joint, constraints are important factors in motion planning. To control unnatural, unachievable, or unnecessary motions, several joint limits and constraints were set, as described in Table 1. A virtual wall (behind the linear base in Fig. 3, left) imposed restrictions on the motion planner, as target locations were always in front of the arm. This resulted in the "elongated hemispherical" shape of the work envelope for the 7-DoF arm.

Table 1. Joint limits and constraints (values used during the experiments): wrist joints limited to -3.14 to 3.14 rad (Wrist 2 and Wrist 3 listed explicitly); workspace volume 3.95 m^3; end-effector weight 0.5 kg (1.2 lbs).

End-effector
Pruning grapevines requires making precision cuts at specific locations on the canes. However, before such cuts could be made, it was important to understand the mechanical properties of the canes, especially the force required to cut dormant canes, for the proper design of the pruning end-effector. Due to numerous environmental factors such as soil properties and access to nutrients and water, vines exhibit wide variation in the length and diameter of dormant canes (Bates, 2017). In this experiment, we selected 30 samples of Concord canes with a wide range of sizes and ages to quantify the force required for a successful cut. On average, the cane diameter of the collected samples varied from 5 mm to 11 mm. While the freshly cut live canes showed the presence of a hard outer shell with soft internal tissues, dead samples exhibited relatively harder, shrunken, and dry internals and required a higher force to cut. The experimental setup for this quantitative experiment is shown in Fig. 5 (left). The mechanism of using hand-held shears to cut canes (like scissors) operates under the principles of a first-class lever and involves applying a normal force on both handles at the same time. The experimental setup in Fig. 5 (left) simplifies this requirement to just one normal force by fixing one of the handles to a rigid surface. An incremental load was then applied at the top of the movable handle to cut the cane samples, which were set at a fixed distance from the pivot and placed orthogonal to the cutting plane. The total weight (load), along with the mechanical advantage of the lever at the abscission point, then provided the cutting force. This experiment was repeated on all sample canes. On average, a 320 N force (at the abscission point) was required to cut a typical cane with an 8 mm diameter. In Fig. 5 (right), a fairly linear relationship (R^2 = 0.74) can be seen between cane diameter and the normal force required for cutting.
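As a concrete illustration of fitting such a force-diameter line, the sketch below uses an ordinary least-squares fit; the sample values are hypothetical stand-ins for the 30 measured canes, not the paper's data.

```python
import numpy as np

# Hypothetical cane-cutting measurements (diameter in mm, force in N);
# stand-ins for the 30 samples described in the text, not the real data.
diameter_mm = np.array([5.0, 6.2, 7.1, 8.0, 9.4, 10.2, 11.0])
force_n = np.array([180.0, 225.0, 270.0, 320.0, 380.0, 415.0, 460.0])

# Ordinary least-squares fit of force = a * diameter + b.
a, b = np.polyfit(diameter_mm, force_n, deg=1)

# Coefficient of determination (R^2) for the linear fit.
residuals = force_n - (a * diameter_mm + b)
r2 = 1.0 - residuals.var() / force_n.var()
print(f"force = {a:.1f} * d + {b:.1f} N, R^2 = {r2:.2f}")
```

A fit like this gives both the design load for the cutting mechanism and a quick check of how linear the diameter-force relationship really is.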
A popular choice among professional pruners is bypass pruning shears. This variety of pruning shears has blades that completely "bypass" each other for precise cuts and clean separation of the canes. Motivated by this pragmatic feature, the design of the end-effector includes a similar bypass mechanism (Fig. 4). In this custom-designed end-effector, one end of the scissors was fixed and bolted to the frame of the end-effector, whereas the other end was movable and actuated with a combination of a high-torque (8 Nm) servo motor and floating pulleys. The floating pulley mechanism transferred power from the motor to the blades with a 200 lb (90.72 kg) fishing line. This simple machine system also added a mechanical advantage (M.A.) of 3 and increased the overall factor of safety nearly 8-fold. The combination of lightweight materials, a simple machine, and a high-torque servo motor ensured a small (0.45 kg) yet powerful end-effector that fell well within the payload capacity (5 kg) of the robot arm. The small footprint and weight of the end-effector were also critical, not only to allow wide ranges of accelerations while executing motion trajectories but, most importantly, to achieve full horizontal extension of the manipulator into the canopy, which could have been unattainable with heavier and larger end-effectors.

Perception
The overall perception pipeline to perceive the vines and identify the pruning locations is shown in Fig. 6. The major steps in the pipeline involve acquiring static images from fourteen viewpoints, point cloud registration using the ICP algorithm, bud detection with Faster-RCNN, and cut-point detection with 3D region growing-based cane segmentation and a graph search algorithm.

Camera system
The performance of computer vision algorithms in the outdoors greatly depends on factors such as motion blur and changing illumination. Among others, abrupt changes in lighting conditions can alter image quality, which can lead to large data requirements for machine learning-based computer vision algorithms to compensate for the variance in images. To minimize such effects, the camera system in our design uses active lighting that greatly reduces the effects of varying environmental lighting while maintaining consistent image exposure. The detailed description, functionality, and advantages of the active-light camera system are described in our previous work. The camera system in this paper uses two of these cameras (Fig. 12) in a top-bottom stereo configuration, moved along the linear slide to image the vines from different viewpoints. The bottom stereo provided front views from a plane parallel to the vines, whereas the top stereo provided tilted views from a higher elevation to include occluded parts deeper in the canopy not visible from just the front view. Altogether, the 3D information from multi-view geometry enabled us to generate accurate and complete 3D models of complex vine structures. The design of the dual stereo camera with the linear base is shown in Fig. 3 (left), and a sample image set of a vine with and without active lighting is shown below in Fig. 15.

Dense 3D reconstruction
One of the critical pieces of perceptual information required for autonomous pruning is precise and accurate 3D modeling of the vines. Once the 3D model is generated, the analysis of the plant geometry and topology can be automated, which also becomes essential for obstacle detection and, ultimately, for detecting the pruning locations.
To generate a dense 3D point cloud of the vines, the dual stereo camera imaged vines at seven precisely set positions along the linear slider. Given two point clouds, fixed (F) and moving (M), where F = {f_1, f_2, ..., f_n}, f_i ∈ R^3 and M = {m_1, m_2, ..., m_n}, m_i ∈ R^3, the Iterative Closest Point (ICP) algorithm finds a rotation matrix R and a translation vector t such that the error between the transformed M and F point clouds is minimized. We take advantage of the precise initial transformation, or correspondence, between F and M provided by the linear sliding mechanism and calculate R and t in closed form. For this point cloud registration process, we experimented with the non-linear version of the point-to-plane ICP by (Fitzgibbon, 2003). This variant of ICP provides more accurate estimation in cases where consecutive point clouds have different densities and exact correspondences are sparse (Fitzgibbon, 2003). Although point-to-plane ICP takes more time per iteration due to the added cost of point cloud normal computation, it usually converges in fewer iterations compared to classical ICP. The point-to-plane ICP in our pipeline iteratively estimates R and t to minimize the distance between every point m_i and the tangent plane at its corresponding point f_i. A tangent plane is represented by its unit normal n_i, computed over a small 3D neighborhood of 50 points in the vicinity of the point f_i. To reduce computation time, the size of each point cloud from the camera was reduced using voxel grid sampling, and outliers were removed using statistical outlier filtering prior to ICP registration. The governing nonlinear equation (Eqn. 3) minimizes the summed squared point-to-plane distances: (R, t) = argmin Σ_i ((R m_i + t - f_i) · n_i)^2.
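A rough sketch of this registration step in code follows; Open3D is an assumed library choice (the paper does not name its implementation), and the parameter values are illustrative rather than the authors' settings.

```python
import open3d as o3d

# Sketch of point-to-plane ICP registration between two consecutive
# stereo point clouds (assumed Open3D; parameters are illustrative).
def register(moving, fixed, init_T):
    # Downsample and remove statistical outliers before registration.
    moving = moving.voxel_down_sample(voxel_size=0.005)
    fixed = fixed.voxel_down_sample(voxel_size=0.005)
    fixed, _ = fixed.remove_statistical_outlier(nb_neighbors=20,
                                                std_ratio=2.0)

    # Point-to-plane ICP needs normals on the fixed cloud; the text
    # computes them over 50-point neighborhoods.
    fixed.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=50))

    # init_T is the precise initial transform given by the linear slide.
    result = o3d.pipelines.registration.registration_icp(
        moving, fixed, max_correspondence_distance=0.02, init=init_T,
        estimation_method=o3d.pipelines.registration
                             .TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 rigid transform encoding (R, t)
```

Chaining the returned transforms across the seven slider positions would merge all views into one dense vine model.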
Bud detection
A vine node usually consists of several buds (also known as a compound bud) and has primary, secondary, and tertiary backups. During the growing season, if the primary bud is damaged for reasons such as external injury, frost bite, or other environmental factors, the vine sequentially releases each remaining backup to replace the damaged/fallen bud. These buds are relatively small and randomly positioned on the node, which makes them harder to detect in images and 3D models. Thus, despite the presence of multiple buds in a single node, detecting and counting buds as nodes is a reasonable approach. Moreover, this assumption is valid considering the fact that the secondary and tertiary buds generally bear an insignificant amount of fruit compared to the primary (Hellman, 2019). In this work, the task of detecting buds is accomplished by detecting nodes. A node, as described in Section 3, is the bulged part of the cane that is more visible and has distinct features compared to the rest of the vine. From this point forward, we will use the terms bud and node interchangeably. Detecting buds is a critical step in the pruning process of grapevines. Pruning rules such as cane and spur pruning, which are popular in commercial settings, involve retaining a certain number of buds per cane (King, 2021). Thus, accurate counting of buds is extremely important for autonomous pruning of dormant grapevines. To count buds, we leverage the robustness of deep learning-based 2D object detection in the color images of the vines. We used the Faster-RCNN object detection network (Ren et al., 2015) to detect buds in one image from each stereo pair (top and bottom). For training, we used transfer learning and initialized the network weights with the pre-trained ImageNet model before fine-tuning the network on our custom dataset. The dataset consisted of 120 hand-labeled images of buds collected prior to the field experiments. Although the number of images in the dataset seems small, the number of bud instances per image was significantly larger; on average, 45 bud instances were present per image. The detected buds in the 2D images (top and bottom) were then projected into 3D space using the camera intrinsic parameters, producing sparse point clouds of the buds. This operation occurred in parallel with the point cloud registration process discussed in the section above and utilized the optimized ICP transformations for final registration. The combination of the registered vine and bud point clouds (hereafter referred to as the input point cloud, PC_i) completed the 3D vine modeling process. The bud-detection network dataset details and training parameters are listed in Tables 2 and 3, respectively. Figure 7 shows sample detections and the 3D projection of the buds in the top camera image.

Obstacle detection
Using the dense 3D point cloud and non-linear point-to-plane ICP registration, we were able to consistently generate precise and clean point clouds of vines with buds that could be directly used for manipulation tasks. To avoid damage and reduce contact between the robot arm and the vine and its rigid support structure, it was necessary to define obstacles. In this work, only the central trunk with the metal post and the horizontal cordons were taken as obstacles, as contact with these rigid structures could potentially cause serious damage. However, as canes are relatively flexible and move when pushed, contact between these soft objects and the robot arm was allowed to facilitate motion planning (see Section 4.3.1 for more details). Occupancy grid maps are popular choices for defining occupied versus free space in a robot's workspace. To define the cordon and trellis wire as obstacles, a RANSAC algorithm (Derpanis, 2010) fitted two (vertical and horizontal) lines in the 3D model of the vines (Fig. 7, right). As seen in Fig. 7 (left), the new vine architecture has a vertical metal post to support the trunk and a horizontal trellis for the cordon to extend laterally. The existence of these features in the point cloud greatly benefited the RANSAC algorithm in precisely and consistently fitting 3D lines in all vines used in our experiment. These fitted 3D lines were then the only elements taken as occupied space in the Octomap occupancy grid mapping algorithm (Wurm et al., 2010) (Fig. 7, right).
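The line-fitting step can be sketched as a basic RANSAC loop over 3D points; this is an illustration under assumed iteration counts and inlier thresholds, not the authors' implementation.

```python
import numpy as np

# Illustrative RANSAC fit of a single 3D line to vine points (a sketch
# of the trunk/cordon fitting step; thresholds are assumed values).
def ransac_line(points, n_iters=500, inlier_dist=0.01):
    best_inliers, best_model = 0, None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        # Sample two distinct points to define a candidate line.
        p0, p1 = points[rng.choice(len(points), size=2, replace=False)]
        d = p1 - p0
        norm = np.linalg.norm(d)
        if norm < 1e-6:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = int((dist < inlier_dist).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (p0, d)
    return best_model  # (point on line, unit direction vector)
```

Running it twice, once on points near the vertical post and once near the horizontal cordon, would yield the two lines that are then rasterized into the Octomap grid as occupied space.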
Region growing for cane segmentation
In general, the purpose of region growing algorithms is to merge adjacent data points based on a region membership criterion. In 3D point cloud space, this criterion could be a smoothness constraint, resulting in points with similar smoothness profiles being clustered together. In our case, we use a region growing algorithm for cane segmentation by clustering 3D points belonging to the canes. As our 3D structures of interest are thin canes, they do not have planar surfaces suitable for normal computation, so a normal-based smoothness constraint cannot be used. Hence, we propose a novel region growing-based segmentation method that utilizes local structural properties of the vines. We used Singular Value Decomposition (SVD) (Stewart, 1993) on a small neighborhood of points of the vine's point cloud to understand the local structure of the vine in that small area. If there was only one dominant vector after SVD, the local neighborhood has a linear shape. If the number of dominant vectors was 2 or 3, the local shape is planar-like or sphere-like, respectively. Using this local shape information, we were able to segment the linear portions of a cane from cane-cordon or cane-cane intersection regions, which have a non-linear 3D distribution in the local neighborhood. We started by randomly selecting a 3D projected bud location as a seed point. A set of points around the seed location was then extracted using a radial neighborhood search operation. This search radius (a hyperparameter) was pre-determined empirically based on the average distance between two consecutive buds on canes. Then, SVD calculated the singular vectors and values from the set of neighborhood points. For the extracted set of input points N = (P_i) ∈ R^(m x 3), the output singular values (non-zero diagonal elements) were M = (S_i) ∈ R^(1 x 3). The normalized singular values were then passed through a Support Vector Machine (SVM) that learned to classify patterns in the singular values (Fig. 8) as cane or intersection region. Based on the inference from the SVM, the set of extracted points was then labeled as part of a cane or of an intersection region between cane and cordon. This operation was iterated over all bud locations, and the steps are listed in Algorithm 1. The training data for the SVM involved 120 manually picked samples of cane and intersection regions. About 70% (84 samples) were used for training and the remaining 30% for test and validation.

Pruning rule
Pruning rules define a systematic way to remove older canes to keep vine vigor and balance in control. Cane pruning and spur pruning are two of the most practiced pruning strategies in the U.S. grape industry. One major difference between these two rules is the number of buds retained after pruning. Cane pruning usually retains longer cane segments with variable counts of buds per cane, while spur pruning leaves a fixed but smaller number of buds per cane. Algorithmically, cane pruning requires the ability to track buds along longer sections of canes, which could be a very difficult task because of the high degree of entanglement between the canes. On the other hand, spur pruning has simpler requirements, and the pruning locations are close to the cordons. For a proof-of-concept robotic pruning of vines, we adopted a simplified spur pruning rule that retains only 4 buds per cane. In addition to bud retention, pruning rules also involve qualitative parameters such as cane diameter and the health of canes and buds. In this proof-of-concept design, we considered all canes for pruning and list the inclusion of qualitative parameters as a future enhancement.

Cut-point detection
Once the points belonging to just the canes are segmented from the rest of the vine structure, the next step in the pipeline is to identify pruning locations. One possible approach could be to further segment clusters of multiple canes into individual clusters and process each cane individually. However, an additional segmentation or clustering step has the potential to introduce more uncertainty, as segmentation and clustering processes are not perfect in themselves. So our approach deviates from this logic and processes all segmented canes at once using a graph-based approach. The SVM model explained in the previous section labels not only cane regions in the 3D models, but intersection regions between canes and cordons as well.
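A minimal sketch of the local-shape feature and classifier used in the region-growing step above follows; scikit-learn, the radius value, and the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the local-shape feature: normalized singular values of a
# radial neighborhood around a seed point (assumes >= 3 neighbors).
def shape_feature(points, seed, radius=0.05):
    nbrs = points[np.linalg.norm(points - seed, axis=1) < radius]
    centered = nbrs - nbrs.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)  # 3 singular values
    return s / s.sum()  # one dominant value => locally linear (cane)

# An SVM then classifies the 3-vector as "cane" vs "intersection";
# train_X/train_y stand in for the 84 hand-picked training samples.
def train_classifier(train_X, train_y):
    return SVC(kernel="rbf").fit(train_X, train_y)
```

Iterating `shape_feature` outward from each projected bud location, and labeling each neighborhood with the trained SVM, reproduces the grow-and-label loop the section describes.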
Cut-point detection

Once the points belonging only to canes were segmented from the rest of the vine structure, the next step in the pipeline was to identify pruning locations. One possible approach could be to further segment clusters of multiple canes into individual clusters and process each cane individually. However, an additional segmentation or clustering step has the potential to introduce more uncertainty, as segmentation and clustering processes are not perfect in themselves. Our approach therefore deviates from this logic and processes all segmented canes at once using a graph-based method. The SVM model explained in the previous section labels not only cane regions in the 3D models, but the intersection regions between canes and cordons as well. The algorithm described in this section essentially uses the cane-cordon intersection regions to solve the graphs. To identify pruning locations, the foremost requirement in our pipeline was to convert the segmented cane-bud point clouds into an undirected acyclic graph G. The first step in this process involved the use of an octree data structure to voxelize the cane clusters. The octree data structure recursively subdivides 3D point clouds into octants or voxels until a minimum voxel size is reached (Wurm et al., 2010). Here, an octree with a resolution of 5 cm was used to voxelize the extracted cane point cloud. Once the voxelization process was completed, the centroid of each voxel was extracted as a sub-sample of the canes. This was necessary to limit the size of the graph and keep computation time as low as possible. Subsequently, to generate the graph, a 3D kernel (Fig. 9) traversed the equally spaced octree centroids of the canes. With each step, the sixteen-neighbor kernel assigned vertices and edges to the voxelized cloud. The vertices and edges that contained buds were assigned to special sets of vertices V_B and edges E_B. The post-processing of the graphs mainly involved the removal of loops using the minimum spanning tree (MST) algorithm, a greedy approach that removes cycles in weighted graphs while picking the smallest-weight edges. To preserve the edges belonging to bud positions, the set of special edges E_B was assigned smaller numerical weights, whereas the rest of the edges were initialized with larger weights. This simple step inherently preserved all bud vertices and edges as the global cost of the graph was minimized. Figure 9 demonstrates this logic. Pruning rules require the correct ordering and numbering of buds per cane, which in our case required properly assigning each bud to its respective location on the cane. This process essentially involved converting the undirected graph into a tree-like data structure by assigning directions in the graph G. With the cane-cordon intersection regions assigned as root nodes in the graph, a depth-first search (DFS) algorithm first computed all possible paths to the leaf nodes. As canes have random and complex 3D structures with or without branching, the paths from the root node to the leaf nodes could have multiple overlapping routes (Fig. 9). To suppress this ambiguity, a similarity score was computed on all the paths generated by the DFS algorithm using the sequence matching algorithm from (Jeh and Widom, 2002). This similarity score essentially quantified path overlaps between the root nodes and the end points. A threshold of 0.9 (90% similarity) was used to discard redundant routes in cases where multiple routes were found between the root node and the leaf nodes on the same cane. On these unique paths, newly discovered buds were sequentially ordered with respect to the root node, and the pruning points were identified using the pruning rule described in the previous section. In our approach, the reduced canopy complexity from pre-pruning greatly facilitated this heuristic-based bud association algorithm. However, in complex vines with multiple crisscrossing canes, a more robust approach might be necessary.
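The bud-preserving loop removal can be expressed compactly with networkx; in the sketch below, the graph g and the special edge set E_B are assumed to come from the kernel traversal described above:

```python
import networkx as nx

def bud_preserving_mst(g, bud_edges, small_w=0.01, large_w=1.0):
    """Break cycles while keeping the special bud edges E_B: bud edges get
    small weights so the minimum spanning tree retains them when the global
    edge cost of the graph is minimized."""
    for u, v in g.edges():
        is_bud = (u, v) in bud_edges or (v, u) in bud_edges
        g[u][v]["weight"] = small_w if is_bud else large_w
    return nx.minimum_spanning_tree(g, weight="weight")
```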
Once the pruning locations were identified, the next step in the pipeline was to compute their pose (position and orientation). A full pose was required, as the cutting tool needs to approach the bud with a certain orientation to successfully make the cut. To calculate the cut-point orientation, we projected cane segments as 3D vectors onto the three perpendicular planes. Here, a 3D vector is defined by the 3D coordinates of the Nth and (N+1)th bud sections of each cane, projected onto the XY, YZ, and ZX planes. The angles made by the projected line with the respective planes then provide the roll, pitch, and yaw angles with respect to the reference frame. The midpoint of the vector was taken as the pruning location. This process is depicted in Fig. 10 (left).

Motion planning

The kinematic redundancy offered by the 7-DoF manipulator only reaches its full potential when the motion planning algorithm can incorporate all degrees of freedom in its planning context. The mechanical design of the robot (Fig. 3 left) uses a 6-DoF off-the-shelf robot arm with a custom-built prismatic base. With few modifications, the ROS MoveIt-based planner (Chitta et al., 2012) can be redesigned to plan joint trajectories for an integrated 7-DoF arm, while at the hardware level custom software drivers split the trajectories in real time and control all joints synchronously. To plan motion between all cut-points, we chose RRT-Connect (Kuffner and LaValle, 2000), available in the OMPL library integrated with ROS, which utilizes the entire 7 DoF. The RRT-Connect motion planner provides collision-free motion planning, which fits the requirements of this proof-of-concept prototype well (e.g., real-time operation, obstacle avoidance, and constrained movement). Additionally, in a comparative study of motion planners for robotic pruning by Paulin et al. (2015), RRT and its variants were found to have overall better performance. Once the poses of the pruning locations were computed, the path of the end-effector to each goal position was divided into two discrete sets of trajectories. The first set was computed using the RRT-Connect solver and used to make an initial approach to the cutting point, positioning the tool 15 cm ahead of the cut-point (Fig. 10 right). Once the tool was in this location, the end-effector was commanded to orient itself perpendicular to the branch containing the pruning point. Subsequently, to accurately position the cane between the cutting blades, the motion planning scheme switched to a Cartesian path planner. This Cartesian path planner (also from the OMPL library) was constrained to maintain the orientation of the end-effector while inching slowly toward the final pose in a straight line. Then the blades closed and opened to mark the end of a successful cut operation. This process proved effective in most of the experimental cases; however, it is open-loop, as the robot does not have any real-time feedback during the final approach to the cutting point. In Section 6, we discuss some of the ways to close this loop to enhance the robustness of the system.
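In moveit_commander terms, the two-phase approach might look like the sketch below; the group name, planner id, and pose variables are placeholders, and this is an illustration of the scheme rather than the project's actual code. Note how the `fraction` returned by the Cartesian planner exposes exactly the partially achievable trajectories discussed in Section 6:

```python
import moveit_commander

group = moveit_commander.MoveGroupCommander("arm")  # group name is a placeholder

# Phase 1: sampling-based plan (RRT-Connect) to a pre-cut pose 15 cm
# short of the cut point, end-effector oriented toward the cane.
group.set_planner_id("RRTConnectkConfigDefault")
group.set_pose_target(pre_cut_pose)   # geometry_msgs/Pose, computed as above
group.go(wait=True)

# Phase 2: orientation-constrained straight-line approach to the final pose.
plan, fraction = group.compute_cartesian_path(
    [final_cut_pose], eef_step=0.005, jump_threshold=0.0)
if fraction == 1.0:                   # full interpolated path achievable
    group.execute(plan, wait=True)    # then actuate the cutting blades
```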
One way to define obstacles for motion planning could be to take the entire vine structure as an obstacle and force the planning algorithm to find solutions for all pruning locations. However, motion planning with collision detection and avoidance can be computationally expensive, especially in the unstructured and complex environment of dormant vines. The random arrangement of canes in the robot's workspace as obstacles could result in failure to converge to a solution or, as seen in practice, generate trajectories that result in erratic movements of the arm. To avoid such situations, collisions between the arm and the canes were allowed, whereas the trunk, trellis, and cordon, which are more structured and easier to identify, were treated as obstacles. In addition, once a cutting action was executed, the arm always retracted backward to the initial pose (similar to the pose shown in Fig. 3 right) before planning the path to the next pruning location. This process not only generated natural-looking motion but, most importantly, provided more open space for the "connect" heuristic in the RRT-Connect planner to generate effective planning queries and solutions (Kuffner and LaValle, 2000). Despite these systematic steps to supervise cautious motion of the arm, contacts with canes remained undesirable. To further minimize contact with canes, the task of sequencing the pruning locations for cutting operations was optimized as a travelling salesman problem (TSP). The nearest-neighbor heuristic-based TSP exhaustively evaluated all possible cutting route combinations, prioritizing pruning locations closer to the end-effector over those farther away while also minimizing the total travel distance. Although the TSP is an NP-hard problem, the small set of pruning locations per vine made the exhaustive optimization feasible in a short duration.

Navigation

The RTK-GPS receiver mounted on the vehicle, wheel encoders, and the robot's onboard IMU (Inertial Measurement Unit) were the main localization sensors in the navigation system. An Extended Kalman Filter (EKF) fused the IMU, wheel odometry, and RTK position to localize the robot in the real world with the RTK base as the reference frame. All RTK-GPS waypoints for autonomous navigation, including the locations of the vines, were collected manually. All vines selected for pruning in this work were located in the same vineyard row (Fig. 11); however, the current architecture of the vines required us to address the same vine from both sides of the row. Thus, the main task for the navigation system was to drive the robot down the aisles, accurately turn and enter the aisle on the other side of the same row, while stopping at each vine selected for pruning. To navigate between vineyard rows, we used a Model Predictive Controller (MPC) (Allgöwer and Zheng, 2012) to follow GPS waypoints. The MPC controller first connected all waypoints using a spline to produce a smooth global path along with curvatures and speed profiles for the complete route. Subsequently, the RTK-GPS points corresponding to the pruning sites were read, and a local path from the current robot position to the next vine to be pruned was generated. Additionally, the planner also calculated local speed profiles (based on the global profile) along with the deceleration and acceleration required to smoothly and accurately stop in front of each trunk and start moving again. This approach allowed us to define parameters such as acceleration and deceleration ramps or cruise velocity, providing a complete plan that minimized jerky motion that could potentially damage the robot's components or the crops. For a system with state x and control input u at time t, the general discrete form of the MPC controller used is shown in Eqn. 4:

$$\min_{x(\cdot),\,u(\cdot)} \;\; \sum_{k=0}^{N-1}\Big(\|x_k - x_k^{\mathrm{ref}}\|_Q^2 + \|u_k\|_R^2\Big) + \|x_N - x_N^{\mathrm{ref}}\|_{Q_f}^2 \quad \text{s.t.}\;\; x_{k+1} = f(x_k, u_k),\;\; s(x_k, u_k) \le 0 \tag{4}$$

Here, N is the time horizon, f(·) is the robot motion model, s represents the path constraints, and Q, R, Q_f are the symmetric, positive (semi-)definite weighting matrices.
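A minimal linearized instance of Eqn. 4 can be prototyped with cvxpy; the dynamics, weights, and limits below are toy values for illustration, not the vehicle's actual motion model or tuning:

```python
import cvxpy as cp
import numpy as np

N, nx, nu = 10, 3, 2                           # horizon, state (x, y, yaw), inputs
A = np.eye(nx)                                  # toy linear dynamics (assumption)
B = np.vstack([np.eye(nu), np.zeros((1, nu))])
Q, R, Qf = np.eye(nx), 0.1 * np.eye(nu), 5 * np.eye(nx)
x0, xref = np.zeros(nx), np.array([1.0, 0.5, 0.0])

x = cp.Variable((nx, N + 1))
u = cp.Variable((nu, N))
cost = sum(cp.quad_form(x[:, k] - xref, Q) + cp.quad_form(u[:, k], R)
           for k in range(N)) + cp.quad_form(x[:, N] - xref, Qf)
constraints = [x[:, 0] == x0]
constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] for k in range(N)]
constraints += [cp.norm(u[:, k], "inf") <= 1.0 for k in range(N)]  # control limits
cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[:, 0])   # first control of the receding-horizon plan
```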
The span of the navigation system for autonomous pruning only required traveling a distance of two rows (approx. 0.3 km or 0.19 miles) and included lane following, stopping between vines, turning, and re-entering. To adequately evaluate the autonomous navigation system, we also tested the self-driving capability in the entire block (1 mile / 1.6 km). In this larger navigation experiment, the robot skipped a row and entered a new row with every turn, without stopping, as depicted by the yellow path in Fig. 11. Overall, during this experiment the robot navigated 10 rows and made 9 U-turns autonomously.

Robot platform

The rugged ground robot (Warthog, Clearpath Robotics Inc.) fitted with a custom aluminum extrusion frame provided the base platform for the gantry system and the field server. The standalone integrated system with all perception, manipulation, and navigation components and hardware is shown in Fig. 12. All electrical components, including the computers, RTK-GPS, and cameras, were powered by the ground robot's battery, except for the arm, which was powered by a portable 1000-Watt gas generator for the AC control box. The edge field server ran on an Intel Xeon E5-2687W v4 processor with 32 GB of RAM and an NVIDIA GeForce GTX 1080 GPU for deep neural network inference. All software was packaged for ROS Kinetic under an Ubuntu 16.04 LTS 64-bit Linux environment. A local NTP server was used to sync the clocks of all sensors and computers for accurate temporal operations in ROS. One complete pruning cycle consists of several tasks, including navigating to the vine position, scanning and 3D modeling, identifying cut points, and executing motion plans to physically remove canes from vines. For efficient high-level coordination and execution of multiple tasks, we used a finite state machine (FSM), as shown in Fig. 13. The states in the FSM were navigation, perception, manipulation, and error. Depending on the status of the sub-modules within each state, the FSM transitions between states following a pre-defined sequence for autonomous high-level control of the robot until all vines are pruned. Additionally, for robustness, each of the sub-processes within the states was equipped with internal error sub-states to self-diagnose software-level issues and pause all operations for manual intervention in the event of hardware or unknown issues.
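For illustration, this high-level control logic can be captured in a few lines; the sketch below uses the four states from Fig. 13, with transition events that are our own simplification rather than the actual implementation:

```python
# State names from Fig. 13; events and transitions are illustrative.
TRANSITIONS = {
    "navigation":   {"ok": "perception",   "fail": "error"},
    "perception":   {"ok": "manipulation", "fail": "error"},
    "manipulation": {"ok": "navigation",   "fail": "error"},  # on to the next vine
    "error":        {"resume": "navigation"},                 # after manual intervention
}

def step(state, event):
    """Advance the FSM; unknown events leave the state unchanged."""
    return TRANSITIONS[state].get(event, state)

state = "navigation"
for event in ("ok", "ok", "ok"):   # one full pruning cycle on one vine
    state = step(state, event)
print(state)                        # back to "navigation"
```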
In order to evaluate all the systems that comprise Bumblebee, four datasets were employed. The first one was used to train and evaluate the bud detector. From this dataset, we randomly selected 5 samples (vines) to evaluate the reconstruction completeness and the region growing algorithm. The quality of the overall point cloud generated with the ICP approach was assessed using a single vine imaged under field conditions. Finally, a total of 20 vines from a single row in a commercial vineyard were selected for pruning. These vines were pre-pruned with a mechanical pre-pruning machine to reduce the vigor and simplify the cluttered work environment. We also provide a brief analysis of how this non-selective process reduces the complexity of the vines, enabling our system to perform precise pruning. The methodology employed for all these tests, as well as the results obtained, are described in the following sections. As detailed in Section 3, to simplify the canopy complexity, the vines were mechanically pre-pruned with an OXBO VMech 1210 Tool Arm and Sprawl pre-pruner (Fig. 1 right).

Pre-pruning

In the twenty field vines, we manually counted all canes as well as the number of buds per cane to evaluate the pre-pruning that set the stage for the robotic operation. In total, 268 canes were present in these vines, with an average of 13 canes per vine. After the pre-pruning operation, we observed that 25% of the canes had exactly 4 buds, 35% had more than 4, and 40% were over-pruned with fewer than 4 buds per cane. Here, "4 buds" is used as a reference, as the simplified spur pruning rule adopted in this study only required retaining 4 buds per cane. The distribution ranged from 1 to 14 buds per cane as a result of the non-selective mechanical pre-pruning operation. This large variation in the bud distribution is the variable we aim to minimize with our robotic pruning system, and the results are presented in the following subsections. As seen in Figs. 14 and 20, the pre-pruning step not only greatly reduced the vigor of the vines and the length of each cane, but also reduced the number of canes that still required pruning (35% of the total). Additional statistics include 1122 buds counted in total across all vines, with a standard deviation of 2.08 about the mean of 4 buds per cane.

Perception and Reachability

Perceiving the environment is usually one of the earliest tasks for autonomous robots. In our case, detecting dormant buds in 2D images, projecting them into 3D coordinates, and generating the point cloud of the entire vine were some of the initial steps in the perception pipeline. As any inconsistency in bud detection or significant error in 3D reconstruction could strongly affect subsequent processes such as estimating pruning locations and, ultimately, pruning, ensuring robust perception capabilities was crucial.

Camera system

The active lighting camera described in Section 4.2.2 was adequate to efficiently suppress effects from natural illumination. As a result, the camera system was able to produce images with consistent exposure (i.e., quality) in all lighting conditions present during the experiments. Figure 15 shows images of the same vine taken at different times of day, with and without flash. The first image (Fig. 15 left) was taken with the robot camera in typical broad daylight, whereas the remaining two were taken with the active light camera (Fig. 15 center at the same time as Fig. 15 left, and Fig. 15 right at night). Qualitatively, Fig. 15 (center and right) look alike, as the flash from the camera in conjunction with a fast shutter speed overpowered daylight effects. Quantitatively, the Structural Similarity Index Metric (SSIM) of images taken at various times of day with the proposed camera system showed an average similarity of 90%. The SSIM is a quality metric that embeds structural as well as contrast and luminance information as quality parameters (Wang et al., 2004). As mentioned in Section 4.2.3, the bud detection network was trained on 85 images and evaluated on 35. Consistency in image quality is considered a major contributor to requiring such a small amount of data to train our deep object detector. For instance, in Fig. 15 (center and right), both images appear similar in exposure, color consistency, and background subtraction regardless of outdoor illumination. The P-R curve of the trained network is shown in Fig. 16. Additionally, we obtained a mean average precision (mAP) of 0.93 on the test dataset.
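The SSIM comparison reported above is straightforward to reproduce; a minimal sketch with scikit-image is shown below (the file names are placeholders):

```python
from skimage import io, color
from skimage.metrics import structural_similarity as ssim

# Two flash images of the same vine taken at different times of day.
a = color.rgb2gray(io.imread("vine_noon.png"))    # placeholder file names
b = color.rgb2gray(io.imread("vine_night.png"))

# rgb2gray yields floats in [0, 1], hence data_range=1.0.
score = ssim(a, b, data_range=1.0)   # ~0.9 on average in our experiments
print(score)
```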
A thorough analysis of the image quality from our camera system and of the reduction in training dataset size needed to fine-tune deep object detectors is given in our previous publication.

Vine reconstruction

To quantify the uncertainty in depth measurements from the cameras, depth measurements from stereo pairs were compared against highly precise and accurate laser measurements (±0.1 mm resolution). This process involved comparing stereo point measurements from 30 different locations on a single vine at different depths against laser point measurements at the same locations. The point measurements under comparison ranged from 0.2 m up to 1 m, which also represented the reachable span of the robot arm. As displayed in Fig. 12, the dual stereo rig images each vine from 7 different positions, producing 14 different views. The overall point cloud registration process was largely facilitated by the precise and accurate movement of the linear slider. As motion in all directions other than along the slider was mechanically constrained, the initial estimates of the point cloud transformations prior to ICP optimization were very accurate. With such initial estimates, the point-to-plane ICP optimally computed the transformations between successive point clouds as well as the final point cloud registration described by Eqns. 1 and 2. Here, the ICP registration error is defined as the absolute difference between the ICP-optimized translation and the actual distance travelled by the camera while imaging at different positions (measured with the encoder of the motor drive of the linear slide). The mean absolute error between these measurements averaged ±2 mm. With uncertainties from individual stereo measurements and multiple ICP registration steps, the final accumulated registration error was estimated to be within ±6.8 mm. As explained in Section 4.1.2, the average diameter of the canes was 8 mm and the widest blade opening was 38 mm (Fig. 4). Therefore, the accuracy achievable with the 3D reconstruction pipeline was well within the tolerance of the end-effector. All accuracy analyses were done on a mock-up vine in a laboratory setup. This was necessary mainly to rule out effects from wind, which could alter both the ground truth and the stereo measurements. All laboratory tests used the same camera system as the field prototype and a real vine collected from the test site.
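A pairwise registration step of this kind can be sketched with Open3D (recent releases expose ICP under o3d.pipelines.registration); the parameters below are illustrative, and `init` stands for the slider-derived initial transform:

```python
import open3d as o3d

def register_pair(source, target, init, max_dist=0.01):
    """Point-to-plane ICP between two successive stereo point clouds.
    Point-to-plane estimation requires normals on the target cloud."""
    target.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # refined 4x4 transform
```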
Reconstruction completeness

In addition to reconstruction accuracy, the completeness of the 3D model also plays a crucial role in the overall success of autonomous pruning. For instance, largely fragmented cane structures and missing buds in the model could strongly impact the cut-point detection algorithm (see Section 4.2.7), which ultimately affects the pruning efficiency. In the literature, the quality of point clouds is usually assessed using objective and subjective metrics (Karantanellis et al., 2020). Objective metrics usually compare point clouds to a reference or to well-defined objects in the scene (Karantanellis et al., 2020; Moon et al., 2019; Zhang et al., 2018), whereas subjective evaluations are based on visual inspection and usually involve completeness, density, etc., as factors in point cloud assessment (Karantanellis et al., 2020). However, because of the lack of consistent structures in vines, reference or ground-truth point clouds are difficult to generate and were not available for a comprehensive comparison. To assess the quality of the point clouds from our dual stereo camera system, we therefore used the following objective metrics:

• Number of points: total number of points in the registered point cloud.
• Number of neighbors: average number of points within the search radius of a sphere with r = 0.05 m.
• Surface roughness: average distance between each point in the point cloud and the best-fitting plane through its neighbors within the search radius of a sphere with r = 0.05 m.
• Surface density: average number of points per square meter.
• Volume density: average number of points per cubic meter.

Furthermore, we compared these objective metrics on 5 vines between three models: one reconstructed using only the bottom camera (BC), one reconstructed using only the top camera (TC), and the registered point cloud using both the top and bottom cameras (TBC). The results are shown in Table 4. As expected, the TBC model has more points, more neighbors, and higher surface as well as volume density when compared to the BC and TC models. However, the surface roughness of the TBC mostly remained between that of the TC and BC point clouds.

For subjective evaluation, we define two metrics to quantify the subjective quality of the reconstructed vine structures. The first metric, connected components, attempts to quantify the completeness of the vine structure as a function of the connectivity of the octree graphs described in Section 4.2.7, essentially exploiting the connected-components property of graphical structures. For a 3D model without any significant gaps, we would anticipate a single or very few connected components, whereas a fragmented/incomplete reconstruction would result in a large number of connected components. The second metric, bud counts, involves the number of buds in the reconstructed model. As with the objective metrics above, we compared the TC, BC, and TBC models on the 5 test vines in the subjective evaluation as well. The results show that the number of incompletely formed canes was significantly reduced in the TBC model compared to the TC and BC models (6 vs. 17 vs. 19, respectively). Similarly, the buds, which were essential features for cane segmentation, were significantly more numerous in the TBC model than in the TC and BC models (425 vs. 306 vs. 311, respectively). In total, the TBC model had a mean absolute percentage error (MAPE) of 5.11% in bud counts, whereas the TC and BC models had significantly higher MAPEs of 25.23% and 23.37%, respectively, when compared to the manually counted ground truth. Likewise, the R² correlation between manual bud counts and the final registered bud counts was highest for the TBC model compared to the TC and BC models. Table 5 summarizes these details on the 5 vines for the TBC, BC, and TC models. The high level of completeness in the reconstruction of the vines is attributed to the additional (elevated and slanted) views of the canopy provided by the top camera. Evidently, while the point clouds from the multiple bottom views provided the majority of the vine structure, the top camera data filled gaps in the most occluded regions. Some of the fragments/disconnected canes in the TBC model were mainly stray canes from adjacent vines, as no two consecutive vines had a well-defined separation.
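The connected-components metric can also be approximated directly on the registered cloud; the sketch below uses a radius graph built with SciPy instead of the octree graph of Section 4.2.7, which is a simplification on our part:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def n_connected_components(points, radius=0.05):
    """Completeness proxy: count connected components of the radius graph
    over the registered cloud (fewer components = more complete model)."""
    tree = cKDTree(points)
    pairs = np.array(list(tree.query_pairs(r=radius)))
    n = len(points)
    if len(pairs) == 0:
        return n                      # every point isolated
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    n_comp, _ = connected_components(adj, directed=False)
    return n_comp
```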
Region growing

The validation of the region growing-based point cloud segmentation algorithm has two parts: i) the accuracy of the SVM classifier, and ii) the resulting overall accuracy of cane segmentation. For the first part, we measured the performance of the SVM as a binary classifier separating SVD-decomposed values into cane and non-cane regions. To evaluate this, we manually selected 536 sample points from 5 different vines (273 cane vs. 263 non-cane regions). Out of the 536 random test samples, the SVM correctly classified cane/non-cane regions with an F1 score of 0.97. For the second part, we conducted a point-to-point comparison between the hand-labeled cane point cloud and the region-growing-segmented point cloud. As hand labeling of complex vine structures is resource intensive, we limited the segmentation evaluation to 5 vines. The analysis shows that the cane segmentation pipeline achieved an overall F1 score of 0.91. The confusion matrices for the overall cane segmentation and the SVM-based individual region classification are shown in Fig. 17.

Cut point localization

To validate the localization accuracy of the cut points (between the 4th and 5th buds), we kept track of all the canes (in the 20 vines) with bud counts exceeding 5. In all canes with a sufficient number of buds, the algorithm's estimates of the cut positions were compared against manual labels. Although the midpoint between the nth and (n+1)th buds was used as the 3D location of the cut point, any location between the two buds was taken as a valid solution, since it was more important to correctly associate the bud sequencing. The algorithm achieved an average accuracy of 94% across all manually selected pruning locations.

Workspace Quantification

A Monte Carlo experiment was carried out to estimate the volume of the robot's workspace. The idea was to sample points in the joint position space to estimate the overall reaching capability of the 7-DoF robot for one vine. These tests were performed in a simulated environment, using a point cloud model of the field vines. In total, nearly 200,000 end-effector positions were collected as samples of the reachable positions in the workspace. The volume enclosing all the positions reached by the end-effector was estimated by fitting a convex hull model implemented in MATLAB. This experiment was repeated for two cases: with the 6-DoF arm fixed to the center position of the linear slide, and with the 7-DoF counterpart fully articulated, allowing the prismatic base to move (Fig. 18 left). As expected, the results showed that the 3D work volume of the 7-DoF configuration (3.5 m³) was more than twice that of the 6-DoF design (1.6 m³). Similarly, because of the current architecture of the vines, this experiment also showed that if the mobile base is close enough to the canopy, an average of 68% of the canes are within the reachable workspace of the manipulator, while the remaining 32% have to be addressed from the other side. A graphical representation of the number of points in the workspace and the number of reachable locations in the vine structure from the center of the vine is shown in Fig. 18. In this figure (Fig. 18 left), the dashed lines represent the total reachable points in the workspace, whereas the solid lines are the reachable points in the vine structure.
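A SciPy equivalent of this Monte Carlo estimate is sketched below; the forward kinematics here is a toy stand-in for the real 7-DoF chain, so the resulting numbers are purely illustrative:

```python
import numpy as np
from scipy.spatial import ConvexHull

def toy_fk(q):
    """Toy stand-in for the 7-DoF forward kinematics: a prismatic base
    (q[0]) plus a 2-link planar arm (q[1], q[2]) lifted by q[3]."""
    x = 0.4 * np.cos(q[1]) + 0.4 * np.cos(q[1] + q[2])
    y = q[0] + 0.4 * np.sin(q[1]) + 0.4 * np.sin(q[1] + q[2])
    z = 0.3 + 0.2 * np.sin(q[3])
    return np.array([x, y, z])

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, size=(20_000, 4))
q[:, 0] = rng.uniform(0.0, 1.0, size=20_000)   # slider travel in meters

pts = np.apply_along_axis(toy_fk, 1, q)        # sampled end-effector positions
volume = ConvexHull(pts).volume                # workspace volume estimate (m^3)
print(volume)
```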
Pruning

To measure the overall effectiveness of the presented robotic pruner, we introduce several metrics to evaluate its performance. First, Total Pruning Accuracy (TPA) quantifies the robot's ability to prune successfully at the right pruning locations. Equation 5 defines TPA as:

$$TPA = \frac{\text{Total valid cuts}}{\text{Total pruning locations}} \tag{5}$$

Similarly, Total Pruning Cycle (TPC) is the average time required to prune each vine, as described in Eqn. 6. This metric linearly combines the computation cost of all sub-processes in the perception, planning, manipulation, and navigation systems. The computation timing breakdown of all major sub-operations in the TPC is shown in Fig. 19.

$$TPC = T_{\text{perception}} + T_{\text{planning}} + T_{\text{execution}} \tag{6}$$

Figure 19: Total computation timing breakdown.

Fig. 19 shows the TPC for single-side pruning and the significant sub-processes within the perception, planning, and execution stages. In total, it took 137 sec to prune a vine from one side. The current vine training system allowed canes to be randomly distributed on both sides of the canopy, and nearly 32% of the canes were on the opposite side, outside the reachable workspace of the robot. For this reason, the robot had to repeat all operations from both sides of the canopy (i.e., from point cloud model generation to motion planning and execution), which increased the TPC to 213 sec/vine. The variability caused by non-selective pre-pruning is shown in Fig. 20. After this operation, the standard deviation of the bud distribution per cane was found to be ±2.08. Based on the statistics from Section 5.1 (blue data lines), only 95 out of 268 canes (35%) needed to be pruned. Under the assumption of ideal perception and manipulation capabilities, where all pruning locations are detected and pruned, the best achievable standard deviation would be 0.97 (Fig. 20, red data line). In reality, because of discrepancies in the pruning point detection and motion planning/execution pipelines, not all canes were consistently pruned. Out of the 95 prunable canes, only 83 were successfully pruned, yielding a TPA of 87% (Fig. 20, green data line). However, even with 87% TPA, the standard deviation decreased to ±1.03, which is a significant reduction in variance given that the pre-pruning step over-pruned 45% of the prunable canes. In Section 6, we further discuss the sources of error and potential improvements to the current system. The factor that ultimately determines the success criterion for a pruning robot is its ability to remove canes. In other words, all the steps in perception and motion planning leading up to the final execution of the cutting action become significant only if the target cut-point gets successfully cut. Commonly used metrics such as the TPA described above only quantify the ratio of success or failure in completing the pruning tasks. To incorporate the effects of intermediate steps and to better describe the overall performance of the pruning robot, we introduce a new metric called Total Pruning Efficiency (TPE). The TPE is a multiplicative combination of several efficiency terms that, at a high level, include perception (3D registration, cut-point detection), motion planning, and execution efficiencies, as shown in Eqn. 7:

$$TPE = \eta_{\text{registration}} \cdot \eta_{\text{localization}} \cdot \eta_{\text{execution}} \tag{7}$$

where

$$\eta_{\text{registration}} = \eta_{\text{bud detection}} \cdot \eta_{\text{3D reconstruction}}, \qquad \eta_{\text{localization}} = \eta_{\text{cane segmentation}} \cdot \eta_{\text{cut-point identification}}, \qquad \eta_{\text{execution}} = \eta_{\text{planning}} \cdot \eta_{\text{execute}}$$

In this metric, all efficiencies are accuracies or success rates normalized to a number between 0 and 1. For instance, the bud detection efficiency is essentially the accuracy of detecting buds, where η_bud detection = 0.95 represents 95% detection accuracy compared to ground truth values (see Table 5). Similarly, η_planning = 0.95 represents a 95% success rate in the motion planner's convergence to a solution.
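The three metrics reduce to a few lines of code; the efficiency values in the example below are illustrative placeholders rather than the measured entries of Tables 5 and 6:

```python
import numpy as np

def tpa(valid_cuts, total_locations):
    """Eqn. 5: Total Pruning Accuracy."""
    return valid_cuts / total_locations

def tpc(t_perception, t_planning, t_execution):
    """Eqn. 6: Total Pruning Cycle (seconds)."""
    return t_perception + t_planning + t_execution

def tpe(stage_efficiencies):
    """Eqn. 7: product of stage efficiencies, each normalized to [0, 1]."""
    return float(np.prod(list(stage_efficiencies.values())))

print(round(tpa(83, 95), 2))   # 0.87, as reported in the field trial
print(round(tpe({"bud_detection": 0.95, "reconstruction": 0.97,
                 "segmentation": 0.91, "cut_point": 0.94,
                 "planning": 0.99, "execute": 0.85}), 2))  # placeholder values
```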
As shown in Table 6, the TPE is especially valuable for narrowing down system bottlenecks as well as for justifying different design choices.

Figure 20: A variability plot showing the distribution of buds per cane for all field vines after pre-pruning, under the assumption of ideal robotic pruning, and as achieved with the robotic pruner. The error plot on the right side shows the standard deviation of the bud distribution about the mean bud count.

Table 6 summarizes the TPE for various possible combinations of hardware, software, and pruning strategies. The first row, labeled "all inclusive", incorporates all system components described in this paper. Here, the TPE amounted to 0.64, even with high accuracies in the registration and localization pipelines but with relatively low manipulator execution efficiency. In single-side pruning, we only considered pruning a vine from one side. Although the η_registration and η_localization efficiencies remained similar, pruning only from one side mainly affected the η_execute efficiency, as nearly 32% of the pruning locations were out of reach. This decreased the TPE significantly, to 0.30. Considering the full point cloud model of the vines as an obstacle mainly affected the motion planning and execution efficiencies. With more occupied space in the robot's workspace, the sampling-based planner (RRT-Connect) took a significant amount of time, as well as more attempts, to converge to a solution. Significant delays were also observed in the motion execution, and most joint configurations looked unnatural and complex. The TPE in this case was 0.47, with a TPC of 177 sec. With just one stereo pair, we observed the most drastic effect on the TPE: as it affected the perception stage at the beginning of the pruning cycle, the error propagated to the localization and motion execution stages, and the TPE was only 0.17. Finally, the nearest-neighbor-based TSP optimization exhaustively minimized the total point-to-point distance travelled while visiting all pruning locations. Without the TSP, the longest possible cut-routes increased the TPC by nearly 20%, while the rest of the efficiencies remained similar. The single-side, full-vine-model-as-obstacle, single-stereo-model, and no-TSP-sequencing cases were analyzed in a simulator, virtually pruned from a single side, based on the data of the same vines collected in this study.

Figure 21: Cross-track and heading errors of the path followed by the vehicle when compared with the desired trajectory generated by the planner.

Navigation

As mentioned in Section 4.5, the robot drove to each pruning location, remained stopped while pruning, and then started moving again to the next vine location. Given that the vineyard row was only 1.8 m wide and the mobile robot with the linear slider and the arm was approximately 1.2 m wide, maintaining a consistent distance from the canopy and remaining parallel to the rows were critical requirements. The autonomous navigation system was first tested on the accuracy of stopping at each pruning location. We used 20 vines whose positions were marked prior to the trial using the RTK-GPS. During the test, the robot stopped as expected at all locations, and the average position of the robot while stopped was used to calculate its distance to the desired vine location.
The longitudinal root mean squared error obtained in this case was 0.28 m, which was acceptable given that the pruning task was accomplished successfully on all 20 vines. Laterally, we observed average cross-track errors of 0.07 m for in-row navigation and maximum deviation errors of 0.29 m, which mainly occurred while turning. The average heading angle error was 10.76 degrees, and when stopped, the robot remained parallel to the canopy. This positioning facilitated the 3D reconstruction and motion planning algorithms for the pruning task, as all the pruning points were horizontally equidistant from the cameras for imaging and from the arm for actuation. We also evaluated the capability of the autonomous navigation system in a larger section of the vineyard with multiple rows. To this aim, the robot was commanded to drive, skipping one row, along the yellow route depicted in Fig. 11. The total distance driven was approximately 1571.13 m at an average speed of 0.58 m/s, and the robot drove autonomously for approximately 45 minutes with no intervention. In both trials, heading angle and cross-track errors similar to those in the pruning section were observed.

Discussion

In general, robots that interact with their environment pose very challenging problems. For robotic pruning in particular, the biologically driven surroundings and indeterminate growth habits of vines add further challenges in perceiving and interacting with the environment. It took us three years of effort, with two hardware revisions and numerous software modifications, to achieve the results reported in this paper. This section summarizes the key lessons learned, capabilities and limitations, future enhancements to our existing system, and some remarks to guide further research. The first requirement in the perceptual capabilities of the pruning robot is accurate and complete 3D models of vines. With multiple views (fourteen different viewpoints), the scan-match-based 3D reconstruction approach was able to generate precise models of the vines. In the generated 3D models, canes, which are thin structures with diameters as small as 4 mm, were clearly visible with few fragments in their structure. The top slanted camera was a necessary addition, which greatly helped to minimize missing information in the occluded regions by adding point clouds from views that were not visible to the front-facing camera. Thus, with adequate overlap from multiple viewpoints, complete 3D reconstruction of the vines was possible from just one side of the canopy. However, this approach not only required frequent stereo calibration but also manual tuning of several parameters to maintain a relatively consistent size of the registered point clouds for real-time processing. Modern commercial vineyards typically have consistent row width and vine spacing, and are equipped with mechanical means to simplify vine complexity at scale. These factors made it possible to tightly control the field experiments, such as maintaining a constant distance between the robot and the vines, to achieve consistent results even with heuristically chosen parameters. In this study, complex vine structures were simplified by mechanical pre-pruning. This step facilitated not only the perception pipeline but the overall pruning operation. Despite the heuristic-based choices of multiple parameters, the 3D reconstruction method appears to be applicable to uncut, highly vigorous, and cluttered vines (see Fig. 22).
However, it can be argued that such vines could have higher occlusion, which could lead to incomplete or missing canes and affect the TPE. This limitation could be handled with an in-hand camera system to explore regions of high occlusion and iteratively add missing links. Recently, deep learning-based point cloud registration (Elbaz et al., 2017) has shown promising results in registering noisy point cloud data without accurate initial alignments and could potentially eliminate frequent calibration and initialization requirements. However, such a supervised approach could require larger training samples to achieve good results. The need for large datasets to train deep learning-based computer vision models is a bottleneck in the specialty crop industry and in agriculture in general. The combination of a vast number of cultivars and the variation within those varieties makes the collection and maintenance of labeled datasets for supervised machine learning extremely challenging. The consistency in image exposure and color achieved with the active light camera proved to greatly reduce the variance caused in images by ambient lighting. As a consequence, the training sample size was reduced multiple-fold while achieving bud detection results comparable to models trained with larger datasets. Public datasets with 3D plant models from proximal sensors are even rarer in agriculture. As hand-labeling a large 3D dataset to segment canes from the rest of the vine was very resource intensive, we refrained from state-of-the-art deep networks and opted for classical machine learning with an SVM. With the combination of singular value decomposition and an SVM as a binary classifier, the region growing algorithm was robust at segmenting dormant canes. Training the SVM model required a relatively small set of hand-engineered features, and generating the training data and training the model itself could be done in a few minutes. Throughout our three years of development and testing, we have only needed to train the model once, and it seemed to work equally well on simple as well as complex vine structures. The region growing algorithm essentially exploits the nature of vines: it utilizes the buds that are naturally present in vines as seed points for growing regions in the segmentation process. As all vine varieties have canes with buds, we expect this algorithm, with minor tweaks, to be adaptable to most vine architectures with a relatively small amount of data for retraining. This could be significant, as vines are high-value crops and the vine industry plays a major role in the specialty crop sector. When pruning a vine, professional workers only keep canes that are healthy and within a certain diameter range. These quality attributes of canes are currently not included in our work. Another key limitation of our current computer vision pipeline is the potential effect of wind. For the latter, we observed average wind speeds of up to 12 miles/hour (mph) (5.3 m/s) and some gusts up to 25 mph (11 m/s). As vines in commercial vineyards are rigidly supported by metal posts (vertically) and trellis wires (horizontally), small wind gusts (up to 12 mph or 5.3 m/s) seemed to have minimal effect on the 3D reconstruction of pre-pruned vines. However, higher wind speeds could arguably cause significant issues in the registration process for any scan-match-based approach.
This is especially true for vines with longer canes (not pre-pruned) and regions farther away from the rigid trunk and cordon supports. To minimize such effects, we selected spur pruning, which retains a smaller number of buds per cane, so that the cut-points were closer to the supportive structures and minimally affected by wind. For other pruning rules, such as cane pruning, where longer sections of cane need to be retained, wind effects could be a significant problem. As for cane quality, we currently consider all canes healthy and viable. Although measuring cane diameter is relatively straightforward from stereo images, assessing cane quality/health would require additional sensing capability. More advanced camera systems, such as hyperspectral or thermal imaging technologies, in conjunction with end-to-end deep networks, could potentially provide robust solutions. The addition of a prismatic base to the kinematic chain of the 6-DoF robot arm added several advantages. The motion planning and execution stages, with initial and final approaches to the pruning locations, were designed to generate natural-looking motion of the arm and to interact cautiously with the vine structure. The initial planner was RRT-Connect, which positioned the end-effector 15 cm from the final destination; it converged to a solution in almost all cases (99%). However, the shortcomings on the manipulation end in this work are mainly attributed to the Cartesian path planner, where the full interpolated trajectory was sometimes not achievable. Furthermore, in some cases, ROS's built-in Cartesian planner generated jerky motions that caused the tip or side of the end-effector to push the cane rather than securing it between the cutting blades. The current state of the art for controlling robot arms in complex environments, and a major interest of the research community, is reinforcement learning (RL). Similar to deep learning in computer vision, deep reinforcement learning policies tend to provide end-to-end solutions to manipulation tasks and could likely reduce the dependency on heuristically set parameters and behaviors for pruning. Furthermore, deep RL enables more sophisticated capabilities that are not possible with existing sampling-based methods, such as learning to prune from expert demonstrations, which could generalize pruning across different vine varieties and architectures. Our recent efforts in using RL policies to control a robot arm for pruning are described in a separate publication. As described in Section 5.2.6, to prune the remaining 32% of the canes, the robot had to repeat all processes from the other side. Although a robot arm with a longer reach could solve this issue, vine architectures with a uniform cane distribution on a single side and well-defined vine-to-vine separation would be advantageous. Furthermore, viticultural practices are critical for reliable robotic pruning operations. Cane and spur pruning are the main pruning methods adopted by the industry. However, to maintain balance between yield, quality, and vegetative growth, accurate estimation of vine size is necessary. The estimation of vine size, which is often done by pruning weight estimation (Milkovich, 2021), determines the number of buds to retain per vine. This method of pruning is formally referred to as balanced pruning, as the amount to prune is based on the capacity of the individual vine (Milkovich, 2021). None of the existing robotic prototypes, including this work, have implemented such a strategy.
However, our approach of using bud detection and a pruning strategy that retains a fixed number of buds per cane sets us on the right path toward achieving balanced pruning. Finally, we selected an MPC controller for navigation in this work mainly for the following four reasons: i) it has produced good results in autonomous navigation for a variety of vehicles and driving conditions (Sakhdari and Azad, 2018; Amer et al., 2017); ii) its formulation naturally allows constraining the optimization problem to obtain the desired practical results; iii) it produced smoother navigation in terms of overshooting and cross-track error when compared with approaches like pure pursuit in prior tests; and iv) as it is a model-driven strategy, we can increase its complexity by including the dynamics of the vehicle or other variables in future research. It is worth noting that the second point was particularly useful for this application, as we limited the control effort to obtain maneuvers that reduced the risk of damaging the surrounding vegetation. Although plain GPS waypoint following seemed robust for this application, the inclusion of local sensing for navigation, as well as safety-critical features such as obstacle detection and avoidance and compliance with farm vehicles and field workers, is currently under development.

Conclusions

In this work we presented a combination of tools, techniques, and system development details of an autonomous vine pruning robot acting as a follow-up pruner. Highly vigorous Concord vines in a commercial vineyard were mechanically pre-pruned to ease robotic operations. The focus here was not only to develop a system mostly utilizing off-the-shelf hardware components for a proof-of-concept prototype, but, above all, to understand what it takes to robotically prune grape vines. The key technical challenges that we addressed in this work were robust imaging capability outdoors and data-efficient machine learning models for processing vine structures. The illumination-invariant camera system proved to be a valuable component, as consistent image data were acquired under any lighting condition. This also led to fewer training samples being needed for detecting buds in images and eased 3D reconstruction. Results from the field study show that even complex vine structures can be accurately modeled from single-side imaging. The integrated system robustly identified pruning locations and pruned 87% of the canes successfully, with an average cycle time of 213 sec/vine from two sides and 137 sec/vine from one side. Improved pruning efficiency will require more robust manipulation and advanced sensing capabilities to assess cane health and vine size for balanced pruning. The mechanical design with the redundant manipulator was sufficient to address a single vine and could have multiple uses throughout the growing season, such as selective shoot thinning and harvesting.
Paternal care in rodents: Ultimate causation and proximate mechanisms

The evolution of paternal care in rodents has intrigued biologists for decades. In this paper, both ultimate (adaptive significance, evolution) and proximate (ontogeny, mechanisms) questions related to the emergence and maintenance of paternal care are reviewed. Paternal care is thought to be a consequence of social monogamy, but no definitive hypothesis adequately explains the evolution of paternal behavior in rodents. The onset, activation, and maintenance of paternal care are shown to be governed by complex interactions in neuroendocrine systems that change during ontogeny. Depending on the species, different components of male experience as well as different exogenous cues are likely to be involved in the organization and activation of paternal behavior. Several hormones, including steroids (testosterone, estradiol, progesterone) and neuropeptides (prolactin, vasopressin, oxytocin), are involved in the onset, the maintenance, or both the onset and the maintenance of parental behavior, including direct paternal care. The effect of testosterone was found to be not universal but species-specific. As for estrogens and neuropeptides, further investigations are needed to better understand the role of these hormones in the activation and maintenance of rodent paternal behavior. Current research shows that male parental care in rodents is, to a great extent, an epigenetic phenomenon, and future studies will focus on the epigenetic modifications that can affect paternal behavior in rodents.

How to cite this article: Gromov V.S. 2020. Paternal care in rodents: ultimate causation and proximate mechanisms // Russian J. Theriol. Vol.19. No.1. P.1–20. doi: 10.15298/rusjtheriol.19.1.01.

Introduction

Male parental care is relatively rare among mammals (Kleiman, 1977; Kleiman & Malcolm, 1981), because males typically are 'emancipated' from care of young and have the first opportunity to seek additional mates (Orians, 1969; Trivers, 1972; Maynard-Smith, 1977; Clutton-Brock, 1991). Moreover, males would forfeit potential reproductive success if they increased their parental effort in any one female's young at the expense of lost mating opportunities (Kurland & Gaulin, 1984). Nevertheless, male care of young does exist in some mammalian species, including rodents, which is why there is considerable recent interest in the evolution of male parental care. In rodents, male parental care is typical of biparental species. This kind of male reproductive strategy is generally associated with social monogamy and can involve such behaviors as warming, feeding, protecting, retrieving, and grooming young, depending on the species (Kleiman & Malcolm, 1981). Paternal care is related to a reduced likelihood of engaging in competitive or mating behavior and an increased likelihood of providing protection when necessary. Paternal behaviors include direct care of young (warming, huddling, retrieving, and grooming) as well as indirect care-giving activities (nest-building, provision of food, defense of the offspring against predators or infanticide); of course, males need to suppress their own infanticidal behavior in the presence of pups (Elwood, 1977; Perrigo et al., 1991; Vella et al., 2005; Wynne-Edwards & Timonin, 2007). In some rodent species, males show levels of direct parental care comparable to those of females (Elwood, 1983; Dewsbury, 1985; Brown, 1993; Gromov, 2011a).
The nature of paternal care, both within a species and among different species, exhibits phenotypic plasticity: it is shaped by ecological conditions, environmental factors, neural constraints, and species-specific social interactions (Westneat & Sherman, 1993; Reynolds et al., 2002; Royle et al., 2014; Rosenbaum & Gettler, 2018). In this article, the proximate and ultimate factors affecting rodent paternal behavior will be considered.

Ultimate causation of male parental care

As paternal investment is very costly in terms of reduced survival and fewer breeding and mating opportunities (Clutton-Brock, 1991), the question is why paternal care evolved in those species where it is observed. It is suggested that male parental care will only evolve when there is environmentally induced selection for care, and males are capable of improving offspring survival and development to such an extent that the benefits of paternal investment outweigh the costs of lost mating opportunities (Emlen & Oring, 1977; Kleiman & Malcolm, 1981; Gubernick & Teferi, 2000; McGuire, 2003; Wynne-Edwards, 2003; Feldman et al., 2019). Male parental care appears to have evolved multiple times among different taxa of rodents (Kalcounis-Rüppel & Ribble, 2007; Lukas & Clutton-Brock, 2013), which means that various environmental factors may operate as selective forces promoting the evolution of paternal care in different species. Paternal behavior is not restricted to specific phylogenetic lineages; hence, it has evolved within individual species in response to local ecological conditions that demand care from two parents to optimize the reproductive success of each (Wynne-Edwards, 2003). In general, two broad groups of hypotheses have been proposed for mammals that could explain the evolution of paternal care in rodents. Fitness-enhancing hypotheses (Trivers, 1972; Maynard-Smith, 1977) suggest that paternal care evolved because there was an initial direct benefit to offspring, fathers, and/or mothers. The prevailing paradigm assumes that a male's fitness can increase through providing care if his offspring survive and reproduce, and certainty of paternity is presumably a contributor to the evolution of paternal care. However, certainty of paternity has been shown not to be required for paternal care in rodents (Hartung & Dewsbury, 1979; Werren et al., 1980). Therefore, male fitness benefits do not adequately explain the evolution of paternal care among rodents. On the other hand, paternal care can contribute to offspring survival, growth, and/or development (Maynard-Smith, 1977) when resources are limited (Gubernick & Teferi, 2000) or there is a risk of infanticide (Sommer, 1997), as well as in some other situations (Storey & Snow, 1987; Brown, 1993; Huber et al., 2002; Stockley & Hobson, 2016). However, male parental care is suggested to be not necessarily crucial for infant survival in any rodent species, and thus may not enhance male fitness (Rymer & Pillay, 2018). Paternal care could also evolve directly through alleviating the reproductive costs of females (West & Capellini, 2016). Males making energetic contributions (e.g., provisioning food or huddling offspring) enable females to redirect resources into reproduction (Woodroffe & Vincent, 1994) or foraging (Helmeke et al., 2009), although such a reduction in maternal workload is not ubiquitous across rodent taxa (West & Capellini, 2016). Consequently, female fitness benefits are not the sole explanation for the evolution of paternal care in rodents.
Therefore, the fitness-enhancing hypotheses in isolation do not account for the evolution of male parental care in rodents. Another group of theoretical models, the constraints hypotheses (Rymer & Pillay, 2018), suggests that paternal care evolved in the absence of fitness-related benefits, with males constrained to remain with females and/or offspring due to extrinsic (ecological) or intrinsic (physiological) constraints. In particular, the social constraints hypothesis (Payne & Payne, 1993) assumes that limiting resources favor males defending exclusive territories into which females disperse, leaving little opportunity for additional matings; social tolerance and paternal care could emerge consequently in this situation. The ecological constraints hypothesis (Maher & Burger, 2016) suggests that under limiting resources, the clumping of individuals, due to the costs associated with dispersal into potentially resource-poor environments, could lead to mate guarding (or harem defense) and paternal care. Considering the constraints hypotheses, the extent of paternal care and its subsequent cost to males likely varies among individuals and species across ecological conditions due to historical and physiological processes (Requena & Alonzo, 2017). Therefore, these hypotheses in isolation do not adequately explain why paternal care has not evolved in rodents generally. It is also expected that assistance provided by males may allow females to produce more energetically costly litters, and the need for such male care contributes to the development of obligate social monogamy or communal breeding associated, in some species, with male parental care (Woodroffe & Vincent, 1994). Social monogamy is known as a social system in which a single breeding female and a single breeding male share a common home range or territory and associate with each other for more than one breeding season, with or without non-breeding offspring (Lukas & Clutton-Brock, 2013). Theoretically, reduced opportunities for males to gain control of more than one female can lead, as mentioned above, to a monogamous mating system involving some forms of paternal care (Emlen & Oring, 1977; Kleiman, 1977; Wittenberger & Tilson, 1980; Lukas & Clutton-Brock, 2013). In other words, social monogamy is considered to be the common evolutionary antecedent of biparental care and, consequently, male parental care. In rodents, social monogamy is usually not associated with genetic monogamy, and the incidence of extra-pair mating is generally high in many socially monogamous species (see, for example, Solomon et al., 2004; Gromov, 2018). It has also been suggested that social monogamy evolved in mammals where feeding competition between females was intense, breeding females were intolerant of each other, and population density was low (Lukas & Clutton-Brock, 2013). This hypothesis, however, is not well supported by studies on rodents. Recent phylogenetic reconstructions have demonstrated that paternal care is likely a by-product of social monogamy, which may have emerged as a form of mate guarding when the cost of mate searching was very high. Under these conditions, guarding individual females may represent the most efficient breeding strategy for males (Lukas & Clutton-Brock, 2013). Complementary analyses have also shown that across mammals, including rodents, various types of male parental care reduce female energetic burdens, especially during lactation (Stockley & Hobson, 2016; West & Capellini, 2016).
As noted above, by reducing female energetic burdens, paternal care allows females to redirect effort from current young to future reproduction. While these theoretical and empirical contributions provide important insight into the evolution of paternal care, they do not negate the evidence that males in some non-monogamous species form social bonds with infants and may make important contributions to their survival and growth. This is, for example, documented in striped mice, Rhabdomys pumilio (Schubert et al., 2009). The fact that paternal care occurs in non-monogamous systems additionally supports the suggestion that it may have multiple evolutionary origins. Because no single hypothesis is sufficient to explain all known instances of monogamy in mammals, including rodents, Wittenberger and Tilson (1980) proposed several alternative hypotheses for the evolution of this social system: 1) monogamy should evolve when male parental care is both non-shareable and indispensable to female reproductive success; this hypothesis implies that monogamy is advantageous for both sexes; 2) monogamy should evolve in territorial species if pairing with an available unmated male is always better than pairing with an already mated male; 3) monogamy should evolve in non-territorial species when the majority of males can reproduce most successfully by defending exclusive access to a single female; 4) monogamy should evolve even though the polygyny threshold is exceeded if aggression by mated females prevents males from acquiring additional mates; 5) monogamy should evolve when males are less successful with two mates than with one. Surprisingly, no mammals, according to Wittenberger and Tilson (1980), are monogamous because male parental assistance is essential for rearing offspring (hypothesis 1). This conclusion contradicts the results of studies on California mice (Gubernick et al., 1993; Gubernick & Teferi, 2000; Wright & Brown, 2002) showing that paternal presence significantly enhances offspring survival. Moreover, paternal care-giving has been shown to be beneficial in some other rodent species, being associated with a significant positive effect on pup growth, development and survival that enhances male fitness (Storey & Snow, 1987; Brown, 1993; Huber et al., 2002; Stockley & Hobson, 2016). Besides, male parental care is thought to be advantageous for lactating females in Djungarian hamsters, Phodopus campbelli (Walton & Wynne-Edwards, 1997; Wynne-Edwards, 2003). Thus, paternal behavior associated with social monogamy is certainly advantageous for both sexes in some rodent species. Wittenberger and Tilson (1980) considered monogamy and polygyny as mutually exclusive reproductive strategies evolved under different selective pressures. However, in populations of some rodent species, such as the prairie vole (Microtus ochrogaster), the Mongolian gerbil (Meriones unguiculatus), the social vole (Microtus socialis), the Brandt's vole (Lasiopodomys brandti) and the muskrat, there are both monogamous and polygynous social units (Getz et al., 1993; Marinelli & Messier, 1993; Roberts et al., 1998; Gromov, 2008, 2018). Therefore, the abovementioned hypotheses are incapable of explaining all known instances of social monogamy in rodents. The ecological conditions leading to monogamy associated with biparental care-giving and, consequently, paternal care are debated.
There is a point of view that monogamous species tend to dwell in stable environments, give birth to altricial offspring and have low reproductive potential (Eisenberg, 1965; Emlen & Oring, 1977; Kleiman, 1977). It has also been hypothesized that social monogamy evolved from the ancestral condition of solitary individuals against a background of female-female intolerance and female dispersion, which increased the motivation of males to defend their access to females and led to the formation of male-female monogamous units (Lukas & Clutton-Brock, 2013). Besides, all socially monogamous rodent species are territorial, and territoriality is thought to stabilize the 'evolutionarily stable strategy' (in the sense of Maynard-Smith, 1982) of paternal care once it has evolved; territoriality, however, could have evolved secondarily, after paternal care (Ridley, 1978). On the other hand, according to Emlen (1982), family-group social organizations associated with biparental care may occur in variable and unpredictable environments and/or under intense intra-specific competitive pressure in stable environments. Monogamy may also be common in populations where individuals are widely dispersed over relatively uniform environments (Emlen & Oring, 1977). However, while ecological constraints are clearly important determinants of the available reproductive options in some biological systems, paternal care is found across a wide variety of ecological niches in rodents. Some biparental rodent species inhabit areas showing a high degree of environmental stability and predictability, while others inhabit harsh, fluctuating, and highly unpredictable environments. Thus, paternal care emerges in both very harsh, variable niches and more stable, benign ones (Shen et al., 2017). One of the abovementioned hypotheses suggests that paternal care may occur in situations in which this behavior is critical for the survival of the offspring (Emlen & Oring, 1977; Clutton-Brock, 1989; Ribble, 2003). In these situations, males do not have the opportunity to seek additional mates, and their strategy for maximizing reproductive success is to maximize offspring survival. Such a reproductive strategy is documented, for instance, in California mice and old-field mice, Peromyscus polionotus (Wolff, 1989). In California mice, male parental behaviors such as grooming, retrieving, and huddling over pups are thought to be critical to offspring survival, especially when the ambient temperature is cold or resources are low (Gubernick & Teferi, 2000; Ribble, 2003). Specifically, removal of the male significantly decreased pup survival, suggesting that direct paternal care, and not infanticide prevention (no other males were present), is the primary function of male care (Gubernick & Teferi, 2000). Similar advantages of direct paternal care are found in mound-building mice, Mus spicilegus (Patris & Baudoin, 2000). Male parental care, which is typical of many rodent species with a family-group lifestyle belonging to the Holarctic fauna (Gromov, 2017, 2018), may have evolved as an adaptation to harsh environments, in which pair bonding and cooperation in different activities (digging of burrows or construction of other shelters, maintenance of territories, food hoarding, care of young) essentially increase offspring survival (Gromov, 2017, 2018).
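The 'evolutionarily stable strategy' argument invoked above can be made concrete with a toy desertion game in the spirit of Maynard-Smith's parental care models. All payoffs below are hypothetical illustrations, not parameters estimated from any rodent study.

# Toy check of when male care is evolutionarily stable against desertion.
# s_both: brood survival with biparental care; s_mother_only: survival with
# maternal care alone; p_remate: a deserting male's chance of siring a
# second (mother-only) brood. All values are hypothetical.

def male_payoff(strategy, s_both=0.9, s_mother_only=0.5, p_remate=0.3):
    """Expected number of surviving offspring for a male playing `strategy`."""
    if strategy == "care":
        return s_both
    return s_mother_only + p_remate * s_mother_only

for p in (0.1, 0.5, 0.9):
    w_care = male_payoff("care")
    w_desert = male_payoff("desert", p_remate=p)
    verdict = "care is stable" if w_care > w_desert else "desertion pays"
    print(f"p_remate={p}: W(care)={w_care:.2f}, "
          f"W(desert)={w_desert:.2f} -> {verdict}")

Under such parameters, care remains stable only while remating prospects are poor, which is consistent with the mate-limitation scenarios discussed below.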
Geographic variation in paternal behavior reported for some rodent species (McGuire & Bemis, 2007) also seems to correlate with harsher environmental conditions, particularly colder temperatures. Even some non-monogamous species, like meadow voles, Microtus pennsylvanicus, may have evolved the ability to form selective partner preferences and display paternal care in winter (Storey & Snow, 1987; Parker & Lee, 2001a). This hypothesis emphasizes the role of cooperation in favoring a family-group lifestyle in rodents and, consequently, the emergence of biparental care (Gromov, 2014, 2018). The distribution of resources or females may also affect the social structure and mating systems in rodent populations (Emlen & Oring, 1977). When important resources are distributed uniformly in space, there is little opportunity for resource monopolization. If the resources are sufficiently abundant and stable through time, territoriality typically occurs. Under such conditions, members of the breeding population would tend toward even dispersion, and the potential for multiple matings would be low. Sexual selection would be minimal, and the fitness of individuals might be maximized by sharing equally in parental care. Monogamy associated with male parental care is thought to occur only in populations where individuals are widely dispersed over relatively uniform environments (Emlen & Oring, 1977) or at the lowest densities with the lowest patchiness of food resources (Slobodchikoff, 1984; Waterman, 2007). In other words, paternal care accompanies socially monogamous long-term bonds in situations where males are unable to gain access to more than one female during a mating season (Holmes, 1984; Komers & Brotherton, 1997). While such situations are documented in a number of muroid rodents (Kleiman, 1977; Mihok, 1979; Wolff, 1985; Lambin & Krebs, 1991; Getz et al., 1993; Waterman, 2007), they primarily result in the facultative monogamy typical of many rodent species in which males usually do not contribute to paternal care. Thus, the relationship between the evolution of social monogamy associated with male parental care and the distribution of resources, including mates, requires more reliable substantiation in rodents. As for other factors favoring the evolution of paternal behavior, Ribble (2003) suggested that relative litter weight might be correlated with the need for paternal care and influence male mating strategies. It was shown, in particular, that males of species with low relative litter weights tended to be monogamous and, consequently, exhibited different paternal behaviors. Some authors have suggested specific ecological conditions favoring paternal responsiveness in certain rodent species. For example, male presence was supposed to alleviate maternal hyperthermia, which is a particular challenge in Djungarian hamsters, P. campbelli, adapted to heat retention rather than heat dissipation, and thereby preserve maternal homeostasis (Walton & Wynne-Edwards, 1997; Wynne-Edwards, 2003). The evolution of paternal care in P. campbelli is seen as the necessary consequence of conflict between adaptations for survival in a cold, dry seasonal habitat and reproductive adaptations appropriate for handling heat load, water stress, and rapid breeding in the same habitat. Moreover, the authors of this hypothesis (Wynne-Edwards, 2003; Wynne-Edwards & Timonin, 2007) have proposed unique, hormone-independent (see below) pathways to paternal behavior that have possibly evolved in P. campbelli.
This hypothesis, however, does not explain why male parental care has not evolved in the other dwarf hamster, Phodopus sungorus, adapted to the same environmental conditions of Central Asia. To summarize, one can conclude that the literature related to the ultimate causation of rodent paternal care remains replete with a variety of interpretations and subject to a fair amount of debate. Neither purely phylogenetic nor socio-ecological hypotheses can explain the presence or variability in the expression of paternal behaviors in rodents. It is obvious that no single set of circumstances has led to the parallel evolution of biparental care in rodents. This poses a paradox: why do similar social organizations occur in such seemingly opposite ecological situations? This question has no definite answer to date.

Proximate mechanisms of male parental care

A concern with understanding the proximate mechanisms of paternal care is that much of our understanding derives from studies of laboratory rats and mice, which are not naturally paternal. While some studies on naturally paternal species support the studies on laboratory rodents, the existence of a universal proximate mechanism across and within taxa is unlikely, and mechanisms might be species-specific (Rymer & Pillay, 2018). In recent years, considerable progress has been made in elucidating developmental, social, hormonal, and neural determinants of paternal behavior, primarily in naturally biparental rodents. Much of this work has focused on determinants of within-animal changes in males' responses to pups across the lifespan: males of some species undergo predictable changes in their behavioral responses to pups, transitioning between aggression, indifference, and nurturance at different life stages. Depending on the species, different components of male experience as well as different exogenous cues are likely to be involved in the organization and activation of paternal behavior. These exogenous stimuli must then activate the central nervous system to evoke the needed learning and behavior. Among the exogenous stimuli, maternal response is known to be one of the main factors influencing the level of paternal care. In other words, a major determinant of paternal behavior is whether or not the female permits the male to stay near the young (Dewsbury, 1985). For example, females of some biparental species, like the Mongolian gerbil, the grasshopper mouse, the prairie vole, and the spiny mouse (Acomys cahirinus), frequently exclude males from the natal nest during parturition and for about a day thereafter, but subsequently permit males to fully interact with young (Elwood, 1975; McCarty & Southwick, 1977; Porter et al., 1980; McGuire et al., 2003). In contrast, the presence of the mother was found to maintain paternal responsiveness in California mice, and maternal excreta were sufficient to keep fathers parental (Gubernick & Alberts, 1989). If the male remains with the female and the offspring, it may generally be to the male's advantage to act parentally. Hence, male presence could be a strong predisposing factor for paternal behavior (Dewsbury, 1985). There are also so-called indirect genetic effects that occur when variation in the quality of the environment (e.g., in the nest) provided by parents reflects genetic differences among them (Wolf et al., 1998). Environmental effects derived from this parental variation are considered 'inherited environments' because the parental phenotypes producing these environmental effects in offspring could be heritable (Wolf et al., 1998).
Paternal effects are specific indirect genetic effects derived from the environment provided by fathers. They occur when fathers are influenced by environmental factors, which in turn impact offspring (Curley et al., 2011). Paternal effects also occur when fathers influence the maternal care of their mates. In particular, the father's absence can lead to reduced or increased maternal care (Helmeke et al., 2009; Rymer & Pillay, 2011). In striped mice, females compensate for a lack of paternal help when raising offspring alone, resulting in adult sons providing more care to their own offspring (Rymer & Pillay, 2011). On their first exposure to infants, male rodents may be infanticidal, show parental behavior or ignore the pups. Whether or not they show parental behavior may be influenced by their experience with infants (Jakubowski & Terkel, 1985; Soroker & Terkel, 1988). Such an experience resulting in the emergence of paternal responsiveness is known as sensitization (Brown & Moger, 1983; Dewsbury, 1985; Walsh et al., 1996) and will be considered separately (see below). In evolutionary terms, the easiest way to achieve appropriate parental behavior in males would be to organize and activate the existing neuroendocrine pathways leading to maternal behavior (Rilling & Mascaro, 2017). It is very likely that the essential hormonal stimuli required for parental behavior are shared by males and females, and the same hormones act at the same neural sites to facilitate the expression of the same repertoire of parental behaviors in both sexes. Obvious sex differences in these behaviors would then derive from differential gene expression rather than structural dimorphism (Kelley, 1988). Current evidence suggests that mammalian paternal care-giving behaviors rely upon the same neural pathways as those supporting maternal behavior, making use of the same neural substrates and hormonal systems (Feldman et al., 2019). It is well known, in particular, that the medial preoptic area (mPOA) of the hypothalamus as well as the bed nucleus of the stria terminalis (BNST) play key roles in the stimulation and regulation of maternal care: these brain regions contain cells expressing various neurotransmitters and neuropeptides, and the diverse projections of these cells connect to multiple neural targets in the mammalian parenting network to support maternal behavior; by contrast, the anterior hypothalamic nucleus, ventromedial hypothalamic nucleus, and periaqueductal gray participate in the inhibiting mechanisms of the neural regulation of maternal behavior (Numan & Insel, 2003; Numan & Stolzenberg, 2009). It is hypothesized that the same facilitating and inhibiting mechanisms operate in the neural regulation of paternal behavior (Romero-Morales et al., 2018b). However, despite evidence for similarity in the neurobiology of maternal and paternal behaviors in rodents, paternal behavior also has its own dedicated neural circuitry in some species. For example, in a study of two Peromyscus species, P. polionotus and P. maniculatus, that exhibit differences in parental behavior, twelve genomic regions that control parental care were identified. Eight of these regions were found to be sex-specific, suggesting that parenting behavior evolved along independent lines in females and males (Bendesky et al., 2017).
Moreover, some authors suppose that the hypothesis of homology between paternal and maternal behavior has not yet been adequately tested, and it is possible that different neuroendocrine circuits could lead to the same behavior in males and females (De Vries & Boyle, 1998; Wynne-Edwards & Timonin, 2007). As for the neuroendocrine basis of male parental care in rodents, current evidence suggests that males in biparental species undergo systematic changes in hormonal and neuropeptide signaling during the transition to fatherhood, in association with pair formation, mating, cohabitation with a pregnant female, and/or exposure to infants. Some of these changes differ across species, and their functional significance, including potential effects on paternal behavior, is generally unknown. Several hormones, including steroids (estradiol, progesterone, testosterone) and peptides (prolactin, vasopressin, oxytocin), as well as many exteroceptive stimuli, are involved in the onset, the maintenance, or both the onset and the maintenance of parental behavior, including direct paternal care. The understanding of the neural substrates of parental care has particularly benefited from what was known about the neural control of sexual behaviors. Specifically, sex differences in parental care in laboratory rats and mice can be influenced in both sexes, at least to some degree, by perinatal manipulation of androgen exposure (Lonstein & De Vries, 2000). Although the circuit underlying parental care seems to be similar in male and female rodents, its regulation is sex-specific and depends on both experience and, in male rodents, exposure to the pregnant and lactating dam.

Testosterone interference

Circulating testosterone concentrations are typically reduced in fathers and have been shown convincingly to influence the expression of paternal behavior; however, effects may differ both within and among species. Specifically, testosterone was found to decrease in new fathers of California mice (Gubernick & Nelson, 1989); however, castration reduces, and testosterone or estrogen replacement restores, parental behavior in this species (Trainor & Marler, 2001). The stimulatory effect of testosterone in California mice is thought to be mediated by aromatization of testosterone to estrogen in the brain (Trainor & Marler, 2002). Similar results, with castration reducing and testosterone replacement restoring parental behavior, were shown in virgin male Mongolian gerbils housed in same-sex groups (Martínez et al., 2015); however, virgin male gerbils housed with a lactating female showed the opposite pattern (Clark & Galef, 1999). One recent study (Martínez et al., 2019) showed that paternal behavior in Mongolian gerbils is associated with high testosterone concentrations in blood samples. On the other hand, high testosterone concentrations during in utero development were found to interfere with male parental behavior, resulting in a trade-off between mating effort (high testosterone) and parental effort (low testosterone) in this gerbil species (Clark & Galef, 1999). Another study showed that Mongolian gerbil testosterone concentrations correlate with the rate of paternal care, and that testosterone levels do not decrease when males give paternal care (Luis et al., 2010). Studies of prairie voles have likewise yielded mixed results: castration either reduced (Wang & De Vries, 1993; Lonstein et al., 2002) or did not alter (Lonstein & De Vries, 1999) responses to pups in males.
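The correlational evidence cited above (e.g., Luis et al., 2010) typically rests on small samples, where a simple permutation test is informative. The sketch below uses simulated values purely to illustrate the form of such an analysis; it reproduces no published dataset.

# Permutation test for a testosterone-paternal care correlation.
# All measurements are simulated, hypothetical values.
import random

random.seed(1)

# Hypothetical per-male data: serum testosterone (ng/ml) and a paternal
# care score (e.g., minutes of huddling/grooming per observation hour).
testosterone = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 1.5, 3.1, 2.6, 3.9]
care_score = [14, 18, 10, 22, 20, 17, 9, 15, 16, 21]

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

observed = pearson_r(testosterone, care_score)

# Null hypothesis: no association; shuffling care scores breaks any link.
n_perm = 10_000
n_extreme = sum(
    abs(pearson_r(testosterone, random.sample(care_score, len(care_score))))
    >= abs(observed)
    for _ in range(n_perm)
)
print(f"r = {observed:.2f}, two-sided permutation p = {n_extreme / n_perm:.4f}")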
Hence, more studies are needed to determine if and how testosterone is involved in the regulation of paternal behavior in Mongolian gerbils and prairie voles. The hypothesis of an inverse association, or trade-off, between circulating testosterone concentrations and paternal care (Ketterson & Nolan, 1992) has not been supported by other experimental studies. In particular, castration did not reduce paternal responsiveness in biparental P. campbelli (Hume & Wynne-Edwards, 2005; Romero-Morales et al., 2018a). Both expectant and new fathers of this species had higher testosterone concentrations than before pairing (Reburn & Wynne-Edwards, 1999; Schum & Wynne-Edwards, 2005). Moreover, testosterone concentration was found to be more responsive to the birth in uniparental P. sungorus than in biparental P. campbelli (Schum & Wynne-Edwards, 2005). In the mandarin vole (Lasiopodomys mandarinus), males that successfully raised their offspring had a higher content of gonadal testosterone than males whose offspring died and who thus could not provide parental care (Gromov & Voznesenskaya, 2013). It was also shown that the serum concentration of testosterone, as well as the testosterone content in the testes, of bank vole (Clethrionomys glareolus) males exhibiting parental responsiveness was higher than in males inclined to infanticide. Increased testosterone content in the testes and blood serum was also found in red-backed vole (Clethrionomys rutilus) males that had contact with pups (Gromov & Osadchuk, 2015). Thus, the effect of testosterone on paternal responsiveness is not universal but species-specific: in some rodents, testosterone inversely correlates with paternal behavior, while no effect or a positive correlation has been found in other species.

Progesterone interference

Progesterone is rarely measured in male rodents. In uniparental male laboratory mice, interference with progesterone receptors was found to increase paternal behavior and decrease infanticidal behavior, whereas increasing progesterone has the opposite effect (Schneider et al., 2003). This finding shows that experimental manipulation of progesterone may alter paternal behavior in non-paternal species of rodents. As for biparental rodents, California mouse males were shown to have lower progesterone concentrations as they became fathers (Perea-Rodriguez et al., 2015). However, the pattern was found to be opposite in dwarf hamsters (P. campbelli and P. sungorus). Progesterone concentrations in naïve males were similar in the two species, but biparental P. campbelli showed a significant progesterone increase from before to after the birth of pups, whereas uniparental P. sungorus did not (Schum & Wynne-Edwards, 2005). Hence, although progesterone dynamics differentiate the two species of dwarf hamsters as their males become fathers, the results of the study are opposite to predictions. In other words, it is unlikely that there is a simple inverse association between progesterone and paternal behavior in P. campbelli. In general, much more experimental data are required for a better understanding of the association of progesterone with male parental care in different rodent species.

Estradiol facilitation

One of the hormones facilitating the onset and the maintenance of male parental behavior could be estradiol. For example, in male laboratory rats, parental responsiveness was experimentally induced by estradiol implantation (Rosenblatt & Ceus, 1998).
Estradiol can also promote paternal behavior in California mice through the aromatization of peripheral testosterone into estradiol (Trainor & Marler, 2002). The actions of estradiol in males involve the mPOA, which expresses the aromatase enzyme (for the conversion of androgens to estradiol, E2) as well as the estrogen receptor (Rosenblatt & Ceus, 1998; Trainor et al., 2003; Cushing & Wynne-Edwards, 2006). Recently, paternal behavior in Mongolian gerbils was found to be associated with the presence of estrogen receptor α (ERα) in the mPOA, the olfactory bulbs, and the medial nucleus of the amygdala (MeA) (Martínez et al., 2019). Moreover, the neural regulation of paternal behavior in this gerbil species is thought to involve positive and negative mechanisms, as occurs in maternal behavior (Romero-Morales et al., 2018b). Similar to the finding in California mice, males of dwarf hamsters (Phodopus spp.) were also found to have peripheral estradiol concentrations as high as those of reproductive females (Schum & Wynne-Edwards, 2005). High estradiol is suggested to be a predisposing adaptation facilitating the onset of paternal behavior in P. campbelli (Wynne-Edwards & Reburn, 2000). However, neither estradiol dynamics nor pharmacological manipulation of estradiol supports a causal link between estradiol and paternal behavior in dwarf hamsters: in contrast to predictions based on the results in females, estradiol in uniparental P. sungorus males increases before the birth and falls across the birth, whereas the estradiol concentration in biparental P. campbelli males does not change (Schum & Wynne-Edwards, 2005). Castration removes the primary source of both estradiol and testosterone in P. campbelli males, but paternal behavior towards an experimentally displaced pup was not reduced (Hume & Wynne-Edwards, 2005). There is also no evidence that local aromatization of androgen to estradiol within the brain is involved in the paternal behavior of dwarf hamsters (Hume & Wynne-Edwards, 2006). Reduced estradiol did not reduce paternal behavior even when prior experience with the birth or the pups was eliminated (Hume & Wynne-Edwards, 2005). However, when these males were treated with E2 and the concentrations of this hormone increased significantly, they became paternal (Romero-Morales et al., 2018a). This finding contrasts with the conclusion of Hume and Wynne-Edwards (2005) and suggests that an increase in E2 levels shifted infanticidal behavior to paternal behavior in P. campbelli. Besides, it was found that an experimental increase in the expression of estrogen receptor α (ERα) in the medial amygdala inhibited parental behavior in adult males of the prairie vole (Cushing et al., 2008), while increasing ERα expression in the BNST had no effect (Lei et al., 2010). The results of these studies suggest that, unlike its onset, the maintenance of paternal behavior is not dependent on steroid hormones, as in maternal behavior. Thus, further investigations are needed for a better understanding of the role of estrogens in the activation and the maintenance of paternal behavior in different rodent species.

Neuropeptides: prolactin, oxytocin, and vasopressin

Paternal behavior, similar to maternal behavior, was found to be associated with changes in the levels of prolactin, oxytocin, and vasopressin. The extent of these changes parallels the amount of direct paternal care (Feldman et al., 2019).
The anterior pituitary hormone prolactin has been referred to as "the hormone of paternity" (Schradin & Anzenberger, 1999), as circulating or excreted levels are elevated in fathers of numerous biparental species and often correlate with males' expression of paternal behavior (Saltzman & Ziegler, 2014; Hashemian et al., 2016). In male rodents, there is good support for a positive association between concentrations of prolactin in peripheral circulation and the expression of appropriate paternal care (Wynne-Edwards, 2001). In male laboratory rats, prolactin promotes, and a dopamine agonist inhibits, 'pup-contact-induced' paternal behavior (Sakaguchi et al., 1996). Similar patterns are seen in several biparental rodent species. For example, in California mice, both new fathers and new mothers have elevated prolactin concentrations relative to non-fathers (Gubernick & Nelson, 1989). In Mongolian gerbils, paired males have higher prolactin concentrations than unmated males (Brown et al., 1995). Paternal Djungarian hamster males show an increase in prolactin concentration during the late afternoon of the day before their female partner gives birth that is not seen in uniparental P. sungorus males (Reburn & Wynne-Edwards, 1999). The increase in males is synchronous with an increase in female prolactin concentration. Thus, prolactin may facilitate the initiation of infant care in some rodent species. It was found, however, that striped mice provided extensive paternal care but did not experience an increase in prolactin associated with fatherhood; nevertheless, males had higher prolactin levels during the breeding season than during the non-breeding season (Schradin & Pillay, 2004). Besides, in experiments with Djungarian hamsters, it was shown that dopamine agonist treatment before and after the birth reduced prolactin concentration but did not impair paternal responsiveness (Brooks et al., 2005). Thus, despite some evidence documenting a positive association between prolactin and paternal behavior, experiments that pharmacologically reduce prolactin in a naturally paternal animal model, like Djungarian hamsters, do not support a causal pathway. Similarly, circulating prolactin does not appear to mediate the sex difference in parental behavior of prairie voles (Lonstein & de Vries, 2000). It has been suggested that prolactin secretion in polygynous rodent species might be regulated by environmental stimuli, whereas social stimuli might be more important for socially monogamous species (Schradin & Pillay, 2004). Therefore, further investigations are needed to highlight the role of prolactin in the activation and the maintenance of paternal behavior in different rodent species. Oxytocin and vasopressin are well-known neuropeptide hormones involved in social interactions (Young, 1999) and likely to be involved in parental behavior (Francis et al., 2002; Bridges, 2015; Kenkel et al., 2017). Oxytocin is also known as a potent prolactin-releasing factor (Liu & Ben-Jonathan, 1994). However, little is known about the effects of oxytocin on rodent paternal care. In experiments with prairie voles, paternal behavior of adult virgin males was found to be inhibited by combined intracerebroventricular treatment with an arginine vasopressin (AVP) receptor antagonist and an oxytocin receptor antagonist, but not by either antagonist alone (Bales et al., 2004).
On the other hand, when male prairie voles received a neonatal injection of an oxytocin antagonist, these males displayed less parental behavior at the age of 21 days compared to males that were handled without injection (Bales et al., 2011). It has also been found that hypothalamic oxytocin gene expression does not increase in male prairie voles or montane voles (Microtus montanus) that become fathers (Wang et al., 2000), and peripheral oxytocin concentrations in California mice are elevated after mating, but low and unchanged while the pups are young and dependent (Gubernick et al., 1995). More recently, males of California mice participating in paternal care-giving also showed lower levels of oxytocin than non-breeding males (Perea-Rodriguez et al., 2015). Treatment with a different oxytocin receptor antagonist inhibited parental behavior in adult male prairie voles in a dose-dependent manner (Kenkel et al., 2017). In mandarin voles, fathers had a significantly higher serum concentration of oxytocin than virgin males; the levels of the oxytocin receptor in the mPOA of fathers were also significantly higher than in virgin males (Yuan et al., 2019). These results support the suggestion that oxytocin could be involved in the stimulation of paternal behavior or, at least, in the adaptation to fatherhood in some rodent species, but further investigations are needed to highlight the role of oxytocin in the activation and the maintenance of paternal care in rodents. Recent findings suggest that arginine vasopressin (AVP) could also be important for paternal behavior. Specifically, injection of AVP into the lateral septum (LS) in prairie voles enhanced paternal responsiveness toward young pups (Wang et al., 1999), and both male and female prairie voles had increased vasopressin gene expression after the young were born (Wang et al., 2000). Like oxytocin, vasopressin can also release prolactin (Shin, 1996). Expression of the AVP receptor (V1a) increased social affiliation both in prairie voles and laboratory mice (Lim et al., 2004). Central infusion of AVP receptor antagonists had the opposite effect in prairie voles (Wang et al., 1999) and even in promiscuous meadow voles (Parker & Lee, 2001b). On the other hand, castration of male prairie voles virtually eliminated AVP-immunoreactivity (AVP-ir) in the LS and the lateral habenular nucleus (LHN), but did not alter paternal behavior, indicating that AVP signaling in these areas is not essential for the expression of paternal care (Lonstein & De Vries, 1999). Monogamous male California mice showed more AVP-ir staining in the BNST than the polygamous Peromyscus leucopus, as well as more AVP receptors in the LS (Bester-Meredith et al., 1999). These results are congruent with the finding that circulating vasopressin is correlated with paternal behavior of P. californicus. Both within and among rodent species, paternal behavior was found to correlate with patterns of AVP-ir and AVP binding, particularly in the LS and other parts of the extended amygdala (Bales & Saltzman, 2016). For example, in California mice, high-care male offspring had significantly more AVP-ir cells within the BNST than low-care offspring (Yohn et al., 2017). However, according to another study, California mouse males participating in paternal care-giving showed lower levels of AVP V1a receptor mRNA expression than non-breeding males (Perea-Rodriguez et al., 2015).
On the other hand, in male prairie voles, variation in the length of microsatellite DNA in the regulatory region of the avpr1a gene encoding the AVP V1a receptor (V1aR) underlies differences in V1aR neural expression and is correlated with significant differences in partner preference and paternal behavior: males possessing longer avpr1a microsatellite alleles spend more time with their female social partner, sire offspring with fewer females and provide more paternal care relative to males with shorter avpr1a microsatellite alleles (Castelli et al., 2011). In addition, there is evidence indicating that AVP is involved in the regulation of indirect paternal behavior. Specifically, Bendesky et al. (2017) identified the AVP gene as a likely contributing factor to the evolution of inter-specific differences in parental behavior related to nest building in two Peromyscus species, P. polionotus and P. maniculatus. The expression of AVP itself was found to differ between P. polionotus and P. maniculatus, and this difference may explain the association between nest building and the gene locus on chromosome 4. Thus, oxytocin and vasopressin are obviously associated with paternal behavior, but are most closely functionally linked to social affiliation and pair bonding (Carter et al., 1992; Bamshad et al., 1994; Insel et al., 1994; Insel & Hulihan, 1995; Wang et al., 1999; Numan & Insel, 2003). At a proximate level, the existing evidence implies a common physiological substrate for both paternal behavior and pair bonds. New research focusing on the involvement of neuropeptides in the initiation and the maintenance of male parental care may help us understand inter-specific variation in the paternal responsiveness of rodents. Relatively little is known about the effects of parity on paternal behavior in rodents. Prior parenting experience was shown to have no effect on paternal behavior in prairie voles (Wang & Novak, 1994; Kenkel et al., 2019). However, fathers of this species, compared to virgin males, exhibited higher levels of oxytocin-immunoreactivity in the paraventricular hypothalamus; on the other hand, the fathers had less oxytocin in the BNST (Kenkel et al., 2014). Contrary to the results obtained for the prairie vole, observations of breeding pairs of the social vole revealed that experienced fathers were significantly more active in pup grooming than new fathers (Gromov, 2011a). Similarly, in the mandarin vole, experienced fathers displayed more active paternal behaviors such as licking, retrieval, and nest building than new fathers; in addition, new fathers had significantly higher levels of oxytocin receptors, but lower levels of dopamine D2-type receptors, in the nucleus accumbens compared to experienced fathers (Wang et al., 2018). The oxytocin receptor (OTR) levels in the MeA of new fathers were found to decrease with the age of pups; in contrast, OTR levels of experienced fathers significantly increased with the age of pups (Wang et al., 2018). In striped mice, experienced males had higher prolactin levels than inexperienced males (Schradin & Pillay, 2004). These data illustrate that fathering experience can increase the active components of parental care and alter the expression levels of receptors of some neuropeptides. One can conclude that paternal experience does facilitate paternal behavior in some rodent species, but other cues play a role as well.
In summary, while gonadal hormones such as testosterone, estrogen, and progesterone, as well as hypothalamic neuropeptides such as oxytocin and vasopressin and the pituitary hormone prolactin, are implicated in the activation of paternal behavior, there are significant gaps in our knowledge of their actions, as well as pronounced differences between species. Hence, future studies should focus on the neuroendocrine mechanisms that underlie paternal behavior in rodents. These studies should examine similar outcome measures in multiple species, including both biparental species and closely related uniparental species. Careful phylogenetic analyses of the neuroendocrine systems presumably important to male parenting, as well as their patterns of gene expression, will also be important in establishing the next generation of hypotheses regarding the neuroendocrine regulation of male parenting behavior.

Epigenetic 'programming' of paternal behaviors

Over the last decade, experimental studies have clearly demonstrated that animal genomes are regulated to a large extent by input from environmental events and experiences, which cause short- and long-term modifications in the epigenetic markings of DNA and histones (Jensen, 2013). Recent evidence shows that such epigenetic modifications can affect the behavior of rodents, and acquired behavioral alterations can be inherited either through the germline or through recurring environmental conditions (Reik, 2001; Rakyan & Whitelaw, 2003; Rakyan & Beck, 2006; Skinner et al., 2008; Curley et al., 2011; Geoghegan & Spencer, 2012; Szyf, 2015). In other words, the environment experienced by parents can affect offspring phenotype, including their behavior. Epigenetic inheritance, i.e., the inheritance of information beyond the DNA sequence in forms such as cytosine methylation and histone acetylation, is the likeliest mechanism by which ancestral environments could influence offspring; microRNA (miRNA, short endogenous noncoding RNA) is also involved in the posttranscriptional regulation of gene expression (Turner et al., 2015; Mashoodh & Champagne, 2019). Epigenetic inheritance means that genetically identical organisms can exhibit a range of phenotypes that are heritable despite not resulting from variation in DNA sequence. The epigenetic inheritance of acquired characters is also called the epigenetic (re)programming of phenotypic differences (Reik, 2001; McGowan et al., 2008; Skinner et al., 2008; Jablonka & Raz, 2009). Epigenetic programming is known to result in the alteration of gene expression levels in the brain related to the stimulation and regulation of different behaviors, including paternal care (Carone et al., 2010; Song et al., 2010; Jia et al., 2011; Rando, 2012; Saltzman et al., 2017). Moreover, parental care itself has been revealed to be one of the important factors resulting in epigenetic programming of offspring behavior (Champagne, 2008, 2011; Champagne & Curley, 2009; Champagne & Rissman, 2011; Rando, 2012). The impact of paternal care on the neural systems regulating social behavior in offspring can lead to multigenerational continuity in paternal behavior, similar to the mother-daughter transmission of maternal behavior in rodents (Champagne, 2008).
Effects of sensitization

In rodents, interactions with younger siblings or unrelated pups, either during the juvenile period or in adulthood, may contribute to both intra- and inter-individual differences in paternal responsiveness (the so-called effect of sensitization, whereby parental behavior is induced through prolonged contact with infant stimuli; Brown & Moger, 1983; Dewsbury, 1985; Walsh et al., 1996). For instance, virgin prairie vole males that have lived with younger siblings are significantly more likely to behave paternally toward an unfamiliar pup than those that have no experience with younger siblings, although most males in both conditions behave paternally (Roberts et al., 1999). Similarly, virgin males of California mice that have lived with their parents and younger siblings show higher levels of paternal care toward an unrelated pup, compared to virgin males that have lived with only their parents and a littermate but no younger siblings, or with only a littermate (Gubernick & Laskin, 1994). Species also differ in whether repeated exposure to pups during adulthood facilitates the onset of paternal care. In a study of California mice, adult virgin males with no previous exposure to pups were found to engage in less paternal behavior than new fathers, and the virgins' paternal responsiveness was increased by repeated, brief (20-min) exposure to pups. In adult virgin male mandarin voles, even a single, 10-min exposure to an unrelated pup increased paternal responsiveness to an unrelated pup a week later (Song et al., 2010). The paternal responsiveness of some captive bank vole males could also be explained by the effect of sensitization (Gromov & Osadchuk, 2015). In contrast, repeated 10-min exposure to a pup did not reliably alter paternal behavior in adult virgin male Djungarian hamsters, even after four exposures. Similar to dwarf hamsters, adult virgin male prairie voles showed no change in parental behavior after three consecutive 20-min exposures to pups over several days (Kenkel et al., 2013). Therefore, the effect of repeated exposure to pups seems to be species-specific. It is known that the mPOA is implicated in the process of sensitization (Rosenblatt et al., 1996; Sturgis & Bridges, 1997). Experiments with California mice (Lee & Brown, 2002) have shown that lesions to the mPOA disrupted paternal behavior, and increased neuronal activity in the mPOA has been observed following pup exposure (de Jong et al., 2009). Besides, other brain sites, such as the MeA, the basolateral amygdala, the BNST, the ventral pallidum and the LS, have also been shown to be crucial to the emergence of paternal behavior (Kirkpatrick et al., 1994; Lee & Brown, 2002; de Jong et al., 2009; Akther et al., 2014). For instance, in California mice, immunoreactivity for immediate early genes such as fos (a marker of neuronal activation) increased in the BNST of new fathers, suggesting altered neural transmission in this area (de Jong et al., 2009), and lesions to the basolateral amygdala impaired paternal behavior (Lee & Brown, 2002). In prairie voles, exposure to pups increased fos expression in the mPOA, the MeA, the LS, the paraventricular nucleus of the thalamus, and the BNST (Kirkpatrick et al., 1994). Lesions to the MeA in this species decreased paternal behavior (Kirkpatrick et al., 1994), and lesions to the ventral pallidum increased the latency to retrieve and groom pups (Akther et al., 2014).
Similar to lactating females, specific pools of mPOA galanin-expressing neurons in the male brain project to inhibitory periaqueductal gray neurons to promote pup grooming, to ventral tegmental area neurons to increase approach behavior, and to MeA neurons to suppress competing social stimuli and help males focus on pups. All these regions and neural circuits are suggested to integrate to form the rodent subcortical paternal network (Feldman et al., 2019). Experiments with C57BL/6J mice have shown that experience with infants elicits long-lasting increases in parental care via epigenetic modifications (Bonthuis et al., 2011; Stolzenberg et al., 2012). Epigenetic mechanisms mediating the long-term effects of parental care provide multigenerational continuity in parental behavior, including paternal responsiveness (Champagne & Curley, 2009). Notably, facultative paternal behavior has been reported for some non-paternal rodent species under unfavorable breeding conditions (see, for instance, Barash, 1975; Mihok, 1979; Wynne-Edwards, 1995), as well as in captivity (McGuire & Novak, 1984, 1986; Dewsbury, 1985; Storey & Snow, 1987; Xia & Millar, 1988; Storey et al., 1994; Wolff, 2003; Gromov & Osadchuk, 2015). These findings may indicate that the pup care reported for some rodent species in small cages is a laboratory artifact. Alternatively, males of these species have the potential to display paternal behavior, and may do so under certain conditions (Dewsbury, 1985; Gromov, 2011). This unusual paternal responsiveness could be easily explained by the effect of sensitization as a result of contact with pups in laboratory cages. For example, in male meadow voles, decreased aggression and facilitation of paternal responsiveness occurred most reliably after extensive exposure to pups (Storey & Joyce, 1995). Paternal neural activation was revealed in P. maniculatus males as a result of experience with pups, and enhanced mPOA activation was associated with this paternal response (Lambert et al., 2013). Both copulation and postcopulatory cohabitation with pregnant females were shown to reduce infanticide and enhance paternal responsiveness in male CS1 mice, and the effectiveness of copulation in this process depends on the number of occasions on which males have previously encountered infants (Elwood, 1986). Similarly, copulation and cohabitation suppress pup-directed aggression in previously aggressive meadow vole males, but these males exhibited paternal behavior only following 24 h of postpartum exposure to pups (Parker & Lee, 2001a). These behavioral data suggest that copulation and cohabitation with a female are sufficient to suppress pup-directed aggression in non-paternal rodents, but these social stimuli are ineffective regulators of the onset of paternal behavior, and postpartum interaction with pups seems to be the most effective social experience for making males paternal.

Other epigenetic effects

In an epigenetic approach to behavioral development, ontogeny is viewed as a series of interactions between an organism and its environment (Lehrman, 1970; Johnston, 1987). As for paternal care and the factors (both internal and external) affecting its development and variability, it should be noted that behavioral responses to pups may differ markedly among individual sexually naïve males as well as among individual fathers within a species.
Although this variability is likely to arise in part from genetic influences, early-life experience can also contribute to long-term behavioral differences among males. Two important sources of inter-individual variation in paternal responsiveness can be identified: intrauterine position during gestation and the parental care received during the pre-weaning period. It is known that circulating hormones during gestation can influence later behavior. For example, individuals gestating between two males (2M) experience higher androgen levels than those between two females (2F), and these intrauterine position effects have consequences in adulthood. Specifically, in Mongolian gerbils (Clark et al., 1998), intrauterine position was found to influence both males' behavioral responses to pups in adulthood and potential hormonal mediators of paternal behavior. Males that gestated between two sisters (2F males) had significantly more contact with pups than males that gestated between two brothers (2M males). Moreover, 2M males had higher circulating testosterone levels in adulthood than 2F males (Clark et al., 1992). In house mice, 2M males had decreased sexual activity, but were more aggressive and more paternal than 2F males (Mateo, 2007). Studies in other rodents revealed that differences in males' intrauterine position are associated with differences in exposure to androgens and estrogens during gestation (vom Saal et al., 1983; Pei et al., 2006), as well as with differences in the expression of androgen receptors and a steroidogenic enzyme, 5α-reductase, in peripheral reproductive organs (Nonneman et al., 1992; Ryan & Vandenbergh, 2002). Therefore, intrauterine position likely affects males' parental behavior in adulthood by modulating exposure to steroid hormones during both early development and adulthood. After parturition, young directly experience their physical and social environments, and social stimulation from parents can have profound effects on behavioral development. For example, in laboratory rats, variation in maternal behaviors, such as nursing postures and rates of licking and grooming of pups, influences the development of stress-related traits in their young: offspring of high lickers/groomers are less fearful and have smaller stress responses than those of low lickers/groomers (Weaver et al., 2004; Meaney & Szyf, 2005; Weaver, 2007; Meaney et al., 2007). Cross-fostering studies indicated that these effects on offspring are due to postnatal maternal handling rather than inherited traits. Daughters of high-licking and grooming mothers became high-licking and grooming mothers themselves, thus transmitting variation in parental behavior non-genetically across generations (Liu et al., 1997; Francis et al., 1999; Meaney, 2001). Neurobiological studies have revealed that maternal care (the extent of maternal grooming) affects DNA methylation and gene expression in the brain of the offspring (Fish et al., 2004). Long-term changes in offspring behavior are associated with the expression of estrogen receptor alpha (ERα) in the mPOA of the hypothalamus, rendering animals with higher receptor levels more sensitive to estrogen (Champagne et al., 2003). Differences in ERα receptor expression between offspring of low-licking and grooming mothers and high-licking and grooming mothers are attributable to methylation of the ERα promoter region (Champagne et al., 2006). These epigenetic modifications to gene expression are persistent, predicting how a female will behave towards her future offspring.
The same epigenetic effect of paternal grooming could be expected in male rodents. Brown (1993) particularly noted that the type and frequency of parental behavior received by males during infancy may influence their display of paternal behavior in adulthood. This statement is supported by the results of recent studies. In particular, a cross-fostering study with California mice, in which offspring were reared by a foster father engaging in relatively higher or lower levels of paternal behavior than the biological father, indicated that the quality and quantity of paternal care expressed depend on the male's own neonatal and adult experience of paternal care. As in laboratory rats, it is possible that a similar mechanism of stimulation of parental behavior is involved in the California mouse, because testosterone promotes grooming behavior in males via conversion to estradiol (E2) (Trainor & Marler, 2001). California mouse fathers were found to have significantly more aromatase activity in the mPOA compared with mated non-fathers, indicating that fatherhood brings a regional increase in the conversion of testosterone to E2 (Gleason & Marler, 2013). Other experimental studies also show that the expression of paternal care in biparental rodent species depends on the quality and/or quantity of care that fathers received from their own parents (Gromov, 2009, 2011a; Braun & Champagne, 2014; Bales & Saltzman, 2016): specifically, males that were reared uniparentally (i.e., without their fathers present) subsequently perform less paternal care toward their own offspring than do males reared biparentally (i.e., by both parents). In particular, male Mongolian gerbils reared without a father display lower paternal responsiveness, indicated by reduced nest attendance and grooming of their pups (Gromov, 2009). In mandarin voles, paternal deprivation also reduced paternal behavior in male offspring (Jia et al., 2011; Yu et al., 2015; Wang et al., 2014, 2015). As in Mongolian gerbils and mandarin voles, prairie vole fathers raised by only their mothers performed less paternal behavior toward their own offspring than did fathers raised by both parents (Ahern et al., 2011). A study on California mice has also shown that the amount of licking and grooming received by pups was significantly decreased in father-absent families. One recent study revealed that male California mice raised by fathers whose paternal care was experimentally reduced engaged in less huddling and grooming of their offspring (Gleason & Marler, 2013). This means that a significant reduction in paternal care influences the development of offspring paternal behavior. Importantly, pups reared by single mothers receive less total parental care (especially because of a lack of paternal pup grooming) and face a thermoregulatory deficit in the nest compared to those reared by both parents (Gromov, 2009, 2011a). These early postnatal conditions associated with paternal deprivation may negatively affect the subsequent behavioral development of young in biparental rodent species. In other words, the postnatal social environment experienced by offspring shapes the systems that support paternal behavior in adulthood. Recent research suggests that grooming of pups is an important contributing factor to the development of paternal care and the epigenetic (re)programming of male parental behavior in rodents (Gromov, 2009, 2011a, 2011b, 2013, 2018).
Some experimental studies show that, through pup grooming, it is possible to stimulate paternal responsiveness even in males of species with uniparental care. For example, in a cross-fostering study, male meadow voles raised by prairie vole foster parents received higher levels of parental care, especially grooming, during pre-weaning development and subsequently performed some paternal behaviors toward their own offspring, in contrast to male meadow voles raised by conspecific foster parents, which showed no paternal care (McGuire, 1988). Similar to the cross-generational transmission of maternal behavior (Champagne, 2008), this paternal transmission is likely to involve altered gene regulation in neural systems associated with social and reproductive behavioral phenotypes, resulting in a later recapitulation of the social context of early development. Likely targets include the dopaminergic, neuropeptide (oxytocin and vasopressin) and neuroendocrine systems that are known to be impacted by paternal deprivation (Brown et al., 2013; Cao et al., 2014; Gos et al., 2014). The role of specific molecular mechanisms such as DNA methylation, histone modifications or the effects of miRNA, as well as the roles of the enzymes that regulate these factors, in this transmission of course requires further investigation. Epigenetic effects related to paternal care in rodents have also been supported by the results of some other studies. For example, in experiments with California mice (Gleason & Marler, 2013), the paternal behavior performed by castrated and sham-operated males and, subsequently, by their sons was estimated. Castration or sham surgeries were performed on adult males to generate mice that huddled and groomed their offspring at quantitatively different levels. When tested in their home cage with one of their pups, castrated fathers took significantly longer than intact fathers to approach and begin caring for their pups, and spent significantly less time huddling and grooming their pups. These differences were repeated in the subsequent generation: gonadally intact sons of castrated fathers spent significantly less time huddling and grooming their pups, and performed significantly more retrievals of pups, than sons of intact fathers. Although neural and endocrine measures were not characterized in the offspring in this study, previous studies showed that sons of castrated males of California mice had lower AVP-immunoreactivity in the dorsal region of the BNST compared to sons of intact males, as well as higher AVP-immunoreactivity in the paraventricular nucleus; thus, AVP may be a critical neurochemical underlying the non-genomic transfer of behavioral patterns (Frazier et al., 2006). Therefore, individual differences in paternal behavior may be transmitted across generations, potentially mediated by changes in AVP signaling within the brain due to epigenetic mechanisms. A comparison of paternal behavior in prairie voles that had received different patterns of care from their own parents was carried out by Perkeybile et al. (2013). Offspring of "high-contact" parents experienced high total levels of contact with their parents but relatively low levels of contact with their fathers specifically, compared to offspring of "low-contact" parents. When tested with an unfamiliar pup shortly after weaning, sons of high-contact pairs engaged in more non-huddling contact with the pup than sons of low-contact pairs.
Cross-fostering studies demonstrated that this effect was mediated primarily by experiential, rather than genomic, transmission of behavior, as juvenile males' behavioral responses to pups correlated with several components of parental behavior that they had received from their foster parents (Perkeybile et al., 2015). In addition, binding of AVP and oxytocin in the BNST of juvenile males correlated significantly or marginally, respectively, with several aspects of parental care received, as well as with AVP and oxytocin binding in their biological parents. Taken together, all these studies demonstrate that the quality and/or quantity of parental care that males receive during early postnatal development, especially through tactile stimulation from their parents (Gromov, 2011a, 2011b, 2013), influences their behavioral responses to pups in adulthood, and that differences in parenting style can be transmitted to the next generation. They also indicate that these developmental effects on paternal behavior are associated with, and perhaps mediated by, changes in oxytocin and AVP signaling within the brain. Conclusion Paternal care is an evolutionary mystery. Analysis of the various circumstances in which paternal care has been observed has provided no clear conclusions concerning its evolutionary scenarios. The fitness benefits of providing paternal care are also not clearly understood. Male parental care in rodents is undoubtedly associated with social monogamy, or more correctly, with a family-group lifestyle. However, no convincing hypothesis accounting for the evolution of social monogamy in rodents has been proposed. Similarly, current evolutionary models do not convincingly explain the emergence of male parental care among different representatives of the order Rodentia. The onset, activation and maintenance of paternal care are governed by complex interactions in neuroendocrine systems that change during ontogeny. The neural adaptations that take place in male parents are less uniform and hormone-dependent than those that take place in female parents. Moreover, these changes are shaped, to a great extent, by active care-giving, exposure to the pregnant or lactating female, and the presence or absence of specific infant stimuli. The male's prior social experiences, the type of parental care he received, and his experience with pups may all influence his initial responsiveness to pups. Depending on the species, different components of male experience as well as different exogenous cues are likely to be involved in the organization and activation of paternal behavior. These exogenous stimuli, including maternal response, paternal effects, and infant stimuli, must then activate the central nervous system to evoke the needed learning and behavior. Several hormones, including steroids (testosterone, estradiol, progesterone) and neuropeptides (prolactin, vasopressin, and oxytocin), are involved in the onset, the maintenance, or both the onset and the maintenance of parental behavior, including direct paternal care. The effect of testosterone is not universal and is, moreover, species-specific: in some species, testosterone inversely correlates with paternal behavior, while no effect or a positive correlation has been found in other species. The role of progesterone in the initiation and maintenance of male parental care is not yet clear.
The limited results of neurobiological studies to date contain some contradictions, and therefore further investigations are needed for a better understanding of the role of estrogens as well as neuropeptides in the activation and maintenance of rodent paternal behavior. Recent research shows that paternal environmental conditions can affect the phenotypes of offspring. Extensive genetic and molecular evidence supports a role for interconnected epigenetic information carriers such as RNAs, chromatin state, and DNA modifications in the transgenerational inheritance of epivariable phenotypes. In most cases of transgenerational environmental inheritance, it is not yet clear how the relevant information is carried from males to their offspring, but epigenetic information is likely to be relevant in most such cases. Future studies should focus on the epigenetic mechanisms that underlie paternal behavior in rodents. In general, much remains to be learned about paternal care in rodents, and promising insight will likely come from broader studies using a multi-faceted proximate/ultimate approach involving within- and between-species comparisons in free-living rodent species. Field and experimental studies of rodents exhibiting paternal care, as well as appropriate genetic studies, will be a valuable addition to understanding how paternal care has arisen in the various radiations of rodent taxa, given the broad expression of paternal care among rodents and the distribution of the phenomenon among many distantly related taxa.
2020-05-31T17:23:21.466Z
2020-05-29T00:00:00.000
{ "year": 2020, "sha1": "aa085635ac002c494e8cf7c9b052ce0892b10550", "oa_license": null, "oa_url": "https://doi.org/10.15298/rusjtheriol.19.1.01", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "aa085635ac002c494e8cf7c9b052ce0892b10550", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
57447807
pes2o/s2orc
v3-fos-license
A longitudinal mixed methods study on changes in body weight, body composition, and lifestyle in breast cancer patients during chemotherapy and in a comparison group of women without cancer: study protocol Background More than 60% of women diagnosed with early stage breast cancer receive (neo)adjuvant chemotherapy. Breast cancer patients receiving chemotherapy often experience symptoms such as nausea, vomiting and loss of appetite that potentially affect body weight and body composition. Changes in body weight and body composition may detrimentally affect their quality of life, and could potentially increase the risk of disease recurrence, cardiovascular disease and diabetes. To date, it is not clear from existing single-method (quantitative or qualitative) studies whether changes in body weight and body composition in breast cancer patients are treatment related, because previous studies have not included a control group of women without breast cancer. Methods We therefore developed the COBRA-study (Change Of Body composition in BReast cancer: All-in Assessment-study) to assess changes in body weight, body composition and related lifestyle factors such as changes in physical activity, dietary intake and other behaviours. Important and unique features of the COBRA-study are that it used I) a "Mixed Methods Design", in order to quantitatively assess changes in body weight, body composition and lifestyle factors and to qualitatively assess how perceptions of women may have influenced these measured changes pre-, during and post-chemotherapy, and II) a control group of non-cancer women for comparison. Descriptive statistics on individual quantitative data were combined with results from a thematic analysis of the interview and focus group data to understand patients' experiences before, during and after chemotherapy. Discussion The findings of our mixed methods study, on chemotherapy-treated cancer patients and a comparison group, can enable healthcare researchers and professionals to develop tailored intervention schemes to help breast cancer patients prevent or handle the physical and mental changes they experience as a result of their chemotherapy. This will ultimately improve their quality of life and could potentially reduce their risk for other co-morbidity health issues such as cardiovascular disease and diabetes. Background Breast cancer is the most common cancer in women worldwide and makes up 25% of all female cancers [1]. Due to early detection through screening programs and therapeutic improvements, the five-year survival rate in the Netherlands has increased from 78 to 88% during the last two decades [2][3][4][5]. This implies that the number of breast cancer survivors will steadily increase in the future. The impact of chemotherapy on general health therefore becomes more important to the health care system. For breast cancer patients, the side-effects of chemotherapy can be both short- and long-term. Regularly reported short-term side-effects include nausea, vomiting, hair loss, loss of energy and fatigue [6], taste and smell alterations [7][8][9][10][11], psychological distress [12][13][14] and even chemotherapy-related hospitalizations [15]. Long-term side-effects of chemotherapy include psychological distress and physical effects such as fatigue and loss of energy, weight gain [16][17][18], unfavourable changes in body composition (increase in fat mass and loss of muscle mass) [17,[19][20][21] and loss of muscle strength [22,23].
Weight gain and changes in body composition may have a profound negative influence on quality of life and self-esteem in breast cancer survivors and may also increase the risk of several co-morbidities, such as cardiovascular disease [24,25], diabetes [26] and breast cancer recurrence [27][28][29][30]. Gaining a better understanding of the processes that underlie these short- and long-term side effects is critical to enable the development of tailored intervention schemes. To date, studies on these short- and long-term side effects have had relatively insular focuses. Several studies reported that chemotherapy is associated with weight gain. The earlier studies report large weight changes [31,32], while the more recent reports suggest less weight gain [17][18][19]. In our meta-analysis [33] we found an overall body weight increase during chemotherapy of 2.7 kg (95% CI 2.0, 7.5) with a high degree of variation: some women gain more than 10 kg while others lose weight. Changes in body weight, body composition and muscle strength among women with breast cancer undergoing chemotherapy were possibly influenced by lifestyle factors, such as physical activity and dietary intake, and by the women's perception of these factors. Patients are often forced to adapt their daily activities during treatment [34]. Several studies suggest that reductions in physical activity during chemotherapy may contribute to weight gain [9,21,35], lower quality of life [36] and an increased risk of disease recurrence [37][38][39][40]. Women may be negatively influenced in their decision to engage in physical activity [41], e.g. through pressure from friends and family to rest and not be active [41], or due to lack of time because of taking care of children [42], lack of motivation [43], the side effects of chemotherapy [44], the need to conserve energy, fear of possible injury [45], and difficulty staying focused during physical activity because of "chemo brain" [46]. When breast cancer patients tried to be more physically active during therapy, however, most of them experienced increased wellbeing and restored energy levels during physical activity [47]. Reports on changes in dietary intake during chemotherapy in cancer patients, and how these changes influence body weight and body composition both short and long term, vary widely: studies have described no changes [7,21,48], increases [49], or decreases [32,50] in energy intake during chemotherapy. These variations could be due to the different designs of the studies and time points of measurements. In a recent study by our research group [51], a 10% lower energy intake through dietary changes (including an absolutely lower intake of protein, fat and alcohol) was observed in women with breast cancer during chemotherapy treatment (n = 117), based on 24-h recalls. Furthermore, these breast cancer patients scored significantly lower on their self-reported taste, smell, appetite and hunger questionnaires. These results could potentially be due to chemotherapy-induced symptoms such as a dry mouth, lack of energy, nausea and difficulties with chewing [51]. In qualitative studies, patients with breast cancer stated during interviews that, because of these changes in taste and smell, they also experienced a decreased enjoyment of food and a change in the role of food: eating for the sake of eating and the use of comfort food as a reward [52].
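The pooled weight-change estimate cited earlier in this paragraph comes from combining study-level results. The following is a minimal sketch of fixed-effect inverse-variance pooling of that general kind; the study means and standard errors are hypothetical placeholders, not the actual inputs of our meta-analysis [33]:

```python
# A minimal sketch of fixed-effect inverse-variance pooling of
# study-level mean weight changes. All numbers are hypothetical
# placeholders, not data from the meta-analysis cited in the text.
import math

means = [1.8, 3.2, 2.5, 4.0]   # mean weight change per study (kg)
ses = [0.6, 0.9, 0.5, 1.2]     # standard error per study (kg)

weights = [1.0 / se ** 2 for se in ses]  # inverse-variance weights
pooled = sum(w * m for w, m in zip(weights, means)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se

print(f"pooled change = {pooled:.1f} kg (95% CI {ci_low:.1f}, {ci_high:.1f})")
```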
Symptoms of psychological distress are twice as high among breast cancer patients as in the general female population. The impact of breast cancer may have a long-term effect, extending for years after diagnosis [53,54]. Dealing with the diagnosis 'breast cancer' could have a profound influence on patients' perceptions of changes in body composition and weight-related lifestyle factors. Despite available information and guidelines, studies suggest that women hardly experience support in their struggle to deal with the diagnosis and treatment [55][56][57][58]. Many women experience psychological stress and impaired quality of life as a result of a breast cancer diagnosis and treatment [6,13,59,60]. Studies suggest that women's overall health and their altered bodies are constant reminders of their illness and its treatment [9,16,52,[55][56][57][58][61][62][63][64]. Women reported feeling frustrated at not being able to control their weight [63] and their dietary intake [52]. Although for most women weight management has lower priority during treatment [65], they have to cope psychologically with their diagnosis and the effects of the treatment. From the current literature, it is clear that for breast cancer patients the side-effects of chemotherapy can be both short- and long-term. Our understanding of the processes that underlie these short- and long-term side effects is, however, still incomplete. In what way patients' perceptions of lifestyle factors, such as changes in physical activity and dietary intake, influence changes in the body weight and body composition of breast cancer patients is inconsistent and unclear. Furthermore, to date, the majority of previous studies [7,17,19,21,63] only assessed changes in patients undergoing chemotherapy during treatment and did not include a comparison group of women without breast cancer. We developed the COBRA study to objectively assess changes in body weight, body composition and related lifestyle factors, and how the perceptions of patients influence these factors. A unique feature of the COBRA study is that it was designed as a longitudinal "Mixed Methods Study" combining both quantitative and qualitative research methods and data. The publications of the COBRA study thus far focused on the quantitative design and quantitative findings of the study [33]. The main aim of this manuscript is to describe the methodological design of the qualitative part of the COBRA-study and how the quantitative and qualitative data can be combined in a mixed-methods approach. This enables us to not only quantitatively assess changes in body weight, body composition and related lifestyle factors, but also to qualitatively assess how the perceptions of breast cancer patients influence these factors, not only during chemotherapy but also pre- and post-chemotherapy. Furthermore, a group of women without breast cancer is assessed as a comparison group, to evaluate the significance of the breast cancer patients' results; the majority of studies so far did not include such a comparison group. Design and methods To prepare the study protocol, we conducted a qualitative pilot study among 20 breast cancer patients who had already completed their chemotherapy, in order to gain insight into their experiences with diagnosis and treatment.
We learned from these patients that their experience of having cancer influenced their attitude towards quality of life, physical activity and nutrition, beyond the direct effects of chemotherapy such as nausea, vomiting, hair loss and loss of energy. Results from the pilot study showed that all breast cancer patients expressed an urgent need for information concerning nutrition and physical activity during chemotherapy. The pilot study also suggested different results based on age and BMI group: older women and women with a BMI > 25 kg/m2 had a less urgent need for this information and were physically less active when compared to younger women or women with a BMI < 25 kg/m2. We also found that women were sometimes able to come up with solutions to meet their own needs when they were confronted with changes in dietary intake, physical activity and quality of life during chemotherapy. These pilot study results confirmed the relevance of performing an in-depth study, because patients expressed an urgent need for information about nutrition and physical activity. Mixed-method design We designed a longitudinal, observational, mixed-method approach to understand patients' experiences before, during and after chemotherapy, using repeated measurements and interviews as well as focus group meetings (Table 1). The purpose of pairing qualitative and quantitative components [66,67] within this study is to provide a better understanding of the changes in body weight and body composition. Qualitative measurements of the women's perception of physical activity and dietary intake, as well as factors related to coping with diagnosis and treatment, can help to explain and interpret quantitative measurements of the factors influencing changes in body weight and body composition. A mixed-method study is therefore a good approach to obtain in-depth information and knowledge of the problem (i.e. changes in body weight and body composition) and also provides comprehensive datasets [68]. In addition, this approach assists in increasing the reliability and credibility of the findings through the combination of quantitative and qualitative results, i.e. methodological triangulation [69]. For breast cancer patients, data collection took place four times during this study: T1: pre-chemotherapy, T2: midway through chemotherapy, T3: post-chemotherapy (1-3 weeks after the last chemotherapy cycle), and T4: half a year post-chemotherapy. For the non-breast cancer (comparison) group, data collection took place at T1: at inclusion, T2: after 3 months, T3: after 6 months, and T4: after 1 year. For an overview see Table 1. Approval for the COBRA-study was obtained from the Medical Ethics Committee of Wageningen University, The Netherlands (ABR NL40666.081.12) and the Scientific Advisory Committee VUMC/VU. Participants and recruitment Two hundred patients with breast cancer, indicated for (neo)adjuvant chemotherapy, were recruited from 11 hospitals in the Netherlands. Inclusion criteria were 1) women > 18 years old, 2) newly diagnosed, non-advanced (I-IIIA) operable breast cancer scheduled for initiating 2nd or 3rd generation adjuvant or neo-adjuvant chemotherapy, and 3) able to communicate in Dutch. An exclusion criterion was pregnancy or the intention to become pregnant within the study period.
The comparison group of women without any history of cancer was recruited via the women with breast cancer, who were asked to distribute information about the study to female friends, acquaintances and colleagues of the same age or 2 years younger or older. Women without cancer contacted the researchers if they were interested in participating in the study. We recruited 200 women for this comparison group. All respondents signed a written informed consent. For the mixed-method part of the COBRA-study, a subgroup of N = 25 breast cancer patients was selected for the qualitative part of the study (Table 1). Purposive sampling was applied to reach as wide a range of perspectives as possible, and to capture the broadest set of information and experiences. Based on previous literature and the results from our pilot study, we used the following criteria for this sampling: variation in age (25-64 yr), pre- or postmenopausal status (pre n = 10, peri n = 3, post n = 12), Body Mass Index (BMI) > 25 kg/m2 (n = 11) or < 25 (n = 14), and stage I to IIIa breast cancer. With the exception of the last criterion, the comparison group of women without breast cancer (n = 15) was selected using the same criteria. Data collection used for the mixed method study Quantitative data collection Body composition, body weight and dietary intake were assessed using: 1) a total body Dual-Energy X-ray absorptiometry (DEXA) scan, 2) a Food Frequency Questionnaire (FFQ) [70] on energy intake, 3) two telephone-based 24-h dietary recalls during chemotherapy for actual dietary intake, because of the expected high day-to-day variation during chemotherapy treatment, and 4) an appetite questionnaire. Physical activity was assessed by questionnaire [72] and by an accelerometer which the women wore for 7 days. Quality of life was assessed by the EORTC C-30 questionnaire [73], depression and anxiety by the Hospital Anxiety and Depression Score (HADS) [74], and fatigue by the Multi-Fatigue Inventory (MFI) [75]. See Table 1 for a description and the timing of the different measurements. Qualitative data collection Interviews The timing of the interviews is shown in Table 1. Semi-structured interviews were held, guided by a topic list based on a literature review and our pilot study. Potential changes in aspects of dietary intake, physical activity and quality of life were questioned from the perspective of the participants. Patients were asked to elaborate on these topics and to mention all issues relevant from their own perspective. Additional questions were asked to uncover beliefs, values, and motivations that underlie individual health behaviours, such as response to diagnosis, physical and mental health, and influences from the social environment during and after chemotherapy. Each of the four interviews at T1, T2, T3 and T4 with every patient builds on the previous one. Each interview explicitly asks the women how their experiences change over time. Interviews take place at patients' homes or elsewhere, based on the preferences of the patients. All interviews are audiotaped and transcribed verbatim. Patients are asked to give feedback on a written summary of the interview to foster validity (member checks). The interviews with the non-breast cancer women in the comparison group provide us with information to obtain a better understanding of the perceptions and experiences of the patients during treatment.
Focus groups Focus group sessions were conducted after the interviews to validate, enrich and further explore the data gathered during the interviews of the women with breast cancer (Table 1). In these sessions, we also explored possible strategies the women use to curb identified changes in dietary intake, physical activity, body weight and quality of life. Since the study has an emergent design, the qualitative study design evolves over time, and the themes to be discussed in the focus group sessions emerge from the results of the preceding personal interviews. For the focus group sessions, both interviewed and non-interviewed patients are invited, and eight to ten respondents participate in each assigned focus group. The sessions are moderated by a qualified researcher and observed by a second member of the research team. The focus group sessions are recorded on audiotape. The final number of focus group sessions depends on the validation and enrichment of the data. Data analysis Qualitative data Analysis of the interview data starts during data collection. All transcripts of the interviews are analyzed using a thematic content analysis with comparisons within and across the interviewed respondents [76]. The qualitative data analysis software MAXQDA (VERBI Software, Marburg, Germany) is used to manage the data [77]. Transcripts are subsequently disentangled, divided into fragments and open-coded. Codes are categorized into subthemes and main themes. Relationships between the subthemes are explored, to eventually cover the subthemes under the overall themes. The codes, subthemes and themes are discussed within the research team until consensus is reached on all the themes. Codes and (sub)themes are structured in a code tree. The constant comparison method [76] is used in order to understand the differences, as well as similarities, between and within women. The main results are discussed within the research team to enhance the robustness of the findings. The themes recognized are used to find answers to the aim of the study, and to describe patterns and mechanisms within the whole dataset to provide a broader overview of the findings. The data gathered during the individual interviews are validated and enriched in the focus group sessions. Combining these two methods (interviews and focus groups) enabled us to check for inconsistencies and continuities between what was said in individual interviews and what emerged from interactive group discussions. Combined data Mixed methods is an approach that draws upon the strengths and perspectives of each method: the quantitative method addresses the natural physical world, while the qualitative method addresses the reality and influence of human experience [78]. The collection and analysis of both data sets are carried out separately, and the findings are not compared or consolidated until the interpretation stage, followed by sequential data analysis: the data are analyzed in a particular sequence, with each analysis making use of findings from the other method [79]. Quantitative results obtained from the measurements and questionnaires (Table 1) are combined with the qualitative results obtained from the individual interviews and focus group sessions. Together, these data sets can provide a more complete and comprehensive evaluation of the changes in body weight and body composition [80].
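To illustrate the sequential linkage step just described, the following is a minimal sketch (not the COBRA analysis code) of how individual-level quantitative measurements and coded interview themes can be kept in separate data sets and joined only at the interpretation stage; all identifiers, column names and values are hypothetical placeholders:

```python
# A minimal sketch of the mixed-methods linkage step: quantitative
# measurements and coded interview themes are stored separately and
# joined per participant and timepoint only at interpretation.
# All identifiers, columns and values are hypothetical.
import pandas as pd

# Quantitative side: repeated measurements at the study timepoints.
quant = pd.DataFrame({
    "participant_id": ["P01", "P01", "P02", "P02"],
    "timepoint":      ["T1",  "T3",  "T1",  "T3"],
    "weight_kg":      [68.4,  71.0,  75.2,  74.1],
    "fat_mass_kg":    [22.1,  24.6,  27.8,  27.0],
})

# Qualitative side: one row per theme occurrence from the coding step.
qual = pd.DataFrame({
    "participant_id": ["P01", "P01", "P02"],
    "timepoint":      ["T3",  "T3",  "T3"],
    "theme":          ["reduced activity", "comfort eating", "restored energy"],
})

# Derive individual change scores, then attach the themes voiced by
# the same woman at the same timepoint.
quant = quant.sort_values(["participant_id", "timepoint"])
quant["weight_change_kg"] = quant.groupby("participant_id")["weight_kg"].diff()

linked = quant.merge(qual, on=["participant_id", "timepoint"], how="left")
print(linked)
```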
Findings generated by the different data collection methods could elucidate aspects of the changes in body weight and body composition, allowing us to explore the outcome of the analysis, whether that be convergent, where qualitative and quantitative findings lead to the same conclusion; complementary, where qualitative and quantitative results can be used to supplement each other; or divergent, where the combination of qualitative and quantitative results provides different (and at times contradictory) findings [69,81]. In this study the quantitative part describes how the body changes during chemotherapy and the period thereafter (biomedical changes). The qualitative part focuses on how women experience potential changes in their body and what role eating and exercise behaviour plays (lifestyle changes). The combination of these two parts (quantitative and qualitative) makes it possible to explain and interpret these body and lifestyle (dietary and physical activity) changes in order to better understand changes in body weight and body composition [80]. Descriptive results of the quantitative measurements, such as body weight, body composition, muscle strength, quality of life, smell and taste, and depression and anxiety, on an individual level are linked to the results of the interviews and focus group sessions; in other words, they are linked to the women's perceptions of these issues, as identified by the different themes in the thematic analysis approach. As a result, certain potential changes in body weight and body composition during chemotherapy can be better understood with the help of the women's perceptions of physical activity, dietary intake and their subsequent lifestyle behaviour. Discussion In this paper we describe the methodological design of the qualitative part of the COBRA-study and how the quantitative and qualitative data can be combined in a mixed-methods approach. To our knowledge, this study is the first longitudinal study in women with breast cancer that combines both qualitative and quantitative methodologies with measurements taken before, during and after chemotherapy. Furthermore, it is the first study to have a control group of non-cancer women for comparison. This mixed methods study focuses specifically on the quantitative and qualitative changes in body weight and body composition in patients with breast cancer during chemotherapy compared to women without breast cancer. It explores the perceptions of women with and without breast cancer and how they deal with quantitatively measured changes in body weight, taste and smell, dietary intake, physical activity and quality of life. Due to the longitudinal nature of the study, the measurements and the perceptions and experiences of breast cancer patients at various time points (pre-, during and post-chemotherapy treatment) can be better understood. Specific time points at which additional support for women is required can be evaluated and defined. The collection of both qualitative and quantitative data facilitates a more complete insight into and a better understanding of the changes in body weight, body composition and muscle strength. The findings of this study will help researchers, health care professionals and the breast cancer patients themselves to understand the struggles women with breast cancer undergoing chemotherapy have, and their needs during their treatment.
This information will enable health care professionals to develop practicable, feasible and tailored interventions that could help breast cancer patients to handle or prevent treatment- and weight-related lifestyle changes and ultimately improve their quality of life and future health. Abbreviations COBRA-study: Change Of Body composition in BReast cancer: All-in Assessment-study Acknowledgements We would like to thank Rebecca Rendle-Buehring for her help in preparing the manuscript. Funding The current study is supported by grants UW2011-5268 and UW2011-4987 from the Dutch Cancer Society (KWF Kankerbestrijding). The sponsor has no role in the design, data collection, analysis and interpretation of the data, nor in writing the article or the decision to submit for publication. Availability of data and materials As this is a study protocol, data sharing is not applicable. Study materials are available from the corresponding author on reasonable request. Authors' contributions AK, MV, MB, MD, MB, HL, JV, YV, EK, RW, MW contributed to the conception and design of the study. All authors have read, critically reviewed and approved the final manuscript for publication.
2019-01-06T14:20:20.812Z
2019-01-05T00:00:00.000
{ "year": 2019, "sha1": "3b54e19d42e4339651fa9389564486c483d0fbbc", "oa_license": "CCBY", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-018-5207-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4730c92caf8513e261f2c2f7a6e870b5b89fc77c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247390048
pes2o/s2orc
v3-fos-license
Influences of Rainfall and Temperature on Malaria Endemicity in Cameroon: Emphasis on Bonaberi District Relating the influence of climate on the occurrence of a vector-borne disease like malaria quantitatively is quite challenging. To better understand the disease endemicity, the effects of climate variables on the distribution of malaria in Cameroon are studied over space and time, with emphasis on the Bonaberi district. Meteorological monitoring can lead to proactive control. The government of Cameroon, through the National Malaria Control Program, has put in place strategies to control and stop the spread of the disease. This study is therefore geared towards assessing the yearly parasite ratio of malaria over the ten regions of Cameroon and towards working out the influence of rainfall and temperature on disease endemicity, with emphasis on a district of Douala. The model used is the VECTRI model, which shows the dynamic link between climatic variables and malaria transmission. The observed and simulated parasite ratios showed a maximum correlation of 0.75 in 2015. A positive relationship between temperature, rainfall and malaria is revealed in this study, but Bonaberi has malaria all year round. The West region is the least affected by malaria. We recommend that, for the VECTRI model to perform better, population data be incorporated in the model. Introduction Many people worldwide are at risk from climate-related health problems, for reasons that may include climate variation and population density (Afrane et al., 2004; Ayanlade, 2020). Malaria, a climate-related disease, is an ancient and much-discussed vector-borne disease, and it still remains a public health issue in Cameroon. It is one of the most prevalent mosquito-borne parasitic diseases throughout tropical and subtropical regions of the world (Mfonfu, 1986; Titanji et al., 2001; Fru-Cho et al., 2013; Nyasa et al., 2021). Malaria is caused by a parasite, transmitted to humans through the bite of infected female Anopheles mosquitoes. Five Plasmodium species are currently involved in malaria transmission: P. vivax, P. malariae, P. ovale, P. knowlesi and particularly P. falciparum, which is the main malaria species in Cameroon (Craig et al., 1999). While there are affordable drugs to treat and stop the disease, malaria still has a negative effect on people's health worldwide (WHO, 2015). Globally, according to WHO's latest World Malaria Report, 241 million malaria cases and 627,000 malaria deaths were recorded in 2020 (WHO, 2021). In endemic areas, pregnant women, children under five years old, and immune-suppressed individuals are the most vulnerable (WHO, 2009; Danwang et al., 2021); children under five account for 67% of malaria deaths in the whole world. In sub-Saharan Africa, the malaria burden is very high, accounting for over 94% of world malaria deaths; this is particularly due to the climate and hydrological conditions that favour the breeding of mosquitoes. While there are affordable drugs to treat and stop the disease, more than 90% of the population in Cameroon is at risk of malaria infection. Annually, about 41% of the population encounter malaria at least once. In addition, malaria is the root cause of 50% - 56% of morbidity and 40% of annual mortality among children (Mbenda et al., 2014). Malaria inflicts an economic burden on both governments and individuals, with an estimated cost of about US $12 billion each year worldwide (National Malaria Control Programme in Cameroon, 2008).
The government of Cameroon has put in place various intervention strategies. These include the free distribution of treated mosquito nets, free treatment of uncomplicated malaria for children from zero to five years, and indoor residual spraying, in addition to the reduction of the cost of diagnosis and treatment of simple malaria in health care facilities to five thousand francs CFA (Coldiron et al., 2017), which has enabled inhabitants to be treated for malaria. Also, free intermittent preventive treatment for pregnant women has been provided since 2005, and seasonal malaria chemoprevention for children 3 to 59 months old in the Far North and North regions during the rainy season has been implemented since 2016 (Coldiron et al., 2017). Malaria epidemic dynamics are strongly influenced by climate (Caminade et al., 2014). Drivers of malaria include rainfall, temperature, humidity, immunity and population epidemiology (Laneri et al., 2010; Boyce et al., 2016). All of these influence vector multiplication and distribution. Temperature specifically has an impact on the developmental period (Alonso et al., 2010). Rainfall provides the water available for vector survival (Abiodun et al., 2016). Recently, studies have been carried out concerning climate and human health. Climate change has an effect on the occurrence of malaria in Africa, Cameroon included. Temperature and rainfall are among the main climatic variables, and most of the agents that cause climate-related diseases are sensitive to them (Ameneshewa, 1995; Boakye et al., 2004). Studies such as those carried out by Ayanlade (2020) show that temperature has an impact on all the developmental stages (Leeson, 1939; Kiszewski & Teklehaimanot, 2004; Paaijmans et al., 2007; Paaijmans et al., 2009). Land use and land cover also affect vector multiplication, so the vector population does not depend only on meteorological variables (Koenraadt et al., 2003; Paaijmans et al., 2010a, 2010b). In recent times, and over the last few decades, mathematical (dynamical as well as statistical) models have been employed to study disease epidemiology (Macdonald et al., 1968; Bouma et al., 1994; Smith et al., 2012; Matsuoka & Kai, 1994). A statistical model is based on statistical relations derived from past observations, or on static relations between various variables under given conditions. In dynamic models, the system evolves through time variations of the variables that govern the epidemic. These models have arrived at divergent conclusions. Malaria has been studied for quite a long time and is one of the first human diseases to be modeled mathematically. Sir Ronald Ross explained that Plasmodium spreads via intermediary mosquitoes. In the 1900s he proposed a model that took into consideration the human host and the mosquito population, but it did not take into consideration the mosquito life cycle (Smith et al., 2012). Another dynamical model (Ermert et al., 2011) uses daily temperature and precipitation data, but it does not fully take humidity into consideration.
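As an illustration of the host-vector structure of the Ross-type model just described, the following is a minimal sketch of two coupled equations for the infected fractions of humans and mosquitoes, with no mosquito life cycle; all parameter values are illustrative and are not estimates from this study:

```python
# A minimal Ross-style host-vector sketch: infected fraction of
# humans (x) and of mosquitoes (z), no mosquito life cycle.
# All parameter values are illustrative placeholders.
m, a = 10.0, 0.3      # mosquito:human ratio, biting rate (bites/day)
b, c = 0.5, 0.5       # mosquito->human and human->mosquito transmission probs
r, g = 0.05, 0.1      # human recovery rate, mosquito death rate (per day)

x, z = 0.01, 0.0      # initial infected fractions
dt, days = 0.1, 365
for _ in range(int(days / dt)):   # forward-Euler integration
    dx = m * a * b * z * (1.0 - x) - r * x
    dz = a * c * x * (1.0 - z) - g * z
    x += dt * dx
    z += dt * dz

print(f"after one year: infected humans = {x:.2f}, infected mosquitoes = {z:.2f}")
```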
The Vector-Borne Disease Community Model of ICTP (VECTRI) is a mathematical dynamical model that incorporates the impact of weather on malaria with reasonable surface hydrology, running over regional scales with resolution down to 1 km. It incorporates population interactions (migration, immunity) and interventions (spraying, drugs, bed nets) (Tompkins & Ermert, 2013). This study is therefore geared towards assessing the yearly parasite ratio (the number of tested-positive malaria cases divided by the number of suspected malaria cases) over the ten regions of Cameroon, and also towards working out the influence of rainfall and temperature on disease endemicity, with emphasis on the Bonaberi district in Douala. Ethics Statement We declare that the epidemiological data in this study were collected and compiled by the author from the national malaria program of Cameroon, based on records from the public health system, and analysed anonymously. Study Area This study is carried out in Cameroon, situated within latitude 7.36˚N and lon- Data Both observed data from the national malaria program of Cameroon and simulated data from satellite climatology data are used. Climatology data include rainfall and temperature from January 2012 to December 2017 for the whole of Cameroon, and from January 2017 to December 2019 for the Bonaberi analysis. Epidemiological Data Mean yearly malaria morbidity data were compiled from the national malaria control program between 2012 and 2017. Mean monthly confirmed malaria cases were obtained from the Bonasama district hospital (Bonaberi) from 2017 to 2019. The VECTRI model is evaluated using these two data sets. With these, the parasite ratio is calculated as the number of confirmed malaria cases divided by the number of suspected malaria cases. Meteorological Data Mean daily rainfall data are obtained from the Famine Early Warning Systems Network ARC version 2 (FEWS/ARC2). The daily gridded 2 m temperature data were taken from the ECMWF ERA-Interim reanalysis data (Dee et al., 2011). These values are used as input to drive the VECTRI model to simulate climate-driven malaria transmission over the ten regions of Cameroon. Secondly, other precipitation data are obtained from the Climate Hazards Group Infra-Red Precipitation with Station data (CHIRPS). Temperature again is obtained from ECMWF ERA-Interim reanalysis data. The Model Simulations are done using the Vector-Borne Disease Community Model of ICTP (VECTRI) (Tompkins & Ermert, 2013). VECTRI uses a flexible spatial resolution that ranges from a single location to a regional scale (10 - 100 km). VECTRI is a mathematical model for malaria transmission and takes into consideration the effects of temperature and rainfall on the parasites and their developmental stages. The limitation of this scheme is surely linked to temperature, as air temperature is used instead of water temperature. The mortality rate of the larvae is an important factor for transmission and depends on temperature (Samé-Ekobo et al., 2001; Tompkins & Ermert, 2013). L_M is the total larva biomass per unit surface area of a water pond, and w is the fractional coverage of a grid cell by potential breeding sites; it is given by the surface hydrology component. Larva flushing by heavy rainfall is also an important cause of larva mortality (Tompkins & Ermert, 2013). VECTRI considers human population density in the calculation of human biting rates (HBR) and makes it possible to differentiate between urban, peri-urban and rural transmission rates (Tompkins & Ermert, 2013). In the surface hydrology scheme, the pond fractional coverage w evolves as dw/dt = K_w [P (w_max - w) - w (E + I)], where w_max is the maximum fractional coverage of temporal ponds, E and I are the evaporation and infiltration rates, P is the precipitation rate, and K_w is a linear constant (Leedale et al., 2016).
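To make the surface hydrology step concrete, the following is a minimal sketch that integrates the pond-coverage equation above with forward Euler over a wet and a dry month; the parameter values are illustrative placeholders, not those used in VECTRI:

```python
# A minimal sketch integrating dw/dt = K_w * (P*(w_max - w) - w*(E + I)),
# the pond fractional-coverage equation described in the text.
# All parameter values are illustrative placeholders.
K_w, w_max = 5e-4, 0.3     # linear constant, max fractional pond coverage
E, I = 3.0, 2.0            # evaporation and infiltration rates (mm/day)

w = 0.0                    # pond fractional coverage of the grid cell
for day in range(60):
    P = 15.0 if day < 30 else 0.5   # a rainy month, then a near-dry month
    dw = K_w * (P * (w_max - w) - w * (E + I))
    w = max(0.0, w + dw)            # keep coverage non-negative
    if day in (29, 59):
        print(f"day {day + 1}: pond fraction w = {w:.4f}")
```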
The VECTRI model aims to forecast malaria epidemic outbreaks in endemic zones and to represent malaria transmission in endemic areas (Quakyi et al., 2000). Results In the present work, we present the mean annual observed and simulated PR (parasite ratio), that is, the number of positive malaria cases divided by the number of suspected malaria cases, together with monthly variations of rainfall, mean surface temperature and PR, with the objective of understanding malaria prevalence in Cameroon in general and Bonaberi in particular. The maximum correlation between observed and simulated PR occurred in 2015, probably because of the rainfall intensity that year, which favored breeding grounds for vector multiplication. Globally, the model is able to simulate the observed PR over the ten regions of the country, though it still overestimates the observed value. Monthly PR Variations for Bonaberi District from 2017 to 2019 To better understand malaria endemicity and to predict the transmission of malaria outbreak periods across the Bonaberi locality, the monthly observed and simulated PR values are correlated with rainfall and temperature, as shown in Figure 4. As mentioned before, there is a noticeable gap between the observed and simulated PR values, but both of them follow the same trend during the year. However, PR values do not appear to be well correlated with the monthly rainfall and temperature fluctuations. In 2017, the peak of rainfall was in the month of August and the peak of temperature in the month of December, but simulated PR was at its peak from July to January, and observed PR had a slight peak in the month of May. Also in 2018, simulated PR had peaks all around the year except for the month of March, with a slight decrease; peaks of rainfall were in the months of July and August, and peaks of temperature in the months of December and January. Similarly, in 2019, peaks of temperature were in the months of December and February, peaks of rainfall in the months of July, August and September, and there was a little drop in the parasite ratio in the month of March. Monthly rainfall accumulates over Bonaberi, with rainy-season peaks in the months of July and August (up to 21.7 mm in 2017) and dry-season minima in the month of January (0.37 mm). Again, the monthly mean surface temperature is found to be above 30˚C from October to February and within 26˚C - 29˚C during the other months. In addition, the maximum simulated PR, in the months of May to February, was 0.87 to 0.95, and the observed maximum occurred in the months of May to January. Transmission is high year-round in Bonaberi, except for a little drop in the month of March. Simulated and observed mean seasonal PR for the Bonaberi district are shown in Figure 5. Generally, the seasonal simulated PR is higher than the observed PR in all the years. The SON and JJA seasons show great disparity between observed and simulated PR in all the years: here the rains are very heavy, most of the larvae are washed away by the heavy rains and floods, the mosquito population is reduced, and so fewer malaria cases are registered. DJF and MAM show only a slight difference, in that rainfall is moderate and sufficient for vector survival.
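The two computational steps used throughout these results, computing the monthly parasite ratio and correlating it with climate variables, can be sketched as follows; the case counts, rainfall and temperature values are illustrative placeholders, not the Bonaberi data:

```python
# A minimal sketch of the PR calculation (confirmed / suspected cases)
# and its Pearson correlation with rainfall and temperature.
# All values below are illustrative placeholders.
import numpy as np

confirmed = np.array([120,  95, 140, 160, 180, 150])
suspected = np.array([300, 280, 310, 330, 350, 320])
rain_mm   = np.array([0.4, 2.1, 8.5, 14.0, 21.7, 18.3])   # monthly rainfall
temp_c    = np.array([31.0, 30.2, 29.1, 28.4, 27.8, 28.0])  # monthly mean temperature

pr = confirmed / suspected   # monthly parasite ratio

print("PR vs rainfall r =", np.corrcoef(pr, rain_mm)[0, 1].round(2))
print("PR vs temperature r =", np.corrcoef(pr, temp_c)[0, 1].round(2))
```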
Discussions Observed and simulated PR in Cameroon had their peaks mostly in the South West region, with a PR of 0.84 observed and 0.81 simulated. This is probably because it is a distinctive ecological region that has recently undergone environmental modifications associated with urbanization, rapid population growth, immigration and the presence of the Cameroon Development Corporation (Bigoga et al., 2012). This may affect the vector population, distribution and density, and probably has an impact on malaria transmission efficiency. In this region, transmission is perennial, and its intensity increases with the amount of rainfall and parasitemia. This is in line with studies carried out by Bigoga et al. (2012), which found that the human population has developed and maintained naturally acquired immunity, since the entire population is exposed to infected mosquito bites. The South region also has a high PR, with a maximum simulated PR of 0.74 and an observed PR of 0.69; the seasonal increase in transmission may be due to the presence of the river Sanaga, which provides pools available for vector multiplication. Similar findings in Congo (Carnevale et al., 1992; Manga et al., 1997) are in agreement with the fact that permanent rivers increase malaria transmission rates. An extremely low PR is observed in the West region: although permanent breeding sites such as lakes and swamps are available in Dschang, the suppressing effects of altitude and climate on mosquito biodiversity may limit sibling species (Manga et al., 1997). Due to altitude, climate variation reduces vector survival and multiplication, as is true around Mount Kilimanjaro (Manga et al., 1997). Despite the fact that the river Nkam and its tributaries meander around Sancho, the PR is quite low compared to the South region with its large water bodies, probably because of the absence of the forest ecosystem. In sub-Saharan Africa, the Anopheles gambiae and Anopheles funestus species are found in most locations. In the dry season, Anopheles funestus is resistant, with a low transmission rate caused by the microclimatic conditions of highland regions (Fontenille et al., 2000). This is similar to what happens around Mount Cameroon, where transmission intensity decreases gradually with altitude, and also in Tanzania (Bødker et al., 2003; Maxwell et al., 2003; Wanji et al., 2003). The East region is one of the most affected regions in Cameroon, with a PR between 0.6 - 0.8 simulated and 0.3 - 0.5 observed, perhaps because of the poor road network, which makes movement difficult for intervention teams; constant immigration from the Central African Republic could also increase the spread of the disease. This is in line with a report from the Ministry of Public Health (Minsante, 2018). The Center region is rapidly urbanized and is surrounded by many hills irrigated by several permanent rivers (Knudsen & Slooff, 1992). Its PR ranges between 0.6 - 0.8, both simulated and observed. Transmission there occurs all year round, in agreement with other studies (Ndo et al., 2011). During the rainy season, permanent habitats for mosquitoes could arise from inundations, and in the dry season, urban agriculture resulting from the exploitation of the flood plains may lead to the spread of malaria. In addition, rapid unplanned urbanization, poor drainage, and other human activities, for example public and private construction sites and water from car wash points, may also provide available breeding grounds. This is in conformity with studies carried out in Libreville in Gabon and Dar es Salaam in Tanzania (Antonio-Nkondjio et al., 2019). The Northwest region has stable and high malaria prevalence, with the Anopheles gambiae species dominating (Mourou et al., 2012); this stability may vary with the moderating effects of altitude.
This is contrary to studies carried out by Antonio-Nkondjio et al. (2019), which suggest that highland areas with cooler weather conditions may discourage vector multiplication, thus lowering prevalence (Machault et al., 2009; Mourou et al., 2012; Klinkenberg et al., 2008). In recent times the climate of the North West region has drastically changed from cool and dry to conditions favourable for vector survival, which may account for a high parasite ratio of 0.6 - 0.7. The Littoral region is close to the Atlantic Ocean, and it constantly has malaria, as mentioned by several studies (Klinkenberg et al., 2005; Klinkenberg et al., 2008; Machault et al., 2009; Mourou et al., 2012). Part of this region lies in a marshy area and always has breeding sites for vector transmission; this may be due to poor waste disposal, unplanned urbanization, and poor drainage facilities. This is similar to other urban cities like Accra in Ghana, Dakar in Senegal and Lilongwe in Malawi (Nimpaye et al., 2001; Afrane et al., 2004; Asare & Amekudzi, 2017; Mohamed & François, 2020). The Anopheles gambiae species happens to be more productive in the Littoral region and thus sustains high parasite transmission. Figure 4 presents observed and simulated PR correlated with rainfall and temperature, with peaks of rainfall from July to September, minimum rainfall from November to February, and peak temperatures of 30˚C to 34˚C in the months of minimum rainfall (November, December, January and February). In the months of July, August and September, the temperatures were quite low, at about 28˚C, which is favorable for vector multiplication (Moukam Kakmeni et al., 2018). Also, the simulated PR ranges from 0.8 - 0.9, and the observed PR peaks at 0.4 - 0.5, in almost all the months, with a little drop in March. Peaks in PR follow peaks in rainfall, according to Kamgang et al. (2010). Recent studies confirm that rapid urbanization, increased population growth, poor housing conditions, lack of proper housing and sanitation, poor drainage facilities, and frequent flooding during the rainy season, especially in areas like Mabanda, all help in the spread of vector-borne diseases (Okiro et al., 2007; O'Meara et al., 2008). With two dry seasons and two rainy seasons, Bonaberi has small pools of water most of the time, and the river Wouri estuaries probably provide permanent water bodies that may sustain vector multiplication in the dry season and lead to permanent, reliable breeding sites. The difference between observed and simulated PR is probably due to increased interventions, including the widespread use of insecticide-treated nets (ITNs) (Antonio-Nkondjio et al., 2019), which leads to a decreased parasite ratio and fewer hospital admissions. Most of the inhabitants are now educated on preventive measures with regard to malaria transmission and eradication; in addition, the treatment of patients in private health facilities, with both orthodox and traditional medicines, and the increasingly widespread use of malaria drugs for prophylaxis are not taken into consideration by the VECTRI model (Antonio-Nkondjio et al., 2019). Vector survival is incorporated as a user parameter of the surface hydrology, which VECTRI tends to underestimate: larval growth rate, vector multiplication and the adult population are reduced as soon as the temporal ponds dry off. When the rains are quite heavy the larvae are flushed away, but the surface hydrology scheme accounts for this negative effect (Tompkins & Ermert, 2013).
Conclusion This work compares and assesses the yearly PR of malaria over the ten regions of Cameroon, and correlates monthly rainfall and temperature with both simulated and observed PR to study disease endemicity in the Bonaberi district, in Douala. Results from the simulated and observed PR values imply that the whole of Cameroon is endemic with regard to malaria; the level of endemicity varies from one region to the other depending on its climatic variables. The areas with the highest transmission are mostly in the South West region, followed by the Center and South regions and Bonaberi in Douala; the West region is the least affected. The model used shows the dynamic link between climatic variables and malaria transmission. Rainfall and temperature predominantly control malaria transmission and intensity, as revealed by both simulated and observed results. However, in Bonaberi malaria transmission is high all year round, except for a little drop in the month of March. Because of this, the VECTRI model possesses the potential to provide malaria early warning information for Cameroon and Bonaberi and should be considered by the national malaria program. Moreover, the model was able to discriminate between regions of low and high malaria transmission, and between months of peak malaria in Bonaberi, due to differences in rainfall and temperature. From the population and mosquito infection status obtained from the national malaria program and the Bonasama district, one may conclude that malaria is influenced by temperature and rainfall. The parasite ratio from the model, when compared with observed data, is reliable for monitoring malaria transmission and control. Thus, results from the study will be useful at various levels of decision making, for example in setting up early warning systems and sustainable climate change adaptation strategies for the malaria vector control program in Cameroon. For the VECTRI model to perform better, parameterizations for permanent water bodies, topography, soil characteristics, habitat water temperature, and the immunity level of the population could be incorporated in the model.
2022-03-12T16:19:44.315Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "a1dba14de8804c11cde86ec7fcf5290eab78aa82", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=115791", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "41fc2ab73d5330f62b1dd26593ee1f0dd10d3bfa", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [] }
15940306
pes2o/s2orc
v3-fos-license
What Is the Ideal Core Number for Ultrasound-Guided Prostate Biopsy? Purpose We evaluated the utility of 10-, 12-, and 16-core prostate biopsies for detecting prostate cancer (PCa) and correlated the results with prostate-specific antigen (PSA) levels, prostate volumes, Gleason scores, and detection rates of high-grade prostatic intraepithelial neoplasia (HGPIN) and atypical small acinar proliferation (ASAP). Materials and Methods A prospective controlled study was conducted in 354 consecutive patients with various indications for prostate biopsy. Sixteen-core biopsy specimens were obtained from 351 patients. The first 10 core biopsy specimens were obtained bilaterally from the base, middle third, apex, medial, and latero-lateral regions. Afterward, six additional punctures were performed bilaterally in the areas more lateral to the base, middle third, and apex regions, yielding a total of 16 core biopsy specimens. The detection rate of carcinoma in the initial 10-core specimens was compared with that in the 12- and 16-core specimens. Results No significant differences in the cancer detection rate were found between the three biopsy protocols. PCa was found in 102 patients (29.06%) using the 10-core protocol, in 99 patients (28.21%) using the 12-core protocol, and in 107 patients (30.48%) using the 16-core protocol (p=0.798). The 10-, 12-, and 16-core protocols were compared with stratified PSA levels, stratified prostate volumes, Gleason scores, and detection rates of HGPIN and ASAP; no significant differences were found. Conclusions Cancer positivity with the 10-core protocol was not significantly different from that with the 12- and 16-core protocols, which indicates that the 10-core protocol is acceptable for performing a first biopsy. INTRODUCTION Other than skin cancers, prostate cancer (PCa) is the most common cancer in men and the second leading cause of cancer death after lung cancer. The estimated numbers of new cases of PCa and deaths in the United States in 2014 are 233,000 and 29,480, respectively [1]. In Brazil, the number of deaths in 2011 was 13,129, and the estimated number of new cases for 2014 is 68,800 [2]. Screening for PCa is accomplished by digital rectal examination (DRE) and by measuring serum prostate-specific antigen (PSA) levels. A DRE can be uncomfortable and is not welcomed by patients; however, this type of examination is an important screening and staging tool despite the disadvantages of subjectivity and interpersonal variability among examiners. The examination can aid in the detection of tumors in men with low levels of PSA [3]. Transrectal ultrasound (TRUS)-guided biopsy is the most accepted method for diagnosing PCa, which is detected in 30% to 40% of biopsy specimens [4]. This method did not gain popularity until the mid-1980s, when understanding of the anatomy of the prostate for radical prostatectomy and PSA measurement stimulated enthusiasm for early detection of PCa [5]. With the advent of TRUS, nonpalpable nodules began to be visualized and biopsied. Hodge et al. [6] proposed the sextant technique for PCa detection, which consists of the collection of six core biopsy specimens targeted at the base, middle third, and apex regions of the prostate in the sagittal line bilaterally. Subsequent studies have shown that sextant biopsies yield false-negative results in 30% of cases [7].
The sextant method was modified by the inclusion of more lateral biopsies (the method of five regions); four fragments (two on each side) of the most lateral regions and three from the median line were added, totaling 13 fragments. With this technique, the number of false-negative results decreased by 35% [8]. In a subsequent study, Presti Jr et al. [9] showed the advantages of prostate biopsy techniques involving a larger number of core fragments and including the latero-lateral regions. In our Department of Urology at the Botucatu Medical School, the 10-core protocol is standard; the sextant biopsy protocol, extended to obtain 12 cores during the first biopsy, is recommended by the Brazilian Society of Urology. The aim of this study was to compare PCa detection in prostate biopsy specimens between the 10-core protocol (protocol of the Botucatu Medical School) and the 12-core (recommended by the Brazilian Society of Urology) and 16-core (overall total) protocols. The number of cores collected during prostate biopsy was also compared with stratified PSA levels, stratified prostate volumes, Gleason scores, and detection rates of high-grade prostatic intraepithelial neoplasia (HGPIN) and atypical small acinar proliferation (ASAP). MATERIALS AND METHODS The current prospective controlled study was conducted from January 2011 to February 2012 at the Department of Urology, Botucatu Medical School, Sao Paulo State University, after approval by the Research Ethics Committee. The criteria for inclusion in the study were as follows: DRE results suggestive of neoplasia, elevated PSA (>4.0 ng/mL in men older than 55 years and >2.5 ng/mL in men younger than 55 years), a PSA density >0.15 ng/mL, and an annual increase in PSA level >0.75 ng/mL. Carriers of coagulopathies, individuals with urinary tract infections (whether diagnosed at the time of biopsy or during treatment), and individuals who refused to provide informed written consent were excluded from the study. Consecutive patients (n=354) were recruited for the study; however, three of these patients were excluded: two for not consenting to participate in the study (did not sign the consent form) and another because they underwent sextant biopsy as a result of an unfavorable medical condition. The patients' medical records were reviewed, and variables such as age, race, serum total PSA (current and previous), free PSA, free PSA/total PSA, and biopsy indication were analyzed. The biopsy was performed on an outpatient basis in a room equipped with all material necessary for emergency intervention. Sedation and anesthesia were achieved by using 50-mcg fentanyl citrate and 5-mg midazolam. The biopsies were performed by two experienced urologists. On the morning of the procedure, a rectal enema (250 mL) was performed, and antibiotic prophylaxis was achieved with the oral administration of 500-mg ciprofloxacin 2 hours prior to the procedure and again 8 hours afterward. The procedure was performed while the patient was in the left lateral position with the thighs flexed. The procedure was performed by using Dornier TRUS equipment with a 6.5-MHz multiplanar probe, an auto-fire gun, and an 18-gauge needle. Initially, 10 punctures were performed, yielding core specimens from the following regions of the prostate bilaterally: base, middle third, apex, medial (transitional zone), and latero-lateral.
After these specimens were collected, six additional punctures were performed bilaterally in the same patients in the more lateral regions of the base, middle third, and apex (Fig. 1). A positive diagnosis of PCa was compared between the 10-core protocol (base, middle third, apex, medial [transitional zone], and latero-lateral, bilaterally) and the 12-core (base, middle third, apex, and more lateral regions of the base, middle third, and apex, bilaterally) and 16-core (overall total) protocols (Fig. 2). Tumor detection with the 10-, 12-, and 16-core protocols was correlated with PSA levels, prostate volumes, Gleason scores, and detection rates of HGPIN and ASAP. Data were collected and recorded on an Excel spreadsheet and analyzed by using SAS 9.2 (SAS Institute Inc., Cary, NC, USA). Results for age, PSA level, and prostate volume are expressed as means and standard deviations. Qualitative variables are expressed as frequencies and percentages. Chi-square and Fisher exact tests were used to evaluate differences between the variables; the significance level was set at 5%. The prevalence of PCa diagnosis, with 90% power, was determined by using a one-sided hypothesis test and a significance level of 5%. RESULTS A total of 351 consecutive patients underwent TRUS-guided prostate biopsy with 16-core fragments. Mean age, total PSA level, prostate volume, and race are shown in Table 1. Four patients had PSA levels >100 ng/mL (137, 198, 2,000, and 2,585 ng/mL), values far greater than the mean plus standard deviation. Most patients (68.81%) had a high PSA level, 13.11% had an abnormal DRE result, 12.25% had an association between a high PSA and the presence of a nodule or an abnormal DRE result, and 5.83% had an increased PSA velocity. Results of the DRE performed before prostate biopsy were abnormal in 93 patients (26.50%) and were normal in 258 patients (73.50%) (T1c). Examination of the prostate by TRUS, before the prostate biopsy, detected hypoechoic nodules in 98 of the patients (27.92%). PCa positivity in prostate biopsy specimens, by the number and location of the biopsy cores, is shown in Fig. 3. The cancer detection rate was not significantly different between the three biopsy protocols. PCa was detected in 102 patients (29.06%) with the 10-core protocol, in 99 patients (28.21%) with the 12-core protocol, and in 107 patients (30.48%) with the 16-core protocol (p=0.79). The PSA level was stratified as 0 to 2.5, 2.6 to 4.0, 4.1 to 10.0, and >10.0 ng/mL, and prostate volume was stratified as ≤20, 20 to 50, and >50 cm³. PCa positivity in the prostate biopsy specimens in relation to stratified PSA levels (ng/mL), stratified prostate volumes (cm³), and number of core biopsy specimens is shown in Table 2. Elevated PSA levels were associated with greater PCa positivity, especially when levels were >10.0 ng/mL. In two patients with a PSA level <2.0 ng/mL, PCa was detected on biopsy. A comparison of PSA levels with the number of core biopsy specimens showed no significant differences, nor did a comparison of prostate volumes with the number of core biopsy specimens. PCa detection rates were greatest at prostate volumes between 20 and 50 cm³ and were lower at prostate volumes >50 cm³. Correlations between tumor Gleason scores, number of prostate biopsy specimens, and PSA values are shown in Table 3. Most patients had tumors with a Gleason score of 7, followed by a Gleason score of 6, and few patients had a Gleason score of 8 or 9.
No significant differences in cancer detection rates with the three biopsy protocols were found in relation to Gleason scores and PSA values. The detection rates of ASAP and HGPIN in patients with negative biopsy results were stratified according to the three biopsy protocols and PSA values (Table 4). No statistically significant differences were found in the detection of ASAP and HGPIN in relation to the number (10, 12, or 16) of cores collected. DISCUSSION PCa is an insidious neoplasm, and, as with any other malignancy, early detection is important. The introduction of serum PSA screening for PCa was a major breakthrough in the early diagnosis of the disease, which allows for the detection of subclinical malignancies. The development of treatments such as radical prostatectomy has led to permanent cure in a large number of patients or to an improved life expectancy. TRUS-guided prostate biopsy has been the standard method for diagnosing PCa, but there is no consensus about the exact number of fragments to be collected. Several studies have attempted to define this number. Initially, Hodge et al. [6] proposed the sextant technique; however, subsequent studies have shown that sextant biopsies yield false-negative results in 30% of cases. Eskew et al. [8], in a study of 119 patients, added five more core specimens to the sextant biopsy, which improved the cancer detection rate by 35%. Levine et al. [7] added six more core specimens to the sextant biopsy, which resulted in the detection of an additional 30% of cancer cases. However, Naughton et al. [10], in a prospective randomized study of 244 patients, found that the 6- and 12-core protocols yielded similar cancer positivity: 26% and 27%, respectively (p=0.9). Presti Jr et al. [9] reported that sextant biopsies failed to detect PCa in 20% of 483 patients as compared with the 8- and 10-core protocols collected from lateral regions. Presti Jr [11] reviewed available studies that analyzed various biopsy protocols and suggested that an initial biopsy should include a minimum of 12 cores (extended biopsy), with special attention to the lateral regions of the prostate. In our study, PCa detection rates with the 10-, 12-, and 16-core protocols were 29.06%, 28.21%, and 30.48%, respectively (p=0.79). No statistically significant differences in PCa detection rates were found between these three protocols. According to the literature reviewed, the sextant biopsy yields false-negative results in 30% of cases and should not be used. Saturation biopsy as initial biopsy does not appear to be more effective at diagnosing PCa than extended biopsy. In summary, extended biopsy is indicated for the first biopsy, and saturation biopsy may be indicated for rebiopsies. Establishing a cutoff value for PSA aims to ensure greater diagnostic accuracy. Published data show that high-grade tumors may be found in patients with a PSA level below 4.0 ng/mL, which led to a reduction in the cutoff value to 2.5 ng/mL in some guidelines, especially for younger men, i.e., <60 years [17]. Physicians should evaluate PSA cutoff values for each patient individually to allow the diagnosis of aggressive tumors without increasing the diagnosis of indolent tumors. The higher the PSA level, the greater the PCa positivity rate (especially when >10 ng/mL), which was also evidenced in other studies [15,18]. In two patients with a PSA level <2.0 ng/mL, PCa was detected on biopsy.
The indication for biopsy was the presence of prostatic nodules detected on DRE, which indicated that this type of examination may reveal tumors in men with a low PSA level [3]. In other studies also, no statistically significant differences were found between PSA levels by the number of cores [15,16,18]. The number of cores required to diagnose PCa in relation to the size of the prostate is not yet defined. Many studies have shown that the greater the number of cores collected in larger prostates, the greater the PCa detection rate [16,18,19]. The Vienna nomogram was developed to define an appropriate number of prostate biopsy cores to improve the detection of PCa based on the age of the patient and the prostate volume. Remzi et al. [20] showed that the detection rate of PCa with the Vienna nomogram was 36.7%, compared with 22% on first biopsy in the control group of eight cores. However, Lecuona and Heyns [21], in a prospective controlled clinical trial, suggested that there was no significant advantage to using the Vienna nomogram to determine the number of prostate biopsies to be performed compared with the control group of eight fragments. In our study, we found no statistically significant differences in cancer detection rates when we compared prostate volumes with the number of cores collected (10, 12, and 16). PCa positivity in prostates >50 cm³ was lower than in prostates between 20 and 50 cm³. It remains doubtful whether more than 16 core fragments could increase the cancer positivity rate in prostates >50 cm³. The correlation between Gleason scores and the biopsy protocol and PSA values in other studies also showed no significant differences in tumor detection rates [15,22]. Mian et al. [23] studied 426 patients, 221 of whom had undergone sextant biopsy and 205 saturation biopsy before radical prostatectomy. The sextant biopsy showed a lower concordance with radical prostatectomy Gleason scores than did the saturation biopsy. Other studies have also shown that increasing the number of cores collected improves the concordance between prostate biopsy Gleason scores and radical prostatectomy Gleason scores [24]. In our study, the correlation of prostate biopsy Gleason scores with radical prostatectomy Gleason scores was not evaluated. Few studies have addressed the influence of increasing the number of fragments in prostate biopsy on the detection rates of HGPIN and ASAP. Ploussard et al. [25] published a study in which HGPIN and ASAP were detected in 35.7% of cases by sextant biopsy, in 28.6% of cases with 6 additional cores (total of 12 cores), and in 35.7% of cases with 21 cores. Epstein and Potter [26] also showed no relationship between the number of cores collected during prostate biopsy and the incidence of HGPIN and ASAP. Nomikos et al. [15], in a retrospective study involving patients with a PSA level <10 ng/mL, reported that a 24-core prostate biopsy protocol increased the detection rate of HGPIN by 20% (p=0.0008) compared with a 10-core protocol. In our study, no statistically significant difference was found in the detection rates of ASAP and HGPIN in relation to the number of core fragments (10, 12, or 16) collected. The detection rates of ASAP and HGPIN were approximately 6% and 2.5%, respectively, consistent with the literature [27]. In our study, we collected cores bilaterally in the medial (transitional) zone. The detection rate was low (1.8%). Corroborating the data of Pelzer et al.
[28], we found that it did not improve the cancer detection rate, and there is currently no requirement to collect cores from the transitional zone. In our sample of 351 patients, only 1 patient (0.28%) would have remained undiagnosed with PCa if cores from the transitional zone had not been collected. Considering the data presented, we concluded that the protocol with 10 cores on first biopsy is sufficient to obtain a high positivity rate (29.06%) as compared with protocols with more cores. The most important fact is that additional biopsies should capture more lateral regions of the prostate. In the current study, extended biopsies of additional lateral tissue increased the PCa detection rate by 20% compared with the sextant biopsy. Thus, we observed that the number of lateral biopsies collected is more important than the total number of biopsies collected. Indeed, biopsies of the transitional zone did not increase the PCa detection rate. However, it is clear from the literature that a greater number of biopsies is required to yield higher detection rates with rebiopsies (extended or saturation biopsies). We also propose a modification to the biopsy protocol used at the Botucatu Medical School, i.e., collect 10 cores, omit the 2 medial cores, and collect more lateral cores (base, middle third, apex, and two latero-lateral, bilaterally). Furthermore, we must consider that increasing the number of biopsy specimens collected increases the duration of the procedure and, consequently, the discomfort of patients, especially when analgesia is induced with local anesthetics, which is the most commonly used method. The number of biopsy specimens collected is less important when intravenous sedation and analgesia are used, as is done when it is necessary to obtain a large number of fragments [29]. Nevertheless, collection of a greater number of fragments could be a risk factor for complications after biopsy. de Jesus et al. [30] reported that the collection of more than eight fragments increases the likelihood of infectious complications. CONCLUSIONS Cancer positivity with the 10-core protocol was not significantly different from that with the 12- and 16-core protocols, which indicates that the 10-core protocol is acceptable for performing a first biopsy.
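As a quick re-check of the protocol comparison reported above, the following Python sketch (not part of the original paper) reproduces the chi-square test of detection rates using the counts given in the Results: 102/351 (10-core), 99/351 (12-core), and 107/351 (16-core). Because the three protocols were nested within the same 351 patients, this independence-based test mirrors the paper's reported analysis rather than a paired alternative.

```python
# Chi-square comparison of PCa detection rates across the three protocols,
# using the detection counts reported in the Results section.
from scipy.stats import chi2_contingency

n = 351
detected = {"10-core": 102, "12-core": 99, "16-core": 107}

# 2x3 contingency table: rows = detected / not detected, columns = protocol.
table = [list(detected.values()),
         [n - d for d in detected.values()]]

chi2, p, dof, _ = chi2_contingency(table)
for name, d in detected.items():
    print(f"{name}: {d}/{n} = {d / n:.2%}")
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")  # p ~ 0.8, consistent with p=0.798
```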
2018-04-03T03:47:22.951Z
2014-11-01T00:00:00.000
{ "year": 2014, "sha1": "8a0bfe9bd9641c1c2082e60b5e5b63673a857eb6", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc4231149?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8a0bfe9bd9641c1c2082e60b5e5b63673a857eb6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
59336460
pes2o/s2orc
v3-fos-license
A Novel Memductor-Based Chaotic System and Its Applications in Circuit Design and Experimental Validation Introduction With the deep study of chaotic systems and chaotic circuits, the concept of the memristor was first put forward by Chua in 1971 [1]. The memristor is the fourth circuit component after the capacitor, resistor, and inductor; it is actually a nonlinear resistor with a natural memory function. Nevertheless, significant progress on relevant research was not seen at that time because insufficient attention was paid to the memristor. The immaturity of nanomanufacturing technology and the difficulty of fabricating memristors from real materials also contributed to the slow progress [2]. It was not until 2008 that HP Laboratories confirmed the existence of the memristor and simultaneously fabricated a real memristor-based device, with the results published in Nature [3,4]. Since then, the memristor has become a hot research topic in chaos, and it has drawn much more attention from researchers engaged in various areas of science and engineering [5-10]. It is well known that the memristor has two models, namely the charge-controlled model and the flux-controlled model. The charge-controlled model yields the memristance, while the flux-controlled model yields the memductance. If the memristance is a constant, it becomes the same concept as resistance; correspondingly, the physical meaning of memductance is equivalent to conductance. Because the design of a memductor is more convenient than that of a memristor in chaotic circuit design, the memductor model is studied in this paper. As a tunable nonlinear device with small size and low power consumption, the memristor is quite suitable for applications in high-frequency chaotic circuits, image encryption, and chaotic secure communication. It is no wonder that, in recent years, utilizing memristors to construct chaotic circuits has attracted the close attention of quite a number of researchers [11-15]. Among them, Itoh and Chua adopted a memristor with a monotonically rising, piecewise-linear characteristic curve to replace the diode in Chua's circuit, from which a chaotic oscillation circuit based on the memristor was derived [6]. Similarly, Muthuswamy and Kokate replaced Chua's diode with a piecewise-linear memristor model and analyzed the dynamic characteristics of the system after replacement. The results indicated that the chaotic characteristics of the system were more complex than those of the classical Chua's circuit [7]. In 2010, Muthuswamy and Chua proposed the simplest third-order memristor chaotic circuit so far, and [8,9] showed the experimental results of the corresponding hardware circuit, whose greatest feature was its simple structure: it was connected in series simply by a linear inductor, a linear capacitor, and a nonlinear memristor. In addition, Bao et al.
carried on the research on memristor chaotic circuits and realized a series of new Chua's memristive chaotic circuits by using a smooth magnetically controlled memristor model [10-12]. At present, the proposed memristor chaotic oscillation circuits of different structures and types [13-23] include chaotic circuits with two memristors [16], integer-order memristor chaotic circuits [18], fractional-order memristor chaotic circuits [19], and memristor-based circuits for neural networks [23]. However, most researchers focus on theoretical analysis and numerical simulation of memristive chaotic systems, and experimental validation of the hardware circuit is rarely seen, because those memristive chaotic circuits are established theoretically and their feasibility for hardware implementation is still unknown. In particular, it is more difficult to design and implement a practical circuit for certain more complicated memductor chaotic systems. For the above reasons, we construct a novel memductor-based chaotic circuit and carry out experimental validation of the hardware circuit. Moreover, in order to meet the security requirements of chaotic secure communication, researchers have proposed improving the unpredictability and complexity of the system by constructing hyperchaotic systems [24-26] and memristor-based chaotic systems, since the memristor is a nonlinear component whose memory of the current that has flowed through it [27-31] is not available in conventional chaotic circuit elements. In this way, it is especially suitable for the chaotic secure communication field [32-36]. Although application research on the memristor is just beginning in the field of chaotic secure communication, it has great potential and advantages in improving the confidentiality and security of chaotic secure communication systems. So far, no published work has implemented memductor-based chaotic secure communication via chaotic modulation. In this paper, chaotic modulation is adopted to implement memductor-based secure communication based on the novel memductor-based chaotic circuit. The contribution of this paper is that a new method for turning an ordinary chaotic system into a memductor-based chaotic system is proposed by using the memristor as the nonlinear term. Then, we perform a detailed analysis, active control, synchronous stability analysis [37-40], and secure communication of the novel memductor-based chaotic system. The active control is implemented, and the synchronization stability results are determined by using Lyapunov stability theory. The corresponding physical circuit implementation is also proposed to show the accuracy and efficiency of the memductor-based chaotic circuit. The analog circuit implementation results match the Multisim and MATLAB simulation results. In addition, the concept of "the memductor-based chaotic circuit defect quantification index" is first proposed to verify whether the chaotic output is consistent with the mathematical model, through deep analysis of the design principle of the memductor-based chaotic circuit. Our research provides an important theoretical and technical basis for the realization of large-scale integrated circuits with memductors. This paper is expected to serve as a further step toward applying memductors in real-world secure communication.
This paper is organized into six sections. In Section 2, a novel 4D memductor-based chaotic system is constructed. In Section 3, several qualitative issues about the novel memductor-based chaotic system, such as the basic dynamical behavior, divergence, stability of the equilibrium set, bifurcation, Poincaré map, and synchronous stability, are investigated analytically and numerically. In Section 4, the proposed memductor-based chaotic circuit is implemented in an analog electronic circuit. After that, a new memductor-based chaotic secure communication circuit is proposed based on the novel memductor-based chaotic circuit in Section 5. Finally, some conclusions and discussions are drawn in Section 6.

The Construction of a Novel Memductor-Based Chaotic System 2.1. A Specific Memductor Model. Apart from the three basic circuit components, namely the capacitor, resistor, and inductor, the fourth circuit component is the memristor, which derives from the relation between the magnetic flux and the charge in the circuit. The resistance value of the memristor varies with the current flowing through the circuit; when the circuit is powered down, the resistance value of the memristor remains as it was before the power was cut. Therefore, the memristor is actually a nonlinear resistor with a natural memory function.

The memristor is defined by the relation between the magnetic flux and the charge, that is,

f(ϕ, q) = 0. (1)

Memristors can be divided into charge-controlled memristors and magnetic flux-controlled memristors. For a charge-controlled memristor, ϕ is easily obtained by

ϕ = f(q). (2)

Differentiating (2) gives

dϕ/dt = (df(q)/dq)(dq/dt). (3)

Thus, v(t) can be obtained as

v(t) = (df(q)/dq) i(t). (4)

According to Ohm's law, v(t) is

v(t) = M(q) i(t). (5)

Thus, the memristance is obtained as

M(q) = dϕ(q)/dq, (6)

where M(q) is the memristance, whose unit is the Ohm (Ω). If the memristance is a constant, then it becomes the same concept as resistance; it can then also be obtained from a linear relationship between the current and the voltage.

For the magnetic flux-controlled memristor, q is easily obtained by

q = f(ϕ). (7)

From i = dq/dt, we get

i = (dq(ϕ)/dϕ)(dϕ/dt) = W(ϕ) v(t), (8)

where W(ϕ) = dq(ϕ)/dϕ is the memductance. In chaotic circuits, the use of the memductor is more extensive, because the design of a memductor in chaotic circuits is more convenient than a memristor design.

Here, a magnetically controlled memristor is defined with a smooth, cubic, monotonically rising nonlinear characteristic curve. The model is a nonlinear memductor, and the nonlinearity is modeled by using a cubic curve model:

q(ϕ) = aϕ + bϕ³. (9)

Applying d/dt to both sides of (9) gives

dq/dt = (a + 3bϕ²) dϕ/dt. (10)

In consideration of dq = i dt, dϕ = u dt, and ϕ = ∫u dt, we obtain

i = (a + 3bϕ²) u. (11)

Equation (11) is the VAR (volt-ampere relation) expression of the memductor. It makes the physical concept of the memductor more distinct; thus, we can clearly see that the dimension of a + 3bϕ² is conductance. Substituting ϕ = ∫u dt into (11) yields

i = [a + 3b(∫u dt)²] u. (12)

Equation (12) may seem redundant, but it is very important for engineering design: the specific circuit of the memristor can be directly designed from (12). Even when the model represented by (10) changes, we can also design corresponding memductor-based or memristor-based circuits according to this method.

Realization Circuit of the Specific Memductor Element. According to (12), the specific circuit of the memristor can be designed directly. An equivalent memductor-based circuit consisting of an operational amplifier, analog multiplier, resistor, and capacitor is shown in Figure 1.
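Before turning to the realization circuit, the VAR of (11)/(12) can be checked numerically. The following Python sketch is illustrative only: the parameter values a and b and the sinusoidal drive are assumptions, not values from the paper. It integrates the applied voltage to obtain the flux and confirms the defining memristive signature, a pinched hysteresis loop (zero current whenever the voltage is zero).

```python
# Minimal numerical sketch of the cubic memductor: i = (a + 3*b*phi^2) * u,
# with phi the running integral of the applied voltage u (Eqs. (9)-(12)).
# a, b and the drive amplitude/frequency are illustrative assumptions.
import numpy as np

a, b = 0.2e-3, 0.1e-3            # memductance parameters (assumed; units S, S/Wb^2)
f = 50.0                         # drive frequency (assumed, Hz)
t = np.linspace(0.0, 3.0 / f, 3000)
u = np.sin(2 * np.pi * f * t)    # applied voltage

dt = t[1] - t[0]
phi = np.cumsum(u) * dt          # flux: phi(t) = integral of u dt

i = (a + 3 * b * phi**2) * u     # the volt-ampere relation of Eq. (11)/(12)

# Pinched-loop check: the current vanishes wherever the voltage does.
mask = np.abs(u) < 1e-2
print("max |i| overall (A):", float(np.abs(i).max()))
print("max |i| near u = 0 (A):", float(np.abs(i[mask]).max()))
```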
Here, we assume that the B terminal is connected to the inverting input of the next-stage operational amplifier, so point B is a virtual ground at zero level. Point A is the voltage input, set as u_A. The scaling coefficient of the analog multiplier is 0.1, so the relation between its input and output voltages is

u_out = 0.1 u_x u_y. (13)

It is assumed that the normalized resistance is 10 kΩ. Then, the output voltage of the operational amplifier is −(300/(R_w C_w)) ∫u_A dt, and the output voltage of the operational amplifier after normalization is −300 ∫u_A dt. After the first analog multiplication, the voltage is 30 (∫u_A dt)². After the second analog multiplication, the voltage is 3 u_A (∫u_A dt)². Therefore, the current flowing through R_b is easily obtained as

i_Rb = 3 u_A (∫u_A dt)² / R_b. (14)

Thus, the current flowing through point B is obtained by summing the branch currents (15). In the following, the circuit parameter design is carried out; with the chosen parameters, the total current flowing through B is obtained as (16). In this way, the circuit structure and circuit parameter design of the memductance are realized. The equivalent memductor-based circuit with specific parameters is shown in Figure 2.

A Novel 4D Memductor-Based Chaotic System. The 3D chaotic system is described by (17), with the nonlinear term h(x) defined by (18) and (19), where x, y, z are the state variables and α, β, m₁, m₂ are the constant parameters of the 3D system. Here, replacing h(x) with the memductance W(x) yields the mathematical model (20) of a chaotic circuit containing a memductor element. Therefore, according to the characteristics of the aforementioned specific memductor element and its specific realization circuit, a novel 4D memductor-based chaotic system (21) is proposed based on the ordinary 3D chaotic system (17), where x, y, z, u are the state variables and α, β, ξ, γ, c, d are constant, positive parameters of the novel memductor-based chaotic system.

When choosing α = 16, β = 15, ξ = 0.25, c = 0.00625, d = 0.125, and γ = 0.5, there exist typical chaotic attractors in system (21). That is, after adding a 1D memristor equation to the ordinary 3D chaotic system, we need to find appropriate parameters for the memductor-based system to produce new chaotic phenomena. For the constructed novel memductor-based chaotic system, four parameters ξ, γ, c, d are added. Substituting the specific parameters yields system (22). However, the numerical solutions of the proposed 4D memductor-based chaotic system (22) cannot be implemented by using general circuit components; therefore, in practical applications, the variables often need proper adjustment. Here, the method of scale transformation is to replace x, y, z, and u by 4x, 0.5y, 3z, and u, respectively. After scale transformation, (22) becomes (23); thus, the novel 4D memductor-based chaotic system after scale transformation is described by (24).

Dynamical Analysis of the Novel Memductor-Based Chaotic System 3.1. Chaotic Attractors. The chaotic attractors of the novel 4D memductor-based chaotic system (24), simulated with MATLAB, are shown in Figure 3. It can be seen from the numerical simulation results that the numerical range of each variable is within −10 V to +10 V, which fully conforms to the requirements of circuit design in practical applications.
That is because the working voltage of electronic components generally ranges from −15 V to +15 V in practical electronic circuits. As a result, the scaled equation must be used if the memductor-based chaotic circuit is to be implemented.

Divergence and Stability of the Equilibrium Set. The divergence of the novel 4D memductor-based chaotic system (24) is easily calculated as (25). In this way, the system is dissipative on the condition that |u| > 5/2, because a necessary and sufficient condition for system (24) to be dissipative is that the divergence of the vector field is negative as time tends to infinity. Furthermore, the corresponding dynamic characteristics are presented below.

Setting ẋ = ẏ = ż = u̇ = 0, the equilibrium equation of system (24) is easily obtained as (26). Clearly, the set of equilibrium points of system (24) is

A = {(x, y, z, u) | x = y = z = 0, u = σ}, (27)

where σ is any real constant. That is, every point on the u coordinate axis is an equilibrium point, and the system has an infinite set of equilibrium points. Linearizing system (24) near the equilibrium point, the Jacobian matrix for system (24) at equilibrium point (27) is obtained as (28), where ξ = 4, α = 2, d = 2, η = 8, μ = 6, β = 2.5, γ = 0.5, and ρ = 4. Then, the specific Jacobian matrix for system (24) at the equilibrium point is easily obtained as (29). The characteristic polynomial of the Jacobian matrix (29) is given by (30). Therefore, the eigenvalues at the equilibrium point of the novel memductor-based chaotic system can be obtained as (31). It can be concluded from (31) that the equilibrium point set of the system is unstable, which accords with the condition for chaos generation.

Bifurcation, Lyapunov Exponents, and Poincaré Graph. The calculation of Lyapunov exponents is a method employed to quantitatively judge the chaos of a system. When choosing ξ = 4, α = 2, c = 0.025, d = 2, η = 8, μ = 6, β = 2.5, γ = 0.5, and ρ = 4, the initial conditions are chosen as x(0) = −0.17528, y(0) = −1.0872, z(0) = 1.6368, and u(0) = −3.2852. The Lyapunov exponents of the novel memductor-based chaotic system are, respectively, calculated as L₁ = 0.0600, L₂ = 0.0065, L₃ = −0.0069, and L₄ = −10.4012. Figure 4 shows the projection of a chaotic attractor generated by the novel memductor-based chaotic system on the x−u plane. It represents the extreme sensitivity of the memristor-based chaotic system to the initial values [30]: when the initial value varies by 0.00001, there is a prominent difference in the result. It is obvious that the proposed memductor-based chaotic system is extremely sensitive to initial values. In Figure 5, the Lyapunov exponent spectrum of the novel memductor-based chaotic system is shown. Consequently, the chaotic attractors and Lyapunov exponents show that the novel memductor-based chaotic system exhibits chaotic oscillation.

In order to further verify the chaotic dynamical behavior of the novel memductor-based chaotic system (24), the bifurcation diagram and the Poincaré graph are strictly calculated. Through numerical analysis, the bifurcation diagram with parameter variation is shown in Figure 6, where α is a variable parameter. It is obvious that the system undergoes a huge change in topology when α is about 1.1. The Poincaré graph further confirms the chaotic dynamics.
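The Lyapunov exponents quoted above come from trajectory-divergence calculations. Since the explicit equations of system (24) are not reproduced in this extraction, the following Python sketch shows the Benettin-style procedure in generic form, with the classical Lorenz equations standing in for the vector field f; the step size, initial perturbation, and renormalization schedule are illustrative choices. Substituting the memductor-based vector field for f would reproduce estimates such as L₁ ≈ 0.06.

```python
# Benettin-style estimate of the largest Lyapunov exponent: track two nearby
# trajectories, measure their divergence, and periodically renormalize the
# separation. The Lorenz system below is purely a runnable stand-in for the
# paper's system (24), which is not reproduced in this text.
import numpy as np

def f(x):
    # Stand-in vector field (Lorenz); replace with the memductor-based system.
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4_step(x, h):
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, d0, n_renorm, steps = 1e-3, 1e-8, 2000, 50
x = np.array([1.0, 1.0, 1.0])
for _ in range(50_000):              # discard the transient
    x = rk4_step(x, h)

y = x + np.array([d0, 0.0, 0.0])     # perturbed companion trajectory
log_sum = 0.0
for _ in range(n_renorm):
    for _ in range(steps):
        x = rk4_step(x, h)
        y = rk4_step(y, h)
    d = np.linalg.norm(y - x)
    log_sum += np.log(d / d0)
    y = x + (y - x) * (d0 / d)       # renormalize the separation back to d0

lam = log_sum / (n_renorm * steps * h)
print(f"largest Lyapunov exponent estimate: {lam:.3f}")  # ~0.9 for Lorenz
```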
Synchronous Stability Analysis Based on Active Control. Chaotic synchronization means that the trajectory of a chaotic system converges to that of another chaotic system and maintains a consistent dynamic phenomenon from a physical standpoint [38]. Here, the chaotic drive system, or the transmitter in the secure communication system, is defined as

Ẋ = F(t, X), (32)

and the chaotic response system, or the receiver in the secure communication system, is defined as

Ẏ = G(t, Y) + N(t, X, Y), (33)

where N is the controller, t is the time, and X, Y ∈ Rⁿ are vectors with n-dimensional elements x₁, x₂, …, xₙ and y₁, y₂, …, yₙ, respectively. The two chaotic systems can be the same or different, but their initial conditions are different. If the two chaotic systems are interrelated to some extent through the controller N, and X(t; t₀, X₀) and Y(t; t₀, Y₀) are the solutions of systems (32) and (33), respectively, satisfying the smoothness condition, then if Rⁿ has a subset D(t₀) and the initial values satisfy X₀, Y₀ ∈ D(t₀), as t → ∞,

lim_{t→∞} ‖Y(t; t₀, Y₀) − X(t; t₀, X₀)‖ = 0. (34)

In this case, the chaotic response system (33) is synchronized with the chaotic drive system (32).

In this way, the active synchronization error between the chaotic drive system and the chaotic response system is defined by e = y − x, and synchronization means the asymptotic stability of the synchronization error system at the origin on the basis of the Lyapunov stability theory. It is obvious that the controller N plays a key role in stabilizing the synchronization error system at the origin. Consequently, various synchronization methods can be realized by designing different controllers.

The memductor-based drive system is taken as (35) and the controlled memductor-based response system as (36), where y₁, y₂, y₃, y₄ are the states and u₁, u₂, u₃, u₄ are the designed controllers. The synchronization error based on the active control method is defined as

eᵢ = yᵢ − xᵢ, i = 1, 2, 3, 4. (37)

According to (37), the synchronization error system between the memductor-based drive system (35) and the memductor-based response system (36) is easily obtained as (38). Then, the active controller system is designed as (39), where k₁, k₂, k₃, k₄ are the control gains, and they are positive values. Substituting (39) into (38), the active synchronization error system is obtained as (40). A positive definite quadratic Lyapunov function V of the synchronization errors is chosen (41), and its derivative along (40) is computed (42). According to (42), V̇ ≤ 0 is easily obtained; that is to say, V̇ is negative semidefinite. Based on the Lyapunov stability theory, if V is positive definite and V̇ is negative semidefinite, then the system is uniformly stable at the origin of the equilibrium state [38]. Accordingly, the active synchronization error system (38) is asymptotically stable at the origin. Thus, lim_{t→∞} |e(t)| → 0, and it is proved that synchronization between the novel memductor-based drive system and the novel memductor-based response system is achieved. In the following numerical simulations, the initial values of the novel memductor-based system are chosen as x₁(0) = −0.17528, x₂(0) = −1.0872, x₃(0) = 1.6368, and x₄(0) = −3.2852, and positive control gains k₁, k₂, k₃, k₄ are chosen. The history of the synchronization errors between the novel memductor-based drive system and the novel memductor-based response system is shown in Figure 8. It is clear from Figure 8 that the active synchronization errors e₁, e₂, e₃, e₄ are asymptotically stabilized at the origin within a very short period of time. The active control method is simple, practical, and easy to implement in an electronic circuit. It can be applied to other complex memductor-based chaotic systems to
implement synchronization and chaotic secure communication.

Circuit Design and Hardware Implementation 4.1. Circuit Design. Based on the novel 4D memductor-based chaotic system (23), the normalized resistance is set as R_normalization = 100 kΩ in order to design the memductor-based chaotic circuit. In view of the need for higher accuracy, low-power AD633 analog multipliers are chosen for the chaotic circuits; they enjoy the precision of laser trimming and remain stable between −10 V and +10 V. Taking into consideration the convenience of the power supply and the feasibility of the circuit, as well as saving components, the selected operational amplifiers are the LF347N and LF353N, with a power supply voltage ranging from −15 V to +15 V. In order to prevent the voltage in the circuit from exceeding the range of the operational amplifiers, the ranges of the variables in system (22) were adjusted appropriately, and a new memductor-based chaotic system (23) was obtained after scale transformation. Because the scaling factor provided by the AD633 is 1/10 V, the input factor for the analog multiplier is 0.1 V. Conclusively, the state equation (43) of the memductor-based chaotic circuit is obtained by rewriting (23). Thus, the novel memductor-based chaotic circuit schematic is designed as shown in Figure 9 according to (43). The circuit is divided into two parts: the nonmemristor part and the independent memristor part. The memristor part is the red circuit marked in Figure 9; the rest of the circuit is the nonmemristor part, a linear part. As seen from Figure 9, the novel memductor-based chaotic circuit is composed of six operational amplifiers and two analog multipliers. All of the electronic components are easily available. The memductor-based chaotic phase portraits of the novel memductor-based chaotic circuit by Multisim are shown in Figure 11. It can be seen from the simulation results that the circuit outputs six chaotic phase portraits: xy, xz, zy, xu, yu, and zu. Moreover, the Multisim simulation results are consistent with the MATLAB simulation results shown in Figure 3. That is, the design fully conforms to the requirements of circuit design in practical applications.

Hardware Implementation. Most researchers highlight the study of memristor chaos theory in numerical simulation; in that case, a certain deviation exists in the physical memristor circuit system. Based on the correct simulation results shown in Figure 11, with the purpose of verifying that the novel memductor-based chaotic circuit enjoys high accuracy and good robustness, and to further study the chaotic dynamical characteristics of the novel memductor-based chaotic system (23), a practical electronic circuit was constructed by using general electronic components such as operational amplifiers, analog multipliers, resistors, and capacitors according to the circuit model of Figure 9. It should be noted that the problems easily occurring in the process of constructing the memductor-based chaotic circuit should be tackled. For example, the chaotic circuit is more sensitive to the initial value because of the added memristor, and any minor change can lead to unpredictable results. Therefore, we chose resistor values closest to the simulated values to construct the circuit and tested whether each module of the circuit worked properly during construction. Afterwards, with the supply voltage applied and an oscilloscope connected, the output phase portrait photos of the novel memductor-based chaotic circuit were obtained as shown in Figure 12.
Figure 13 shows a photo of the experimental circuit board. The experimental results in Figure 12 show that the phase portraits of the novel memductor-based chaotic attractors displayed by the oscilloscope coincide with the simulation results of MATLAB and Multisim. That is, it proves that the memductor-based chaotic attractors really exist. The proposed memductor-based chaotic circuit design method provides a reliable and straightforward way of realizing memristive chaotic circuits, and the method plays a significant role in easily handling and avoiding output voltages beyond the limits of the amplifier's linear region.

Experimental Results Analysis. Through careful experiments on the proposed memductor-based chaotic circuit shown in Figure 9, the following important conclusions can be obtained: (i) An effect of switching the power on appears to exist. Once the chaotic state is entered, the chaotic attractors become stable. The memductor-based chaotic circuit characteristics of this phenomenon are as follows: when the power is turned on, two attractors can establish a stable state of the circuit. One is a chaotic attractor, for which the voltage amplitude is less than the supply voltage and no amplitude-limiting condition occurs. The other possibility is entering an amplitude-limiting state and, without breaking out of that state, entering traditional periodic oscillation, which is a stable oscillation. (ii) The ranges of the physical variables measured in this experiment are as follows: x ranges from −2.2 V to +2.2 V, y ranges from −4.4 V to +4.4 V, z ranges from −4.4 V to +4.4 V, and u ranges from −4.8 V to +4.8 V. This set of data is easy to control: as long as the resistance of the 4 kΩ resistor is adjusted, the amplitude of the chaos varies accordingly while the shape remains unchanged, which is extremely convenient. (iii) A good memductor-based chaotic circuit must be designed without defects. One of the defects is the voltage limit of the regulated power supply. Defects may also appear in the design of the operational amplifier and the inverting integrator. As for the design defect of the operational amplifier, it occurs when the feedback resistor R_f of the operational amplifier is greater than the resistance R_in of the input circuit; that is, the design defect is present when

R_f > R_in. (44)

Moreover, if the operational amplifier is equipped with two input resistors, the design defect is present under the corresponding condition (45) on the two input resistances. Both of these conditions may cause amplitude-limiting distortion, which makes the design of memductor-based circuits deviate from the original intention of the chaotic mathematical model. As for the design defect of the inverting integrator, with the normalized resistance of the inverting integrator set as R_normalization, the possible defect of the memductor-based circuit design is given by condition (46), which is hard to avoid, since in some cases the mathematical model itself is involved; it is not just the circuit design but also the circuit model that is involved. Therefore, when steady phase portraits cannot be debugged, the reason is that the design of the operational amplifier violates (44) or (45).
(iv) Here, a new concept, called "the memductor-based chaotic circuit defect quantification index", is proposed for the first time. The new quantification concept logically consists of two parts. First, the single-stage defect coefficient is considered: for a single-stage operational amplifier, if the operational amplifier does not violate (44) and (45), its defect coefficient is equal to zero; if (44) or (45) is violated, the defect coefficient of the operational amplifier is defined by a corresponding quantitative expression. Second, the defect coefficient of the whole memductor-based circuit system is the sum of the defect coefficients of all stages of the unit circuits. Physical experiments in this paper show that the chaotic output of the memductor-based circuit with the parameters shown in Figure 9 is the most stable, and the outputs are consistent with the MATLAB and Multisim simulation results.

Application of the Proposed Memductor-Based Chaotic Circuit. Since the memductor-based chaotic signal is more sensitive to the initial value than an ordinary chaotic signal, it is especially suitable for the secure communication field. In order to improve the security of the secure communication system, the novel memductor-based chaotic system is selected as the chaotic system. In the proposed memductor-based chaotic secure communication scheme, the memristive secure communication circuit is implemented by using electronic components including analog multipliers, operational amplifiers, resistors, and capacitors, with the novel 4D memductor-based chaotic system as the chaos generator. Based on the proposed memductor-based chaotic circuit shown in Figure 9, the memductor-based secure communication circuit schematic by Multisim is shown in Figure 14. Its circuit principle is presented as follows: it consists of 14 operational amplifiers together with 4 analog multipliers. Its basic circuit is composed of two identical copies of the proposed memductor-based chaotic circuit unit with a small modification. The left side of the circuit is the transmitter, and the right side is the receiver. The inverting input of the transmitter modulator is connected to the signal to be transmitted; the noninverting input is connected to the x output terminal of the novel memductor-based chaotic circuit. In this way, the receiving system and the transmitting system maintain synchronization more easily, and the robustness of the memductor-based secure communication system is improved.

In what follows, simulation experiments are presented to verify whether two memductor-based chaotic circuits with identical parameters can effectively achieve signal transmission and reception without distortion. Given an input sine wave with an amplitude of 1 V and a frequency of 1 kHz in the circuit simulation, the transmitted and received signal waveforms by Multisim are shown in Figure 15. The synchronous phase portrait is shown in Figure 16, and Figure 17 shows the superimposed waveforms of the modulation and demodulation signals. It is obvious from the simulation results that, no matter what kinds of signals are input, the two identical memductor-based chaotic circuits entirely maintain synchronization with each other if the component parameters of the transmitting circuit are exactly the same as those of the receiving circuit. Almost no distortion can be seen.
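To make the modulation/demodulation principle concrete, the following Python sketch implements the classic Lorenz-based chaotic-masking scheme of Cuomo and Oppenheim as a stand-in for the memductor-based transmitter/receiver pair, whose equations are not reproduced in this text. The transmitter adds a small message to one chaotic state; the receiver, driven by the masked signal, synchronizes to the clean carrier so that subtraction recovers the message. The message amplitude and frequency are illustrative assumptions.

```python
# Chaotic masking demo: transmitter hides a small message in one chaotic
# state; a driven receiver synchronizes to the carrier and demodulates by
# subtraction. Lorenz dynamics stand in for the paper's system (24).
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
h, n = 1e-3, 100_000
amp, freq = 0.05, 5.0            # message amplitude/frequency (assumed)

def tx_step(x):
    # Transmitter: ordinary Lorenz dynamics (stand-in for system (24)).
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + h * dx

def rx_step(y, s):
    # Receiver: the received signal s replaces the first state in the
    # nonlinear/coupling terms, so y[0] synchronizes to the clean carrier.
    dy = np.array([sigma * (y[1] - y[0]),
                   rho * s - y[1] - s * y[2],
                   s * y[1] - beta * y[2]])
    return y + h * dy

x = np.array([1.0, 1.0, 1.0])    # transmitter state
y = np.array([5.0, -2.0, 9.0])   # receiver starts far from the transmitter
msg, rec = [], []
for i in range(n):
    m = amp * np.sin(2 * np.pi * freq * i * h)   # message to hide
    s = x[0] + m                                 # masked signal on the channel
    x, y = tx_step(x), rx_step(y, s)
    msg.append(m)
    rec.append(s - y[0])                         # demodulated estimate of m

msg, rec = np.array(msg[n // 2:]), np.array(rec[n // 2:])
print(f"RMS demodulation error after synchronization: "
      f"{np.sqrt(np.mean((rec - msg) ** 2)):.4f} (message amplitude {amp})")
```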
Subsequently, hardware circuit experiments on the proposed chaotic secure communication circuit based on the memductor-based chaotic circuit were implemented successfully. To verify the above Multisim simulation results, an input sine wave with an amplitude of 1 V and a frequency of 1 kHz was accordingly applied in the practical electronic circuit experiment. The photo of the transmitted and received signal waveforms is shown in Figure 18, and Figure 19 shows the subtraction of the modulation and demodulation waveforms. It is evident that the difference between the two waves (i.e., noise) is only 10 microvolts when the most sensitive scale of the oscilloscope is 10 μV. The synchronous phase portrait photo is shown in Figure 20. Figure 21 shows the superimposed photo of the modulation and demodulation waveforms, and Figure 22 shows the transmitted modulation signal and the received demodulation signal on the oscilloscope. According to the experimental measurement results of the memductor-based chaotic secure communication circuit, the transmitted and received signal waveform photo and the synchronous phase portrait photo displayed by the oscilloscope coincide with the Multisim simulation results. Nevertheless, the memductor-based chaotic circuits composed of conventional operational amplifiers and analog multipliers still have some limitations, mainly because of the frequency limitations of the operational amplifiers. As already shown in [38], operational amplifiers allow us to implement any type of circuit that is useful in analog processing applications; however, their performance in realizing chaotic circuits is limited. In [38], signals could be transmitted from 1 Hz to 500 kHz without distortion in the hyperchaotic secure communication circuit; when the signal frequency exceeds 500 kHz, the signal distortion becomes very obvious. Thus, in order to transmit high-speed data, the chaotic attractors should work at high frequency. In addition, raising the operating frequency would also improve the security and confidentiality of chaotic secure communication circuits.
Conclusion In this paper, a novel memductor-based chaotic system is proposed by adding a one-dimensional memristor equation to a particular three-dimensional chaotic system, according to the physical nonlinear characteristics of the memductor, and by searching for suitable parameters. This paper investigates the dynamical behaviors and synchronous stability of the novel memductor-based chaotic system and realizes these dynamics in a new physical circuit. The simulation and experimental results show that the circuit not only outputs six phase portraits but also produces stable fourth-order double-vortex chaotic signals. In order to enhance the security performance of the transmitted signal and reduce the vulnerability of the novel memristive system, the novel memductor-based chaotic circuit is applied to construct a new memductor-based chaotic secure communication circuit. Comparisons among the Multisim simulation, MATLAB simulation, and physical experimental results show that they are consistent with each other and that the attractors of the novel memductor-based chaotic system exist. What is more, the concept of "the memductor-based chaotic circuit defect quantification index" is proposed for the first time to verify whether the chaotic output is consistent with the mathematical model, which provides a powerful theoretical basis for the successful design and implementation of memductor-based chaotic circuits. These proposed circuit design methods can also be applied to other complex memristor-based chaotic systems.

Nevertheless, conventional operational amplifiers have some performance limitations in implementing memductor-based chaotic circuits. It is quite hard to improve the frequency response of an analog implementation of a chaotic oscillator when it is designed with integrated circuits. Perhaps an FPGA-based implementation can be used as a solution to observe memductor-based attractors at higher frequencies. Thus, our future research will be devoted to the circuit realization of memductor-based systems by using FPGA.

Figure 2: The alternative circuit of the memductor with specific parameters.
Figure 8: The history of synchronization errors.
Figure 11: The chaotic attractors of the novel memductor-based chaotic circuit with Multisim: (a) xy, (b) xz, (c) yz, (d) xu, (e) yu, and (f) zu phase portraits.
Figure 12: The output phase portrait photos.
Figure 14: A novel memductor-based secure communication circuit by Multisim.
Figure 18: Transmitting and receiving signal photo.
2019-01-08T00:33:54.635Z
2019-01-03T00:00:00.000
{ "year": 2019, "sha1": "84388c26d9a2325cfb55e703ed38c9dddcd3e20c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2019/3870327", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "84388c26d9a2325cfb55e703ed38c9dddcd3e20c", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
237255284
pes2o/s2orc
v3-fos-license
Short-Term High Fructose Intake Impairs Diurnal Oscillations in the Murine Cornea Purpose Endogenous and exogenous stressors, including nutritional challenges, may alter circadian rhythms in the cornea. This study aimed to determine the effects of high fructose intake (HFI) on circadian homeostasis in murine cornea. Methods Corneas of male C57BL/6J mice subjected to 10 days of HFI (15% fructose in drinking water) were collected at 3-hour intervals over a 24-hour circadian cycle. Total extracted RNA was subjected to high-throughput RNA sequencing. Rhythmic transcriptional data were analyzed to determine the phase, rhythmicity, unique signature, metabolic pathways, and cell signaling pathways of transcripts with temporally coordinated expression. Corneas of HFI mice were collected for whole-mount analysis after immunofluorescent staining to quantify mitotic cell number in the epithelium and trafficking of neutrophils and γδ-T cells to the limbal region over a circadian cycle. Results HFI significantly reprogrammed the circadian transcriptomic profiles of the normal cornea and reorganized unique temporal and clustering enrichment pathways, but did not affect core-clock machinery. HFI altered the distribution pattern and number of corneal epithelial mitotic cells and enhanced recruitment of neutrophils and γδ-T cell immune cells to the limbus across a circadian cycle. Cell cycle, immune function, metabolic processes, and neuronal-related transcription and associated pathways were altered in the corneas of HFI mice. Conclusions HFI significantly reprograms diurnal oscillations in the cornea based on temporal and spatial distributions of epithelial mitosis, immune cell trafficking, and cell signaling pathways. Our findings reveal novel molecular targets for treating pathologic alterations in the cornea after HFI. Global sugar consumption has increased substantially over the last 30 years and constitutes 15% to 17% of total daily calorie consumption in Western diets. 1-3 Sugars are present in most processed foods and as sweeteners in soft drinks. Extant data suggest that the overconsumption of sugar is associated with a rapid increase in the onset, prevalence, and development of the metabolic syndrome, obesity, diabetes, cardiovascular disease, and cancer. 4-6 This global public health issue poses a considerable social and economic burden in both developing and industrialized countries. 7,8 Thus, exploring the mechanisms underlying the effects of excessive sugar consumption on human health is critical for the optimization of dietary practices and nutritional intervention strategies. 9 The impact of high sugar consumption on eye health is a topic of growing interest. 10-12 Data obtained from the large-scale Age-Related Eye Disease Study revealed that decreasing the amount of added sugars and refined carbohydrates in the diet attenuated the risk and progression of age-related macular degeneration (AMD) in populations at high risk of this blinding eye condition. 13,14 An increase in carbohydrate consumption has been associated with the occurrence of cataracts. 15 A high sugar concentration in the lens promotes cataract formation via protein damage and clumping. 16,17 Further, high-sugar diets have been linked to the occurrence of dry eyes: clinical observations suggest that the consumption of high-sugar diets aggravates dry eye symptoms. 18 Dietary carbohydrates, especially fructose, are major contributors to the development of systemic complications. 19
In contrast with glucose metabolism, which is tightly regulated and involves hormonal control by insulin, fructose metabolism is less tightly regulated. 19 At high doses, fructose predominantly results in lipid production and increases the fat burden on the liver. 20 Indeed, decreasing dietary fructose and sugar consumption in children and adults attenuates liver fat accumulation and improves cardiovascular and diabetes risk markers. 21 Collectively, these data highlight the metabolic stress induced by high fructose intake (HFI). The cornea is an active tissue that undergoes renewal and metabolism. 22,23 Nevertheless, the effects of HFI on metabolism in the cornea remain unclear. Metabolic processes and circadian rhythms are closely interlinked. 24 In mammals, almost all physiological functions exhibit circadian rhythms according to a 24-hour day based on the earth's light/dark cycle. 25 This adaptation is underscored by orchestrated rhythms in cellular and molecular processes in the suprachiasmatic nucleus (SCN) of the hypothalamus, which serves as the central pacemaker. The SCN is entrained by the light/dark cycle via the retino-hypothalamic tract and synchronizes with peripheral clocks in peripheral organs or tissues. In addition to light as a zeitgeber (time cue), other SCN-independent zeitgebers, including dietary nutrients, 26 feeding time, 27 and intake of drugs of abuse, 28 can alter circadian rhythms. Converging evidence suggests that under normal light/dark cycles, the consumption of a high-calorie diet affects the circadian rhythms of critical organs such as the liver and brain at the transcriptional and proteomic levels. 29-31 The cornea is located at the front of the eyeball. The maintenance of the normal transparent state of the cornea is essential for the accurate projection of external light onto the retina and the formation of clear images. However, corneal homeostasis is influenced by various exogenous and endogenous factors. Indeed, corneal growth and development are affected by light/dark cycles. 32 Exposure to constant light, constant dark, and jet lag alters the mitotic cell number and expression of core-clock genes in the corneal epithelium. 33 Corneal thickness and the corneal transcriptome also exhibit diurnal fluctuations. 34,35 Streptozotocin-induced hyperglycemia significantly attenuates the mitotic division rhythms in the corneal epithelium and promotes circadian recruitment of peripheral neutrophils from the blood circulation to the limbus in mice. 36 These findings indicate that both endogenous and exogenous stress, including nutritional challenges, may alter circadian rhythms in the cornea. Given the excessive fructose intake in modern diets, this study aimed to examine the effects of short-term HFI on circadian rhythms in the cornea. We hypothesized that the consumption of a high-fructose diet would affect circadian rhythms in the cornea at the transcriptomic level. To test this hypothesis, we employed high-throughput RNA sequencing (RNA-seq) and evaluated corneal epithelial mitosis and immune cell recruitment to examine the temporal and spatial effects of HFI on circadian oscillations in the cornea. Study Design The experimental design and analysis are depicted in Figure 1. General behavior profiling highlighted essential physiological features, including food and fluid intake, body weight, locomotor activity, core body temperature, and plasma glucose concentration (Fig. 1B).
Circadian transcriptomic profiling involved RNA-seq followed by an investigation of circadian transcriptomic analysis (including phase, amplitude, and periodicity), rhythmic transcript identification using the JTK cycling algorithm, phase set enrichment analysis (PSEA), gene set enrichment analysis (GSEA), and time series clustering analysis (Figs. 1C and D). Cellular activity-associated transcriptomic profiling was performed to evaluate corneal cellular activity. Mitotic division in the corneal epithelium and recruitment of neutrophils and γδ-T cells to the corneal limbus over a circadian cycle were assessed (Fig. 1E). Animals All animal care and experimental use followed the guidelines described in the ARVO Statement for the Use of Animals in Vision and Ophthalmic Research and were approved by the Jinan University Institutional Animal Care and Use Committee (JN-A-2002-01). Male C57BL/6J mice at 8 to 10 weeks of age were obtained from the Medical Experimental Animal Center (Guangdong, China) and were housed in light-tight circadian chambers (Longer-Biotech Co., Ltd, Guangzhou, China). Time was indicated using the zeitgeber time (ZT) scale as an indicator of rhythm phase, whereby ZT0 and ZT12 referred to the time of lights on (7 AM) and lights off (7 PM), respectively. Mice were provided ad libitum access to a standard chow diet and water throughout the experimental period. Animals were euthanized by isoflurane overdose inhalation and cervical dislocation. HFI Protocol The HFI protocol was conducted as previously described. 11 Briefly, mice were divided randomly into two groups (Fig. 1A). The HFI group comprised 8-week-old animals that were provided a standard pellet diet and sterile tap water containing 15% D(-)-fructose (15 g/0.1 L) for 10 days. The normal control (NC) group comprised age-matched mice that were provided a standard pellet diet and sterile tap water. Pellet intake, fluid intake, and body weight were measured every 2 days at ZT6. Body weight was measured at the beginning and end of the experiments. Blood glucose concentrations were assayed from the tail tip of each mouse. Fasting blood samples were collected and measured using a glucometer (Accu-Chek Active glucometer, Roche, Germany). After 10 days of exposure to HFI, mice were fasted from ZT4, and tail blood samples were collected at ZT10. Hyperglycemia was defined as blood glucose concentrations of 11.10 mmol/L or more. Behavioral Analysis The locomotor activity of individually housed mice was measured using the Mini Mitter telemetry system (Mini Mitter, Bend, OR) as previously described. 37 Animals were implanted with a PDT-4000 E-Mitter (Mini Mitter) into the peritoneal cavity under pentobarbital sodium anesthesia (80 mg/kg of body weight, intraperitoneally) (Sigma-Aldrich, St Louis, MO). Data were collected at 20-minute and 5-minute intervals for core body temperature and locomotion, respectively. Corneal sensitivity was determined as previously described. 38 Briefly, a Cochet-Bonnet esthesiometer monofilament (catalog no. 8630-1490-29; Luneau Technology, Pont-de-l'Arche, France) was used to contact the central corneal area perpendicularly four times. The monofilament length was recorded as the sensitivity index using a double-blind approach based on the blink reflex.
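To make the timing and threshold conventions above concrete, the following minimal Python sketch (not the authors' code; the constant and function names are hypothetical) encodes the ZT convention (lights on at 7 AM = ZT0, lights off at 7 PM = ZT12) and the fasting-glucose cutoff of 11.10 mmol/L:

LIGHTS_ON_HOUR = 7            # 7 AM defines ZT0 under this housing schedule
HYPERGLYCEMIA_MMOL_L = 11.10  # cutoff stated in the protocol

def clock_to_zt(clock_hour: float) -> float:
    """Convert 24-hour clock time to zeitgeber time (ZT0 = lights on)."""
    return (clock_hour - LIGHTS_ON_HOUR) % 24

def is_hyperglycemic(glucose_mmol_l: float) -> bool:
    """Apply the protocol's fasting blood glucose cutoff."""
    return glucose_mmol_l >= HYPERGLYCEMIA_MMOL_L

# Example: tail blood drawn at 5 PM maps to ZT10, matching the fasting schedule above.
assert clock_to_zt(17) == 10.0
assert not is_hyperglycemic(8.2)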
Wholemount Technique and Immunohistochemistry of Murine Cornea After euthanasia, both corneas were removed as previously described. 33,[39][40][41][42] In brief, corneas with complete limbi were fixed in 2% paraformaldehyde in PBS for 40 minutes, subjected to three 5-minute washes in PBS, blocked in 0.1 M PBS containing 2% BSA for 15 minutes, and permeabilized with 0.1% Triton X-100 in BSA/PBS for 15 minutes. Subsequently, corneas were incubated in 0.1 M BSA/PBS and 0.1% Triton X-100 with anti-Ly6g FITC (3:100; BD Biosciences, San Jose, CA) to detect neutrophils, anti-CD31-PE (3:100, BD Biosciences) to detect limbal vessels, or anti-TCRδ-PE (clone GL3, 3:100, BD PharMingen, Franklin Lake, NJ) for 24 hours at 4°C. After incubation, tissues were subjected to three 5-minute washes in 0.1 M PBS. Whole corneas were sectioned into four quadrants, stretched, and mounted on slides using anti-fade mounting media with 1 μM 4,6-diamidino-2-phenylindole (DAPI) (Sigma-Aldrich) overnight, and stored in the dark until further analysis.

FIGURE 1. Experimental setup and data analysis flowchart. (A) After 2 weeks of acclimatization to a 12-hour light/12-hour dark cycle, mice were randomly divided into NC (sterile water only) or HFI groups (10-day exposure to HFI). (B) After the HFI protocol, consummatory behaviors and physiological parameters of NC and HFI mice were recorded and compared at different time points. (C) On day 10 after the HFI protocol, the corneas of NC and HFI mice were collected at 3-hour intervals at eight different time points over a 24-hour cycle, indicated in ZT. (D) On day 10, after the HFI protocol, extracted RNA from corneas of NC and HFI mice was processed for RNA-seq data. JTK algorithm, KEGG analysis, PSEA, and time series clustering analysis were used to determine the circadian transcriptional landscape of daily rhythmic genes. GSEA was used to determine whether an a priori defined set of genes exhibited a statistically significant difference between corneas of NC and HFI mice. (E) On day 10 after the HFI protocol, the number of mitotic cells in the corneal epithelium and number of recruited immune cells (neutrophils and γδ-T cells) to the corneal limbus over a 24-hour cycle were quantified.

Quantification of Mitotic Corneal Epithelial Cells and Immune Cells Corneas with complete limbi were collected at 3-hour intervals over a circadian cycle and were mounted on glass slides as previously described. 36,43,44 To quantify mitotic cell number in the corneal epithelium, two corneal diameters were selected, and the number of DAPI-stained and paired nuclei was quantified between each side of the limbus using a DeltaVision Image System with a 40× magnification field (Fig. 2, red circle-containing lines). Neutrophils surrounding the limbal vessels were quantified in eight different regions of the corneal limbus at 40× magnification (Fig. 2, white circles). The γδ-T cells were quantified from the limbus to central corneal direction (from fields 1 to 3 in red circles) in the corneal epithelium and stroma. Tissue Sample Collection and RNA Extraction On day 10, after the conclusion of the HFI regimen, corneas were collected at 3-hour intervals over a circadian cycle as described previously. 35 Samples were rapidly frozen over liquid nitrogen. Total RNA was isolated from two pooled corneas from each animal using a Trizol RNA extraction protocol followed by cleanup using the RNeasy spin column kit (Qiagen, Hilden, Germany). All sample collections were completed within a 2-week period in January 2017 to avoid the effects of seasonal changes.

FIGURE 2. Diagram depicting microscopic fields for quantitative cellular analysis.
The background is a representative whole-mount corneal image collected and stitched using a 40× DeltaVision Elite microscope. The red circles across the cornea in nine 40× fields comprised fields 1 to 5 and 5 to 1 for mitotic cell quantification in the epithelium. The eight white circles in the limbus were used to quantify neutrophils around limbal vessels (in green) and γδ-T cells around limbal vessels and in the epithelium from fields 1 through 3 in the red circles.

RNA-Seq RNA-seq was performed as previously described. 11,35 Briefly, RNA purity was determined using a NanoDrop (Waltham, MA, USA). RNA concentration was measured using a Qubit 2.0 Fluorometer (Life Technologies, Carlsbad, CA). RNA integrity was verified using an Agilent 2100 instrument (Santa Clara, CA). Construction and sequencing of the cDNA library were performed by the Beijing Genomics Institute (BGI) using the BGISEQ-500 platform according to the manufacturer's protocol. Each sample produced more than 20 M clean reads, which were mapped to the mm10 reference genome version using Spliced Transcripts Alignment to a Reference (STAR 2.5.3a). Analysis of Rhythmic Gene Expression To map the global effects of short-term HFI on circadian transcript expression in the cornea, we collected corneas from NC and HFI mice every 3 hours over a 24-hour circadian cycle, extracted RNA from three biological replicates, and performed individual RNA-seq to high depth (Figs. 1C and E). Analysis of rhythmic gene expression in the cornea was performed as previously described. 11,35 Briefly, time-ordered fragments per kilobase of transcript per million mapped reads (FPKM) values of actively transcribed genes were triplicated and input into the JTK_CYCLE algorithm in the R package to detect rhythmic components in genome-scale datasets over eight time points. 47 Between-sample normalization was performed using the DESeq median normalization method. FPKM mapped reads were calculated to obtain measurements of relative gene expression within and between biological samples. Oscillating transcripts were defined as those with a JTK_CYCLE P value of less than 0.05 and an oscillation period within a 24-hour range. 48 Based on the adjusted P value from the JTK_CYCLE output, we identified circadian transcription patterns in the corneas of NC and HFI mice. Circadian expression patterns were evaluated according to the direction (μ) and length (r) of the mean vector, which represented the average of all phases and degree of synchronization, respectively.
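The DESeq median normalization mentioned above follows a median-of-ratios scheme. As a hedged illustration (a simplified numpy sketch, not the authors' pipeline), the per-sample size factors can be computed from a genes-by-samples count matrix as follows:

import numpy as np

def deseq_size_factors(counts):
    """Median-of-ratios size factors for a genes x samples count matrix.

    For each gene, compute its geometric mean across samples; each sample's
    size factor is the median ratio of its counts to those geometric means,
    taken over genes detected in every sample.
    """
    counts = np.asarray(counts, dtype=float)
    with np.errstate(divide="ignore"):
        log_counts = np.log(counts)
    finite = np.all(np.isfinite(log_counts), axis=1)  # genes nonzero everywhere
    log_geo_means = log_counts[finite].mean(axis=1)
    log_ratios = log_counts[finite] - log_geo_means[:, None]
    return np.exp(np.median(log_ratios, axis=0))

counts = np.array([[100, 200], [50, 100], [10, 20]])
sf = deseq_size_factors(counts)   # second sample sequenced ~2x deeper
normalized = counts / sf          # columns now comparable between samples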
Functional Annotation With Kyoto Encyclopedia of Genes and Genomes (KEGG) Enriched pathways from KEGG for circadian genes against the background of expressed transcripts in the corneas of NC and HFI mice were annotated as previously described. 11,35 The resulting annotations were grouped into annotation clusters based on common gene members. The enrichment score (ES) of each annotation cluster was defined as the geometric mean (in a -log10 scale) of the nonadjusted P values (P < 0.05) of individual annotations. To examine the relationships between specific rhythmic genes and KEGG pathways, KEGG network diagrams were visualized using a BGI in-house customized data mining system termed Dr. Tom (http://biosys.bgi.com). GSEA To characterize signaling pathways associated with HFI, GSEA was performed through the GSEA software (v4.1.0, http://www.broadinstitute.org/gsea/index.jsp, Broad Institute at MIT, Cambridge, MA) using format-converted mouse RNA-seq data as the expression dataset as previously described. 49 The database of analyzed gene sets (c2.cp.v7.4.symbols.gmt) included validated hallmark signatures derived from the Molecular Signatures Database and reflected transcriptional programs involved in metabolism, infection, immunity, and the cell cycle. We evaluated enrichment in phenotypes exhibiting positive or negative correlations with HFI. Gene expression in each model system was ranked according to real fold-change expression relative to corresponding controls. GSEA was performed using default parameter settings. The visualization of GSEA results is mainly divided into the following three parts 50 : (1) the ES, which reflects the degree to which a gene set is overrepresented at the extremes (top or bottom) of the entire ranked gene list; the higher the ES value, the more enriched the pathway is in the sample; (2) a ranked gene list, in which each black vertical line below the ES curve represents a gene in the functional gene set and its location in the sorted gene list after phenotypic association ranking; and (3) a heatmap, in which the red part at the bottom left represents high expression of the corresponding functional pathway genes in the HFI group, and the blue part on the right represents high expression of the corresponding functional pathway genes in the NC group. A positive normalized ES indicates higher expression in the HFI group than in the NC group. A nominal P value was used to characterize the credibility of the enrichment results. Pathways with a normalized ES of more than 1 and a P value of less than 0.05 are generally considered significantly enriched.
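For readers unfamiliar with the running ES described above, the following simplified Python sketch implements the weighted running-sum statistic in the spirit of standard GSEA; it is an illustration only, and the function name and toy data are hypothetical:

import numpy as np

def enrichment_score(ranked_genes, ranks_metric, gene_set, p=1.0):
    """Running enrichment score over a ranked gene list (simplified GSEA).

    Walk down the phenotype-ranked list, increasing the running sum for set
    members (weighted by |metric|^p) and decreasing it for non-members; the
    ES is the maximum deviation of the running sum from zero.
    """
    in_set = np.array([g in gene_set for g in ranked_genes])
    weights = np.abs(np.asarray(ranks_metric, dtype=float)) ** p
    n_miss = len(ranked_genes) - in_set.sum()
    p_hit = np.where(in_set, weights, 0.0)
    p_hit = p_hit / p_hit.sum()
    p_miss = np.where(~in_set, 1.0 / n_miss, 0.0)
    running = np.cumsum(p_hit - p_miss)
    return running[np.argmax(np.abs(running))]

# A set concentrated at the top of the ranking yields a positive ES.
genes = ["a", "b", "c", "d", "e", "f"]
metric = [2.0, 1.5, 1.0, -0.5, -1.0, -2.0]
print(enrichment_score(genes, metric, {"a", "b"}))  # -> 1.0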
PSEA Circadian pathways were characterized with PSEA (software version 1.1), downloaded from https://omictools.com/psea-2-tool, based on circadian transcript sets. 51 Gene sets of "c2.cp.kegg.v6.2.symbols.gmt" were downloaded from the Molecular Signatures Database C2 (KEGG gene sets). 49 The parameters were set to domains from 0 to 24 (phases between 0 and 24) and enrichment of each pathway in more than 10 genes, with a Kuiper Q value of less than 0.01 and max sims per test of 10,000. Time Series Clustering Analysis To visualize the trends in transcriptional expression in the cornea over time, noise-robust soft clustering analysis was performed using the fuzzy c-means clustering algorithm in the Mfuzz package (http://www.bioconductor.org/packages/release/bioc/html/Mfuzz.html). 52,53 As in our previously described protocol, 54,55 all rhythmic genes identified by JTK_CYCLE in NC and HFI-treated corneas across the eight-timepoint gradient were first imported into the algorithm. The R package default was set with 0.7 as the core threshold (i.e., membership > 0.7 was considered a cluster core). Second, four specific clusters were selected based on cycling gene expression trends in the corneas of NC and HFI animals. Third, to understand the dynamic pattern of these genes and their relationship with function, we also performed KEGG pathway enrichment analysis on the rhythmic genes contained in each cluster. Statistical Analysis and Software GraphPad software (GraphPad Prism 8.0; La Jolla, CA) was used for the generation of bar, scatter, and line charts; violin plotting; and statistical analysis. Oriana software (Version 4.01; Kovach Computing Services, Pentraeth, Wales, UK) was used to analyze the phase, period distribution, and Rayleigh vector of oscillating genes. The Venn Diagram Plotter (Venny 2.1.0, http://bioinfogp.cnb.csic.es/tools/venny/index.html) was used to compare the numbers of rhythmic genes in different groups. Heatmaps were generated using pheatmap scripts in R (64-bit, version 3.6.1). The normality of all data was assessed using the Shapiro-Wilk test. Between-group comparisons were performed using the Student t-test or one-way ANOVA with Bonferroni correction for multiple comparisons. Data are presented as mean ± standard error of the mean for quantitative variables and as frequencies for classification variables. P values of less than 0.05 were considered statistically significant; n.s. indicates not statistically significant.
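As an illustration of the between-group testing workflow above (Shapiro-Wilk normality check, Student t-test, Bonferroni correction across timepoints), a minimal SciPy sketch with mock data might look as follows; the group values are invented for demonstration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nc = rng.normal(25.0, 2.0, size=10)   # e.g., pellet intake, NC group (n = 10)
hfi = rng.normal(22.0, 2.0, size=10)  # HFI group (n = 10)

# Shapiro-Wilk normality check on each group before choosing the test.
for label, grp in (("NC", nc), ("HFI", hfi)):
    w, p = stats.shapiro(grp)
    print(f"{label}: Shapiro-Wilk P = {p:.3f}")

# Two-sided Student t-test for one between-group comparison.
t, p = stats.ttest_ind(nc, hfi)

# Bonferroni correction when the comparison is repeated at k timepoints.
k = 8
p_adjusted = min(p * k, 1.0)
print(f"t = {t:.2f}, raw P = {p:.4f}, Bonferroni-adjusted P = {p_adjusted:.4f}")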
HFI Alters Consummatory Behaviors and Physiological Parameters A significant difference was observed in pellet intake from day 6 after HFI onward between the NC and HFI mice (Fig. 3A). The fluid intake of HFI-treated mice was significantly decreased on day 2 and day 4 after the initiation of feeding compared with that of NC-fed animals (Fig. 3B). Body weight was significantly higher in HFI mice than in NC animals on days 8 and 10 after HFI (Fig. 3C). No significant between-group differences were observed in plasma glucose levels after a 6-hour fast (Fig. 3D), consistent with our previous findings. 11 To determine the effects of short-term HFI on circadian behavior, we implanted mice with telemeters and tracked core body temperature and physical activity as previously described. 54 Core body temperature and locomotor activity in HFI animals exhibited normal daily oscillations, but the amplitudes of both oscillations were lower than those in NC animals (Figs. 3E-H). In the corneas of NC mice, oscillating gene expression exhibited a biphasic profile that peaked at ZT1.5 and ZT19.5. In the corneas of HFI mice, a single sharp peak occurred at ZT6 (Fig. 4D). A comparative analysis of duplicates in RNA-seq data at ZT6 and ZT18 (peak time in the HFI group) was visualized using a volcano plot. A principal component analysis was performed to examine the differences in the corneal transcripts and the variance in each component between groups. In total, 24 variables were analyzed for dimensional reduction. Three principal components were extracted according to the eigenvalue (>1). The variances of the three principal components (PC1, PC2, and PC3) accounted for 35.0%, 25.0%, and 13.5% of the total, respectively. The standardized load coefficients of 24 variables in each principal component for both groups are presented in Figure 4E. In NC mice, 24 variables correlated positively with PC1 and weakly with PC2 and PC3. Most variables in HFI mice correlated negatively with PC1. Collectively, these data suggest that HFI globally altered corneal transcriptome composition under homeostatic conditions. HFI Modulates Global Transcriptional Rhythmic Gene Expression Phase and Amplitude in the Cornea Venn and heatmap plotting analyses of these rhythmic genes indicated that 4054 transcripts in the corneas of NC and HFI mice exhibited rhythmic expression. Of these, 37.7% (1529 transcripts) exhibited rhythmic expression only in NC mice (Fig. 5A left), 48.7% (1974 transcripts) exhibited rhythmic expression only in HFI mice (Fig. 5A right), and 13.6% (551) exhibited rhythmic expression in both groups (Fig. 5A middle and Supplementary Table S2). Heatmaps verified the differences in circadian expression among transcript sets identified to be exclusively rhythmic in the corneas of NC (Fig. 5B) and HFI mice (Fig. 5C).

FIGURE 3. (A) Pellet intake in NC mice (blue) and HFI mice (red) during the 10 days after the start of HFI. **P < 0.01; ***P < 0.001, n = 10 mice per group. (B) Fluid intake in NC mice (blue) and HFI mice (red) during the 10 days after the start of HFI. *P < 0.05; **P < 0.01; n = 10 mice per group. (C) Kinetics of body weight gain in NC mice (blue) and HFI mice (red) during the 10 days after the start of HFI. *P < 0.05; **P < 0.01; n = 10 mice per group. (D) Plasma glucose levels in NC mice (blue) and HFI mice (red) on day 10 after the start of HFI. n.s., n = 10 mice per group. (E) Representative locomotor activity over a circadian cycle in a single mouse on day 10. The blue and red lines indicate NC and HFI mice, respectively. The gray shading indicates dark cycles. (F) Mean overall activity of NC and HFI mice over a circadian cycle. (G) Representative core body temperature rhythms over a circadian cycle in a single mouse on day 10. The blue and red lines indicate changes in NC and HFI mice, respectively. The gray shading indicates circadian dark phases. (H) The overall mean core body temperature of NC and HFI mice over a circadian cycle.

We further examined whether the 1974 transcripts that were exclusively rhythmic in HFI mice fell into nonrhythmic or low-expression categories in NC mice. Venn plotting revealed that all of the transcripts that were exclusively rhythmic in HFI mice fell into the nonrhythmic category in NC mice (Fig. 5D), indicating that HFI-induced rhythmic transcripts were nonrhythmically expressed in the corneas of NC mice. We plotted the phase, period, and Rayleigh vector of oscillating genes for shared, NC-specific, and HFI-specific rhythmic genes. For NC-specific rhythmic transcripts, the phase of the 1529 transcripts was mainly distributed from ZT0 to ZT7 in the light cycle and from ZT15:30 to ZT21:30 in the dark cycle (μ = 21:29; r = 0.304) (Fig. 5E). The phase of the 1974 HFI-specific rhythmic transcripts was mainly distributed from ZT1:30 to ZT8:30 in the light cycle and from ZT15:30 to ZT20 in the dark cycle (μ = 03:01; r = 0.169) (Fig. 5F). The phase of the 551 shared rhythmic genes was distributed from ZT0 to ZT8 in the light cycle and from ZT12 to ZT24 in the dark cycle (μ = 00:10; r = 0.249) in NC mice (Fig. 5G), and from ZT0 to ZT8 in the light cycle and from ZT15 to ZT20 in the dark cycle (μ = 04:39; r = 0.332) (Fig. 5H) in HFI mice. Notably, 81.3% (448) of the 551 shared circadian transcripts exhibited a phase change of at least 1 hour (Fig. 5I left); of these transcripts, 35.0% (193) were phase advanced and 46.3% (255 transcripts) were phase delayed (Fig. 5I right). Collectively, these results indicated that rhythmic transcripts tended to be expressed at later phases in the circadian cycle in HFI mice. On average, the expression amplitude of the 1974 exclusively rhythmic genes in HFI mice (Fig. 5J right) was lower than that of the 1529 exclusively rhythmic genes in NC mice (AMP average, 20.17 vs. 8.93; Fig. 5J left) (P < 0.001). The expression amplitude of the 551 shared rhythmic genes was lower in HFI mice (Fig. 5K right) than in NC mice (AMP average, 15.75 vs. 11.47; Fig. 5K left) (P < 0.01). Collectively, these data indicated that HFI altered both the phase and amplitude of rhythmic gene expression.
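The mean-vector statistics (direction μ and length r) and the advanced/delayed phase calls reported above are circular quantities; a minimal numpy sketch of both computations (an illustration, not the Oriana or JTK_CYCLE output) is shown below:

import numpy as np

def mean_vector(phases_h, period=24.0):
    """Circular mean direction (in hours) and vector length r of phases."""
    theta = 2 * np.pi * np.asarray(phases_h, dtype=float) / period
    z = np.exp(1j * theta).mean()
    mu = (np.angle(z) % (2 * np.pi)) * period / (2 * np.pi)
    return mu, np.abs(z)

def phase_shift(phase_nc, phase_hfi, period=24.0):
    """Signed shift in hours, wrapped to [-period/2, period/2).

    Positive values are phase delays, negative values are phase advances.
    """
    return (phase_hfi - phase_nc + period / 2) % period - period / 2

mu, r = mean_vector([23.0, 0.5, 1.0])  # phases straddling ZT0 average near ZT0
shift = phase_shift(23.0, 2.0)         # +3-hour delay, not a 21-hour advance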
HFI Reprograms KEGG and PSEA in the Cornea KEGG enrichment analysis revealed that NC-specific genes were enriched for several metabolic pathways (Q < 0.05), including ribosome, oxidative phosphorylation, thermogenesis, proteasome, steroid biosynthesis, the hypoxia-inducible factor-1 (HIF-1) signaling pathway, and metabolic pathways (Fig. 6A). In contrast, HFI-specific genes were enriched for pathways associated with metabolic pathways, pyrimidine metabolism, the NOD-like receptor signaling pathway, DNA replication, purine metabolism, the Fanconi anemia pathway, the TNF signaling pathway, and fatty acid biosynthesis (Q < 0.001) (Fig. 6B). Four shared temporally coordinated pathways were identified: cell cycle, circadian rhythm, cell cycle (yeast), and DNA replication (Fig. 6C). These data suggest that HFI reprogrammed rhythmically enriched biological pathways. We performed PSEA, a novel analytical tool for the visualization of identified and biologically related gene sets with temporally coordinated expression, 51 to examine functional pathways in the corneas of NC and HFI mice that peaked at specific timepoints. The largest number of significant phase clusters (Kuiper Q value < 0.01) in the corneas of both groups was observed in the light and dark cycles (ZT0 to ZT6 and ZT18 to ZT24 in NC mice; ZT0 to ZT9 and ZT15 to ZT21 in HFI mice) (Figs. 6D and E). In total, 18 significantly enriched functional pathways were identified in the corneas of NC mice, grouped into four categories as follows: (1) genetic information processing: proteasome, ribosome; (2) cellular processes: cell adhesion molecules (CAMs), lysosome, oocyte meiosis, ribosome, MAPK signaling pathway, spliceosome; (3) disease-associated pathways: pathways in cancer, Parkinson's disease, Alzheimer's disease, Huntington's disease, pancreatic cancer, small cell lung cancer, chronic myeloid leukemia, oocyte meiosis; and (4) organismal systems: chemokine signaling pathway, insulin signaling pathway, cardiac muscle contraction (Fig. 6D, Supplementary Table S3). In total, 23 significantly enriched functional pathways were identified in the corneas of HFI mice, grouped as follows: (1) metabolic pathways: oxidative phosphorylation, amino sugar and nucleotide sugar metabolism, pyrimidine metabolism, purine metabolism, and glycerophospholipid metabolism; (2) cellular processes, including lysosome, focal adhesion, tight junction, endocytosis, and cell cycle; (3) genetic information processing: spliceosome, DNA replication, and proteasome; (4) organismal systems: extracellular matrix receptor interaction, JAK-STAT signaling pathway, leukocyte transendothelial migration, and FcγR-mediated phagocytosis; and (5) disease-associated pathways (Supplementary Table S4). Each disease-associated pathway was activated in NC and HFI mice but at different ZTs. Collectively, these results indicated that HFI temporally altered the quality and distribution of phase set-enriched signaling pathways. HFI Reprograms Cluster-Dependent Transcriptomic Maps in the Cornea Clustering analysis is a powerful visualization tool for biological pathways underlying large gene expression datasets. 52,53 To determine the dynamic patterns of transcriptomic activity over a circadian cycle, we performed soft Mfuzz clustering on standardized log2-normalized FPKM values for each sample. Based on the distribution of temporal oscillation peaks and troughs over a 24-hour cycle, we defined four different oscillation clusters (Figs. 7A-H).
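The soft-clustering step above relies on fuzzy c-means with a 0.7 membership threshold for cluster cores. A plain-numpy sketch of the algorithm (a stand-in illustration, not the R Mfuzz package; the data here are random placeholders) is:

import numpy as np

def fuzzy_cmeans(X, c=4, m=2.0, iters=100, seed=0):
    """Plain-numpy fuzzy c-means, the algorithm behind Mfuzz's soft clustering.

    X: genes x timepoints matrix of standardized expression profiles.
    Returns cluster centers (c x timepoints) and memberships (genes x c).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per gene
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))         # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
profiles = rng.standard_normal((500, 8))       # mock standardized log2 FPKM
centers, u = fuzzy_cmeans(profiles)
core = u.max(axis=1) > 0.7                     # cluster cores, as in the text
print(core.sum(), "core genes across", centers.shape[0], "clusters")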
For cluster 1 (NC vs. HFI groups; 387/484 enriched genes), the peak was located in the light cycle and the trough in the dark cycle. For cluster 2 (580/898 enriched genes), the trough was located at the junction of the light and dark cycles. For cluster 3 (356/481 enriched genes), the peak was located at the junction of the light and dark cycles. For cluster 4 (757/662 enriched genes), the peak was located in the light cycle. To further visualize the unique functional and biological relevance of each transcriptional cluster, KEGG-enriched analysis was performed (Supplementary Tables S5 and S6). In NC mice, cluster 1 was related to biosynthesis and metabolic pathways (Figs. 7A and B); cluster 2 was related to cellular processes and genetic information processing, including the cell cycle and oocyte meiosis; cluster 3 pathways predominantly comprised signaling pathways, with the exception of the circadian rhythm pathway (Fig. 7C); and cluster 4 pathways were also predominantly associated with signaling pathways (Fig. 7D). Despite similar cluster modes in NC and HFI mice, a KEGG analysis of HFI mice revealed distinct biological pathway annotations (Figs. 7E-H). In clusters 3 and 4, inflammation-associated pathways including the Th17, TNF, and NOD-like receptor signaling pathways were identified, suggesting that short-term HFI activated inflammatory events in the cornea. Collectively, these data indicated that HFI reprogrammed dynamic transcriptional clustering of these biological pathways. The Core-Clock Gene Transcription in the Cornea Is Not Altered by HFI Changes in the light cycle, such as in the jet lag model, alter the expression of core-clock machinery genes in the normal murine cornea. 11,33,35 To identify the effects of HFI on core-clock machinery genes in the cornea, we analyzed the temporal transcription profiles of ten canonical core-clock genes in the corneas of NC and HFI mice. In NC mice, all canonical core-clock genes exhibited a robust rhythm, two of which (Npas2 and Nr1d1) peaked during the light cycle (ZT3 and ZT9, respectively) (Figs. 8A and B; Fig. 8J). In HFI mice, all 10 canonical core-clock genes also exhibited a robust rhythm, albeit with small but significant changes in the amplitude and peak of Nr1d1, Per3, and Nr1d2 transcription (Figs. 8B, E, and F). Collectively, these data suggested that HFI did not significantly modulate oscillations in core-clock machinery in the cornea.
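Amplitude and peak (acrophase) comparisons such as those above for Nr1d1, Per3, and Nr1d2 are commonly derived from a cosinor fit. The least-squares sketch below illustrates the idea on a synthetic transcript; it is not the JTK_CYCLE implementation used in this study:

import numpy as np

def cosinor(t_hours, y, period=24.0):
    """Least-squares fit of y = M + A*cos(omega*(t - phi)).

    Returns mesor M, amplitude A, and acrophase phi (peak time, in hours),
    from the linearized form y = M + b1*cos(omega*t) + b2*sin(omega*t).
    """
    t = np.asarray(t_hours, dtype=float)
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    mesor, b1, b2 = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)[0]
    amplitude = np.hypot(b1, b2)
    acrophase = (np.arctan2(b2, b1) / w) % period
    return mesor, amplitude, acrophase

t = np.arange(0, 24, 3.0)                      # the eight 3-hour timepoints
y = 5 + 2 * np.cos(2 * np.pi * (t - 9) / 24)   # synthetic transcript peaking at ZT9
mesor, amp, peak = cosinor(t, y)               # recovers A = 2, peak = ZT9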
HFI Alters Rhythmic Mitosis in the Corneal Epithelium and Cell Growth-Associated Transcriptional Profile The cornea is highly regenerative 56 and mitotic division in the corneal epithelium exhibits diurnal rhythms. 33 To examine the effects of HFI on diurnal mitotic patterns in the corneal epithelium, we quantified limbus-to-limbus mitotic cell number. We observed that the active phase for epithelial division occurred from ZT17 to ZT33 and peaked around ZT6 in NC mice, in agreement with our previous reports. 33 The amplitude and total number of mitotic cells over a circadian cycle were significantly higher in HFI mice than in NC mice (Figs. 9A-C). We next analyzed transcriptional alterations in cell cycle, growth, and differentiation-associated transcripts over a circadian cycle. In total, 73 differentially expressed transcripts were identified (fold change > ±0.3) (Fig. 9D). Enrichment analysis of these differentially expressed transcripts revealed that the top 10 KEGG pathways were associated with the cell cycle, cellular senescence, meiosis, and FoxO-, p53-, and TGF-β-related signaling pathways (Fig. 9E). The relationships between selected rhythmic genes and KEGG pathways are depicted in Figure 9F and Supplementary Table S7. Based on the ES of GSEA of all transcripts, the REACTOME cell cycle, M phase, and KEGG p53 signaling pathways were enriched in HFI mice (Figs. 9G-I). Collectively, these data suggested that HFI induced dysfunction in mitotic rhythms and associated transcriptional profiles in the corneal epithelium. HFI Alters Immune Cell Trafficking to the Corneal Limbus and Immune-Associated Transcription Neutrophils and γδ-T cells are essential innate immune cells on the murine ocular surface and play an important role in maintaining ocular surface health and in pathological reactions. [57][58][59] However, it is not clear whether HFI alters the diurnal trafficking of neutrophils and γδ-T cells to the cornea. In the corneas of NC mice collected at 3-hour intervals, the active trafficking of neutrophils and γδ-T cells to the corneal limbal region occurred during the dark cycle and peaked at ZT18 (Figs. 10A-D). Trafficking of neutrophils and γδ-T cells to the cornea and total cell number over a circadian cycle were significantly greater in HFI mice than in NC mice (Figs. 10A-D). Differentially expressed transcripts in the corneas of HFI and NC mice over a circadian cycle (fold change > ±1) are presented in the heatmap in Figure 10E. KEGG enrichment analyses revealed that the top 10 enriched pathways were related to immune-associated functions (Fig. 10F). The relationships between these KEGG pathways and selected rhythmic genes are presented in Figure 10G. A complete list of the most overrepresented KEGG pathways in both groups is presented in Supplementary Table S8. Our data suggested that HFI promoted immune cell recruitment to the limbal region and induced alterations in immunologic pathways in the murine cornea over a 24-hour circadian cycle. HFI Alters Metabolic Pathways in the Cornea Liver metabolites of HFI are converted into fat and enter the blood circulation, inducing metabolic stress in the body. 3 Corneal tissue is distant from blood vessels and possesses unique metabolic pathways. 60 To examine the effects of HFI on corneal metabolism under normal physiological conditions, we analyzed the corneal metabolism-related transcriptome over a 24-hour cycle after 10 days of HFI. HFI significantly altered the expression of many metabolism-related transcripts (Fig. 11A). Interestingly, many of the differentially expressed transcripts were members of the cytochrome P450 enzyme superfamily, including CYP2a4, 4a12a, 2c55, 1a1, 2b10, 2u1, and 2j13 (Fig. 11A). Enrichment analysis of these differentially expressed transcripts revealed that the main enriched signaling pathways were related to biosynthesis, microbial metabolism, arachidonic acid metabolism, retinol metabolism, carbon metabolism, and glycolysis and gluconeogenesis (Fig. 11B). The relationships between these KEGG pathways and selected rhythmic genes are presented in Figure 11C and Supplementary Table S9. GSEA revealed that the KEGG_fatty acid metabolism pathway was enriched specifically in the corneas of HFI mice (Fig. 11D). In HFI mice, mild lipid deposits were observed in the liver but not in the cornea (Figs. 11E and F). These data collectively indicated that HFI altered metabolic pathways, especially those related to fatty acid metabolism, in the murine cornea.
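The differential-expression screens above use "> ±1 fold" (and earlier "> ±0.3 fold") cutoffs; reading these as thresholds on the log2 fold change of cycle-averaged expression is an assumption, illustrated by the hedged numpy sketch below with mock FPKM values:

import numpy as np

def differential_transcripts(nc, hfi, log2fc_cutoff=1.0, eps=1.0):
    """Flag transcripts whose cycle-averaged expression differs by at least
    the cutoff in log2 space (genes x timepoints FPKM matrices).

    Interpreting the paper's '> +/-1 fold' threshold as |log2FC| >= 1 is an
    assumption; eps stabilizes low-expression transcripts.
    """
    log2fc = np.log2(hfi.mean(axis=1) + eps) - np.log2(nc.mean(axis=1) + eps)
    return np.abs(log2fc) >= log2fc_cutoff, log2fc

rng = np.random.default_rng(2)
nc = rng.gamma(2.0, 5.0, size=(1000, 8))   # mock FPKM, 8 circadian timepoints
hfi = nc.copy()
hfi[:48] *= 4.0                            # spike in 48 transcripts, as in Fig. 10E
flagged, log2fc = differential_transcripts(nc, hfi)
print(flagged.sum())                       # recovers roughly the 48 spiked transcripts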
HFI Alters Neural Activity in the Cornea HFI causes metabolic stress 3 and may trigger nervous system abnormalities in combination with other factors such as uric acid (a fructose metabolite). 61 The cornea is densely innervated by sensory nerve fibers (Fig. 12A) and is highly sensitive to external stimuli, especially mechanical stimuli. To explore the effects of HFI on corneal sensory function, we analyzed nerve-related transcripts at eight time points in a circadian cycle and identified 75 nerve-related transcripts that were differentially expressed in the cornea (fold change > ±1) (heatmap in Fig. 12B). An enrichment analysis of these differentially expressed transcripts revealed the top 10 pathways with significant changes (Figs. 12C and D). The remaining pathways are listed in Supplementary Table S10. To verify the functional correlates of these transcriptomic changes, we measured changes in corneal sensitivity using a Cochet-Bonnet esthesiometer. Within each group, corneal sensitivity did not differ significantly between ZT6 and ZT18; however, corneal sensitivity was significantly lower in HFI mice than in NC mice at both ZT6 and ZT18 (Fig. 12E). Collectively, these data indicated that HFI altered corneal neural activity at molecular and functional levels.

FIGURE 10 (caption, continued). The blue and red lines represent the NC and HFI groups, respectively. The top left inset depicts the total number of γδ-T cells quantified per 24-hour cycle. ***P < 0.005, n = 8 corneas per group. (E) RNA-seq analysis of corneal transcripts on day 10 in NC and HFI mice over a circadian cycle. The heatmap displays expression levels of the 48 genes related to immunity that were significantly differentially expressed in HFI mice compared with NC mice. (F) Gene annotation of KEGG pathways enriched in the corneas of HFI mice on day 10 with Q < 0.05. The top 10 pathways are presented. The horizontal dashed line in the figure represents the boundary for Q < 0.05. (G) Immune-associated KEGG network diagram for the HFI group. The boxes and circles represent KEGG pathways and transcripts, respectively. Color and size indicate the number of genes or transcripts connected to each node. The darker the color and the larger the box, the more genes or transcripts connected to a node. The different colored lines represent different pathway classifications. The blue, yellow, and red lines represent environmental information processing, organismal systems, and cellular processes, respectively.

DISCUSSION Similar to most mammalian organs, the cornea exhibits robust circadian rhythms in various physiological functions, including diurnal recruitment of white blood cells into the corneal limbus and epithelial cell mitosis. [33][34][35][36] However, the molecular mechanisms underlying these circadian rhythms are poorly understood. Our latest published data revealed daily fluctuations in transcriptomic profiles in the murine cornea to adapt to light and dark phases. 35 Here, we provide a comprehensive analysis of the temporal and spatial distributions of the circadian transcriptome in murine cornea under a normal light/dark cycle. Because these animals were under a normal light/dark cycle, the observed transcriptomic changes of core clock genes were unaffected by the traditional light-retina-SCN axis, consistent with transcriptomic changes of core clock genes in the liver caused by Western diets.
29,30,62 Notably, we observed that short-term HFI rewired the circadian transcriptome in the cornea under unaltered light, especially in pathways associated with metabolism, mitosis, neural activity, and immune function. We identified significant alterations in temporally coordinated and cluster-dependent transcriptomic landscapes of circadian transcripts in the cornea after 10 days of HFI. Further, we determined that HFI significantly altered the normal circadian recruitment of neutrophils and γδ-T cells to the cornea, predominantly in the limbal area, as well as the pattern and number of mitotic divisions in the corneal epithelium over a circadian cycle. These results highlight novel pathological alterations in the cornea induced by HFI (Fig. 13). Circadian rhythms and energy metabolism are closely interlinked. 63 Environmental and genetic perturbations to circadian rhythms contribute to metabolic dysfunction. [64][65][66] In turn, the circadian system senses metabolic cues. Perturbations to metabolism such as high-calorie diets or altered feeding times disrupt the temporal coordination of circadian rhythms and may lead to various disorders, including obesity, diabetes, and sleep problems. [67][68][69] Here, we used excessive fructose consumption as an exogenous metabolic challenge and observed significant rewiring of oscillations in circadian gene transcription and associated signaling pathways in murine cornea over the light/dark cycle. Further, the newly generated rhythmic genes in the corneas of HFI mice were derived from nonrhythmic genes in normal corneas, resembling the pattern of de novo transcriptional oscillations that we previously observed in murine extraorbital lacrimal glands after short-term HFI. 11 Bioinformatics analysis of high-throughput RNA-seq data is a powerful approach to elucidate the complex molecular mechanisms underlying circadian processes. 70 We assessed the metabolic and signaling pathways of circadian genes affected by excessive fructose intake using a combination of KEGG, PSEA, 51 and time series clustering approaches. 52,53 We observed significant alterations in metabolic and other signaling pathways alongside changes in the time-phase distributions of these pathways in the corneas of HFI mice. These circadian transcriptomic changes provide insight into HFI-induced corneal pathogenesis. Nevertheless, the precise mechanisms of dysregulation require further in-depth analysis via proteomics and metabolomics. Every mammalian cell contains a set of core-clock genes that govern downstream clock-controlled genes via self-regulatory transcriptional and translational feedback loops. 71,72 Transcriptional alterations in core-clock genes may drastically affect physiological functions. We observed that the transcription of core-clock genes in the cornea was resistant to excessive fructose consumption, consistent with our previous findings in the extraorbital lacrimal glands in a short-term HFI model 11 and in the liver after high-fat diet treatment. 29 Collectively, these data support the concept that circadian oscillations in core-clock genes are highly resistant to metabolic challenges. 29,67,68 Nevertheless, the effects of the nutritional state on the regulation of circadian clocks remain to be defined. Similar to other epithelial tissues, the corneal epithelium undergoes constant renewal, which occurs over 5 to 7 days.
73 Turnover is mainly accomplished by limbal epithelial stem cells located at the limbus and the proliferation and migration of limbal epithelial stem cell-differentiated transient amplifying cells. [73][74][75][76] Recent evidence has indicated that the cell cycle is controlled by circadian rhythms. 77 Studies from our group and other groups have demonstrated that corneal epithelial mitosis exhibits a significant diurnal oscillation. [33][34][35][36][78][79][80][81] Recent studies suggest that circadian rhythms synchronize with the regular light/dark cycle and are entrained by nonphotic zeitgebers such as food and feeding, ambient temperature, social contact, and physical activity. 82 Consistent with these data, we observed that HFI activated cell cycle-related signaling pathways in the cornea, especially cell cycle checkpoints, M phase, and p53 signaling pathways. In accordance with these transcriptional changes, we observed that the corneal epithelial mitotic cell number was significantly increased over a circadian cycle. Our data support recent reports of the close association between high fructose consumption and a high incidence of cancer. 83,84 Collectively, these results suggest that even short-term HFI rapidly activates cell cycle-related molecular mechanisms in the cornea. With the exception of the limbal area, the cornea is an avascular tissue. To maintain rapid corneal epithelial turnover, high sensitivity, and transparency, the cornea obtains energy via unique metabolic pathways. 85 For instance, oxygen supply to the cornea arises predominantly from the atmosphere in the open eye and tarsal conjunctival capillaries in the closed eye. 86 Most nutrients, including glucose, are diffused from the aqueous humor and tear film. 87 However, the effects of HFI on normal corneal metabolism have not been reported. Our transcriptomic data indicated that various metabolism-related transcripts were differentially expressed in the cornea after HFI; in particular, transcripts of fatty acid metabolism pathways were significantly enriched. Consistent with these results, we identified high expression of several cytochrome P450 family members in the corneas of HFI mice. Cytochrome P450s constitute an enzyme superfamily that oxidizes fatty acids, xenobiotics, and various compounds for clearance. 88 The cornea has the highest innervation density in the human body, 89 with several sensory fiber subtypes that sense distinct external stimuli. 90,91

FIGURE 12 (caption, continued). Neural activity-associated KEGG network diagram for the corneas of HFI mice. The boxes and circles represent KEGG pathways and transcripts, respectively. Color and size indicate the number of genes or transcripts connected to each node. The darker the color and the larger the box, the more genes or transcripts connected to a node. The different colored lines represent the different pathway classifications. The blue and yellow lines represent environmental information processing and organismal systems, respectively. (E) Corneal sensitivity was measured using a Cochet-Bonnet esthesiometer and compared between NC and HFI mice. The results are presented as the mean ± standard deviation. *P < 0.05, **P < 0.01, n = 10 corneas per group.

Neural activity in the cornea is diurnal, including corneal sensitivity in humans, 92 sensory axon growth and shedding in the murine cornea, 93 and sensations in individuals wearing contact lenses. 94 We observed that corneal sensitivity was significantly decreased after HFI.
Multiple reasons may underpin this observation. First, as demonstrated by our enrichment analysis of neural activity-related transcripts, many synaptic pathways associated with nerve conduction were altered significantly. Second, a large number of immune cells, including neutrophils and γδ-T cells, were recruited around the limbal blood vessels after HFI. These factors, in combination with metabolic stress and energy alterations, may have decreased conduction speed and neural activity after HFI. 61,95 During a 24-hour circadian cycle, immune cells are periodically released from the bone marrow to the peripheral blood 96 and migrate from the blood to peripheral organs and tissues. 97,98 We previously reported that neutrophils and γδ-T cells migrate rhythmically to the corneal limbal region. 36 The current data revealed a significant increase in the number of immune cells recruited to the corneal limbus after HFI, suggesting that nutritional challenges modulate the plasticity of corneal immune function. These effects may alter the degree and status of the corneal response to various external stimuli such as injuries and microbial infections. Our study has a few limitations. It should be noted that we only collected murine corneas 10 days after HFI. Long-term HFI or fructose consumption in combination with other nutrients such as lipids may reveal distinct insights into the effects of high-calorie diets on corneal function. 99 In addition, our bioinformatics analysis was limited to high-throughput RNA-seq analysis, which does not yield information on translational, post-translational, or proteomic regulation. Future in-depth analyses of corneal structure and function alongside other -omics approaches are warranted to dissect pathologic mechanisms in the cornea induced by excessive fructose intake. CONCLUSIONS Our findings suggest that short-term excessive fructose intake significantly rewires diurnal oscillations in the cornea with regard to corneal epithelial mitosis, immune cell recruitment to the corneal limbus, and transcriptomic profiles. Our findings imply that metabolic challenges induced by HFI alter normal physiological processes in the cornea, leading to excessive cell proliferation, decreased corneal sensitivity, and a subinflammatory condition. These alterations might modulate the cornea's ability to respond to various stimuli from external environments, such as injuries, microbial infection, and desiccation stress. Further analysis of reprogramming mechanisms will reveal potential targets to prevent the onset and progression of pathologic alterations of the cornea induced by excessive fructose consumption. Notably, these data highlight the critical role of nutritional interventions in corneal health.
EUS-guided sampling with 25G biopsy needle as a rescue strategy for diagnosis of small subepithelial lesions of the upper gastrointestinal tract Background and study aims  This study was designed to evaluate the impact of additional tissue obtained with endoscopic ultrasound (EUS)-guided 25-gauge core biopsy needle (25G-PC) following an unsuccessful fine-needle biopsy (FNB) performed with larger-bore needles for the characterization of gastrointestinal subepithelial lesions (GI-SELs). Patients and methods  We prospectively collected and retrospectively analyzed information in our database from January 2013 to June 2017 for all patients with GI-SELs who received a EUS-guided FNB (EUS-FNB) with 25G-PC during the same procedure after failure of biopsy performed with larger-bore needle. Diagnostic yield, diagnostic accuracy and procedural complications were evaluated. Results  Sixteen patients were included in this study, 10 men and 6 women, median age 67.8 (range 43 to 76 years). Five patients were found to have a SEL localized in the distal duodenum, five in the gastric antrum, two in the gastric fundus and four in the gastric body. The mean size of the lesions was 20.5 mm (range 18 – 24 mm). EUS-FNB with 25G-PC enabled final diagnosis in nine patients (56.2 %). Regarding the subgroup of duodenal lesions, the procedure was successful in four of five (80 %). Final diagnoses with EUS-guided sampling were GIST (n = 6), leiomyoma (n = 2) and metastatic ovarian carcinoma (n = 1). No procedure-related complications were recorded. Conclusion  In patients with small GI-SELs, additional tissue obtained with 25G-PC could represents a “rescue” strategy after an unsuccessful procedure with larger-bore needles, especially when lesions are localized in the distal duodenum. in differentiating them from extrinsic compression and providing information about morphology and layer of origin [1]. EUS can sometimes provide information in case of lesions with typical morphological features, such as lipomas or duplication cysts. However, tissue diagnosis is often required, especially in neoplasms for which immunohistochemistry (IHC) is mandatory, such as gastrointestinal stromal tumors (GISTs). EUS needles of different size and shape have been used, with variable success/complication rates [4 -7]. Recently, a new needle with reverse bevel technology has been developed to simultaneously obtain cytological aspirates and histological core samples, thereby leading to an ideal EUS-guided fine needle biopsy (EUS-FNB) [8 -11]. The majority of reports on EUS-FNB needles have focused on pancreatic masses. Data on the diagnostic performance of 25-gauge (G) core needle to assess GI-SELs are lacking. The aim of this study was to evaluate the impact of additional tissue obtained with EUS-guided 25G-needle core biopsy following an inconclusive EUS-FNB performed with larger-bore needles for characterization of GI-SELs. Patients All consecutive patients who received, during the same procedure, an EUS-guided FNB with 25G-ProCore (25G-PC) needle (EchoTip ProCore; Cook Endoscopy) as a "rescue strategy" after an initial unsuccessful biopsy performed with larger ProCore needles (22G-PC and/or 19G-PC) to diagnose upper GI-SELs were prospectively enrolled and retrospectively analyzed. 
EUS-FNB with 25G-PC was considered a rescue strategy after a prior attempt at EUS-FNB with a larger-bore needle when: (1) puncture of the lesion was not feasible for technical reasons (i.e., difficulty advancing the needle through the scope in an angulated position); or (2) the specimens obtained were considered macroscopically suboptimal (i.e., not suitable for placement in a formalin bottle for histological examination). Inclusion criteria for EUS-FNB were: (1) presence of upper GI-SELs revealed by endoscopy, (2) need for pathological assessment to make a diagnosis and/or to guide management decisions, (3) age older than 18 years, and (4) ability to provide informed consent. Exclusion criteria were: (1) inability to provide informed consent; (2) evidence of a coagulation disorder. Baseline variables are presented as numbers (percentage) and mean values (range). EUS-FNB procedure EUS-FNBs were performed by using convex array echoendoscopes (UCT-140, Olympus America, Inc., Melville, New York, United States) with the patient in the left lateral position under conscious sedation (intravenous fentanyl and midazolam) or deep sedation (propofol). After targeting the optimal puncture site, each puncture was done using a core biopsy needle (EchoTip ProCore; Cook Endoscopy) guided by real-time EUS imaging. Two different suction techniques (slow-pull and "wet") were used at the operator's discretion. In the slow-pull technique, the stylet was left inside the needle and, after puncturing the lesion, it was slowly and continuously removed as the needle was moved to-and-fro 10 to 15 times inside the lesion. In the "wet" technique, the stylet was removed and the needle was filled with saline to replace the column of air with water; the needle was then passed into the lesion and suction applied with a 10-cc pre-vacuum syringe. Thereafter, the needle was moved to-and-fro 10 to 15 times inside the lesion, and syringe suction was turned off before withdrawing the needle from the lesion [12,13]. The ProCore (PC) needle size for the first attempt (19G or 22G) and the number of needle passes were at the discretion of the endosonographer. The procedure was stopped when biopsy specimens were considered sufficient by the operator at gross examination. A maximum of three biopsy attempts were allowed for each needle. All EUS-FNB procedures were performed by a single experienced endoscopist (FA) who has performed more than 1000 EUS procedures and at least 100 EUS-FNAs per year. This study was approved by the institutional review board. Specimen evaluation and histological processing During the procedure there was no on-site cytopathologist. After EUS-FNB, the sample obtained was expelled onto slides. All macroscopically visible core specimens (defined as whitish or yellowish pieces of tissue with an apparent bulk) considered adequate by the endosonographer were put into formalin for histological processing. Specimens considered inadequate were submitted for cytology assessment. A histologic specimen was categorized as "diagnostic" when considered adequate to reach a definitive diagnosis by the pathologist (including cases where IHC was mandatory), and "non-diagnostic" when the sample did not meet this requirement. IHC staining was performed using commercially available antibodies against c-kit (CD117), CD34, S-100, DOG-1, and smooth-muscle actin. Endpoints The primary endpoint was adequacy, defined as the rate of cases in which an adequate tissue specimen for histological examination was obtained. Secondary endpoints were accuracy, defined as the proportion of correct diagnoses, and the adverse event rate. The standard reference for diagnosis was the surgical specimen when available, or other diagnostic investigations and a follow-up of at least 6 months. Early (within 48 hours) and late (> 48 hours) adverse events (AEs) were recorded. All patients were evaluated for procedural AEs with a phone call or clinic visit at 24 to 48 hours and at 7 to 10 days following the procedure.
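Given the endpoint definitions above, the study's proportions can be reported with exact binomial intervals. The short Python sketch below is an illustration only (it uses the counts reported later in the Results) and relies on the standard statsmodels proportion_confint helper:

from statsmodels.stats.proportion import proportion_confint

def endpoint(successes: int, n: int, label: str) -> None:
    """Report a proportion endpoint with a 95% Clopper-Pearson interval."""
    rate = successes / n
    lo, hi = proportion_confint(successes, n, alpha=0.05, method="beta")
    print(f"{label}: {rate:.1%} (95% CI {lo:.1%}-{hi:.1%}, {successes}/{n})")

# Figures reported in this study:
endpoint(9, 16, "Overall diagnostic yield of rescue 25G-PC")
endpoint(4, 5, "Diagnostic yield in distal duodenal lesions")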
Results Between January 2013 and June 2017, a total of 108 patients were referred to our department for tissue sampling of upper GI-SELs. Among them, 16 (14.8 %) patients (10 male; median age, 67.8 years; range, 43 to 76 years) underwent EUS-FNB with 25G-PC as a rescue strategy after an initial inconclusive biopsy performed with larger-bore needles during the same EUS procedure (▶ Table 1). Five patients had a SEL localized in the distal duodenum, five in the antrum, two in the gastric fundus and four in the gastric body. All SELs originated from the fourth sonographic layer of the gastrointestinal wall (i.e., muscularis propria) and showed a homogeneous hypoechoic echo pattern on EUS. Mean size of the lesions was 20.5 mm (range 18-24 mm). Previous EUS-FNB with a larger-size needle (11 cases with a 22G-PC needle and 5 cases with a 19G-PC needle) failed in 10 cases because macroscopically suboptimal specimens were retrieved and in the other six cases because of technical issues (▶ Table 2). Technical failure was mainly due to difficulty in advancing a large needle through the scope in an angulated position (such as the distal duodenum) and to the tendency of the needle to push the scope away from the gastrointestinal wall (as happens in the greater curvature of the stomach). EUS-FNB with 25G-PC was technically feasible in all subjects and enabled final diagnosis in nine of 16 cases (56.2 %). IHC was feasible in all of these adequate specimens. Regarding the subgroup of duodenal lesions, the procedure was successful in four of five (80 %) (▶ Fig. 1). Final diagnoses with EUS-guided sampling were GIST (n = 6), leiomyoma (n = 2) and metastatic ovarian carcinoma (n = 1). All six patients with EUS-proven GIST were treated by surgery and the diagnosis was confirmed at final pathology. Patients with leiomyoma were planned for follow-up. The patient with metastasis from ovarian cancer started palliative chemotherapy. Regarding the seven patients with non-diagnostic results with 25G-PC, two underwent wedge resection, with GIST confirmed on surgical specimens in both cases, and five had endoscopic follow-up (no change was seen over a mean follow-up period of 23 months, ranging from 7 to 38 months). No major procedure-related AEs were recorded irrespective of needle size. Discussion Endoscopic ultrasound (EUS) is considered the primary modality for evaluation of SELs. Furthermore, EUS-FNA enables tissue acquisition when needed. EUS-FNA has an overall diagnostic accuracy ranging from 60 % to 80 % in SELs [14,15]. Several factors have been associated with inadequate tissue yield, but the main ones are size and location of the lesion [16]. In fact, sampling adequacy increases in proportion to tumor size, and poorer diagnostic yield has been generally associated with lesions smaller than 30 to 40 mm. Evidence from the literature supports this statement.
In a retrospective study by Hoda et al on 112 upper GI-SELs, the diagnostic yield was 44.4 % for lesions less than 10 mm and increased up to 58.3 % for lesions ranging from 11 to 30 mm, and to 69.7 % for lesions > 30 mm [14]. In another study on 53 subepithelial gastric lesions, EUS-FNA had an overall diagnostic yield of 71 % for lesions measuring up to 20 mm, 86 % for lesions ranging from 20 to 40 mm and 100 % for lesions larger than 40 mm [17]. More recently, Akahoshi's group obtained a diagnostic rate of 73 % from EUS-FNA of 90 gastric SELs smaller than 20 mm [18]. However, Sekine et al demonstrated that GIST can be correctly identified by EUS-FNA even in small lesions, with an overall sensitivity of 82.5 % for GIST of any size, and 81.3 % for GIST smaller than 20 mm [19]. Unfortunately, cytology is often not sufficient to reach a definitive diagnosis of GI-SELs and usually a proper histological sample is required, especially in view of IHC analysis. EUS-FNB PC needles have been conceived to obtain more tissue and ideally to provide a histological specimen (core biopsy). Studies on core biopsy needles were mainly conducted on patients with pancreatic masses, while only a few studies are available looking at characterization of SELs [7, 20-22]. In the first experience of Iglesias-Garcia et al on a heterogeneous study population with intestinal and extra-intestinal lesions, EUS-FNB with 19G-PC was technically feasible in 98.2 % of cases (112/114). In this study, 11 patients presented with upper GI-SELs and correct diagnoses were achieved in nine of them (81.8 %) [8]. Kim et al evaluated 12 patients with upper SELs, including esophageal, gastric and duodenal lesions, and EUS-FNB with a 22G-PC needle reached a diagnostic yield of 75 % [20]. Similarly, Lee et al evaluated the efficacy of EUS-FNB with a 22G-PC needle in gastric SELs, obtaining an overall diagnostic yield of 86 % [21]. According to tumor location, the highest diagnostic yield was in the fundus (100 %), followed by the body (89.5 %), cardia (83.3 %), and antrum (50 %). In this study there were only two cases of antral lesions and only one had a final diagnosis with EUS-FNB [21]. More recently, a larger study of 77 upper GI-SELs has been conducted to evaluate the performance of EUS-FNB using a 22G-PC, in which diagnosis was achieved in 81.8 % of cases [22]. Core biopsy tissue was obtained in 96.8 % of the cases. Only a single case of post-procedural bleeding was recorded [22]. Recently, a new 20G-PC needle has been developed, which is expected to be a balanced compromise between the flexibility and ease of use of the smallest needles and the quality of tissue sampling typical of the larger needles, providing echoendoscopists a new tool to accurately target lesions, regardless of their size or location [7]. Antonini et al published the first experience with this needle in a multicenter retrospective study for the diagnosis of SELs. A total of 50 SELs were included and, after a mean number of passes of 2.2 (range 1-4), definitive diagnosis with full histological assessment including IHC was obtained in 88 % of patients (44/50) without any major complications [7]. The external validity of these studies was strongly limited by the fact that most of the punctured lesions were > 20 mm in diameter and a 22G-PC needle was used. Notably, in the current study, all the lesions were sampled with a 25G-PC needle and all of them were less than 25 mm.
Indeed, our study showed that even in lesions ≤ 20 mm, the 25G-PC was able to achieve a diagnosis in 70 % of cases (7/10). Up to now, management algorithms for small GISTs have been a matter of debate [23,24]. The natural history of small GISTs has not been well defined, but even these lesions may show malignant behavior and evolve into clinically relevant lesions [25,26]. Therefore, the European Society for Medical Oncology (ESMO) recommends EUS assessment for esophagogastric or duodenal SELs < 20 mm and surgical excision of histologically proven small GISTs, unless that entails major morbidity [27]. EUS-guided tissue acquisition with 25G-PC needles in patients with pancreatic lesions has resulted in a high diagnostic yield, similar to standard 25-gauge FNA needles, while also providing sufficient tissue for histological assessment [28,29]. In the study by Iwashita et al, despite the low yield (32 %) of a real "core," histological analysis was possible in 63 % of patients on the first pass and in 80 % of cases on subsequent passes [28]. This indicates that a definitive diagnosis can also be obtained from tissue fragments that do not meet the criteria for architecturally intact histology but still allow a diagnosis based on cell morphology. In our study, both histological cores and tissue fragments were considered by the pathologist for the final diagnosis, including full IHC when required. The results show that EUS-FNB with 25G-PC enables a definitive diagnosis in most of the assessed small upper GI-SELs otherwise not fully characterized by larger-bore needles. Indeed, correct diagnosis rates were 56.2 % overall but 80 % in duodenal lesions. Other authors have highlighted the better performance of the 25G needle for SELs located in certain positions, such as the greater curvature of the stomach, where the needle tip may rebound, making it difficult to puncture the lesion [19]. The major advantage of the 25G needle is its thin caliber, which makes EUS-guided sampling easier even in difficult sites. Transduodenal EUS-guided tissue acquisition can be technically challenging due to the angulated position, which may hamper advancement of the needle through the scope and into the targeted lesion. Moreover, to avoid instrument damage with larger-bore needles, the scope often has to be withdrawn into the stomach so the tip can be straightened. Our study has some limitations that should be acknowledged. The number of patients was relatively small and they were recruited in a single center (reducing the external validity of our findings). Follow-up was relatively short, ranging from 7 to 38 months after EUS-FNB, and not all patients underwent surgical resection as the gold standard for diagnosis. Follow-up of small GI-SELs is controversial. Koizumi et al have shown that doubling time differs according to the type of SEL, and GISTs were confirmed to have a significantly shorter doubling time (17.2 months) than the other types of tumors, suggesting that even small SELs should initially be followed up within 6 months of detection [30]. To the best of our knowledge, this study represents the first investigation of the role of EUS-FNB with 25G-PC following a failed FNB performed with another needle size for characterization of small subepithelial lesions of the upper gastrointestinal tract. Larger prospective studies are therefore warranted to confirm our results.
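Given the small sample, these proportions carry wide statistical uncertainty. As an illustrative aside (this calculation is not part of the original study, which reports point estimates only), a minimal Python sketch computes Wilson 95 % confidence intervals for the yields quoted above:

```python
# Wilson score intervals for the diagnostic yields reported in the text.
# Illustrative re-analysis only; the original paper reports point estimates.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for 95 %)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

for label, (k, n) in {
    "overall": (9, 16),
    "duodenal lesions": (4, 5),
    "lesions <= 20 mm": (7, 10),
}.items():
    lo, hi = wilson_ci(k, n)
    print(f"{label}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

For the overall yield of 9/16, for example, the interval spans roughly 33 % to 77 %, underlining the need for the larger prospective studies called for above.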
Conclusion

In conclusion, our study shows that in patients with small GI-SELs, additional tissue acquisition with a 25G-PC needle may represent a "rescue" strategy after an unsuccessful procedure with larger-bore needles, especially when lesions are localized in the distal duodenum.
IR GRIN lenses prepared by ionic exchange in chalcohalide glasses

In order to decrease the number of lenses and the weight of thermal imaging devices, specific optical designs are required using gradient refractive index (GRIN) elements transparent in the infrared waveband. While widely used for making visible GRIN lenses from silicate glasses, the ion exchange process is very limited when applied to chalcogenide glasses due to their low Tg and relatively weak mechanical properties. In this paper, we develop chalco-halide glasses based on alkali halide (NaI) addition in a highly covalent GeSe2-Ga2Se3 matrix, efficient for tailoring a significant and permanent change of refractive index by an ion exchange process between K+ and Na+. Optical and structural properties of the glass samples were measured, showing a diffusion length reaching more than 2 mm and a Gaussian gradient of refractive index with Δn of 4.5×10⁻². The obtained GRIN lenses maintain an excellent transmission in the second (3-5 µm) and third (8-12 µm) atmospheric windows.

To produce lightweight and compact IR systems for thermal imaging applications working in the third atmospheric window (8-12 µm), this paper focuses on the development of radial IR GRIN lenses based on chalcogenide glasses. Recently, IR GRIN lenses were developed by applying a gradient of heat treatment to create axial [1] and radial [2] refractive index gradients in the GeSe2-As2Se3-PbSe and GeSe2-Ga2Se3 systems, respectively. The key to this technology is the use of chalcogenide glass compositions that are unstable against crystallization, so reproducible, fine control of crystallization is the critical issue. The main drawback, however, is that only small quantities of glass can be synthesized without crystallization. Our investigation focuses instead on an ionic exchange process applied to glasses of high stability against crystallization. The ion exchange process has proved highly successful with oxide-based glasses. It is a well-known technique used to modify the optical, mechanical or chemical properties of the glass surface, resulting in a multitude of advantages and applications such as chemical strengthening of the glass surface [3-6]. In addition, changing the composition of the glass surface allows the manufacture of planar waveguides [7,8] as well as, in our study, the manufacture of lenses with a gradient of refractive index [9-12].
Up to now, while highly developed in silicate glasses, the ion exchange process has been very little studied in the chalcogenide glass family, because of the lack of compositions with mobile ions and because of their low glass transition temperature compared with that of the silicate glasses in which this process is widespread. Indeed, in conventional and commercial chalcogenide glasses derived from the Ge-As-Se, As-Se or Ge-Sb-Se systems, the atoms are connected to each other by covalent bonds and there are no mobile ions to be exchanged. To induce significant ionic mobility within the matrix, specific chalco-halide compositions were designed. The incorporation of an alkali halide such as NaI within a chalcogenide glassy matrix has already been reported in several papers [13,14]. Wang et al [15] were the first to study ion exchange in sulfur-based glasses. Diffusion depths of K+ cations greater than 250 μm were measured in a GeS2-Ga2S3-AgX glass (with X = Cl, Br, I). However, the IR transmission and other thermo-mechanical properties of the glasses were not presented. Later, this process was developed to enhance the mechanical properties of selenium-based chalcogenide glasses by Rozé et al [16]. Compression of the glass surface is accomplished by replacing K+ cations from the glass with Rb+ cations from the melt bath, which have a larger ionic radius (K+: 1.38 Å; Rb+: 1.52 Å). The insertion of bigger ions into the restricted spaces at the surface creates compressive forces: the glass is reinforced by surfaces in compression, with the core in a state of compensatory tension. However, in this case a critical diffusion length of 25 µm is reached before strong deterioration of the base glass occurs, leading to irreversible damage from internal mechanical stresses. For making infrared lenses transparent in the second and third atmospheric windows, selenium-based glasses were selected. To maintain a glass transition temperature (Tg) higher than 300 °C, as required by the molten baths used, an alkali halide was incorporated into high-Tg glasses belonging to the Ge-Ga-Se ternary system. Among these glasses, the (72GeSe2-28Ga2Se3)75(MI)25 glass, equivalent to the Ge17Ga13Se54I8M8 (at.%) composition, was selected due to its high content of alkali ions (M = Na+, K+ or a mix of both). Another key parameter of the ion exchange process is the composition of the exchange bath and its resulting melting temperature. Indeed, the melting temperature of the bath must be lower than the glass transition temperature of the samples to avoid their deformation during the ion exchange. The choice of bath composition focused on nitrate compounds because of their low vapor pressure when melted, relatively low melting temperatures, and high solubility in water, allowing easy removal from the glass surface.

Results

Diffusion in chalco-halide glasses. Thermal characteristics of the base glass containing 25 mol.% of NaI are presented in Table 1 and compared with the base glass without alkali halide. As expected, the glass Tg decreases with the addition of sodium iodide while the crystallization temperature remains unchanged, leading to a large increase in glass stability against crystallization. Indeed, weaker bonds appear in the glass due to the Na+ cations, the crosslinking of the vitreous network is lower, and the Tg is thus reduced [14].
Given that the exchange bath melts at 237 °C, adding 25 % of NaI is still optimal, preserving a wide working temperature window between the bath melting point and the glass Tg. One can also notice a strong increase of the thermal expansion coefficient and a decrease of refractive index when adding NaI. The transmission windows of both glasses are presented in Figure 1. The addition of alkali halide induces a blueshift of the onset of transmission, extending transmission into the visible region, but at the same time introduces parasitic absorption bands due to O-H and Ge-O bonds. It is known that the introduction of alkali halide within a covalent chalcogenide glassy matrix makes the material more hygroscopic [14]. Figure 2 presents a schema of the experimental ion exchange process. As shown in Figure 2, polished rod-shaped samples of the (72GeSe2-28Ga2Se3)75(NaI)25 glass, 10 mm in diameter and 10 mm thick, were immersed in the 60KNO3/40NaNO3 bath (Tm of 237 °C) at Tg − 60 °C (250 °C) for different durations, from 1 h to 63 days. In order to focus on radial exchange, a slice 4 mm thick was cut from the middle of the 10 mm rod and its surfaces were then polished. The symmetrical nature of the ion exchange was systematically checked by EDS on all samples, showing perfect radial symmetry; Fig. 3 therefore presents only one side for easier reading of the curves. The potassium concentration profile was measured on the surface, from the edge to the center. As observable in Fig. 3, which presents the diffusion profile of K+ in the glass containing 25 % of NaI immersed at 250 °C, the potassium concentration profiles show a gradient of potassium diffusion into the glass. The diffusion depth increases with increasing immersion time, up to 2 mm after 40 days, and is followed by a saturation of diffusion. A content of 8 at.% potassium is reached at the edge of the samples, which means that all the sodium from the base glass composition Ge17Ga13Se54I8Na8 (at.%) has been replaced by potassium ions. The composition at the edge of the glass is then Ge17Ga13Se54I8K8 (at.%), that is to say a glass composition of (72GeSe2-28Ga2Se3)75(KI)25. Figure 4 shows the X-ray diffractograms of the samples before and after being immersed for 30 to 63 days. Diffractograms were recorded on both bulk and powdered samples to exclude potential surface crystallization. After 40 days of immersion, crystallization of sodium iodide appears. After crystallization, sodium ions are less mobile and less available for exchange with potassium ions, which could explain the saturation of diffusion observed in Fig. 3. The transmission windows of the immersed samples, cut and polished, are shown in Fig. 5. At short wavelengths, a progressive shift of the band gap towards longer wavelengths is observed with increasing immersion time. This result is consistent with Rayleigh scattering from the submicron particles already detected by XRD. The maximum transmittance is essentially unchanged in the third atmospheric window between 8 and 14 µm before and after the ion exchange. Moreover, no additional absorption bands due to oxidation of the glass are observed in this optical window dedicated to thermal imaging. This method is therefore entirely suitable for creating GRIN lenses made of chalcogenide glasses, since the IR transmission of the glasses remains intact.
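For readers wishing to reproduce the shape of such profiles, the measured gradients are consistent with the classical solution of Fick's second law for a constant surface concentration, C(x,t) = Cs·erfc(x / (2·sqrt(D·t))). The sketch below is a hedged illustration only: the effective interdiffusion coefficient D is not reported in the paper and is set here merely so that the profile spans roughly 2 mm after 40 days, and the linear mapping from potassium content to index change anticipates the Figure 6 result described in the next section.

```python
# Illustrative model of the K+ <-> Na+ exchange profile, assuming
# one-dimensional Fickian diffusion with a constant surface concentration.
import numpy as np
from scipy.special import erfc

C_S = 8.0          # at.% K+ at the glass edge (complete Na+ -> K+ exchange)
D = 7e-14          # m^2/s: ASSUMED effective interdiffusion coefficient,
                   # chosen only so the profile spans ~2 mm after 40 days
DN_MAX = -4.5e-2   # index change for full Na+ -> K+ substitution

def k_profile(x, t_days):
    """K+ content (at.%) vs depth x (m): erfc solution of Fick's 2nd law."""
    t = t_days * 86400.0
    return C_S * erfc(x / (2.0 * np.sqrt(D * t)))

def index_change(x, t_days):
    """Local refractive index change, taken as linear in K+ content."""
    return DN_MAX * k_profile(x, t_days) / C_S

x = np.linspace(0.0, 2.5e-3, 6)   # depth from the edge, 0 to 2.5 mm
for t in (1, 11, 40):
    print(f"t = {t:2d} days:", np.round(index_change(x, t), 4))
```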
Obtaining GRIN lenses. In order to determine the evolution of the refractive index according to the proportion of alkali exchanged, the optical properties of glasses with mixes of sodium and potassium in different proportions were measured. Figure 6 presents the evolution of the refractive index as a function of the atomic percentage of potassium in these glass compositions. These data show that the refractive index decreases linearly with the potassium content of the glass. Therefore, a gradient of potassium in the glass leads to a gradient of index proportional to the potassium content, with a maximum reachable Δn of −4.5×10⁻² when all Na+ is replaced by K+ ions. To match the maximum diffusion depth of 2 mm of potassium in the glass, glass rods of 4 mm diameter were immersed for 11 to 40 days in the nitrate bath, so as to avoid crystallization. The GRIN lenses obtained, as well as their characteristics, are presented in Figure 7. Observing a metallic grid through the samples with an 8-12 µm IR camera reveals the variation of refractive index induced by the ionic exchange (Figure 7a). As expected, the grid appears more and more curved as the diffusion depth increases. The timescale allows easy control of the diffusion profile, leading to a hyperbolic secant profile of refractive index with a maximum Δn of −4.5×10⁻², as shown in Figure 7b. Beyond 40 days, interdiffusion of K+ from both sides leads to an increase in K+ concentration at the center of the sample. This phenomenon may be the root cause of the slightly reduced Δn observed.

Discussion

An efficient ionic exchange between Na+ and K+ has been realized by using an optimized chalco-halide glass in a molten mixed nitrate (NaNO3/KNO3) bath. The addition of alkali halide (NaI) to a highly covalent matrix (Ge-Ga-Se) creates the mobile ions needed for the ionic exchange process while maintaining a high glass transition temperature. By controlling the time and temperature of the experiment, a diffusion depth of more than 2 mm is reached without inducing any perturbation of the glass transmission in the second (3-5 µm) and third (8-12 µm) atmospheric windows. To the best of our knowledge, this is the first work reporting such a high diffusion depth induced in chalcogenide glasses, combined with the generation of a permanent and intense (Δn = −4.5×10⁻²) change of refractive index. Our material is thus an ideal candidate for thermal imaging applications in which compact embedded IR optics are needed.

Methods

Chalcogenide glasses were prepared following the conventional melt-quenching method. All the raw elements (Ge, Ga, Se: 5N; NaI, KI: 2N) were weighed according to the composition. They were placed in a silica tube of 4 or 10 mm inner diameter, which was sealed under secondary vacuum (10⁻⁵ mbar). The mixture was heated to 870 °C for 10 hours and then quenched in water before being annealed for 3 h at Tg to relax the mechanical stresses. In this preliminary experiment, no further purification steps were performed. Glass rods 10 mm long were cut and then finely polished for optical characterization and ion exchange experiments. The ion exchange process focused on Na+/K+ because of their close ionic radii (Na+: 1.02 Å; K+: 1.38 Å). The melting temperature of KNO3 is 334 °C, which prevents its use alone for ion exchange experiments without deteriorating the glass samples.
Thus, a mix of two nitrates based on sodium and potassium was selected to decrease the melting temperature of the bath. The 60KNO3/40NaNO3 bath composition was selected because it presents a good compromise between a relatively low melting temperature (237 °C) and a high content of potassium. The bath was prepared by melting and mixing the two compounds above their melting temperatures, thus above 350 °C, and then cooling down to the working temperature. The samples were immersed for different durations at 250 °C in a silica chamber containing the melted alkali nitrate bath. This low temperature is optimal to give a slow diffusion rate that allows good control of the diffusion length without deteriorating the chalcogenide glass. After the ion exchange, the glass rods were rinsed in distilled water to remove residual melt bath from the surface. To obtain a radial graded index, the samples that had undergone the ion exchange process were cut into a 4 mm thick slice from the middle of the 10 mm long rod and the surfaces were finely polished, as depicted in Fig. 2. The chemical concentration profile of K+ was measured on the surface, from the edge to the center, using a scanning electron microscope (JEOL IT 300 LA) equipped with Energy Dispersive X-Ray Spectroscopy (EDS). Differential scanning calorimetry experiments (DSC 2010 TA Q20) were performed to measure the characteristic temperatures of the glass (glass transition temperature Tg and crystallization temperature Tx) at a heating rate of 10 °C/min. Transmission was measured using UV-Vis and FTIR spectrophotometers. To obtain the refractive index of the glass surface as a function of wavelength, the so-called m-line technique was used at 1311 nm and 1511 nm, with an accuracy of 2×10⁻³. Hardness was measured with a Vickers indenter (Matsuzawa) using a constant force of 100 g for a duration of 5 s. The reported value is an average of ten measurements.
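As a rough, back-of-the-envelope illustration of what the measured profile implies optically (no such calculation appears in the paper), a hyperbolic secant index profile behaves near the axis like a parabolic GRIN medium, so a ray-oscillation pitch can be estimated from Δn and the rod radius. The on-axis index n0 is not given in this excerpt, so a typical value for a Ge-Ga-Se glass is assumed:

```python
from math import acosh, pi

n0 = 2.4      # ASSUMED on-axis refractive index (typical of Ge-Ga-Se glass;
              # the actual value is not given in this excerpt)
dn = 4.5e-2   # magnitude of the index drop at the rod edge
R = 2.0e-3    # m: rod radius (4 mm diameter rods)

# n0 * sech(a*R) = n0 - dn  =>  a = arcsech(1 - dn/n0)/R = acosh(1/(1 - dn/n0))/R
a = acosh(1.0 / (1.0 - dn / n0)) / R

# Near the axis, sech(a*r) ~ 1 - (a*r)**2 / 2, i.e. a parabolic GRIN medium
# whose meridional rays oscillate with pitch P = 2*pi/a.
pitch = 2.0 * pi / a
print(f"gradient constant a = {a:.1f} 1/m, estimated pitch = {pitch*1e3:.0f} mm")
```

With these assumptions the pitch comes out at roughly 65 mm, so a quarter-pitch slice of about 16 mm would act as a collimating element; the actual value depends entirely on the assumed n0 and on how closely the real profile matches an ideal sech.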
Impact of optimized pest control schemes on mandarin yield in the Republic of Abkhazia

Pests of mandarin significantly reduce crop productivity and the commercial quality of the crop in Abkhazia. We assessed the impact of optimized mandarin pest control schemes on fruit size and crop yield in the humid subtropics of Abkhazia. The studies were carried out in 2019-2020 on full-aged plantings of cv. Unshiu mandarin in the Gulrypsh district of the Republic of Abkhazia, on the base of the Institute of Agriculture of the Academy of Sciences of Abkhazia. The highest yield and fruit quality were obtained with the protection schemes of variant 5 (first treatment: Confidor Extra (0.05 %) + Cytovit (0.15 %); second treatment: Vertimec (0.1 %) + Cytovit (0.15 %); third and fourth treatments: Karate Zeon (0.05 %) + Cytovit (0.15 %)) and variant 6 (first treatment: Metomax (0.15 %) + Vertimec (0.1 %); Karate Zeon (0.05 %) + Vertimec (0.1 %) in the other three treatments). The average fruit weight in these variants reached 72-74 g, exceeding the fruit weight in the Standard variant by 22.0-25.4 %. The yield was 44.7-46.2 t/ha, 36.8-41.3 % higher than the Standard variant. Fruits of the 1st grade in these variants accounted for 63.3-65.6 % of the total yield.

Introduction

In the modern period, citrus crops occupy a leading place in the agriculture of the Republic of Abkhazia, among which mandarin (Citrus reticulata subsp. unshiu (Marcow.) D. Rivera & al.) is the main industrial crop. Mandarin agrocenoses account for more than 90 % of the area occupied by citrus crops [1]. It is known that pests can significantly reduce the yield and quality of fruits, and can even lead to complete loss of yield or the death of plants [2]. The degree of distribution and development of pests affects the quantitative and qualitative characteristics of agricultural crop productivity; globally, crop yield losses from pests amounted to 601 million tons in 2000-2015 [3]. The cultivation of mandarin in Abkhazia, as in other regions of the world, faces difficulties in protecting the plants from pests [1,4-6]. More than 50 pest species have been recorded on citrus crops in the Black Sea region of the Caucasus. These species differ in the degree of their influence on the state of citrus crops and on the size and quality of the harvest [6-8]. The appearance in the region of the brown marmorated stink bug (Halyomorpha halys Stål.) reduced the yield of standard mandarin fruits and led to a 40 % decrease in exports of the crop from Abkhazia to Russia in 2017 [9]. Mandarin protection technologies in Abkhazia still traditionally rely on organophosphate pesticides, mineral oil emulsions and lime-sulfur broth. Farmers use modern insecticides, as a rule, without a scientific approach. This leads to increased pest resistance to the active substances used, stress on the plants, reduced yield, and the accumulation of pesticide residues in agrocenoses. Optimizing mandarin protection schemes using modern insecticides and insectoacaricides from the avermectin, pyrethroid and neonicotinoid classes is therefore an urgent task. An important issue in the cultivation technologies of fruit (including subtropical) crops is the influence of particular technology elements on fruit quality (size and weight), as well as on productivity as an integrated indicator [3].
Plant protection, as an element of crop cultivation technology, contributes not only to preserving the yield but can also increase it and improve fruit quality [10,11]. The aim of the research was to study the effect of new mandarin pest protection schemes on fruit weight and crop yield in the humid subtropics of Abkhazia.

Materials and methods

The studies were carried out in 2019-2020 on full-aged plantings of cv. Unshiu mandarin in the Gulrypsh district of the Republic of Abkhazia, on the base of the Institute of Agriculture of the Academy of Sciences of Abkhazia. The experiments were laid out according to generally accepted methods [12,13]. The experimental scheme included eight variants. In each case, four treatments were carried out: in the second decades of June, July, August and September. The fruits on the experimental plots were harvested in one day. Mandarin yield was determined by the gravimetric method (Scout Pro SPS202F scales) during the harvesting period, according to the methodology for state variety testing of agricultural crops [13]. Statistical processing of the research results was carried out according to Dospekhov [14] using MS Excel 2010.

Results and discussion

Analysis of the data showed that the optimized plant protection schemes affect the weight of mandarin fruits. The lowest fruit weight was recorded in the control variant, 38 g (Table 1). Applying the plant protection scheme adopted in the region (Standard) increased the average fruit weight by 55.2 %. The greatest increase in fruit weight was noted with the protection schemes of variants 5 and 6: the average fruit weight increased by 89.5 and 94.7 % relative to the control variant, respectively, and by 22.0 and 25.4 % relative to the regionally adopted treatment (Standard). The lowest fruit weights among the treated plots were noted in the variants using Diatomite, which may be due to the insufficient effectiveness of these variants against mandarin pests. Their fruit weight was lower than in the Standard variant and, in the case of Diatomite 3 %, differed only slightly from the untreated control. The number of fruits per tree varied on average from 1108 to 1386. The yield per tree and per hectare were determined by calculation. The highest yield values were obtained in variants 5 and 6, exceeding the control variant by 85.7 and 91.7 %, respectively, and the Standard variant by 36.8 and 41.3 % (Figure 1). In addition to total yield, an important aspect is the marketability (commodity structure) of the harvested crop, which comes first in the production of fruit and berry crops [15]. In the control variant, no fruits of the 1st grade were harvested (Table 2): the collected fruits were small and damaged by the citrus rust mite (Phyllocoptruta oleivora Ashmead) and the brown marmorated stink bug (Halyomorpha halys Stål.), and the share of non-standard fruits was the highest (74.3 %). The commodity structure of the crop in the different variants can be clearly seen in Figure 2. The largest number of 1st-grade fruits was harvested in variants 5 and 6, and slightly fewer in variants 3 and 4; the latter two show a commodity structure similar to the Standard variant.
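The relative increases reported above can be reproduced directly from the quoted mean fruit weights. The short sketch below is purely an arithmetic cross-check against the text, not a re-analysis of the original data:

```python
# Cross-check of the reported percentage increases in mean fruit weight.
control_w = 38.0                 # g: mean fruit weight, untreated control
standard_w = control_w * 1.552   # +55.2 % vs control, i.e. ~59 g
variant_w = {5: 72.0, 6: 74.0}   # g: mean fruit weight, variants 5 and 6

for var, w in variant_w.items():
    vs_control = (w / control_w - 1.0) * 100.0
    vs_standard = (w / standard_w - 1.0) * 100.0
    print(f"variant {var}: +{vs_control:.1f}% vs control, "
          f"+{vs_standard:.1f}% vs Standard")
# Output matches the reported 89.5/94.7 % (vs control) and ~22.0/25.4 %
# (vs Standard); the small residual differences come from rounding of the
# published weights.
```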
The Diatomite protection schemes could not reliably protect the fruit from damage by the citrus rust mite and the brown marmorated stink bug, so about half of the fruit turned out to be non-standard. In addition, a white residue of the preparation remained on the fruits at harvest, which had to be washed off before sale.

Conclusion

Thus, the highest economic efficiency was shown by the mandarin protection schemes alternating Confidor Extra, Vertimec and Karate Zeon in a tank mixture with Cytovit fertilizer, and alternating Metomax and Karate Zeon in a tank mixture with the Vertimec insectoacaricide. They increased the average fruit weight by 22.0-25.4 % and the yield by 36.8-41.3 % relative to the Standard variant. Fruits of the 1st grade accounted for 63.3-65.6 % of the total harvest.
Community-Enhanced Social Prescribing: Integrating Community in Policy and Practice

The NHS Plan is introducing social prescribing link workers into GP surgeries in England. The link workers connect people to non-health resources in the community and voluntary sector, with the aim of meeting individual needs beyond the capacity of the NHS. Social prescribing models focus on enhancing individual wellbeing, guided by the policy of universal personalised care. However, they largely neglect the capacity of communities to meet individual need, particularly in the wake of a decade of austerity. We propose a model of community-enhanced social prescribing (CESP) which has the potential to improve both individual and community wellbeing. CESP combines two evidence-informed models, Connected Communities and Connecting People, to address both community capacity and individual need. CESP requires a literacy of community which recognises the importance of communities to individuals and the importance of engaging with, and investing in, communities. When fully implemented, the theory of change for CESP is hypothesised to improve both individual and community wellbeing.

Introduction

Community-enhanced social prescribing (CESP) is a new model of social prescribing combining community engagement, organisational change and individual-level practice.

UK Policy Context

While the imperative of social prescribing has its contemporary policy roots in the drive to engage individuals through the provisions of universal personalised care (NHS England 2019c), its long-term effectiveness will depend on integrating this with a new formulation of community care and engagement; a take on integration which existing narratives have, to date, largely failed to embrace. The history of health and social care integration in the UK is a long one. In 2009, Ham and Oldham noted that while the objective of achieving closer integration of health and social care had been crystallised in the Health Act 1999 a decade earlier, as a policy aim it had spanned half a century. Central to this aim has been the desirability of integrating the community services of the health and social care sectors. Yet for all the subsequent policy drive towards its implementation, vehicles for convergence have left communities almost entirely stranded at the roadside. As systems and models for service commissioning and delivery are repeatedly engineered and re-engineered towards meeting the vision of effective cohesion in health and social care organisations, the prize of integrating both of these sectors with community sector organisations and communities themselves has barely been considered. Clearly, integration has long been prominent in health policy as a critical dimension for bringing about a focus on community health. The Five Year Forward View for the NHS cited the necessity of a new community health approach, setting out proposals for multi-specialty community providers and primary care based models to achieve it (NHS England 2014). Such models exemplified a basis for closer partnerships between primary, community, mental health and social care services and were viewed as effective (Collins 2016). These new models of integration have gone on to shape a vision for the NHS in which the transformation needed to ensure progress on integration is to be pursued through strategic partnerships, to be consolidated by 2021 as Integrated Care Systems.
So, although the multiple complexities of integration have made the path to its achievement inordinately long, it seems that something like an endpoint is in sight. But an endpoint at which the success of integration is judged by its scope to overcome the "organisational, professional, legal and regulatory boundaries within the health and social care sectors" (National Audit Office 2017, p. 5) is surely not really an endpoint at all. Defining integration as being about "improving patient outcomes, satisfaction and value for money" (ibid, p. 5) while remaining silent on how the communities of which patients are members are to be engaged is no way to ensure that they are placed "at the centre of the design and delivery of care" (ibid, p. 5). Rather, it demonstrates a process of policy formulation in which the dimensions of management and governance in integration are privileged over its human values and social potential. Meanwhile in social care, the value of communities in shaping care models and their implementation has been an implicit guiding notion for many decades, having at times been explicitly advocated as the necessary orientation for the social work profession. Notably, the Barclay report, commissioned by government as a two-year review of social work in England and Wales, recommended, through processes akin to today's 'co-production', a re-casting of professional practice towards the purposeful engagement of informal carers and communities. For social work, Barclay (1982) recommended an active professional relationship of brokerage and support for individuals' social networks of support. Critically, alongside a restatement of what was then a largely consensual ethic of citizen entitlement to public welfare services, the report envisaged a devolution of power to citizens as users of services and members of communities, which contributed to the subsequent disregard by the Thatcher government for its recommendations, and the rapid displacement of a primary community social work ethic by one of statutory duty. Many examples of community-based social work had, however, arisen from this period. These echoed the increasing focus in both central and local government on policy for active citizenship and community renewal. In this context, citizenship and community engagement were pursued with increasing momentum as key principles for public policy, with communities increasingly seen as key to localising forms of democratic accountability and as sites for civic participation (Newlove 2011). Local Strategic Partnerships and Area Agreements became the key strategic means of advancing ideas of boundary-spanning active citizenship and social solidarity in practice from 2000 onwards (Geddes et al. 2007). The Labour Government's vision for strong and prosperous communities further shaped and promoted these goals in its White Paper of 2006 (Department for Communities and Local Government 2006). More recently, the 2018 Civil Society Strategy articulated a vision for creating social value from civil society: communities thriving through partnership and reciprocal contribution (Her Majesty's Government 2018a).
Undoubtedly this policy trajectory is, in part, determined less by a moral case for local empowerment than by a politically contested imperative for social cohesion (as set out in the Integrated Community Strategies Action Plan for building strong, integrated communities (Her Majesty's Government 2019)), and by a policy drive that seeks to legitimate the displacement to communities of responsibility for aspects of the public service role which, after a decade of financial 'austerity', public services can no longer provide. Nonetheless, it is a trajectory from which multiple pointers to the value of local citizenship perspectives could be drawn to inform a paradigm for integration; one in which meeting the claim of the individual citizen for 'personalised' services over which they exercise individual control is seen not as separate from, but as integral to, building the social or community capital on which they depend, and to which, as citizens, they contribute. A paradigm for this kind of integration does, though, appear elusive. Public health strategy routinely foregrounds the value of population-level initiatives aimed at community wellbeing (Public Health England 2015) and, within this, the 'asset-based approach' features consistently as an expression of the community's potential contribution to promoting individual health and preventing illness. However, the discourse of integration generally fails to stimulate a practice in which the value of these different policy traditions is realised, leaving the individual and community dimensions of health in largely separate domains. This matters, not least because of the current policy priority of addressing loneliness and social isolation (Her Majesty's Government 2018b).

Social Prescribing

The current policy imperative of social prescribing crystallises the need for action in this area. It also highlights a fresh challenge for health and social care practitioners in bridging the domains of public services and local communities to support people with a range of health problems to access opportunities for participation in these communities. Requiring a step-wise implementation centred on primary care networks, social prescribing policy intends to stimulate innovation and systematise practices to which primary care practitioners have, in some measure, long been committed. In articulating intended outcomes for 'communities, the service system and people', and in the new provision of a thousand link workers and a dedicated Social Prescribing Academy to achieve them (NHS England 2019b), it appears possible in principle that a strongly integrative approach to both individual and community domains could now move centre stage. The evidence base for social prescribing, however, is less well developed than UK policy documents may imply. The most rigorous systematic review of social prescribing (Bickerdike et al. 2017) included only 15 studies, of which only one was a randomised controlled trial, and that was conducted over 20 years ago (Grant et al. 2000). This review found the evidence to be of low quality, though most studies were positive about social prescribing. Later descriptive reviews have reached similar conclusions (Chatterjee et al. 2018; Pescheny et al. 2018). A further review exploring the process of social prescribing synthesised findings from 109 studies in four categories: exercise (n = 66), green prescriptions (n = 7), arts on prescription (n = 5) and generic social prescribing schemes (n = 32) (Husk et al. 2020).
This review highlighted the importance of context and capacity, but also that social prescribing should be developed in line with complex intervention and behaviour change approaches. It is recognised that further high-quality research is required (Public Health England 2015; National Institute for Health and Care Excellence 2016), but approaches that shift the focus from individual to community wellbeing must be informed by relevant theory. It is likely that a shift in practice towards a focus on community wellbeing will take some time. Sitting within the broader policy of universal personalised care (NHS England 2019c), and taking its place as a component of an 'all age whole population approach to personalised care' model, it can be argued that the genesis of social prescribing owes significantly more to the individualisation of social diagnosis than to the collective development of social solutions. This is not to contest the importance of personalisation as a way to understand the necessarily individual and often complex nature of need, nor of meeting it with the sensitivity and optimism that, in relation to mental health, is the ethical heart of the recovery approach (National Voices and Think Local Act Personal 2014). But while the precepts of personalised health care are largely unarguable, its core application as a tool for personal independence can be seen as a counterweight to the cause of encouraging interdependence, or worse, through its association with individual budgets in the particular context of a decade-long politics of harmful public service cuts (Power 2014), as having provided for an unjust shift of responsibility for these cuts from the public to the individual domain (Spicker 2013). Equally, the impacts of these cuts are likely to be experienced by service providers as having compressed service access to those who fall within increasingly narrow eligibility thresholds. Even less time is available for the involvement of their staff in a whole-systems approach to the learning that necessary shifts in professional culture demand. The challenge of professional engagement in a practice with, and for, communities often rests on the fact that the meaning of 'community' is theoretically complex and hard to operationalise. It is a challenge that needs to be taken seriously. Clearly, invoking 'community' as a single definitional category is to deny the complexity and diversity of communities in a way that makes no sense, and a linear vision of what constitutes community will afford little progress in this area. Yet neither is complexity in itself a justification for partitioning off from the professional sphere the potential to understand and mobilise the social context of people's lives, especially in the high-profile context of injunctions to address loneliness and isolation (Her Majesty's Government 2018b). Appreciation of an individual's identity, or more accurately their multiple identities, needs to take account of the part that 'community' plays in this. The importance of practitioners having a conscious 'literacy of community' is thus paramount.

A Literacy of Community

Multiple identities imply membership of multiple communities. Communities of neighbourhood, interest, friendship, employment, faith or politics may, with many others, all play a part in the formation, authentication or expression of individual identity.
While both health and social care professionals may extrapolate communities of significance for individual patients or clients, this is rarely the starting point that it needs to be if a holistic approach to care or support is to be offered. Moreover, viewing the needs and potential of an individual as a whole person implies an understanding of the meaning which an individual will attach to their whole social system; the value and significance that they associate with its key social components. Clearly the networks to which individuals relate will feature in their view of treatment and support possibilities, and a careful appreciation of these should be a central starting point for professional engagement, particularly where the basis for engagement is currently concerned principally with the diagnosis of need and the provision by which it might be met. With an understanding or 'literacy' of what community means to individuals comes both the scope to tailor the service response in a personalised way and to understand the potential value of those communities of significance as partners in care or support and as legitimate settings for the exercise of an individual's civic participation. In knowing the importance of asking 'what does community mean to this person in front of me?', the worker is likely to be better equipped to plot a course through complexity and identify what their own part might be in enabling community connection. In the context of social prescribing it will, in summary, be important for service agencies to recognise and enable the role of the professional as a catalyst in helping to build community, as well as individual, capacity. It will also be important for them to formulate a bridge between the two, investing in this form of engagement for the social return that the deliberative engagement of communities as social network assets can represent.

From Individual to Community Wellbeing

Social prescribing typically addresses individual-level outcomes such as social isolation, loneliness, or a lack of individual connection to local resources. Loneliness, in particular, is of significant concern in the UK, and government strategies have been developed to address it (Her Majesty's Government 2018b; Welsh Government 2019). It is apparent across the life course (Victor and Yang 2012) and is associated with depression (Erzen and Çikrikci 2018), increased difficulties in activities of daily living (Shankar et al. 2017), increased health service usage (Gerst-Emerson and Jayawardhana 2015) and increased mortality (Holt-Lunstad et al. 2015). As a predominantly subjective experience, loneliness is conceptually distinct from the more objective concept of social isolation, which denotes an absence of contact with people (Zavaleta et al. 2017). Although loneliness can occur in crowds, social isolation is often an antecedent of loneliness. It is therefore important to support people to maintain or make new social connections. However, as loneliness refers to individual perceptions of self and one's social environment, it is more closely aligned with an individual model of health. Loneliness, as a characteristic of individuals, has therefore become a target for social prescription. This focus on individuals, however, neglects the social environment in which people live. As evidence suggests that community connections are associated with lower levels of loneliness, particularly in deprived communities (Kearns et al.
2015), it is important to support the development of opportunities for social connection at both individual and community levels. Higher degrees of community engagement are associated with lower degrees of social isolation (De Koning et al. 2017). The interconnectedness of resources, groups and individuals in any given community can serve to promote social contact and prevent isolation and loneliness, thereby improving individual health outcomes. Community-level outcomes are not currently foregrounded in the role of a social prescribing link worker, as they work with individuals. However, community-enhanced social prescribing (CESP) recognises that individuals enrich the civic health of communities by developing opportunities for more active engagement with them. Communities are thus not only potential sources of health benefits for individuals; they also provide opportunities for people to enrich existing capacity and develop new assets for the benefit of all. By integrating a deliberative community development dimension into social prescribing, CESP works towards enhancing community as well as individual wellbeing. It helps to strengthen the fabric of civic participation and develop community citizenship, all core components of a healthy community (Holden 2018). The CESP process should result in an increase in 'sense of community' (McMillan and Chavis 1986) for both individuals and communities. This is an important outcome for social prescribing in general, and the CESP model in particular, as it supports people to enhance their connections with, and contributions to, their communities, as well as deriving benefits from these. The Brief Sense of Community Scale (Peterson et al. 2008) is one approach to measuring this, as it covers needs fulfilment, community membership, community influence and emotional connection to community, and has been widely used internationally (Wu and Chow 2013; Wombacher et al. 2010; Coulombe and Krzesni 2019; O'Connor 2013). CESP is conceived as a way to connect professional practice in primary care networks with communities. It aims to impact positively on the culture of primary care practice and provide a way to connect it with community assets, whilst recognising that communities are dynamic and that capacity-building may be required. CESP brings a focus on community wellbeing into social prescribing. To arrive at this, we have integrated two existing models and bodies of evidence: Connected Communities and Connecting People.

Connected Communities

Connected Communities (CC) methodology is founded on the activity and experience of a three-year Big Lottery funded study conducted by the RSA in partnership with the University of Central Lancashire (UCLan) and the Personal Social Services Research Unit (PSSRU) at LSE in seven sites across the UK (Parsfield et al. 2015). Incorporating key ideas concerning the co-productive engagement of communities in mental health inclusion (Morris and Gilchrist 2011; Brophy and Morris 2014; Morris 2012), the study blended deliberative community engagement with social network analysis in a staged process aiming to understand the nature, value and potential of social networks for wellbeing at a local community level, and to enable local communities to apply this in developing, implementing and evaluating an intervention. Firstly, a local community partnership involving voluntary sector, community and civil society organisations is identified to work with the research team.
Through this community partnership, participatory community research is conducted on the basis of an identified issue, set of issues or challenges. These are diverse and, in our work to date, have included the isolation of single mothers, the exclusion of long-term mental health service users, the fragmentation of communities and lack of social cohesion, the engagement of young people in communities through social media, and the integration of the life of an educational academy into that of the community of which it is part. Community members from the study area become researchers in their own community, receiving training and support from the university. Community researchers administer a community survey, collecting data on people's experience of local connections as measured by the links and contacts that enable people to seek help, that embody important forms of trust and mutuality, and that have an impact on wellbeing. These data are then translated into social network maps that depict visually the clusters, type, density and range of individual network relationships within the study area, alongside data on levels of loneliness, mental wellbeing, and residents' satisfaction with, and sense of, community. This information is presented back to the community itself through a reflective, focus-group-based process involving the community partnership, the researchers and respondents. Casting fresh light on who members of the community are to each other, this process can facilitate communities in assessing how connections can be mobilised to improve capacity, and lead to the design of bespoke interventions to address the commonly identified issues or development challenges. Funding for future interventions is secured and, on this basis, the intervention is developed and implemented over time by the community partnership (with academic support as required for the technical, analytical and economic aspects of the intervention as it develops). The resulting intervention may be small scale; for example, the establishment of a project that provides social connection and support opportunities for previously isolated single mothers, or the agreement of multiple community agencies to synthesise their activities and collaborate in disseminating information to grow previously unseen community connections and assets. A second project involves evaluation of the intervention, which, critically, includes an economic analysis. Evaluation shows that these projects invariably become focal points for a broader approach to sustainable local community activity and for additional projects for which there is, by then, a sufficiently convincing local community infrastructure to enable funding to be sought successfully (Parsfield et al. 2015). In enabling different forms of network potential to be identified and understood, the CC approach has particular relevance to the wellbeing of community members. It offers preventive services a way of strengthening their knowledge base for practical prevention at primary, secondary and tertiary levels, based on engaging professional staff jointly and co-productively with their communities.

Connecting People

Connecting People (CP) (Webber et al. 2016) is a dynamic model of practice which aims to enhance an individual's social network (see Fig. 1). It has been developed from good practice in a range of statutory and voluntary sector agencies in supporting people to make new social connections (Webber et al. 2015).
The relationship between the worker and the individual (a shorthand term for service user, patient, citizen, etc.) is central to the model, though it is an evolving, mutual relationship which is not typical of traditional 'clinician-patient' roles. Conceived as spinning circles, the process requires a partnership where both circles revolve at a pace to suit both the worker and the individual. The circular motion also indicates that the intervention process is a complex rather than a linear one, as the outcomes do not always emerge predictably as a direct consequence of intervention. Instead, social networks are enhanced as a by-product of this model. New relationships could form, mutuality be developed and the potential for reciprocity created at any point in the intervention process. The circles are represented as Catherine wheels, with the sparks emitted in all directions representing the unpredictability of whether or when social networks are enhanced. The agency in which the intervention occurs, whether this is a statutory service, a voluntary or private sector organisation, a social enterprise, or something else, is crucial. It is depicted on the model as underpinning and being core to the intervention. This demonstrates the responsibility of the agency to support the rest of the process, since without a supportive agency it is much harder for the rest of the intervention to run smoothly. The larger circle on the right of Fig. 1 represents the process that an individual undertakes which can lead to social network development. Every instance is different but, in general, the process involves catalysing ideas and experiences. This is where the person is exposed to new ideas and activities, or has their existing ones encouraged and developed. This process may introduce them to new people and activities, further develop their skills and interests, and enhance their social confidence. An ultimate goal of this process is to develop networks with new people and organisations which enhance that person's access to social capital. The process that the worker follows (represented by the larger circle on the left of the model) is of equal importance in the intervention to that followed by the individual. This assumes that the worker will need to develop their own social network knowledge in order to support the individual on their journey. Workers will need to build relationships with the person and often their family, friends and local community, as well as with other local organisations. They will need to foster trust through their reliability and interpersonal skills; identify opportunities; engage with the individual's local community; develop their own networks and resources and remember these for future use; adapt to new ideas; and utilise their contacts in the process of supporting the person they are working with. It is important that the worker can think creatively and use their resources effectively. Possible barriers to social network development are represented on the model as two counter-rotating circles which frustrate the motion of the two main circles. Barriers can be diverse and can frustrate both the worker and the individual, potentially posing considerable challenges. The worker and the individual need to work together to overcome these potential barriers to ensure the intervention cycle progresses. Our research (Webber et al.
2019) has found that when these systems and processes occur, and the intervention moves in the dynamic way envisaged in the theoretical model, the outcomes include an enhancement of the individual's social network, thereby increasing their access to social capital (Webber and Huxley 2007).

Community-Enhanced Social Prescribing

CESP is a conceptualisation that utilises the two models described above to bring together the embedded assets, networks and resources of local communities in order to support individuals who are seeking to improve their wellbeing. It requires a coordinated approach from local agencies which looks beyond the needs of individual organisations to building environments that help people to help themselves. This approach helps isolated people to engage with local networks, resources and community assets; a shift towards a focus on the enabling environment of the kind indicated in the NHS Long-Term Plan (NHS England 2019a). One essential component of an enabling environment is repeated opportunities for multi-directional collaborations for health and care. Over time, the co-creativity that emerges from such activity builds networks of high-performing teams and local communities for health. Geographic areas provide opportunities for such shared development, and we envisage CESP working within primary care networks, which cover populations of between 30,000 and 50,000 people. To realise this ambition at scale, the whole system needs to support such localism through processes that Thomas has described as 'community-oriented integrated care' (Thomas 2017). Change is required at two levels to create the conditions in which CESP can operate. Firstly, at the organisational and systems level within the primary care network, work needs to be undertaken to align organisational objectives with a shared focus on community wellbeing. This could involve a variety of methods, including whole-system events using the large group method of real-time strategic change (Jacobs 1997); experience-based co-design for stakeholders to reflect on data in the light of their experiences and participate in coordinated improvements; or learning sets for locality leadership teams, local organisations and citizens to consider how best to make CESP work for them. To inform the process of organisational change, we propose convening a local community citizens' panel of six to twelve volunteers reflecting the socio-economic and cultural geographies of the local area. These volunteers will be members of the public who are active within the organisations, networks and businesses embedded within these communities and who have strong local knowledge. They will be appointed for a 12-month period, after which the panel will be refreshed. They will play a key role in mapping community assets (see below); steering the social prescribing initiative and the attached link worker; and providing a strategic community alliance for the primary care network, thus helping to shape its approach to community engagement. The citizens' panel would also have the potential to participate in the governance of the primary care network. Secondly, at the individual level, a social prescribing referral system for agreed target groups (e.g. people with long-term conditions or mental health problems) will need to be established. Link workers would be trained in the Connecting People approach so that they can use it with the people with whom they work.
This approach will enable CESP to be applied in locally-relevant ways that also help to incrementally transform the whole system towards effective use of local networks, resources and community assets. There are two essential processes of CESP which can be summarised as 'contextualising the community' and 'engaging with the community'. Contextualising the Community The primary care network facilitates the citizens' panel to map local assets, networks and resources, which are accessible to members of the local community, particularly those which are informal or not widely publicised. It is important that this is not merely a list of voluntary organisations in the local area, but also includes knowledge of the local neighbourhood networks which may be more informal and known only to local panel members. If the primary care network wished to fund it, panel members could become community researchers and, as in the Connected Communities model described above, collect data on social connectivity and asset utilisation within the local population. In any event, this process must be iterative and continuous since it needs to reflect the individual, diverse and dynamic characteristics of active communities, and use this to inform the organisations that are of key importance to effective social prescribing, particularly primary care networks. Using appropriate physical or social media, this multi-dimensional resource is then shared in public spaces within the local community, such as primary care surgeries, public noticeboards, community centres and social media groups, with members of the public invited to contribute and shape it further. Contact details for people associated with the assets are also collated so that they can be accessed as required. The citizens' panel is responsible for ensuring that these maps are reviewed and revised regularly. Engaging with the Community Each primary care network employs a link worker who is responsible for social prescribing. The link worker works closely with the citizens' panel to fully understand the assets of the local community, and also supports the panel and feeds into the mapping process. They utilise a model of social prescribing informed by Connecting People, which requires full and active engagement with the community with whom they are working. Link workers engage with people within primary care settings who are seeking to improve their wellbeing by engaging with local groups, networks, resources, activities or assets. They follow the Connecting People steps of establishing readiness; mapping the individual's existing networks and access to local community assets; setting goals for enhancing their wellbeing and planning which local resources might assist; supporting them to engage with community resources; reviewing with them their progress towards their goals; and supporting them to overcome barriers to community engagement. Link workers' engagement with local people, the citizens' panel and the wider primary care network gives them an important role in ensuring that the local asset map is a dynamic resource that is kept up to date, is relevant and fit for purpose. As well as using it in their daily work, they will also continually update it and promote its use in the wider community. Working with the citizens' panel, they will also help to identify gaps in local provision. It is hypothesised that increased knowledge and use of local assets, resources and networks will bring benefits for both individuals and communities.
Theory of Change It could reasonably be expected that operationalising the CESP model in the way described here will achieve a number of key outcomes as suggested in the theory of change model (Fig. 2). Figure 2 summarises the processes described above. It assumes a context characterised by social and health inequalities, community fragmentation and social disadvantage, where local services are not well connected and individuals experience loneliness, low wellbeing and a poor sense of community (Fig. 2, left column). (The three columns of Fig. 2 are labelled Context; Mechanisms and underpinning theory; and Outcomes.) Social prescribing link workers use the Connecting People model to support individuals to engage with local community assets (Fig. 2, bottom of middle column). The link workers draw upon the local asset map developed by the citizens' panel, which is informed by Connected Communities methodology (Fig. 2, top of middle column). The third set of processes in the theory of change model relate to organisational change whereby connections between organisations are enhanced, awareness of community assets is increased and whole system events bring people and organisations together to consider how best to meet emerging local need (Fig. 2, middle centre of middle column). If fully operationalised, CESP should enhance individual and community outcomes (Fig. 2, right column). For individuals able to utilise and contribute to community assets, we would expect them to experience improvements in their sense of community, general wellbeing and access to social capital. Greater community engagement could also contribute to reducing loneliness, social care needs or help to support the management of long-term conditions. These outcomes are expected on the basis of findings of Connected Communities, Connecting People or similar initiatives to improve community engagement (e.g. Parsfield et al. 2015; Webber et al. 2019; O'Connor 2013). Uniquely for a social prescribing model, we anticipate that CESP will improve community-level outcomes (Fig. 2, right column), suggesting that the model has the potential to benefit all community members irrespective of their receipt of a social prescription. While outcomes on this scale may take longer to become apparent, we anticipate that they will give rise to improvements in a collective sense of community, community wellbeing and the potential for community capacity to develop in the future. Greater awareness of a community's assets should lead to an increased use of community resources and local investment in them. The citizens' panel will also help to identify gaps which, if addressed, will further enhance community wellbeing, thus creating a virtuous cycle. Improved outcomes are dependent on changes in the local health and social care infrastructure. Local primary and secondary health care services need to work towards being part of a whole system, integrated with local community and voluntary organisations (Fig. 2, middle of central column). This can be challenging where there are significant differentials in power, security of funding and professional priorities between agencies. However, where inter-agency coherence can be achieved, the improved connectedness can stimulate local initiatives which can improve community wellbeing in ways not previously foreseen, such as those concerned with improved resilience or crisis response. Discussion Community wellbeing is a multidimensional construct encompassing a number of domains (Phillips and Wong 2016; Sung and Phillips 2018).
There are many influences on the subjective and objective wellbeing of a community. It is unlikely that a single factor has an over-riding influence, but inter-connections between domains are likely to be important. Uniquely among social prescribing approaches, the CESP model described in this paper integrates individual and community level activity and produces outcomes for both individuals and communities. It foregrounds the role of connectedness within local community and health networks and is an integrated contextual intervention to optimise outcomes at individual and community levels. Enhanced community engagement is likely to reduce the loneliness and social isolation of individuals, maximise their reciprocal contribution to their community and improve the availability of local resources for the wellbeing of the community as a whole. It remains to be seen if the implementation of CESP can address health inequalities within and between communities. This is a particularly pressing problem in England, where life expectancy has fallen in the last ten years in the most deprived communities outside London (Marmot et al. 2020). The social determinants of health are not being adequately addressed by public policy, health and social care services or the voluntary and community sector. This is partly as a result of a decade of austerity in which services have been starved of funds and investment, but it is also due to the ways in which individualism, appearing as an implicit guiding principle and strongly politicised priority for society, has worked to de-emphasise collective action and its value. Individualism and collectivism have opposite associations with loneliness (Heu et al. 2019), and a shift towards policies which promote collective action may also improve individual outcomes. At the time of writing in 2020 and in light of the Covid-19 pandemic, communities are finding new ways of coming together to identify and support each other, in particular, people who are vulnerable and therefore self-isolating. Mutual aid groups have been set up in neighbourhoods; neighbours are establishing WhatsApp groups in their streets; communities are looking out for those without friends or family who may need practical help with shopping or medicine collection; and befriending groups are being set up offering phone or online contact. We are witnessing community organising on an unprecedented scale: collective action to meet individual need. Like the future impact of the Covid-19 crisis itself, the extent to which community groups will work alongside statutory health and social care services in supporting the fabric of communities is unknown. However, it is apparent that when communities self-organise in the way proposed by the CESP model, there is strong potential for improved individual and collective outcomes. Conclusion The CESP model brings together two established approaches, Connected Communities and Connecting People, with their own different and distinctive evidence bases, to form an integrated whole through which we can address the twin dimensions of working with individuals and connecting them to communities empowered to better understand their assets and needs. Both are required in an integrated model for the success of social prescribing. However, this is as yet unproven. Research is required in different contexts to evaluate how contextually-bound the model is likely to be, and which conditions need to be present for community wellbeing to be enhanced.
Civic participation, as a necessary dimension of being a citizen, is envisaged as a precondition for the success of the model. We suggest that community wellbeing will be enhanced by improved civic participation through a reciprocal wellbeing transfer among individuals and to the community as a whole, benefitting both individuals and communities. Data Availability: Not applicable. Conflicts of Interest / Competing Interests: None. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Polybenzoxazine/carbon nanotube nanocomposites as a polymeric sensing material for volatile organic compounds The emissions of volatile organic compounds (VOCs) have hazardous effects on humans and the environment, and hence they should be detected and reduced. In this study, polybenzoxazine (PBZ) and amine-functionalized multiwall carbon nanotube (MWCNT) composites were synthesized as a sensor for VOCs. MWCNTs were functionalized with two types of diamines, namely, 1,6-hexanediamine (HDA) and phenylenediamine (PDA). HDA- or PDA-treated MWCNTs were loaded into the benzoxazine matrix at different weight percentages (0.1, 0.3, 0.5, and 1%). FTIR analysis confirmed the chemical attachment of the two types of diamines on the MWCNTs. XRD diffraction and scanning electron microscopy (SEM) were used to investigate the nanofiller morphology and clarify the differences between pristine and amine-functionalized MWCNTs. Thermal gravimetric analysis (TGA) was used to study the composites' thermal stability and degradation behavior. It was found that, in contrast to neat PBZ, the major degradation temperature of the PBZ/0.5% MWCNT-PDA nanocomposite was enhanced by 10%. The electrical conductivity of PBZ was 6.32 × 10⁻⁹ Ω⁻¹ cm⁻¹, which was enhanced to 6.11 × 10⁻⁷ in the composite with 1% MWCNT-PDA. This material was tested as a VOC sensor for methanol, acetone, and toluene, and the PBZ/1% MWCNT-PDA composite responded to all the vapors. Introduction Ignition of fuel in vehicles, chemical derivatives, industrial effluents, burning of coal, and fertilizers are considered sources of environmental pollution. Many chemicals that have low boiling points and evaporate at room temperature are called volatile organic compounds (VOCs) [1]. These VOCs severely affect the atmospheric environment and organic life; thus, VOCs should be detected rapidly in the air. Humans exposed to VOCs can develop many infections and diseases [1]. Benzene, toluene, xylene, ethylbenzene (BTEX), and aldehydes are the major components of VOCs [2]. Moreover, VOCs resulting from the degradation and cracking of different types of polymers such as polypropylene (PP) severely affect the environment and can limit their uses [3]. Many materials have been reported for monitoring VOCs, such as ZnO, methylammonium lead iodide perovskite and tungsten disulfide [4][5][6]. ZnO, a typical metal oxide semiconductor (MOS), has potential uses in the detection of hazardous gases because of its wide bandgap, n-type transport characteristic, and high electrical performance. Meanwhile, doping ZnO materials is an efficient approach to increase their sensing capability [7,8]. Polymers play an important role when used as the sensing material, offering advantages such as operation at low temperatures (less than 100 °C) and low cost [9]. Thin films are the main form of polymeric sensors due to their high surface-area-to-volume ratio as well as their active porous surface. These features augment the sensing process for the target gases. Polymeric sensors have many applications, such as optical sensors, mechanical sensors, and resistive sensors [9]. Polymer composites exhibit superior properties over neat polymers, and thus, they have also been reported for monitoring VOCs [10]. Carbon nanotubes (CNTs) have been utilized with different polymers as sensing composites [11][12][13].
Electrically conductive aerogels consisting of carbon nanotubes and cellulose have previously been used as vapor sensors. To test their vapor sensing capabilities, the electrical resistance of these aerogels was measured after exposure to vapors such as methanol, ethanol, toluene, and others. The results revealed that CNT-cellulose composite aerogels exhibit rapid response, high sensitivity, and good reproducibility to both polar and nonpolar vapors [14]. Polybenzoxazines (PBZ), a kind of phenolics, are high-performance thermosets with a range of features that overcome some drawbacks of resole- and novolac-type phenolics. Nevertheless, it is hard to disperse CNTs homogeneously in the PBZ polymer because CNTs are insoluble in many solvents. In addition, because of their high surface energy and Van der Waals forces, CNTs tend to agglomerate into tight bundles. The covalent connection between the reactive chemical groups of PBZ and CNTs, or modification of the CNT surface with functional groups that will further react with the polymer matrix, can be used to improve interactions between the PBZ and CNT nanofillers. Yang et al. synthesized a highly dispersible pyrene-functionalized benzoxazine (Py-PBZ)/single-walled carbon nanotube composite material. The novel material showed high thermal stability after thermal curing [39]. Wang et al. [40] synthesized a PFBZ-MWCNTs hybrid material that achieved good mechanical strength and high electrical conductivity (7 × 10⁻⁵ S/cm) as a result of the good compatibility between the matrix and filler. This work focused on the preparation of nanocomposites from PBZ and amine-functionalized MWCNTs for sensing VOCs. MWCNTs were oxidized to provide the surface with carboxyl groups for the interactions with the diamines. Functionalization of carboxylated MWCNTs (MWCNTs-COOH) was carried out using two different diamines, namely, 1,6-hexanediamine (HDA) and phenylenediamine (PDA). It was thought that the surface functionalization of MWCNTs by amines could enhance the sensing properties of the composite. The composites were characterized by different techniques, including FT-IR, XRD, SEM, thermogravimetric analysis, and electrical conductivity measurements. The PBZ/PDA-functionalized MWCNT nanocomposite showed the highest electrical conductivity. Moreover, the new composite materials were tested as VOC sensors for different solvents such as methanol, acetone, and toluene. This is the first time that such a novel composite has been tested as a VOC sensing material. Materials Dodecyl amine, Bisphenol-A, and paraformaldehyde were obtained from Kishida Co., Japan. Ethyl acetate, nitric acid, and sulfuric acid were obtained from El-Naser Co., Egypt. 1,6-hexanediamine (HDA) and phenylenediamine (PDA) were obtained from Fluka, Switzerland. Multi-walled carbon nanotubes with a diameter of 10-20 nm and length of 0.1-10 μm were acquired from EPRI (> 90% pure). All other reagents were obtained from Sigma-Aldrich and used without further purification. Preparation of amine-functionalized MWCNTs Two consecutive steps were carried out for the functionalization of MWCNTs: oxidation followed by treatment with HDA or PDA. For the oxidative processing, 3 g of pristine MWCNTs was charged into 150 mL of an aqueous nitric/sulfuric acid solution (5 M HNO₃/5 M H₂SO₄) with a volume ratio of 1:3 in a reflux system. The mixture was placed in a beaker and heated to 90 °C for 3 h. The mixture was then filtered and washed many times with distilled
water until the pH became neutral. Finally, the carboxylated MWCNTs (MWCNTs-COOH) were dried in an oven at 90 °C for 4 h. The amine functionalization of MWCNTs was carried out as follows. A specific weight of carboxylated MWCNTs (2.5 g) was introduced into an HDA solution (15 g HDA/35 g ethanol). The mixture was agitated for 14 h at 50 °C before being filtered. The membrane used in the filtration process was a 0.2-μm mixed cellulose ester (MCE) membrane. The sample was then dried at 80 °C for 3 h. The obtained material is abbreviated as MWCNT-HDA. In the case of MWCNT-PDA, carboxylated MWCNTs (2.5 g) were introduced into a PDA solution (15 g PDA/35 g ethanol). MWCNTs were functionalized with PDA according to a previous procedure [39]. Preparation of polybenzoxazine hybrid based on aromatic amine (PBZ/MWCNT-PDA) Nanocomposites of polybenzoxazine with MWCNT-PDA were synthesized following the same procedure used for preparing PBZ/MWCNT-HDA. Preparation of sensing sample To select the MWCNT-PDA concentration in the PBZ matrix for developing an efficient sensor, samples with varying concentrations of each component were prepared. From a sensing standpoint, the gas sensing results for these samples agreed with percolation theory for the nanotube concentration [41]. The sensor was placed in the vapor cell and subjected to 100 ppm of solvents. The gas mixture was created by diluting the solvent vapor with air, and the solvent concentration was varied by changing the amount of solvent in the vapor cell. A micro-pipette was used to inject a specific amount of solvent into the vapor detection unit. In the vapor detection unit, the sensor was exposed to the solvent vapor after it was completely mixed with the diluting gas (air). The resistance of the sensor began to increase because of the exposure; once the resistance reached a constant value, the sensor was removed to recover in the open air. A Keithley 6517 source meter was used to test the conductivity and responsiveness of the MWCNT-PDA/PBZ nanocomposite films in the presence of solvent fumes [41]. Characterization A Perkin Elmer-1430 was used to perform FTIR analysis in the wavenumber range of 4000-400 cm⁻¹. Thermogravimetric analysis (TGA) was performed using a Shimadzu TGA-50H with 8-10 mg samples at a heating rate of 10 °C min⁻¹. The samples were heated from room temperature to 800 °C under a nitrogen atmosphere. The morphology of the nanocomposites was examined by X-ray diffraction (XRD) on a PANalytical X'Pert PRO diffractometer. All the diffraction patterns were collected at room temperature under constant operating conditions (40 kV and 40 mA). Shore D hardness of the test specimens was determined using an ASTM-D2240-05 durometer. All these experiments were carried out at room temperature (25 ± 1 °C). A Keithley 6517A electrometer was used to test the conductivity using a four-probe method. The pellets were placed between two copper electrodes and connected to the electrometer's two terminals. FT-IR analysis The FT-IR spectra of MWCNT-COOH, MWCNT-HDA, and MWCNT-PDA are shown in Fig. 1. The FT-IR spectrum of MWCNT-COOH was used to assess the impact of the acid pretreatment on the carbon nanotubes (Fig. 1a). The peak at 1380 cm⁻¹ is associated with the H-C-O band vibration.
Besides, the presence of COO⁻ groups in the structure of MWCNT-COOH resulted from the acid treatment; these bands confirm the carboxylic groups loaded onto the carbon nanotubes by the oxidation process. In addition to providing the surface with carboxyl groups, the acid pretreatment was reported to disturb the hexagonal shape of the MWCNTs, making it easier to functionalize the nanotubes with amines [40]. The peaks occurring at 1052 and 1083 cm⁻¹ in the MWCNT-HDA and MWCNT-PDA spectra (Fig. 1b and c, respectively) are attributed to the C-N stretching vibration. In addition, the bands at 1457 and 1453 cm⁻¹ are related to the amide group (CO-NHR). Furthermore, the bands at 1516 and 1509 cm⁻¹ showed the development of secondary amine groups on the MWCNTs [40]. The spectra of PBZ/0.3% MWCNT-HDA and PBZ/0.3% MWCNT-PDA are shown in Fig. 2a and b. The spectra exhibit the distinctive bands related to the PBZ structure: a peak at 1465 cm⁻¹ corresponding to the tri-substituted phenyl group, a band at 1361 cm⁻¹ corresponding to the oxazine ring, and peaks at 1225 and 1206 cm⁻¹. SEM The SEM images of the samples are shown in Fig. 3a-f, respectively. The image of the modified MWCNTs is entirely different from the others in the matrix, as the MWCNT tubes did not degrade when treated with the oxidative agent and amine-functionalized (Fig. 3a and b). This indicates that MWCNTs have a high level of stability and resistivity in acid and amine media. The acid pretreatment was reported to disturb the hexagonal shape of the MWCNTs, facilitating the functionalization of the nanotubes with amines. The surfaces of PBZ/MWCNT-HDA (0.1% and 0.3%) showed great homogeneity and ordering of the MWCNTs in the polybenzoxazine resin (Fig. 3c and d). Figure 3e and f show that the PDA does not cover the MWCNT surface in the PBZ/0.1% and 0.3% MWCNT-PDA samples. X-ray diffraction Figure 4a and b show the XRD patterns of pristine MWCNT-HDA and the PBZ composites. Figure 4a shows that MWCNT-HDA differs from the two other samples due to the development of peaks at 2θ = 18.5 and 24°, suggesting that the HDA reagent is present in the filler. This peak appeared at 2θ = 24 and 18° for 0.5% MWCNT-HDA and 1% MWCNT-HDA, respectively [35]. The remarkable variation in peak positions indicates that MWCNT-HDA penetrates the benzoxazine matrix extremely well and exfoliates the structures. Figure 4b shows the XRD patterns of MWCNT-PDA, 0.5% MWCNT-PDA, and 1% MWCNT-PDA. Because of the excess PDA surfactant, two peaks at 2θ = 18.5 and 24° appeared. However, in the hybrid materials, 0.5% MWCNT-PDA and 1% MWCNT-PDA had a peak at 2θ = 18° for both ratios, corresponding to the PDA reagent [40]. Thermal properties of nanocomposites The TGA curves of the samples are shown in Fig. 5a-d. Under nitrogen, there are usually three stages of weight loss: the volatilization of amines (under 300 °C) and of phenolic moieties (300-400 °C) were assigned to the initial weight-reduction phases, while the later weight loss was identified with the degradation of the char (over 400 °C) [42]. In contrast, the thermal stability of the PBZ/MWCNT hybrid composites was higher than that of pure PBZ due to the strong π-π interactions between PBZ and the MWCNTs [43]. However, in the derivative TGA curves, the more thermally stable MWCNT-HDA/PBZ and MWCNT-PDA/PBZ displayed only two overlapping peaks, indicating that bisphenol-A backbone degradation occurs shortly after Mannich bridge cleavage. Because of the increased thermal stability of the bridge structures, the initial peak was delayed until 350 °C [44].
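The staged weight losses and derivative (DTG) peaks discussed above are typically read off numerically from the raw TGA trace. The sketch below illustrates that bookkeeping; it is a minimal illustration, not the authors' analysis, and the function name, smoothing window and peak-prominence threshold are hypothetical choices.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def dtg_analysis(temp_c, mass_pct, window=21, poly=3):
    """Locate main degradation events in a TGA trace and report T at fixed weight losses.

    temp_c: 1-D numpy array of temperatures (deg C), ascending.
    mass_pct: residual mass in percent at each temperature (assumed near-monotone
    decreasing after smoothing, so the cumulative loss is monotone increasing).
    """
    mass_s = savgol_filter(mass_pct, window, poly)   # smooth the raw balance signal
    dtg = -np.gradient(mass_s, temp_c)               # DTG: % mass loss per deg C
    peaks, _ = find_peaks(dtg, prominence=0.02)      # prominence threshold is a guess
    loss = 100.0 - mass_s                            # cumulative weight loss
    t1 = float(np.interp(1.0, loss, temp_c))         # T at 1% weight loss (T1%)
    t10 = float(np.interp(10.0, loss, temp_c))       # T at 10% weight loss (T10%)
    return temp_c[peaks], t1, t10
```

Applied to a trace like those in Fig. 5, the returned peak temperatures would correspond to the overlapping DTG events described above, and t1/t10 to quantities such as the T1% and T10% values quoted below.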
Because of its greater aromaticity, MWCNT-PDA/PBZ has better thermal stability than MWCNT-HDA/PBZ. It was observed that adding MWCNTs to MWCNT-PDA/PBZ and MWCNT-HDA/PBZ enhanced their thermal characteristics. Indeed, adding MWCNTs causes a positive shift to higher temperatures in the decomposition temperatures of PBZ/0.5% MWCNT-HDA (T1% and T10%), which were 226 and 336 °C, respectively. In contrast to neat PBZ, the major degradation temperature of the PBZ/0.5% MWCNT-PDA nanocomposite was enhanced by 10%. In addition, it was seen that MWCNTs reduced the rate of decomposition of the nanocomposites compared with neat PBZ. The char residue was notably higher, especially for the PBZ/MWCNT-PDA resins, due to the aromatic structures. Nanocomposites' electrical conductivity All the prepared materials exhibited a nanostructured solid network with a high specific surface area. These features make them ideal candidates for portable chemical sensors [14]. It has previously been established that rheological percolation, which requires particle contact, occurs at a very low concentration of MWCNTs in PBZ/MWCNT-HDA and PBZ/MWCNT-PDA hybrids. The excellent dispersion of MWCNTs throughout the matrices was further shown by the SEM studies (Fig. 3). Electrical measurements of MWCNT-filled polybenzoxazine matrices agree well with these findings. From Fig. 6a and b, it was found that even with an MWCNT concentration of 0.1 wt%, substantial improvements in electrical conductivity were achieved, reaching 1.04 × 10⁻⁸ Ω⁻¹ cm⁻¹. Increasing the MWCNT content enhanced the conductivity further [42]. In comparison to the MWCNT-HDA nanocomposites, the MWCNT-PDA-based nanocomposites have a somewhat higher electrical conductivity, 6.32 × 10⁻⁹ versus 6.11 × 10⁻⁸ Ω⁻¹ cm⁻¹. The different morphologies observed for the nanocomposites offer one possible explanation based on our experimental data. A further explanation may also be offered: the higher conductivity was linked to a larger amount of crosslinking in the PBZ network, resulting in firmer immobilization of the MWCNTs in the polymer framework to aid electron transport. Sensing properties The prepared sample (1% MWCNT-PDA) was tested as a sensing material for methanol, acetone, and toluene; the same method was used with all solvents. The gas mixture was created by diluting the solvent vapor with air, and the methanol concentration was varied by changing the amount of methanol in the vapor cell. The resistance of the sensor began to increase as a result of the exposure; once the resistance reached a constant value, the sensor was removed to recover in the open air. A source meter was used to test the conductivity and responsiveness of the MWCNT-PDA/PBZ nanocomposite films in the presence of methanol fumes [41]. PBZ/1% MWCNT-PDA at the percolation threshold was selected as the sensing material for VOC vapor due to its good electrical performance, which resulted from the doping content of MWCNTs functionalized by PDA. This functionalization raises the concentration of sp²-bonded carbon atoms on the nanotube surface, which can adsorb VOCs [41]. Figure 7a-c shows the sensing behavior of PBZ/1% MWCNT-PDA towards toluene, acetone, and methanol. It was found that the electrical conductivity of the films changes with temperature, which can be attributed to the increment in the number of electron pathways of the aromatic structures [45].
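The paper does not spell out how the sensor response was quantified, but a common convention for resistive vapor sensors of the kind described here is the fractional resistance change ΔR/R₀ together with a 90% response time. A minimal sketch under that assumption (the function and variable names are hypothetical):

```python
import numpy as np

def vapor_response(t, r, t_on, t_off):
    """Fractional resistance response for one exposure/recovery cycle.

    t: 1-D numpy array of times; r: sensor resistance at each time;
    t_on/t_off: times at which vapor is injected and removed.
    Assumes the resistance reaches a steady plateau before t_off.
    """
    r0 = r[t < t_on].mean()                  # baseline resistance in diluting air
    exposed = (t >= t_on) & (t < t_off)
    r_max = r[exposed].max()                 # plateau resistance under the vapor
    s = (r_max - r0) / r0                    # response S = deltaR / R0
    target = r0 + 0.9 * (r_max - r0)         # 90% of the full resistance change
    t90 = t[exposed][np.argmax(r[exposed] >= target)] - t_on
    return s, t90
```

With resistance logged from a source meter during an exposure cycle, comparing s across methanol, acetone, and toluene at each temperature would reproduce the kind of comparison plotted in Fig. 7.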
Changes in the values of the electrical conductivities of the PBZ/MWCNT-PDA films were observed upon exposure to VOCs; this is attributed to slight damage to the CNT surface [41]. (Fig. 7: Conductivity of the 1% MWCNT-PDA/PBZ nanocomposite sensor for (a) toluene, (b) methanol and (c) acetone.) The PBZ/MWCNT-PDA sensor showed some differences in the cases of acetone and methanol, which appear clearly in Fig. 7. Temperature has a strong effect on the movement of electrons, which is responsible for electrical conductivity. Therefore, increasing the temperature caused a decrease in the conductivity values, attributed to a change in the physical properties of these solvents. However, in the case of toluene (Fig. 7a), high values of conductivity can be attained at high temperatures, up to 120 °C. Conclusion In this study, polybenzoxazine/modified carbon nanotube composites were prepared for sensing VOCs. Functionalization of the carbon nanotubes with amines after acidic pretreatment enhanced their compatibility with the benzoxazine matrix. The nanocomposites of polybenzoxazine and carbon nanotubes modified with aromatic and aliphatic amines were evaluated with XRD, SEM, thermogravimetric analysis, and electrical conductivity. The nanomaterial based on carbon nanotubes functionalized with the aromatic diamine (p-phenylenediamine) has higher thermal stability and electrical conductivity than the one functionalized with the aliphatic diamine (1,6-hexanediamine). The composite based on polybenzoxazine and 1% phenylenediamine-functionalized carbon nanotubes (PBZ/1% MWCNT-PDA) showed a higher conductivity than the neat resin and the other composites based on HDA. Accordingly, the PBZ/1% MWCNT-PDA sample was tested as a sensing sheet for VOCs including methanol, acetone, and toluene. This sample showed good sensing of these solvents, with high sensitivity and a long sheet lifetime under most environmental conditions. Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). Conflict of interest The authors declare that they have no conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
A vignette-based survey to assess clinical decision making regarding antibiotic use and hospitalization of patients with probable aseptic meningitis Glenn Patriquin MSc 1, Jill Hatchette PhD 2, Kevin Forward MD 1Departments of Pathology and Laboratory Medicine, Queen Elizabeth II Health Sciences Centre; 2Interdisciplinary Research, IWK Health Centre, Halifax, Nova Scotia Correspondence: Mr Glenn Patriquin, Dalhousie University, Room 482 Bethune Building, 1276 South Park Street, Halifax, Nova Scotia B3H 2Y9. Telephone 902-473-7997, fax 902-473-4067, e-mail glenn.patriquin@dal.ca Background: The many etiologies of meningitis influence disease severity: most viral causes are self-limiting, while bacterial etiologies require antibiotics and hospitalization. Aided by laboratory findings, the physician judges whether to admit and empirically treat the patient (presuming a bacterial cause), or to treat supportively as if it were viral. Objective: To determine factors that lead infectious disease specialists to admit and treat in cases of suspected meningitis. Methods: A clinical vignette describing a typical case of viral meningitis in the emergency department was presented to clinicians. They were asked to indicate on a Likert scale the likelihood of administering empirical antibiotics and admitting the patient from the vignette and for eight subsequent scenarios (with varied case features). The process was repeated in the context of an inpatient following initial observation and/or treatment. Results: Participants were unlikely to admit or to administer antibiotics in the baseline scenario, but a low Glasgow Coma Score or a high cerebrospinal fluid (CSF) white blood cell count with a high neutrophil percentage led to empirical treatment and admission. These factors were less influential after a negative bacterial CSF culture. These same clinical variables led to maintaining treatment and hospitalization of the inpatient. Conclusions: Most participants chose not to admit or treat the patient in the baseline vignette. Confusion and CSF white blood cell count (and neutrophil predominance) were the main influences in determining treatment and hospitalization. A large range of response scores was likely due to differing regional practices or to different levels of experience. There are many infectious causes of meningitis, and one of the initial considerations by the physician is whether the etiological agent is bacterial or if the presentation is of an 'aseptic' nature. Bacterial meningitis must be treated aggressively with intravenous antibiotics, while aseptic meningitis is most often viral (1) and is treated supportively, often without the need for admission (2). When a patient presents with signs and symptoms of meningitis, a fundamental investigation is examination of the cerebrospinal fluid (CSF) for bacteria, cellular differential and chemistry. If these results confirm or suggest a bacterial infection (eg, positive Gram stain, elevated white blood cell [WBC] count, elevated protein, depressed glucose), the patient is treated with antibiotics and is admitted. If these classical findings of meningitis are not apparent, the physician must decide whether to admit and treat empirically or await the results of further investigations. When bacterial culture of CSF is negative, polymerase chain reaction (PCR) analysis for enterovirus is often performed and, if positive, supports the discontinuation of antibiotics and discharge, avoiding unnecessary costs and adverse patient side effects.
The purpose of the present study was to compare patient characteristics on the basis of their influence on antibiotic use and hospitalization, in those whose meningitis etiology is unclear. It was predicted that factors more suggestive of a bacterial cause would positively influence antibiotic use and hospitalization. Pilot study and vignette development Six infectious disease physicians were presented with the following scenario: "A patient presents to you in the Emergency Department. The patient complains of a fever, headache and stiff neck. There is CSF pleocytosis. There are no localizing clinical findings." They were then asked to indicate, on a Likert scale, the importance of each of 13 variables (chosen by a review of cases [3] and anecdotal evidence) in admitting the patient or administering antibiotics. Seven of the 13 pilot variables were then chosen for use in clinical vignettes (eight variables in total, including one that was a combination of two individual variables), consisting of those that were ranked as highly 'influential' from the pilot survey and two that ranked low in influence (patient age and month of presentation). Participants There were 34 participants, including six pediatric infectious diseases physicians, 21 adult infectious diseases physicians, two medical microbiologists and five combined specialists in infectious diseases and medical microbiology. Of the 32 who indicated, the mean (± SD) number of years since medical school graduation was 24.1±7.2 (range eight to 40 years).
Procedure In a seminar setting, two baseline scenarios, representing a patient with meningitis on presentation to the emergency department and an inpatient upon reassessment (Table 1), were presented to participants. Without interacting with one another, the participants were asked to indicate on an 11-point Likert scale their likelihood of starting/stopping antibiotics or admitting/discharging the patient. The numerical responses were categorized as follows: 0 to 3, unlikely; 4 to 6, undecided; 7 to 10, likely. Clinical variables were then individually altered and the same Likert scale was used to indicate the influence of each variable on treatment and admission. To assess the possibility that new scenarios may influence the responses to subsequent scenarios, the first scenario was re-presented at the end of the session, asking participants to respond without consulting their previous entries. Nonlocal participants received the vignettes by e-mail and completed the same response forms as the seminar participants, which were then mailed to the study authors. Paired t tests were performed using SPSS statistical software (IBM Corporation, USA) and graphs were created using Excel (Microsoft Corporation, USA). Results Emergency room patient vignette Infectious diseases physicians (n=34) from eight provinces were presented with the baseline emergency room vignette (Table 1). The participants indicated the likelihood of administering antibiotics or admitting the patient for treatment and observation according to the baseline characteristics or clinical variables (Figure 1). Most variables yielded a wide range of participant choices; however, the mean scores and distribution for the baseline vignette indicated that most participants would not administer antibiotics to the patient (28 of 34 participants answered in the unlikely categories). All variables presented led to an increase in mean likelihood scores for both antibiotic administration and admission when compared with the baseline vignette. Only the CSF WBC + high neutrophil percentage variable led to a narrow distribution of responses, with 31 of 34 participants choosing likely to administer antibiotics. The variables for which unlikely to administer antibiotics was the most chosen response were onset (19 of 34 responses) and month (21 of 33 responses). Most participants were unlikely to admit the baseline patient (24 of 34 responses) (Figure 1). Altering the month of presentation also resulted in most participants choosing unlikely to admit (19 of 33) and resulted in a nonsignificant mean change from the baseline (P=0.052). Fifteen of 34 participants chose unlikely to admit for the onset variable, with a mean difference from the baseline that reached statistical significance (P=0.022). All other variables resulted in most participants choosing likely to admit, especially a Glasgow Coma Score (GCS) of 12 (33 of 34 responses), reflecting that the patient was confused, or a CSF WBC count of 2.980×10⁹/L + neutrophil level of 80% (32 of 34 responses). All mean differences from the baseline reached statistical significance (P<0.001).
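For illustration, the scoring and comparison scheme described above (11-point Likert responses binned as unlikely/undecided/likely, with paired t tests against the baseline vignette) can be expressed in a few lines. The study used SPSS; the sketch below re-expresses the same analysis in Python with made-up, illustrative scores rather than the study's data.

```python
import numpy as np
from scipy import stats

def categorize(score):
    """Bin an 11-point Likert response per the study's scheme:
    0-3 unlikely, 4-6 undecided, 7-10 likely."""
    return "unlikely" if score <= 3 else "undecided" if score <= 6 else "likely"

# Illustrative (not actual) paired responses from the same participants:
baseline = np.array([2, 1, 3, 0, 2, 4, 1, 2])    # likelihood of admitting, baseline vignette
low_gcs  = np.array([9, 10, 8, 7, 9, 10, 8, 9])  # same question, GCS lowered to 12

t_stat, p_value = stats.ttest_rel(low_gcs, baseline)  # paired t test, as in the study
counts = {c: sum(categorize(s) == c for s in low_gcs)
          for c in ("unlikely", "undecided", "likely")}
print(counts, round(p_value, 4))
```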
Using the same vignette from the emergency room presentation, the participants were asked how likely they would be to stop antibiotics, had they been started, on the receipt of a negative CSF bacterial culture. They were also asked about the likelihood that they would discharge the patient, had they been admitted, on the receipt of a negative CSF bacterial culture. The participants had unanimously elected to discontinue antibiotics and discharge the patient in the baseline scenario (Figure 2). Most participants indicated that they were likely to discontinue antibiotics in all scenarios except for those with a high CSF WBC + neutrophil predominance, where there was a balanced response (15 of 34 chose likely, 15 of 34 chose unlikely and the remaining four were undecided). There were clear relative similarities between the baseline scenario and the altered variables. Regarding patient discharge, all respondents chose to send the patient home in the baseline scenario, as well as in the case of a short onset or in the month of February. According to mean likelihood scores, only patients with a CSF WBC count of 2.980×10⁹/L + neutrophil level of 80% would have remained in hospital, although this was not a unanimous decision among the participants. Admitted patient vignette Subsequently, participants were presented with a vignette describing an inpatient who was initially admitted for meningitis symptoms and whose status was improving (Table 1). In addition to the negative CSF Gram stain and bacterial culture, the results of the CSF enterovirus PCR were negative. Using the same approach as outlined previously, each variable was individually changed so as to compare each patient characteristic in isolation. All participants opted to stop the antibiotics for the baseline patient (Figure 3). A low GCS and high CSF WBC count + neutrophil predominance caused the most divergence from the mean baseline score (each with P<0.001). A low GCS led 18 of 34 participants to indicate that they were unlikely to discontinue antibiotics, while the same was true for 23 of 34 participants for a high CSF WBC count + neutrophil predominance. For all other variables, most participants indicated that they were likely to discontinue antibiotics, although all mean differences were significantly different from the baseline mean. Most participants (32 of 34) also chose to discharge the baseline inpatient, but to not discharge the patient in the low GCS scenario (32 of 34) or in the high CSF WBC count + neutrophil predominance scenario (21 of 34), each with P<0.001. The change in month was the only scenario in which the mean change from baseline was not significant (P=0.294). Table 1: Baseline vignettes presented to survey participants. Emergency room: A patient presents to the emergency room complaining of fever, a headache and a stiff neck. There is no travel history or exposure of note. The patient is otherwise previously healthy. No focal neurological findings are reported and there are no signs of increased intracranial pressure. A Gram stain of the patient's CSF is negative for organisms. CSF protein is 0.70 g/L.
Inpatient: A patient was admitted to hospital 48 hours ago and was given empirical antibiotics for meningitis. The patient's symptoms have somewhat improved. There is no travel history or exposure of note. The patient is otherwise previously healthy. No focal neurological findings are reported and there are no signs of increased intracranial pressure. Cerebrospinal fluid protein is 0.70 g/L. A Gram stain of the patient's CSF is negative for organisms. Bacterial culture of CSF is negative. Enterovirus polymerase chain reaction testing of CSF is negative. CSF: Cerebrospinal fluid. Discussion In the present study, we sought to determine the presenting characteristics that were most influential in the management of meningitis, both in terms of antibiotic administration and hospitalization. The clinical vignettes enabled us to individually exchange these characteristics and to study resultant changes in decision making. Most participants chose in the baseline scenario not to start or to discontinue antibiotics, and not to admit or to promptly discharge. These were likely appropriate decisions because the baseline scenario represented a typical enterovirus-positive patient (based on observed medical records, not published). Some of the physicians' decisions for subsequent variations on the vignettes were as expected, because a GCS <15 and (to a lesser extent) <10 have been shown to have a higher association with a bacterial cause than with one that is viral (4). Not unexpectedly, participants administered antibiotics in response to the patient's low GCS (ie, when the patient was confused; most common response was 10), and were reluctant to discharge them from the hospital when both CSF culture and enterovirus PCR were negative (most common response was 0). The inpatient vignette with a low GCS led to a divergence from a general trend throughout the survey. In most permutations of the vignettes, the distribution of scores for the likelihood of administering antibiotics mirrored that of the likelihood scores for hospitalization. In the inpatient low GCS presentation, however, the vast majority (32 of 34) were unlikely to discharge the patient, while the participants were divided on whether to discontinue antibiotics, with 15 of 34 likely and 18 of 34 unlikely to discontinue the drugs. This possibly demonstrates the acceptance that a low GCS is not necessarily due to a bacterial cause, but is a serious manifestation of pathology that must be addressed in hospital. A higher CSF WBC count, as well as a higher CSF neutrophil percentage, is suggestive of bacterial meningitis, although there is extensive overlap between bacterial and viral meningitis (5). Figure 1 demonstrates that for the emergency room vignette, either a high CSF neutrophil distribution or a high CSF WBC count individually yielded an almost even distribution of responses, with slightly more representing the 'likely' side of the scale for both antibiotic administration (mean scores of 6.47 and 6.32, respectively) and hospital admission (mean scores of 6.50 and 6.97, respectively). When these two CSF characteristics were combined, however, participants overwhelmingly chose to administer antibiotics (mean score of 9.38) and to admit the patient (mean score of 9.32). Such a relationship was not observed in Figures 2 or 3, suggesting that these CSF data are less convincing of bacterial infection when more patient information is known (ie, negative CSF culture results, health has somewhat improved).
We postulated that the month of onset of symptoms influences treatment and admission. Eighty-eight per cent of enteroviral meningitis cases in Nova Scotia, and those in other studies of enteroviral epidemiology, occur in the latter six months of the year (3,6). While the baseline presentation in the month of October suggests a greater likelihood of enteroviral meningitis, a similar presentation in February would make an enteroviral etiology less likely and perhaps antibiotic use and admission more warranted. The general distribution of responses for vignettes for both months of presentation was similar for all questions asked, the mean differences between months often not meeting statistical significance. This was especially evident in Figure 2 and Figure 3, in which the distribution curves for a February presentation appeared strikingly similar to those of an October presentation, except for a small number of outliers, demonstrating a reluctance of several participants to discharge or stop antibiotics when enterovirus was unlikely, based on the season. The ranges of scores demonstrated more than the expected variation in the decision-making processes. For most scenarios, scores ranged from the 'unlikely' end of the spectrum to the 'likely', even when median scores were near the extremes. Indeed, only four of the 54 total questions asked yielded a group of scores that were unanimous in their decision making. These wide ranges of answers may reflect the diversity of regional practice, years of experience or specialty (adult or pediatric). Unfortunately, our small sample size does not support statistical analysis to better understand the influence of these factors. The present study was also limited by the methodology because we were unable to use the data to determine the combined effects of two or more variables. It is reasonable to believe that some patient variables, when presented together, would have an effect on decision making that is greater than the sum of the two parts. In future studies, inclusion of other clinical findings, such as meningismus, heart murmur or rash, might add more depth to the patient presentation and may give more insight into clinical decision making. Some respondents believed that they were restricted in their choice of pharmaceuticals, and that they would like to have had the option of prescribing acyclovir to the patient, because it is indicated in the treatment of Herpes simplex meningitis (7) and is sometimes administered in cases resembling our vignettes (3). However, administration of acyclovir was not identified as important by our pilot panel and was only raised during subsequent studies. We anticipated the possibility that participants may have been influenced in responding to a given variation by merely viewing previous variations. To address this potential source of bias, we assessed for consistency by repeating the baseline vignette at the end of the session. The mean likelihood difference between the baseline vignette presented early in the session and that presented late in the session was 0.15±1.0 (P=0.406), indicating that participants had not modified their approaches during the exercise.
Although several patient variables were consistently influential in deciding antibiotic treatment and hospital stay, and conversely, many variables consistently did not affect the decision-making process, the ranges of responses were unexpectedly wide. The present study demonstrates that physicians practicing in similar fields and seeing the same patient may approach their care differently, and also exemplifies the difficulty in standardizing the treatment for those with symptoms and findings of meningitis. G Patriquin, J Hatchette, K Forward. A vignette-based survey to assess clinical decision making regarding antibiotic use and hospitalization of patients with probable aseptic meningitis. Can J Infect Dis Med Microbiol 2012;23(3):125-129. Key words: Clinical decision making; Judgment; Meningitis; Survey; Vignette. Figure 1) Likelihood of administering antibiotics or admitting the patient based on the baseline vignette and subsequent scenarios. CSF Cerebrospinal fluid; GCS Glasgow Coma Score; WBC White blood cell. Figure 2) Likelihood of discontinuing antibiotics or discharge on receipt of a negative bacterial culture result. CSF Cerebrospinal fluid; GCS Glasgow Coma Score; WBC White blood cell. Figure 3) Likelihood of discontinuing antibiotics or discharge on receipt of a negative enterovirus polymerase chain reaction test. CSF Cerebrospinal fluid.
Curated collection of yeast transcription factor DNA binding specificity data reveals novel structural and gene regulatory insights Background Transcription factors (TFs) play a central role in regulating gene expression by interacting with cis-regulatory DNA elements associated with their target genes. Recent surveys have examined the DNA binding specificities of most Saccharomyces cerevisiae TFs, but a comprehensive evaluation of their data has been lacking. Results We analyzed in vitro and in vivo TF-DNA binding data reported in previous large-scale studies to generate a comprehensive, curated resource of DNA binding specificity data for all characterized S. cerevisiae TFs. Our collection comprises DNA binding site motifs and comprehensive in vitro DNA binding specificity data for all possible 8-bp sequences. Investigation of the DNA binding specificities within the basic leucine zipper (bZIP) and VHT1 regulator (VHR) TF families revealed unexpected plasticity in TF-DNA recognition: intriguingly, the VHR TFs, newly characterized by protein binding microarrays in this study, recognize bZIP-like DNA motifs, while the bZIP TF Hac1 recognizes a motif highly similar to the canonical E-box motif of basic helix-loop-helix (bHLH) TFs. We identified several TFs with distinct primary and secondary motifs, which might be associated with different regulatory functions. Finally, integrated analysis of in vivo TF binding data with protein binding microarray data lends further support for indirect DNA binding in vivo by sequence-specific TFs. Conclusions The comprehensive data in this curated collection allow for more accurate analyses of regulatory TF-DNA interactions, in-depth structural studies of TF-DNA specificity determinants, and future experimental investigations of the TFs' predicted target genes and regulatory roles. Background Transcription factors (TFs) control and mediate cellular responses to environmental stimuli through sequencespecific interactions with cis regulatory DNA elements within the promoters and enhancers of their target genes, thus directing the expression of those genes in a coordinated manner. Because of the importance of TFs and their DNA binding sites in targeting gene regulation, numerous studies have aimed to identify the DNA binding specificities and target genes of these regulatory factors. Saccharomyces cerevisiae is one of the most extensively studied eukaryotic organisms and has served as an important model in understanding eukaryotic transcriptional regulation and regulatory networks [1,2]. Computational approaches, including phylogenetic footprinting [3,4], sequence analysis of sets of functionally related genes [5], and analysis of co-expressed groups of genes [6], as well as experimental approaches, including in vivo chromatin immunoprecipitation (ChIP) followed by microarray readout (ChIP-chip) [7], protein binding microarrays (PBMs) [8][9][10][11], and in vitro mechanically induced trapping of molecular interactions (MITOMI) [12], have sought to determine and catalog the DNA binding specificities of S. cerevisiae TFs. Recently, several studies [10][11][12] have examined at high resolution (that is, at the level of 'k-mer' binding site 'words') the in vitro DNA binding preferences of a large number of S. cerevisiae TFs. These studies used high-throughput in vitro techniques (PBM or MITOMI) to measure the DNA binding specificities of TFs for all possible 8-bp DNA sequences (8-mers), and used the resulting data to derive DNA binding site motifs. 
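A point worth making explicit about these 8-mer surveys: on double-stranded DNA, an 8-mer and its reverse complement describe the same binding site, so the 4^8 = 65,536 possible sequences collapse to 32,896 distinct double-stranded 8-mers. A short sketch of that collapsing (illustrative code, not from any of the cited studies):

```python
from itertools import product

_COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(_COMP)[::-1]

# Every 8-mer, then collapsed with its reverse complement into one
# double-stranded key (a TF bound to dsDNA sees both strands at once).
kmers = ["".join(p) for p in product("ACGT", repeat=8)]   # 4**8 = 65,536 sequences
ds_kmers = {min(k, revcomp(k)) for k in kmers}
print(len(kmers), len(ds_kmers))                          # 65536 32896
```

The count 32,896 follows because 256 of the 8-mers are their own reverse complements, giving (65,536 + 256) / 2 distinct double-stranded words.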
In addition to the comprehensive nature of the in vitro data reported in these studies (that is, covering all possible 8-mers), these data reflect the direct DNA binding preferences of the tested TFs; in contrast, ChIP data sometimes reflect indirect DNA binding of the immunoprecipitated TF by recruiting TFs [13]. The in vitro data reported in these studies are complementary to ChIP data, in that the in vitro data provide higher-resolution measurements of DNA binding preferences compared to ChIP (8 bp versus hundreds of base pairs, respectively) and they test the intrinsic DNA binding specificity of a TF in the absence of any protein co-factors or competitors (such as other TFs or nucleosomes). There is substantial overlap among the sets of TFs tested in the in vitro studies. Badis et al. [10] and Zhu et al. [11] report PBM data for 112 and 89 TFs, respectively, with data for 64 TFs reported by both studies. Fordyce et al. [12] report MITOMI data for 28 TFs, 20 of which also have PBM data reported by either Badis et al. or Zhu et al. Despite the large overlap among these studies, a comprehensive comparison, evaluation and integration of these different data sets has been lacking. Where DNA binding site motifs have been reported in several studies, in most cases the motifs agree across the studies, but it is unclear which motif would be best to use, such as for prediction of putative TF binding sites. Here, we analyzed the existing in vitro DNA binding specificity data from prior studies [10-12] and complemented those data with new PBM data for 27 DNA-binding proteins, with the goal of creating a single, curated resource of comprehensive DNA binding specificity data for S. cerevisiae TFs. We analyzed a total of 150 TFs, 90 of which have now been tested in at least two different studies. For each TF we report both its optimal DNA binding site motif that we selected from the four surveys (evaluated according to several criteria, including concordance with in vivo data) and the corresponding DNA binding specificity measurements for all 8-mer DNA sequences. This curated collection allowed for an in-depth investigation of the DNA binding specificities within an important eukaryotic family of TFs (the basic leucine zippers, or bZIPs), resulting in novel findings of plasticity in TF-DNA recognition. We found that the newly characterized VHT1 regulator (VHR) TFs (Vhr1 and Vhr2) recognize bZIP-like DNA motifs, while the bZIP TF Hac1 recognizes a motif highly similar to the canonical E-box motif of basic helix-loop-helix (bHLH) TFs. We also observed that 39 of the 150 yeast TFs in our curated list have distinct primary and secondary motifs, likely corresponding to different modes of binding DNA and potentially different regulatory functions. Thus, our results illustrate how one can take advantage of the comprehensive nature of the in vitro DNA binding specificity data in our curated collection to identify novel structural and gene regulatory features of TF-DNA interactions. These comprehensive data will allow for more accurate computational analysis of gene regulatory networks and directed experimental investigations of their predicted target genes and regulatory roles, as well as more in-depth structural studies of TF-DNA specificity determinants.

Results and discussion

Curated collection of high-resolution in vitro DNA binding data for S. cerevisiae TFs
We compiled in vitro DNA binding specificity data from three prior large-scale studies [10-12] (Tables S1 and S2 in Additional file 1) and complemented them with newly generated universal PBM data for 27 TFs (see below), with the goal of generating the most up-to-date and comprehensive resource of in vitro DNA binding site motifs (Additional file 2) and corresponding high-resolution DNA binding data, represented here as measurements of DNA binding specificity for all possible 8-bp sequences (Additional file 3). Briefly, the relative binding preference for each 8-mer on universal PBMs is quantified by the PBM enrichment score (E-score) [14]. The E-score is a modified form of the Wilcoxon-Mann-Whitney statistic and ranges from -0.5 (least favored sequence) to +0.5 (most favored sequence), with values above 0.35 corresponding, in general, to sequence-specific DNA binding of the tested TF [8]. We used the 8-mer data to compute DNA binding site motifs using the Seed-and-Wobble algorithm [8,15]. For each TF we ranked all the 8-mers according to their E-scores and chose the highest scoring 8-mer as a seed to construct a primary motif. The PBM data were then analyzed to determine if there are spots of high signal intensity that do not score well by the primary motif; the 8-mer data were then analyzed to derive a secondary motif that does explain the residual binding to the DNA microarray probes. The set of 8-mers represented by a secondary motif can be of similar affinity as those of the primary motif, or can be of distinctly lower affinity [16]. We note that the E-scores we report for 8-mer seeds of secondary motifs are based on the initial ranking of all 8-mers and thus are directly comparable with the E-scores reported for primary motif 8-mers. Secondary motifs derived from PBM data are unlikely to be attributable to a motif-finding artifact, and TF binding to secondary motifs has been confirmed by electrophoretic mobility shift assay for six mouse TFs [16]. Supporting results from a recent PBM survey of 104 mouse TFs [16], we observed that 39 of the 150 yeast TFs in our curated list recognize distinct primary and secondary DNA motifs (discussed in detail in a separate section in the Results and discussion). We analyzed in detail one of these 39 TFs, Sko1, and found that both the primary and secondary motifs are utilized in vivo and that they are potentially associated with different regulatory functions of Sko1 (discussed in detail later in the Results and discussion). Specifically, to complement the existing in vitro DNA binding data for S. cerevisiae TFs, we tested 155 proteins on universal PBMs [8]. Unlike previous studies, which focused on known and predicted TFs based on the presence of known sequence-specific DNA-binding domains (DBDs), our criteria for including candidate regulatory proteins were permissive and included many proteins without well-characterized DBDs and proteins for which we had low confidence in their being potential sequence-specific, double-stranded DNA binding proteins; thus, we did not expect many of these proteins to yield highly specific DNA binding sequences typical of TFs, but we tested them nevertheless in an attempt to obtain the most comprehensive TF DNA binding specificity collection possible. We also included proteins for which the existing in vitro data were of low quality or did not agree with previous literature (for example, Ste12, Ecm22). Of the 155 proteins attempted on universal PBMs, 27 resulted in sequence-specific DNA binding.
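As a concrete illustration of the E-score statistic described above, the sketch below computes a plain Wilcoxon-Mann-Whitney rank statistic for a single 8-mer, rescaled to the [-0.5, +0.5] range. The published E-score [14] is a modified, more robust form of this statistic, so this is illustrative rather than a reimplementation of the Universal PBM Analysis Suite; the function names and data layout (parallel lists of probe sequences and intensities) are our own assumptions.

```python
# Illustrative sketch only, not the published E-score implementation.

def revcomp(kmer: str) -> str:
    return kmer.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def contains_8mer(probe: str, kmer: str) -> bool:
    """True if the probe contains the 8-mer on either strand."""
    return kmer in probe or revcomp(kmer) in probe

def escore(kmer: str, probes: list[str], intensities: list[float]) -> float:
    """Rescaled rank-sum statistic: +0.5 = most favored, -0.5 = least favored."""
    order = sorted(range(len(probes)), key=lambda i: intensities[i])
    rank = {i: r + 1 for r, i in enumerate(order)}   # 1 = dimmest probe
    fg = {i for i, p in enumerate(probes) if contains_8mer(p, kmer)}
    bg = [i for i in range(len(probes)) if i not in fg]
    if not fg or not bg:
        raise ValueError("8-mer must occur on some, but not all, probes")
    u = sum(rank[i] for i in fg) - len(fg) * (len(fg) + 1) / 2  # Mann-Whitney U
    return u / (len(fg) * len(bg)) - 0.5   # U/(|fg||bg|) is an AUC in [0, 1]

# Seed-and-Wobble would take the top-scoring 8-mer as the seed of the primary motif.
```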
In total, our collection encompasses 150 TFs, 90 of which have been examined in at least two different studies (Table S3 in Additional file 1, and Additional file 4). For each of these 90 TFs, we chose the highest quality motif based on the agreement between the motif and other in vitro binding data, the enrichment of the motif in ChIP-chip data [7], and the quality of the raw 8-mer data used to generate the motif (Additional file 1). The enrichment of a motif in a ChIP-chip data set was expressed as an area under the receiver operating characteristic (ROC) curve (AUC); an AUC of 1 corresponds to perfect enrichment, while an AUC of 0.5 corresponds to the enrichment of a random motif. The selected DNA binding site motifs for the 150 TFs (represented as position weight matrices (PWMs)) are available in Additional file 2, with the source of each motif specified in Table S3 in Additional file 1. For most TFs analyzed here, the motifs reported in different studies look very similar, but are not equally enriched in the ChIP-chip data. For example, the Cin5 motifs reported in this study, Badis et al. [10], and Fordyce et al. [12] are very similar (Figure 1a), but their AUC enrichment in the Cin5_YPD ChIP-chip data [7] is 0.89, 0.88, and 0.81, respectively; thus, we chose the Cin5 motif newly reported in this study. For other TFs, the motif reported in one study is a truncated version of the motif reported in a different study, as illustrated in Figure 1b for Cst6; in this case, we chose the DNA binding site motif reported in this study because it better matches TGACGTCA, the known site for the ATF/CREB family of bZIP TFs [17], of which Cst6 is a member. There are also a few TFs for which the motifs reported in different studies do not match, as shown in Figure 1c for Ecm22; in this case we turned to the existing literature and found that Ecm22 (and its close paralog Upc2) bind to the sterol regulatory element (SRE; TCGTATA) [18], which clearly matches the motif reported in this study, but not the motif reported by Badis et al. [10].

[Figure 1 legend (excerpt): For TF Ecm22 we selected the motif obtained in this study (which is different from the motif previously reported by Badis et al. [10]). The selected motif matches the sterol regulatory element TCGTATA, which had been reported to be bound by Ecm22 (and also its close paralog, Upc2). N/A, not available in Fordyce et al. [12].]

Overall, no single study clearly outperformed the other studies in terms of quality of the reported motifs (Additional file 4). We also compared the curated, in vitro DNA binding site motifs against motifs derived from the in vivo ChIP-chip data of Harbison et al. [7], which were available for 85 TFs (Table S5 in Additional file 1). For most of these TFs, the in vivo and in vitro motifs are in good agreement, and we did not find that the in vivo motif explains the ChIP-chip data either better or worse than the in vitro motifs (data not shown). We did find, however, 15 TFs for which the in vivo and in vitro motifs are different (Figure 2; Additional file 6), typically because the TF profiled by ChIP does not bind DNA directly (in which case the motif of the mediating factor is recovered from the ChIP data), or alternatively because a motif of a co-factor is also enriched in the sequences bound by ChIP (and is reported as the ChIP-derived motif) (Additional file 6).
For example, our analysis supports a model whereby Fhl1 binds DNA indirectly through a mediating factor, Rap1 [19], since the Fhl1 motif is not significantly enriched in the ChIP data whereas the Rap1 motif is, and the two TFs belong to different structural classes and thus are not anticipated to have similar DNA binding site motifs. In Figure 2 we show the in vitro and in vivo motifs for Sok2 and Sut1, members of the HTH APSES and Zn2Cys6 families, respectively. The Sok2 and Sut1 in vitro motifs are in excellent agreement with the PBM-derived motifs for the highly similar TFs Phd1 and Sut2, respectively, but are significantly different from the motifs derived from ChIP-chip data [7,20]. As shown in Figure 2, both the PBM-derived motifs and the ChIP-derived motifs of Sok2 and Sut1 are significantly enriched in the ChIP-chip data. In such cases we conclude that the PBM-derived motifs reflect the direct DNA binding specificities of the TFs, while the ChIP-derived motifs may represent the DNA binding specificities of co-regulatory TFs (often belonging to different DBD structural classes) that bind in vivo to many of the genomic regions bound by the TFs profiled by ChIP.

[Figure 2 legend: For both the in vitro and in vivo motifs of the three TFs we show their enrichment in the corresponding ChIP-chip data set, measured by the AUC and the associated P-value. We also show the in vitro motifs (from our curated collection) that are most similar to the in vitro and in vivo motifs of the three TFs of interest (the red lines indicate which parts of the motifs are similar). We notice that in all three cases the in vivo motifs are similar to the DNA binding site motifs of TFs from a different structural class. This suggests that in each of the three cases the in vivo motif (derived from ChIP-chip data) does not belong to the TF profiled by ChIP, but either to a co-regulatory TF (which binds a common set of targets as the profiled factor), or to a mediating TF (which binds DNA directly and mediates the interaction between the TF profiled by ChIP and the DNA - in this case we hypothesize that the TF tested by ChIP binds DNA indirectly through the mediating TF). Motif sources: this study and Zhu et al. [11], Badis et al. [10], and MacIsaac et al. [20].]

In total, we noticed discrepancies between in vitro and in vivo TF binding data for 15 of the 150 TFs in our curated list. These cases are discussed in detail in Additional files 1 and 6, and later in the Results and discussion section we present a thorough re-analysis of the in vivo ChIP-chip data of Harbison et al. [7] using our curated collection of in vitro motifs.

Comprehensive PBM data reveal new insights into the DNA binding specificities of bZIP and VHR TFs

Comprehensive data on the DNA binding specificities of TFs, such as PBM data, can reveal insights into the differences in DNA sequence preferences among TFs within the same structural class [21-25]. Here, we studied in detail eight bZIP DNA-binding proteins: five Yap (yeast AP-1) proteins and three additional bZIP proteins (Cst6, Gcn4, and Sko1) for which high-resolution PBM data are available (this study and Zhu et al. [11]). In Figure 3a, next to each DNA binding specificity motif logo we show the E-score of the 8-bp seed sequence used to construct the PBM-derived motif [8]; E-scores above 0.45 generally correspond to highly sequence-specific DNA binding. The bZIP DBD consists of two functionally distinct subdomains: the basic region (which makes specific DNA contacts) and the leucine zipper region (which is involved in dimerization) [26].
Proteins of this class homo- and heterodimerize, and typically bind either overlapping or adjacent TGAC half-sites, based on which bZIPs are often categorized into two subclasses: AP-1 factors that prefer the TGA(C|G)TCA motif and ATF/CREB factors that prefer TGACGTCA [17]. The S. cerevisiae genome encodes 14 bZIP factors, 8 of which belong to the fungal-specific Yap subfamily [27] and bind overlapping or adjacent TTAC half-sites instead of TGAC half-sites. Our results on the DNA binding specificities of bZIP proteins largely agree with what has been reported previously based on ChIP data: Yap3, Yap4 and Yap6 prefer adjacent TTAC half-sites, Yap1 and Yap2 prefer overlapping TTAC half-sites [28,29], and Gcn4 prefers overlapping TGAC half-sites [30]. Also in agreement with previous reports [17], we find that AP-1 bZIPs (Yap1, Yap2, and Gcn4), which generally prefer overlapping half-sites, bind to adjacent half-sites with almost equal affinity: the E-scores of the 8-bp seeds for the primary and secondary DNA binding site motifs of Yap1, Yap2, and Gcn4 are very close or even identical (Figure 3a). Previous reports also suggest that ATF/CREB bZIPs, which generally prefer adjacent half-sites, bind poorly to overlapping half-sites [17]. However, our high-resolution PBM data indicate that while this is true for Cst6, Sko1, Yap4, and Yap6, the TF Yap3 can also bind overlapping TTAC half-sites with high specificity (the seed E-score for the secondary Yap3 motif is 0.493, close to that of the Yap3 primary motif seed: 0.497). This finding suggests that, despite the fact that some of the residues important for half-site spacing specificity have been identified (Figure 3b; Additional file 1), it is not yet fully understood how these proteins achieve their specificity. It is possible that specific combinations of residues (not necessarily DNA-contacting residues) determine the preference for binding to overlapping versus adjacent half-sites.

[Figure 3 legend: bZIP and VHR TFs. (a) Phylogeny and PBM-derived motifs for the eight bZIP and two VHR proteins analyzed in this study. The evolutionary tree was built from a ClustalW2 [59] multiple sequence alignment of the DBDs of the ten proteins, as annotated in UniProt [60]. Green and magenta backgrounds correspond to TFs that bind primarily to overlapping or adjacent half-sites, respectively. TFs that bind Yap-like half-sites are shown in red. TFs that bind Gcn4-like half-sites are shown in blue. All motif logos were generated using enoLOGOS [58], based on motifs generated from PBM data in this study and Zhu et al. [11] using the Seed-and-Wobble algorithm [8,15]. The numbers next to the motif logos represent the E-scores of the 8-mer seeds used to construct the motifs [8]. For proteins that bind both overlapping and adjacent half-sites, the motif corresponding to the largest seed E-score (sometimes referred to as the primary motif) is shown in a black box. (b) ClustalW2 multiple sequence alignment of the basic regions of bZIP proteins against the DBDs of VHRs. The Vhr1 and Vhr2 regions shown are the ones that best align to the eight basic regions considered, and they correspond to the first putative VHR basic region (see (e)). The residues shown in red and blue are important for Yap-like versus Gcn4-like half-site specificity. The residues shown in green and magenta are important for overlapping versus adjacent half-site binding. (c) Recognition of Yap-like and Gcn4-like half-sites [30,61]. (d) Heat map of the DNA-binding preferences of Yap1 (as a representative of the Yap subfamily), Cst6, Sko1, Gcn4, Vhr1, and Vhr2. The rows correspond to 8-mers with an E-score ≥0.35 for any of the six TFs; the columns correspond to the TFs. The E-score scale is shown at the bottom. Black boxes indicate the 8-mers that correspond to various motifs (shown on the right). (e) Alignment of the full DBDs of Vhr1 and Vhr2. Residues that fold into alpha-helices (according to PSIPRED [62]) are shown in bold. Black boxes show the two putative basic domains in VHR proteins. (f) Alignment of the second putative VHR basic region to basic regions of the eight bZIPs analyzed in this study.]

Since the Yap family of bZIP proteins was first characterized [27], the basic region residues Gln9, Gln14, Ala16, and Phe17 (Figure 3b) have been reported to provide specificity for Yap-like half-sites (TTAC). However, we noticed that Sko1, a typical bZIP protein that binds to adjacent TGAC half-sites [31], also has a phenylalanine at position 17 of the basic region. Our high-resolution PBM data allowed us to analyze in more detail the specificity of Sko1 for TGAC versus TTAC half-sites. As shown in Figure 4, Sko1 does indeed have a higher preference for TTAC half-sites than do the typical bZIP proteins Gcn4 and Cst6. This finding confirms the importance of residue Phe17 for conferring Yap-like versus Gcn4-like half-site preference. In addition to bZIP proteins, we analyzed PBM data for Vhr1 and Yer064c, members of the fungal VHR (VHT1 regulator) class of DNA-binding proteins, for which only a single DNA consensus sequence had been reported previously [32]. The Yer064c protein sequence and its DNA binding specificity are very similar to those of Vhr1 (Figure 3), so we henceforth refer to Yer064c as Vhr2. Our PBM data indicate that these VHR proteins bind Gcn4-like motifs despite the fact that their DBD is of a different structural class. As shown in the dendrogram in Figure 3a, the DBDs of Vhr1 and Vhr2 are closely related to each other, but not to DBDs of bZIP proteins. Furthermore, in an alignment of the Vhr1 and Vhr2 DBDs against the basic regions of bZIP proteins (Figure 3b), it is apparent that essential DNA-contacting residues in the basic region of bZIPs (for example, Asn10, Arg18; Figure 3c) are not found in the VHR domain. In an attempt to identify the DNA-contacting region in the VHR domain, we analyzed the protein sequences of Vhr1 and Vhr2 and found that these proteins have two putative basic regions, which we denote as b1 and b2 (Figure 3e). The second basic region seems to align better to the basic regions of bZIP proteins (Figure 3b) than does the first basic region, and it is also more conserved across Saccharomyces species in the sensu stricto clade (Figure 3f; Figure S1 in Additional file 1). These observations suggest that the second basic region in the VHR domain is more likely to be the one that interacts with DNA. Identifying the exact DNA-contacting residues and key specificity determinants will require further experimentation, involving mutagenesis experiments and structural analyses.
It would be interesting to see whether VHR proteins contact DNA in a way similar to bZIPs or if they utilize a completely different structural mode of protein-DNA recognition. We also note that VHR proteins bind exclusively to overlapping TGAC half-sites, unlike AP-1 proteins (including Gcn4), which can bind both overlapping and adjacent half-sites (Figure 3a,d). We are not aware of any AP-1 protein that binds exclusively to overlapping half-sites. As shown in Figure S2 in Additional file 1, all AP-1 proteins with PBM data in UniPROBE can also bind adjacent half-sites, unlike VHR proteins. All this evidence indicates that VHR is a distinct DBD structural class, despite the fact that there is significant overlap between the DNA sequences preferred by VHR and bZIP proteins.

Yeast Hac1 is a bZIP TF whose specificity is more similar to bHLHs than bZIPs

In the above analysis of bZIP factors, we did not include Hac1, an essential TF involved in the unfolded protein response in S. cerevisiae [33], for which high-resolution PBM data are available (this study and Badis et al. [10]). According to key residues in its DBD (Figure 5a, residues marked in blue), Hac1 is a bZIP factor that should bind either overlapping or adjacent TGAC half-sites. However, its primary PBM-derived motif, obtained using the full-length protein in PBM experiments, is most similar to an E-box, which is characteristic of bHLH proteins such as Cbf1 (Figure 5b). We note that Hac1 does not have a secondary DNA binding site motif that resembles a bZIP motif. Furthermore, its E-box motif appears to be utilized by Hac1 in vivo: this motif is significantly enriched in the Harbison et al. [7] Hac1_YPD ChIP-chip dataset (AUC = 0.6906, P = 0.005), while typical bZIP motifs (TGAsTCA and TGACGTCA) are not significantly enriched (P > 0.1) in that same ChIP-chip dataset. Visual inspection of the Hac1 DBD revealed a portion that aligns well to the basic regions of bHLH proteins, especially those of the human myogenic factor MyoD1 and its Caenorhabditis elegans ortholog HLH-1. Hac1 shares many of the DNA-contacting residues [22] with the myogenic bHLHs (Figure 5a). However, unlike the myogenic factors, which prefer the hexamers CACCTG and CAGCTG [34], Hac1 strongly prefers CACGTG; thus, we compared the DNA binding specificity of Hac1 with that of the S. cerevisiae TF Cbf1, which also strongly prefers CACGTG. Although the motifs of Hac1 and Cbf1 are very similar, the 8-mer PBM data reveal that there are significant differences in their DNA binding specificities. Whereas Cbf1 has a strong preference for G or T upstream of the CACGTG core motif, Hac1 prefers A or C (Figure 5c). Similarly, while both Hac1 and Cbf1 bind CACGT with high affinity, Cbf1 strongly prefers CACGT(G|T) to CACGT(A|C) (Figure 5d). These differences in specificity are supported by the PBM data from Badis et al. [10], which show the same trends (Figure S3 in Additional file 1). Thus, despite the fact that the Hac1 and Cbf1 motifs look very similar, there are substantial differences in the DNA binding preferences of these two proteins, which likely contribute to their in vivo specificities. Indeed, all sequences bound by Cbf1 in a ChIP-chip experiment performed on yeast grown in rich medium (Cbf1_YPD) [7] contain (T|G)CACGT, while only 4 of the 16 sequences bound by Hac1 in this same condition (dataset Hac1_YPD) contain this motif, and 2 of these 4 sequences also contain the (A|C)CACGT motif that is preferred by Hac1 (Figure 5c).
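Flanking-preference comparisons of this kind are straightforward to reproduce from 8-mer E-score tables such as those in Additional file 3. The sketch below, with hypothetical function names of our own, groups all 8-mers of the form N+CACGTG+N by their upstream base and reports the median E-score per group; applied to Hac1 and Cbf1 tables it would expose the (A|C) versus (G|T) upstream preferences described above.

```python
from statistics import median

def upstream_preference(escores: dict[str, float], core: str = "CACGTG") -> dict[str, float]:
    """Median E-score of all 8-mers X+core+Y, grouped by the upstream base X.

    `escores` maps 8-mers to PBM E-scores; this assumes both orientations of
    each 8-mer appear in the table (otherwise fold in reverse complements first).
    """
    by_flank: dict[str, list[float]] = {b: [] for b in "ACGT"}
    for kmer, e in escores.items():
        if len(kmer) == 8 and kmer[1:7] == core:
            by_flank[kmer[0]].append(e)
    return {b: median(v) for b, v in by_flank.items() if v}

# For a Cbf1-like table, G and T should dominate; for a Hac1-like table,
# A and C, mirroring the contrast shown in Figure 5c.
```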
In conclusion, Hac1 seems to be a hybrid between a bHLH and a bZIP protein. Its DBD strongly resembles the domains of bZIP proteins, although part of its basic region shows strong similarity with the basic regions of bHLHs (Figure 5a); the similarity to bHLH proteins likely explains why it can bind an E-box motif. However, the DNA binding specificity of Hac1, as analyzed here by PBM, is not that of a typical bHLH protein. In-depth structural investigations of Hac1 and its homologs in other organisms would reveal whether its DNA-contacting residues are indeed the same as in bHLH proteins and might provide insights into the evolutionary relationship between bZIP and bHLH domains.

[Figure 4 legend (fragment): ...is not simply due to the fact that Sko1 prefers adjacent half-sites to overlapping half-sites. If this were the case, we would expect Gcn4 to bind overlapping Yap half-sites with higher affinity than Sko1, but we do not observe such a trend.]

S. cerevisiae TFs with two distinct DNA binding site motifs

Prior surveys have not investigated whether S. cerevisiae TFs recognize primary and secondary DNA binding site motifs, as do numerous mouse TFs [16]. We found that 39 of the 150 TFs in our curated list recognize two distinct motifs (Figure 6a; Figures S4 and S8 in Additional file 1). For 5 of the 39 TFs (Leu3, Lys14, Tea1, Ylr287c, and Zap1), the two motifs correspond to a full motif versus a single half-site; while this might be an artifact of Seed-and-Wobble, the algorithm used to compute the motifs from PBM data, the fact that TFs can bind DNA both as homodimers and as monomers is supported by results reported in a recent survey of mouse TFs using PBMs [16] and a recent survey of human TFs using an in vitro selection approach [35]. We note that for two TFs that have ChIP-chip data available (Leu3 and Zap1) [7], the full motif was more enriched than the half-site, which is consistent with the model that these TFs bind DNA in vivo as homodimers, at least in the conditions tested thus far by ChIP. The remaining 34 TFs with secondary DNA motifs can be grouped into three categories, analogous to categories noted previously for mouse TFs [16]. We found five variable spacer length TFs (Gcn4, Pdr3, Yap1, Yap2, and Yap3), for which the primary and secondary motifs contain similar half-sites separated by different spacer lengths. For some of these TFs (Yap1 and Gcn4) the secondary motifs were bound nearly as well as the primary motifs, as illustrated by the fact that the 8-mer seeds for the two motifs have similar or identical E-scores (Figure 3). We found 24 cases of position interdependence TFs (Figure 6a; Figure S4 in Additional file 1). For each of these 24 TFs, the primary and secondary motifs share a common portion that typically spans three to five (often adjacent) nucleotide positions, but are otherwise different.

[Figure 6 legend (fragment): (b) Regions that score highly according to the primary but not the secondary motifs are shown in red. Regions that score highly according to the secondary but not the primary motifs are shown in blue. (c) Gene Ontology (GO) categories enriched in the regions that score highly according to the primary but not the secondary Sko1 motif. (d) GO categories enriched in the regions that score highly according to the secondary but not the primary Sko1 motif. See main text and Additional file 1 for details.]
For example, the primary and secondary Ecm22 motifs share the core TCGT(A|T), but the primary motif ends in TA(A|G) while the secondary motif ends in CCT. In such cases the primary and secondary motifs cannot be combined into a single PWM because the PWM model assumes independence between nucleotide positions. This implies that in order to accurately represent the DNA binding specificity of these TFs using standard PWM models, one has to consider both the primary and secondary motifs (a toy illustration is sketched below).
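To make the position-independence point concrete, the toy example below (hypothetical numbers, not derived from the actual Ecm22 PWMs) shows that a single PWM averaged from two distinct motifs assigns a chimeric site, favored by neither motif, the same score as the genuine sites:

```python
# Hypothetical 3-bp "motifs": A strongly prefers TAA, B strongly prefers CCT.
A = [{"T": 0.9, "C": 0.1}, {"A": 0.9, "C": 0.1}, {"A": 0.9, "T": 0.1}]
B = [{"T": 0.1, "C": 0.9}, {"A": 0.1, "C": 0.9}, {"A": 0.1, "T": 0.9}]
# Position-wise average: the only way to force the two motifs into one PWM.
avg = [{b: (a.get(b, 0.0) + c.get(b, 0.0)) / 2 for b in "ACGT"}
       for a, c in zip(A, B)]

def pwm_prob(pwm, site):
    """Probability of a site under a PWM, assuming independent positions."""
    p = 1.0
    for col, base in zip(pwm, site):
        p *= col.get(base, 0.0)
    return p

for site in ("TAA", "CCT", "TCT"):   # TCT is a chimera of A and B
    print(site, pwm_prob(A, site), pwm_prob(B, site), pwm_prob(avg, site))
# All three sites score 0.125 under the averaged PWM, although the chimeric
# TCT is ~80-fold worse than TAA under motif A (0.009 versus 0.729).
```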
The secondary motifs of five TFs were not readily explainable by either variable spacer length or position interdependence. These TFs, classified as alternative recognition interfaces, might bind DNA either through alternative structural features [36] of the DBD or by adopting alternative conformations. Given the high number of TFs with secondary DNA motifs, we asked whether both modes of binding DNA are used in vivo and whether the primary and secondary motifs of a TF are associated with different regulatory functions. We first attempted to use the ChIP-chip data from the large-scale study of Harbison et al. [7] to address these questions. However, of the 34 TFs classified as variable spacer length, position interdependence, or alternative recognition interfaces, 12 TFs are not represented in the ChIP-chip data and for another 11 TFs neither the primary nor the secondary motif is enriched in the ChIP-chip data. Of the remaining 11 TFs, 5 have fewer than 30 bound sequences in the ChIP-chip data (for this analysis of primary and secondary motifs, we required a minimum of 30 bound sequences), and 6 TFs were tested only in rich medium although they are known to function in different cellular conditions. Thus, the ChIP-chip data of Harbison et al. [7] cannot be used to address the question of whether the primary and secondary motifs may be associated with different biological functions of the same TF. This question needs to be addressed for each TF individually using high-quality, high-resolution in vivo DNA binding data collected under cellular conditions where the TF is known to be active. While generating or compiling such data is beyond the scope of this paper, for one of the TFs with a secondary motif, Sko1, suitable ChIP-chip data were readily available and we analyzed them in detail (see below).

Primary and secondary DNA binding site motifs for TF Sko1 are associated with different regulatory functions

When the SKO1 gene was first cloned [31], it was reported to encode a bZIP protein that binds to the ATF/CREB motif (TGACGTCA) but that can also bind a slightly different site (ATGACGTACT) in the promoter region of SUC2 (a sucrose hydrolyzing enzyme), acting as a repressor of SUC2 transcription [31]. These two sites are perfect matches for the secondary and primary Sko1 motifs - TGACGTCA and ATGACGTA, respectively. Recently, Ni et al. [37] analyzed the temporal DNA binding of several TFs involved in osmotic stress response in S. cerevisiae, including Sko1, by ChIP-chip on high-density oligonucleotide arrays. The ChIP-chip experiments were performed after incubation of the yeast in high salt concentration for 0, 5, 15, 30, and 45 minutes; for each time point, Ni et al. reported the regions bound by Sko1 at a false discovery rate of 0.01. Each bound region located within 1 kb of a gene was assigned to that gene [37]. We scored the regions bound by Sko1 in vivo according to the primary and the secondary motifs using the GOMER model [38], which computes the probability that a DNA sequence is bound by a TF with a particular PWM. Figure 6b shows a scatter plot of these scores for the regions bound by Sko1 in vivo after salt treatment for 5 minutes; we obtained similar results for other time points (data not shown). There are high-scoring regions for both the primary and the secondary Sko1 motifs, which suggests that both motifs are utilized in vivo. Next, for the bound regions that score highly according to the primary motif but low according to the secondary motif (marked in red in Figure 6b), we performed a Gene Ontology (GO) annotation term enrichment analysis of the bound genes using FuncAssociate2 [39] and found significant enrichment (P < 0.005; Additional file 1) for the categories hexose metabolic process, polysaccharide catabolic process, monosaccharide metabolic process, and carbohydrate metabolic process (Figure 6c). Similarly, we analyzed the ChIP-bound regions that score highly according to the secondary motif but low according to the primary motif (marked in blue in Figure 6b) and found that different GO categories were significantly enriched, including peroxidase activity, cellular response to oxidative stress, response to oxidative stress, and antioxidant activity (Figure 6d), which indicates that the secondary Sko1 motif is associated primarily with genes involved in oxidative stress. In addition to its critical role during osmotic stress response [37], Sko1 has also been shown to regulate genes encoding enzymes implicated in protection from oxidative damage [40]; our analysis suggests that Sko1 performs this function through its secondary DNA binding site motif. We also find that the Sko1 secondary motif may be used to regulate heat response genes, which suggests a novel regulatory function for this TF. Sko1 is not the only TF that utilizes both the primary and the secondary motifs in vivo. Evidence from small-scale studies shows that Gcn4, which binds primarily to TGACTCA sites upstream of amino acid biosynthetic genes [41], also binds with high affinity to the secondary motif TGACGTCA and activates transcription through this site in vivo [42]. We anticipate that future in-depth analyses of high-quality ChIP-chip data, similar to the analysis we performed for Sko1, will show that many of the secondary DNA binding site motifs of yeast TFs are used in vivo, and that they are associated with different regulatory functions of the TF.
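The GOMER-style scoring used above for the Sko1-bound regions can be sketched as follows. This is a simplified version under stated assumptions: the per-window binding probability is taken to be the PWM probability of the window normalized to the optimal site (scaled by a free parameter of our own, `scale`), and windows on both strands are combined with a noisy-OR, as in the GOMER framework [38]. The published model, and the accessibility-aware variant we describe in Materials and methods, differ in detail.

```python
import math

IDX = {"A": 0, "C": 1, "G": 2, "T": 3}

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def window_prob(pwm, window, scale):
    """Binding probability of one window: the PWM probability of the window,
    normalized so the optimal site has probability `scale` (a crude stand-in
    for concentration/affinity terms that the full model treats explicitly)."""
    p, best = 1.0, 1.0
    for col, base in zip(pwm, window):
        p *= col[IDX[base]]
        best *= max(col)
    return scale * p / best

def gomer_score(pwm, seq, scale=0.5):
    """Noisy-OR over all windows on both strands: P(bound) = 1 - prod(1 - p_w)."""
    w = len(pwm)
    log_unbound = 0.0
    for strand in (seq, revcomp(seq)):
        for i in range(len(strand) - w + 1):
            p = min(window_prob(pwm, strand[i:i + w], scale), 1.0 - 1e-12)
            log_unbound += math.log1p(-p)
    return 1.0 - math.exp(log_unbound)

# Scoring each Sko1-bound region with the primary and the secondary PWM
# separates the regions as in the scatter plot of Figure 6b.
```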
Predicted functions of the newly characterized TFs Vhr1 and Vhr2

We used the PBM data in a sequence-based promoter analysis as described previously [11] to predict target genes and biological roles for the newly characterized proteins (Additional file 7). Briefly, this method scores genes according to the presence of PBM-derived DNA binding sequences in their promoter regions; although the presence of a binding site sequence does not guarantee in vivo TF binding and regulation of the downstream gene, this analysis provides computational predictions of TF regulatory targets and associated biological functions. This analysis allowed us to make initial function predictions for two newly characterized proteins, Vhr1 and Vhr2, with poorly annotated functions. The top 200 predicted target genes of Vhr1, scored according to the PBM 8-mer data (Additional files 1 and 8), are significantly enriched [39] (P_adj ≤ 0.001) for the GO categories small molecule biosynthetic process, small molecule metabolic process, and cofactor binding (Additional file 7), consistent with its previously discovered role in regulating VHT1 (Vitamin H transporter) and BIO5 in a biotin-dependent manner [32]. Additional, novel roles for Vhr1 are predicted for cellular nitrogen compound biosynthetic process and the biosynthesis and metabolism of arginine, glutamine, serine, and other amino acids (Additional file 7). Because of its highly similar DNA binding specificity, Vhr2 is also predicted to function in most of these same biological processes. Gene expression data from a large microarray compendium containing 352 datasets from 233 published studies [43] lend additional support for a role of Vhr1 in amino acid and nitrogen-related biological processes. Using the SPELL search engine [43], we find that gene expression microarray experiments involving leucine [44] and histidine limitation [45] are among those ranking highest for Vhr1 differential gene expression. Additionally, when considering the 50 genes most similarly expressed as Vhr1 across all datasets, the significantly enriched GO terms (P < 0.05, Bonferroni-corrected Fisher's exact test [43]) include cellular amino acid biosynthetic process and cellular nitrogen compound biosynthetic process; similar enrichment is observed for Vhr2. These amino acid-related roles for Vhr2 are further supported by its known physical interaction with Ape2p [46], a leucine aminopeptidase involved in the cellular supply of leucine from external substrates as well as in general peptide metabolism [47,48]. Finally, we used the CRACR algorithm [49] to survey approximately 1,700 gene expression microarray data sets to identify conditions in which Vhr1 or Vhr2 are predicted to regulate their target genes, and found that the putative target genes of these TFs are predicted to be significantly induced under amino acid starvation and nitrogen depletion conditions (Additional file 9).

Inference of direct versus indirect TF DNA binding in ChIP-chip data

ChIP-chip and ChIP-Seq, which measure genome-wide, in vivo TF DNA binding, are powerful approaches for determining what genomic regions are occupied by a TF in vivo and thus what target genes it might regulate. Although such ChIP data are often used to derive TF DNA binding site motifs, the reported binding sites and motifs may reflect the DNA specificity of multiprotein complexes in addition to, or instead of, direct DNA binding of the profiled factor. We re-analyzed the S. cerevisiae in vivo ChIP-chip data of Harbison et al. [7] using the in vitro motifs for 150 TFs to determine whether the factors profiled by ChIP bind DNA directly or indirectly [13]. For each ChIP data set we computed the enrichment of the 150 primary motifs and the 39 secondary motifs in the ChIP-bound versus the ChIP-unbound sequences, as described previously [13] and in the initial section of the Results and discussion. We consider a motif significantly enriched in a ChIP data set if it has an AUC ≥ 0.65 and an associated P-value ≤0.005 (based on randomizations of the motif) [13]. For each ChIP-chip data set, if either the primary or the secondary motif of the profiled TF was significantly enriched, then we conclude that the factor binds DNA directly. This was the case for 71 of the 167 examined ChIP-chip data sets. For 22 additional data sets the profiled TF was enriched, but its enrichment was just below our stringent significance criteria. We analyzed these sets more closely and similarly conclude that direct DNA binding of the profiled TFs is the most likely explanation for these 22 data sets (Additional file 10). For 33 ChIP-chip data sets, the motif of the profiled TF was not significantly enriched and only the motifs of TFs with different DNA binding specificities were significantly enriched. The most likely explanation for these data sets is indirect DNA binding of the profiled factor through one of the TFs whose motifs are significantly enriched. Thus, of the 167 ChIP-chip data sets for which high-resolution in vitro data were available for the profiled TF, roughly half (93) can be readily explained by direct DNA binding, about 20% can be explained by indirect DNA binding, while the remaining 41 data sets were not explained by any of the in vitro motifs, either because the set of motifs is still incomplete, or because the analyzed ChIP-chip data were too noisy, or because the profiled TF might bind DNA directly or indirectly through association with a variety of different motifs, no one of which is responsible for a significant fraction of the regions occupied in vivo. The decision logic is summarized in the sketch below.
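A minimal sketch of this three-way classification, with hypothetical names of our own and a simplified notion of 'enrichment' (a precomputed AUC and P-value per motif per data set), is given below; the thresholds are those stated above.

```python
def classify_dataset(profiled_tf: str,
                     enrichment: dict[str, tuple[float, float]],
                     auc_min: float = 0.65, p_max: float = 0.005) -> str:
    """Classify one ChIP-chip data set as direct/indirect/unexplained binding.

    `enrichment` maps each TF name to the (AUC, P-value) of its best motif
    (primary or secondary) in this data set, precomputed as described above.
    """
    def significant(tf: str) -> bool:
        auc, p = enrichment.get(tf, (0.5, 1.0))
        return auc >= auc_min and p <= p_max

    if significant(profiled_tf):
        return "direct"        # the profiled TF's own motif is enriched
    if any(significant(tf) for tf in enrichment if tf != profiled_tf):
        return "indirect"      # only other TFs' motifs are enriched
    return "unexplained"
```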
Approaching a complete collection of TF DNA binding specificities in S. cerevisiae

Because of our goal of identifying previously unknown TFs and our willingness to test even low-confidence predictions of potentially sequence-specific DNA binding proteins, our criteria for including candidate regulatory proteins in this study were permissive (that is, chromatin-associated proteins or proteins simply annotated as transcriptional regulatory proteins) and thus included proteins that likely do not have sequence-specific DNA binding activity. Of the 92 proteins (out of 155 attempted) that did not belong to a well-characterized DBD family that we nevertheless assayed by PBM, only 2 (Msn1, Gcr1) resulted in sequence-specific DNA binding motifs. Several classes of proteins contain structural domains that have failed to yield sequence-specific DNA binding motifs in this study or any of the previous high-resolution in vitro studies performed for S. cerevisiae or mouse proteins [10-12,16]: bromodomain; c; FYVE; HhH-GPD; HHH; HTH_3; PHD; SAP; SIR2; SNF2_N; XPG_N; zf-CCCH; zf-CCHC; zf-DHHC; zf-MIZ; and zf-BED. Furthermore, both the CBFD_NFYB_HMF and Copper-fist domains have produced sequence-specific DNA binding motifs from in vivo ChIP-chip experiments [7,20], but have failed to do so in any of the aforementioned in vitro studies, most likely due to the absence of protein partners or the necessary copper ion cofactor, respectively. Of the 27 TFs whose DNA binding specificities were determined successfully by PBMs in this study, nine lacked prior high-resolution in vitro DNA binding data from universal PBM or MITOMI assays: Gcr1, Hmlalpha2, Mot3, Stp1, Sut1, Upc2, Vhr1, Vhr2, and Zap1 (Figure S5 in Additional file 1 and Additional file 11). Vhr1 and Vhr2 are discussed in detail in an earlier section. Sut1, a member of the Zn2Cys6 TF family, binds the motif AASTCCGA, which is in excellent agreement with the PBM-derived motif for the highly similar Zn2Cys6 TF Sut2 [11], but differs significantly from a prior motif for Sut1 derived from in vivo ChIP-chip data [7,20].
As discussed above, we conclude that the ChIP-derived motif represents the DNA binding specificity of a co-regulatory TF (the ChIP-derived Sut1 motif matches the motifs of the TFs Mig1, Mig2, and Mig3; Figure 2). For 13 of the 27 factors characterized in this study, PBM data have been reported previously by Badis et al. [10], and for 18 of the 27 factors MacIsaac et al. [20] reported DNA binding site motifs derived from ChIP-chip data [7]. However, when we computed the enrichment of our PBM-derived motifs and previously reported motifs in 17 ChIP-chip data sets where these factors were profiled [7], we found that in 13 of the 17 ChIP data sets the motif reported in this study was the most significantly enriched motif (Figure S5 in Additional file 1). Thus, the new PBM data reported in this study improve on and complement the existing high-resolution DNA binding specificity data, bringing us closer to the goal of obtaining a complete set of high-resolution DNA binding specificity data for all S. cerevisiae TFs.

Conclusions

In this study, we present high-resolution in vitro DNA binding specificity data and motifs for 27 S. cerevisiae TFs, including some that contain a DBD for which no high-resolution motif had existed previously (for example, Vhr1 and Vhr2). These results contribute towards a complete set of high-resolution DNA binding specificity data for all TFs in this important model organism. In particular, our in vitro PBM analysis of S. cerevisiae TF DNA binding brings the set of known yeast TFs with high-resolution DNA binding specificity data to 150 (about 85%) out of a conservative total estimate of 176 TFs likely to have inherent sequence-specific, double-stranded DNA binding activity. With the addition of a more permissive set of 40 proteins (Additional file 12) that might exhibit DNA binding specificity (for a total of 216), this still brings us to at least 70% coverage of all S. cerevisiae TF DNA binding specificities. We note that these estimates may differ from previous studies because we refer strictly to TFs with intrinsic DNA binding specificity and do not include proteins that interact with DNA only indirectly. In total, our curated collection contains high-resolution DNA binding data for approximately 85% of all known and likely sequence-specific DNA-binding proteins in S. cerevisiae. The remaining approximately 15% of sequence-specific S. cerevisiae DNA-binding proteins might require targeted investigation or specialized strategies in order to achieve complete coverage of high-resolution DNA binding specificity data for all S. cerevisiae TFs. We have identified 26 proteins that either are known TFs or have demonstrated lower-resolution experimental data on their DNA binding specificity, or that contain a known sequence-specific DBD; we consider these proteins the highest confidence candidates for future high-resolution in vitro PBM analysis (Additional file 12). Although most of these 26 proteins are from DBD classes with known sequence-specific DNA binding activity (bZIP, homeodomain, zinc cluster, copper-fist, bHLH), their previous failed attempts by in vitro methods may indicate that specific small-molecule cofactors and/or protein partners may be required for specific DNA binding [22]. Investigations of the effects of post-translational modifications on TFs might also reveal requirements for DNA binding specificity or conditions for modified DNA binding specificities. Generation of a complete set of DNA binding specificity profiles for all S. cerevisiae TFs might also require experimental testing of proteins of even lower confidence, or of proteins identified by other criteria as having potential sequence-specific DNA binding activity.
Considering the set of all 222 proteins identified from previous TF candidate lists [7,10,11] and updated annotations in the Saccharomyces Genome Database [50], we identified 40 proteins (Additional file 12) that either contain putative nucleic acid binding domains (Myb; zf-C2H2) found in other proteins that exhibit sequence-specific DNA binding, or that are known to interact with DNA or to be involved in transcriptional regulation, but for which it is currently unknown if they bind DNA directly in a sequence-specific manner (we note that availability of a DNA binding site motif from ChIP-chip data cannot be considered evidence of direct DNA binding of the TF tested by ChIP, as some factors may bind DNA only indirectly as part of transcriptional regulatory complexes [13]). Several of these proteins belong to multisubunit complexes (for example, the Hap2/3/4/5 complex) and may need to be examined for DNA binding specificity in the context of their protein partners [51]. We annotated a set of 156 proteins as unlikely (Additional file 12) to possess sequence-specific DNA binding activity, since they either contain protein structural domains that have never successfully yielded a motif from this or prior large-scale in vitro surveys of TF DNA binding specificity, or interact with DNA indirectly, or lack prior literature evidence for direct sequence-specific DNA binding. Finally, in addition to traditional sequence-specific DNA binding site motifs, DNA structural motifs such as the recombination intermediates recognized by HU protein [52] or alterations in DNA helical twist angle patterns could be investigated. Towards the goal of collating a complete set of cis-regulatory DNA sequences in S. cerevisiae, we performed a complementary analysis - that is, considering candidate regulatory elements not from a protein-centric viewpoint, but rather from the standpoint of putative cis-regulatory motifs. We collected 4,160 previously published S. cerevisiae DNA motifs (Additional file 13), including known TF binding site motifs and candidate regulatory motifs derived from ChIP and gene expression data (Additional file 1). Our goal was to identify 'orphan' motifs, that is, those that do not match any known TF DNA binding site motifs. We identified 34 orphan motifs (Figure S6 in Additional file 1); comparisons to all TF DNA binding site motifs in the JASPAR, TRANSFAC, and UniPROBE databases [53] (Additional file 1) did not identify significant matches to known TF DNA binding site motifs, including motifs for TFs containing DBDs not yet annotated as occurring in any S. cerevisiae genes. Some orphan motifs might correspond to novel TFs with DBDs not yet annotated in yeast, while others might represent weak matches to known TF binding site motifs for TFs that might be utilized only in specific cellular conditions, or in the presence of particular co-factors, or in the context of a limited number of cis-regulatory regions. Alternatively, some of the orphan motifs may represent enriched DNA sequences without a transcriptional regulatory role, or may be artifactual motifs returned by various motif discovery algorithms. Directed experimentation will be required to distinguish among these different possible scenarios.
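Motif-versus-motif comparisons of the kind used in this orphan-motif screen can be sketched as follows. This is a minimal illustration of our own, not the actual comparison procedure used against JASPAR, TRANSFAC, and UniPROBE: it scores two PWMs by the best average Pearson correlation of aligned columns over all ungapped offsets, and does not consider reverse complements.

```python
def column_corr(c1, c2):
    """Pearson correlation between two PWM columns (A,C,G,T probabilities)."""
    n = len(c1)
    m1, m2 = sum(c1) / n, sum(c2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(c1, c2))
    v1 = sum((a - m1) ** 2 for a in c1)
    v2 = sum((b - m2) ** 2 for b in c2)
    return 0.0 if v1 == 0 or v2 == 0 else cov / (v1 * v2) ** 0.5

def pwm_similarity(p, q, min_overlap=5):
    """Best average column correlation over all ungapped alignments of two
    PWMs (each a list of 4-element columns)."""
    best = -1.0
    for offset in range(-(len(q) - min_overlap), len(p) - min_overlap + 1):
        lo, hi = max(0, offset), min(len(p), offset + len(q))
        cols = [(p[i], q[i - offset]) for i in range(lo, hi)]
        if len(cols) >= min_overlap:
            best = max(best, sum(column_corr(a, b) for a, b in cols) / len(cols))
    return best
```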
The high-resolution nature of the in vitro data that we compiled in this study allowed us to perform in-depth analyses of the DNA binding specificity of TFs, resulting in novel structural and gene regulatory insights, which would not have been possible using only the motifs reported in the literature from small-scale experiments that assay binding to only a subset of potential DNA binding sequences, or from ChIP experiments. Our results suggest a number of structural studies that would be interesting to pursue to investigate distinct DNA binding specificities recognized either by an individual TF or by different TF family members. For example, structural studies would aid in understanding how the bZIP protein Hac1 can bind E-boxes (typical of bHLH proteins) as well as the bZIP ATF/CREB motifs [54]. Similarly, structural studies of Upc2 would provide insights on how it (and its close paralog Ecm22) recognize the sterol regulatory element (SRE; TCGTATA) [55], whereas most other members of the fungal-specific Zn2Cys6 family recognize CG-rich binding sites primarily comprising CGG triplet half-sites separated by degenerate spacers of varying lengths [11]. It would also be interesting to determine how structurally distinct DBDs can recognize similar DNA sequences. Vhr1 and Vhr2 contain a relatively uncharacterized DBD for which no structural data are available from any species; it is not yet even known which amino acid residues in the Vhr1 DBD contact DNA. Our PBM data indicate many similarities in DNA binding specificity between the VHR class and members of the well-characterized bZIP family. Finally, the in vivo utilization of primary and secondary motifs for distinct biological functions by Sko1 suggests a novel gene regulatory mechanism, namely, the potential for different functions to be divided among distinct DNA binding sites in the genome for a particular TF. The extent of functionally distinct primary and secondary TF motifs would be interesting to investigate in higher eukaryotes in future studies. In summary, this study expands our understanding of redundancy and divergence among TF family members from a structural standpoint and in terms of their regulatory functions. Moreover, this study brings us closer to, and outlines a set of priorities for, the complete characterization of TF-DNA interaction specificities in S. cerevisiae. The data presented here will be a valuable resource for further studies of transcriptional regulatory networks, and also for further investigations of protein-DNA recognition rules within different TF families. Such efforts in S. cerevisiae serve as a template for similar work aimed at cataloguing and completely characterizing TF DNA binding specificity in higher eukaryotic model organisms and in human. Ultimately, a complete compendium of human TF-DNA interaction specificity will involve cell- and tissue-specific, as well as disease-specific, interaction data that will provide invaluable details towards our understanding of development and disease.

Materials and methods

DNA binding specificity survey of S. cerevisiae TFs

Working towards the goal of obtaining high-resolution DNA binding specificities for essentially all S. cerevisiae TFs, we considered existing yeast TF clone collections as well as additional TFs that may have been missed or did not previously generate high-quality in vitro DNA binding specificity data.
The proteins we examined in this study were largely derived from a collection consisting of both full-length ORF and DBD clones constructed in our prior, large-scale survey [11], plus a few additional clones either tested previously (Hap1, Stb4, Ylr278c) [10] or newly cloned by us (Ste12, Stb5, Vhr1). We selected 106 known or putative TFs that lacked high-resolution in vitro PBM data and 122 S. cerevisiae ORFs and DBDs for which we had lower confidence in their being potential sequence-specific, double-stranded DNA binding proteins; these proteins had only putative or hypothesized domains for binding double-stranded DNA, weak homology to DNA binding proteins, or literature references to potential DNA binding activity. Overall, from the combined set of 228 ORFs and DBDs, 155 were successfully cloned, expressed by in vitro transcription and translation (see below), and attempted on universal PBMs (Figure S7 in Additional file 1). Of these 155 proteins, we successfully obtained high-resolution DNA binding data for 27 TFs (Figure S5 in Additional file 1 and Additional file 12). Of the 128 proteins that were unsuccessful, only 38 contained known sequence-specific DBDs (bZIP, bHLH, Homeobox, Myb, zf-C2H2, zf-GATA, Zn_clus; see Conclusions).

TF cloning and protein expression

Full-length ORFs and/or DNA binding domains were either cloned into the Gateway pDEST15 (amino-terminal GST-tag) expression vector (Invitrogen, Carlsbad, CA, USA) by recombinational cloning from previously created pENTR clones [11] or were cloned by PCR amplification from genomic DNA and Gateway cloning into pDONR221 as described previously [56] (Additional file 14). All pDEST15 clones were end-sequence verified; the source clones from which these clones were derived were previously full-length sequence verified. Nineteen genes were from a previously published, non-Gateway clone collection [10]. All proteins were produced from purified plasmids by in vitro transcription and translation using the PURExpress® In Vitro Protein Synthesis Kit (New England Biolabs, Ipswich, MA, USA) according to the manufacturer's instructions. Glycerol was added to a final concentration of 38%, and proteins were stored at -20°C until further use. Western blots were performed for each protein to assess quality and to approximate protein concentration by visual inspection relative to a dilution series of a recombinant GST standard (Sigma-Aldrich, St. Louis, MO, USA), as described previously [11].

Protein binding microarray experiments and data analysis

Custom-designed, universal 'all 10-mer' microarrays were synthesized (AMADID #015681, Agilent Technologies, Santa Clara, CA, USA) [21], converted to double-stranded DNA arrays by primer extension, and used in PBM experiments essentially as described previously [8,15]. All newly reported PBM data in this study are from experiments performed either on a fresh slide or a slide that had been stripped exactly once [21]. Microarray scanning, quantification, and data normalization were performed using masliner (MicroArray LINEar Regression) software [57] and the Universal PBM Data Analysis Suite [15] as previously described [8,15]. Determination of binding preferences for all 8-mers and derivation of associated DNA binding site PWMs were calculated using the Universal PBM Analysis Suite and the Seed-and-Wobble motif derivation algorithm [8,15]. Acceptable quality of PBM data was assessed according to visual inspection of the Cy3 and Alexa488 scans of the microarrays, the seed 8-mer from Seed-and-Wobble having an E-score of at least 0.45 [21], and obtaining at least five 8-mers with E-scores ≥0.45 matching the derived motif. These filtration criteria are based on our extensive experience with PBM data sets in this and prior studies. Graphical sequence logos were generated from the obtained PWMs using enoLOGOS [58].

Compilation, processing, and annotation of TF DNA binding site motifs

We compiled high-resolution TF DNA binding site motifs from four studies: 1) 27 PBM-derived motifs newly generated in this study; 2) 89 PBM-derived motifs from Zhu et al. [11]; 3) 110 PBM-derived motifs from Badis et al. [10]; and 4) 28 MITOMI-derived motifs from Fordyce et al. [12] (see Additional file 1 for details). All 254 motifs were represented as PWMs. We trimmed all the motifs from both the 5' and 3' ends until two consecutive positions with information content ≥0.3 were reached (a sketch of this trimming rule is given below). The motifs of TFs Cst6, Fkh1, Hcm1, Leu3, Rsc3, Ste12, Stp1, and Ydr520c were trimmed further after visual inspection.
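A minimal sketch of this trimming rule, assuming PWM columns are probability vectors over A/C/G/T and using the standard per-column information content IC = 2 + sum_b p_b log2 p_b (in bits):

```python
import math

def column_ic(col):
    """Information content (bits) of one PWM column of A/C/G/T probabilities."""
    return 2.0 + sum(p * math.log2(p) for p in col if p > 0)

def trim_pwm(pwm, ic_min=0.3):
    """Trim uninformative columns from both ends until two consecutive
    columns with IC >= ic_min are reached (the rule described above)."""
    def first_informative(cols):
        for i in range(len(cols) - 1):
            if column_ic(cols[i]) >= ic_min and column_ic(cols[i + 1]) >= ic_min:
                return i
        return None

    start = first_informative(pwm)
    if start is None:
        return []  # no two consecutive informative columns anywhere
    end = len(pwm) - first_informative(pwm[::-1])
    return pwm[start:end]
```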
Next, we computed the AUC enrichment [13] of each motif in ChIP-chip data sets from the large-scale study of Harbison et al. [7]. We considered all ChIP-chip data sets with at least ten probes reported to be bound at P < 0.001. For the 90 TFs examined in at least two different large-scale studies, we compared the available in vitro DNA binding site motifs and chose the final motifs based on the quality of the in vitro data, the agreement between the in vitro motif and previously reported motifs for the same TF, and the enrichment of the motif in in vivo TF binding data [7] (see Additional file 1 for details). The selected high-resolution DNA binding site motifs are available in Additional file 2 and the source of each motif is specified in Additional file 5. Secondary motifs were computed from the PBM data using the Seed-and-Wobble algorithm, as described previously [16]. Only secondary motifs for which the 8-mer seed had an E-score > 0.48 (a conservative threshold) were considered, to avoid selecting spurious secondary motifs. The selected 39 secondary motifs, trimmed as described above, are available in Additional file 2. For the comparison between in vitro and in vivo DNA binding site motifs, the in vivo motifs reported by MacIsaac et al. [20] were also trimmed, and their enrichment in the ChIP-chip data was computed as described previously [13].

ChIP-chip data analysis using PBM data

We analyzed ChIP-chip data from Harbison et al. [7] essentially as described previously [13]. We use the notation TF_cond to refer to the ChIP-chip experiment for transcription factor TF under environmental condition cond. We scored DNA sequences using a model similar to GOMER [38], but taking into account DNA accessibility, as described previously [13]. Briefly, we use the probability that a TF T binds a DNA sequence X to score every intergenic probe present on the microarrays used in the ChIP-chip experiments [7]. Using the sets of 'bound' and 'unbound' probes from each ChIP-chip experiment, and the probabilities that TF T binds each of the probes, we compute the enrichment of the PBM-derived motif for TF T in the ChIP-chip data as an AUC value. For each ChIP-chip experiment TF_cond we computed the AUC values of the 194 in vitro DNA binding motifs selected as described above. We consider an AUC value significant if it is at least 0.65 and has an associated P-value ≤0.005 (that is, at most one of the 200 random motifs has an AUC value equal to or greater than the AUC value of the real motif). A sketch of this enrichment computation is given below.
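The AUC-plus-randomization test can be sketched as follows, assuming each probe has already been reduced to a single motif score (for instance, a GOMER-style binding probability as above). The shuffling scheme shown (permuting PWM columns) is our own simple stand-in for the motif randomization of [13], and all names are hypothetical.

```python
import random

def auc(bound_scores, unbound_scores):
    """Area under the ROC curve: probability that a random bound probe
    outscores a random unbound probe (ties count one half)."""
    wins = sum((b > u) + 0.5 * (b == u)
               for b in bound_scores for u in unbound_scores)
    return wins / (len(bound_scores) * len(unbound_scores))

def motif_enrichment(pwm, bound, unbound, score_fn, n_rand=200, seed=0):
    """AUC of one motif in one ChIP-chip data set, with an empirical P-value
    from randomized (here: column-shuffled) versions of the PWM."""
    def scores(m):
        return [score_fn(m, s) for s in bound], [score_fn(m, s) for s in unbound]

    real = auc(*scores(pwm))
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_rand):
        shuffled = pwm[:]
        rng.shuffle(shuffled)
        exceed += auc(*scores(shuffled)) >= real
    return real, (exceed + 1) / (n_rand + 1)  # pseudocount avoids P = 0
```

With `score_fn=gomer_score` and `n_rand=200`, the significance criterion above corresponds roughly to at most one shuffled motif matching or exceeding the real AUC.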
We consider an AUC value significant if it is at least 0.65 and has an associated P-value ≤0.005 (that is, at most one of the 200 random motifs has an AUC value equal to or greater than the AUC value of the real motif).

Accession IDs

PBM 8-mer data reported in this paper for 27 TFs have been deposited in the NCBI Gene Expression Omnibus (GEO) database with Platform ID GPL6796 and Series ID GSE34306.

Additional material

Additional file 1: Detailed methods, additional figures, and additional tables.
Figure S1: ClustalW protein sequence alignment of Vhr1 and its homologs in sensu stricto Saccharomyces species. The alignment shows that the second putative basic region of Vhr1 is more conserved than the first basic region.
Figure S2: Unlike AP-1 bZIPs, Vhr1 and Vhr2 bind only to overlapping half-sites. (a) AP-1 bZIP transcription factors (Gcn4, Yap1, Jundm2, and the Fos-Jun heterodimer) and Vhr1 transcription factors (Vhr1 and Vhr2) bind to overlapping TGAC or TTAC half-sites. For each TF we sorted the 8-mers in decreasing order of their E-score, from 0.5 (highest affinity) to -0.5 (lowest affinity). The black lines show the 8-mers that contain TGACT (or TTACT for Yap1). (b) AP-1 factors (Gcn4, Yap1, Jundm2, and Fos-Jun) also bind to non-overlapping half-sites, while Vhr1 factors (Vhr1 and Vhr2) do not bind to non-overlapping half-sites. The black lines show the 8-mers that contain TGACGT (or TTACGT for Yap1). The PBM data were reported in Zhu et al. [11] (Gcn4, Yap1), Badis et al. [16] (Jundm2), Alibés et al. [76] (Jun-Fos), or this study (Vhr1 and Vhr2).
Figure S3: Comparison of the DNA binding specificities of Hac1 (both from this study and from Badis et al. [10]) against bHLH and bZIP TFs. (a) PBM-derived motifs for bZIP TF Hac1 match motifs of bHLH TFs better than motifs of bZIP TFs. (b, c) In-depth comparison of the DNA binding specificities of Hac1 and bHLH TF Cbf1. (d) In-depth comparison of the DNA binding specificities of Hac1 (this study) and two bZIP proteins that bind overlapping or adjacent TGAC half-sites: Gcn4 and Sko1, respectively. The scatter plots show the 8-mer E-scores.
Figure S4: Primary and secondary DNA binding site motifs derived from high-resolution in vitro PBM data.
Figure S5: Comparison of motif enrichment in ChIP-chip data for the 27 TF motifs reported in this study versus previously reported PBM-derived (Badis et al. [10]), ChIP-derived (MacIsaac et al. [20]), or MITOMI-derived (Fordyce et al. [12]) motifs for these 27 TFs (where available).
Figure S6: S. cerevisiae orphan DNA binding site motifs.
Figure S7: Schema of PBM experimental pipeline and results. A total of 228 ORFs/DBDs were considered in this study. "Lacking in vitro PBM data" refers to the initiation of this study in late 2008, after completion of our prior PBM survey (Zhu et al. [11]) and prior to publication of two more recent in vitro surveys (Badis et al. [10]; Fordyce et al. [12]).
Table S1: TF DNA binding site motifs from the in vitro PBM data of Badis et al. [10].
Table S2: TF DNA binding site motifs from the in vitro MITOMI data of Fordyce et al. [12].
Table S3: TFs with curated high-resolution DNA binding site motifs derived from in vitro PBM data. The source of the selected motif (PWM) is indicated.
Table S5: TFs with DNA binding site motifs reported by MacIsaac et al. [20] according to in vivo ChIP-chip data. TFs for which high-resolution in vitro motifs are also available are marked in boldface font.
Table S8: TFs with secondary DNA binding site motifs identified from the curated set of high-resolution PBM data.
2016-03-14T22:51:50.573Z
2011-12-21T00:00:00.000
{ "year": 2011, "sha1": "d85c1033283c4e6a65deeb9a1e66edf345d5b768", "oa_license": "CCBY", "oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/gb-2011-12-12-r125", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "639f0a7afd9f2c30d6b1752ce9871ec712cd1c3d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
219757974
pes2o/s2orc
v3-fos-license
The EUROfusion Materials Property Handbook for DEMO In-vessel Components – Status and the challenge to improve confidence level for engineering data

Abstract

The development of a specific materials database and handbook, for engineering design of in-vessel components of EU-DEMO, is an essential requirement for assessing the structural integrity by design. For baseline in-vessel materials, including EUROFER97, CuCrZr, tungsten as well as dielectric and optical materials, this development has been ongoing for several years within the Engineering Data and Design Integration sub-project of the EUROfusion Materials Work Package. Currently the database is insufficient to ensure reliable engineering design and safety or hazard analysis and mostly does not yet exist in established nuclear codes. In this paper the current status of the EU-DEMO database and handbook for key in-vessel materials is provided. This comprises practical steps taken to obtain the raw data, screening procedures and data storage, to ensure quality and provenance. We discuss how this procedure has been utilized to produce the materials handbook chapter on EUROFER97 and the critical challenges in data accumulation for CuCrZr and tungsten, planned mitigations and the implications this has on structural design. Finally, key elements and methodology of our strategy to develop the materials database and handbook for the in-vessel materials are outlined, including concepts to accommodate sparse irradiated materials data and links to EU-DEMO engineering design criteria.

Introduction

The development of DEMOnstration reactors, that prove the scientific and technical viability of fusion reactors, is of paramount importance to realizing commercially viable fusion power for humanity. Within EUROfusion's Power Plant Physics and Technology programme [1] the design of EU-DEMO ranks as one of the world's leading DEMOnstration reactor design endeavors [2]. Within the EUROfusion roadmap to the realization of EU-DEMO [3], one of the most critical parts is the successful engineering of the in-vessel components, chiefly the Breeder Blanket and Divertor components, though the diagnostic and heating system ports should not be forgotten [4-7]. To enable the successful design of these components, principally through a design-by-analysis process [8], understanding the materials properties within the operational environment is critical. The determination of in-vessel component materials performance, prior to operation, requires statistically relevant and high-quality materials test data over the operational design window for the components. The organization, collection, collation, quality checking and dissemination of structural, armor, heat sink and optical/dielectric in-vessel materials test data is underway within EUROfusion's Power Plant Physics and Technology programme, in the Engineering Data and Design Integration sub-project of the Materials work package. This is being realized through development of a specific materials database and handbook for these in-vessel materials. The materials database is a storage medium containing relevant materials test data that has sufficient provenance and quality to be incorporated. The materials property handbook is a summary document, based upon statistically determined and quality-checked data from the materials database.
The EU-DEMO in-vessel components materials property handbook will be the document used by EU-DEMO designers to determine the materials allowables input to the design code/criteria used for engineering design (such as the DEMO Design Criteria [9]). The materials handbook will be required to ensure acceptable design and to justify the structural integrity of the EU-DEMO reactors. The materials property handbook is a critical document required, to different degrees of completeness, within the conceptual design, engineering design, construction and operational phases of the EU-DEMO project. The database and materials property handbook (MPH) shall also serve as the basis for future material appendices in DEMO Design Criteria (DDC) or code frameworks.

This paper represents the first dedicated overview of the materials database and handbook development within EUROfusion. We review the work to date on the development of the EU-DEMO in-vessel materials property handbook and databases, looking at the requirements of the database and handbook. We then examine the current status and plans for the materials handbook and database for the key structural (EUROFER97), armor (tungsten), heat sink (CuCrZr) and optical and dielectric materials considered for the EU-DEMO in-vessel components. Finally, we summarize the current status and strategy within EUROfusion to obtain the required materials properties to enable the engineering design of EU-DEMO in-vessel components, highlighting the proposed approaches to cover fusion-specific properties within EU-DEMO project timeframes and progressively improve confidence in the engineering design.

The Database

The EU-DEMO in-vessel components materials database is the storage medium which houses all the required materials properties test data. The data from this database is screened, summarized and collated to form the materials properties handbooks. Since 2014 the EUROfusion consortium has been developing a materials database for the in-vessel materials. Prior to this date (despite some efforts [10,11]) there was not a dedicated EUROfusion materials database, with previous material test data scattered across different EU research laboratories and within the open literature. From the outset of the Engineering Data and Design Integration sub-project it was recognized that the long-term goal of the database was to hold materials data of sufficient quality and provenance that it could be used to justify the structural integrity of nuclear components. To support this, the key starting point of the work was development of the database schema and templates. The schema and data templates were developed based on previous nuclear materials databases used for fission codes, designed to ensure sufficient quality and provenance of the data to allow design. The schema ensured that all the critically required test data, test parameters and supporting metadata, such as material manufacturer, batch, testing standards applied, etc., were captured (an illustrative sketch of such a record template is given below). The inherent value of materials data is high, and EUROfusion developed a secure online storage process behind secure servers that enables data access for key collaborators, without open access to any proprietary or sensitive data. To support the pre-conceptual and conceptual design phases of EU-DEMO, it was a necessity to provide as much data as possible on the proposed materials in a ready timeframe.
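To make the schema discussion concrete, a minimal sketch of one such test record follows. The field names are hypothetical illustrations of the kinds of test data, test parameters and provenance metadata described above; the actual EUROfusion schema is not reproduced here.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TensileTestRecord:
    # Provenance metadata required by the screening procedure
    material: str                  # e.g. "EUROFER97"
    manufacturer: str
    batch_id: str                  # production heat/batch
    product_form: str              # e.g. "plate", "rod"
    testing_standard: str          # test standard applied
    data_source: str               # lab report or literature reference
    # Test parameters and results
    test_temperature_C: float
    yield_strength_MPa: float
    ultimate_tensile_strength_MPa: float
    irradiation_dose_dpa: float = 0.0
    irradiation_temperature_C: Optional[float] = None

In this sketch, a record missing its provenance fields (manufacturer, batch, testing standard) would fail screening and be rejected, mirroring the procedure described above.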
As a result, significant effort was placed in collating all available data from existing databases and from the open literature. All data obtained underwent a screening procedure to ensure that it had sufficient quality and provenance. This was realized through translation of the open data onto the developed templates. Data with insufficient provenance or quality, as required by the database structures, was rejected. All accepted data was stored in the EU-DEMO in-vessel components materials database, which now provides a single, readily extendable source for EUROfusion data.

The Handbook

The EU-DEMO in-vessel components Materials Property Handbook is the collated and screened materials property data from the materials database. This is summarized to give clear and concise materials properties that can be used to determine materials allowables for design code/criteria analysis [9]. Prior to being utilized within the materials property handbook, all data within the database undergoes a screening procedure; this process ensures that all data included within the EU-DEMO in-vessel components Materials Property Handbook is of sufficient quality to support nuclear component design. This screening procedure was developed to follow the same processes and quality checks as utilized in the nuclear fission industry. The materials property handbook is structured to provide concisely the materials properties required to determine the design limits of the materials. Presently the EU-DEMO in-vessel components Materials Property Handbook is divided into different chapters, where each chapter represents a different material. Each chapter is divided into different sections for the key properties, such as yield strength. For each section the data is summarized and typically provided in a basic table and graphical format. For most properties an averaged and minimal value are provided along with a simplified equation for their calculation within the limits of the data range provided (see Fig. 1). This follows typical conventions for materials handbooks for engineering design and construction projects.

[Fig. 1: shows different internationally accepted methodologies for calculating average and minimum curves; reproduced from [12].]

This structure enables designers to readily obtain the key materials properties required to determine the materials allowables to be used within design rules to determine design limits. The materials database and handbook structures as described here have been applied to key EU-DEMO in-vessel component materials. The current status and plans for the database and handbook chapters for these materials are overviewed in the following sections.

EUROFER97

EUROFER97 is a reduced activation ferritic martensitic steel. It is the primary structural material considered for the European ITER test blanket modules [13], the EU-DEMO breeding blankets [4] and the EU-DEMO Divertor cassette [5]. As the main proposed structural material, EUROFER97 plays a paramount part in the structural integrity assessments of the in-vessel components. This is a critical material that must have sufficient quality data to allow for component design. EUROFER97 is a specialist steel with regard to its allowable compositional range; however, its manufacture can utilize existing steel knowledge and infrastructure, thus there are limited issues anticipated with mass production of consistent and reproducible EUROFER97 steel. There have been several batches of this material produced on an industrial (tens of tons) scale.
This has allowed a standard materials manufacturing specification to be developed for EUROFER97. Within a wider context EUROFER97 is undergoing a codification process within the RCC-MRx nuclear code, to meet the requirements of the ITER test blanket modules [14-17]. There remains work required before this material can be moved from the probationary section of the RCC-MRx code to a fully codified material. This work was advanced under F4E (Fusion for Energy) and EUROfusion to support codification of EUROFER97 for the ITER test blanket modules [4]. Presently, EUROFER97 represents the most advanced material with regard to the development of the EU-DEMO in-vessel components materials database and handbook. A dedicated review paper was recently published specifically on this [12]. The present status and development plans are briefly summarized below.

The operational design window for the EU-DEMO in-vessel components goes far beyond that of the ITER test blanket modules [4,5,13]. There is significant missing data on the materials performance of EUROFER97 to cover EU-DEMO requirements. Some of the key failure mechanisms for the in-vessel components are anticipated after irradiation ageing, and there is insufficient data to date on the neutron irradiation effects on EUROFER97, especially at higher doses, with correct fluence or under a fusion neutron spectrum. Where available, neutron irradiation aged data is included within the EU-DEMO handbook chapter on EUROFER97. There is limited data on the interaction of EUROFER97 with proposed coolants and breeder materials, despite multi-material interfaces causing modifying effects on the steel. The welding and product forms are not confirmed for EU-DEMO and the existing handbook chapter only focuses on as-manufactured plate and rod materials. Work is ongoing within EUROfusion to address these issues, with long-term planning to address all areas and immediate developments on design-limiting factors including: i) obtaining materials test data required for the ITER test blanket modules, ii) fission materials test reactor irradiation testing to DEMO-relevant levels (up to 20 dpa). These represent the key developments to the EUROFER97 database and handbook within the Materials work package of EUROfusion [4,18]. As the EU-DEMO in-vessel components develop, and down-select product forms, joining methods and interface materials, significant "technological" materials testing around these areas will be required to ensure the structural stability of the material and structural integrity of the EU-DEMO design. Thus, despite being the most advanced in-vessel component material, there remains significant work required for the EUROFER97 handbook chapter.

Tungsten

Owing to the combination of high H/He plasma ions, high heat flux, high energy neutrons and energetic ions that escape from the plasma, the Divertor and Breeding Blanket components within EU-DEMO require a dedicated "armor" on their plasma-exposed surfaces. Due to a range of favorable properties [19] tungsten is the main armor material considered for the first wall of the breeding blankets [4] and the plasma-facing targets for the divertor [5]. While this is a primarily functional role, the armor needs to retain sufficient bonding to the underlying materials, retain sufficient thermal and mechanical properties and maintain fusion-specific interfacing performance (such as plasma erosion and high heat flux stability), to maintain its function.
This stipulates the need for reliable and high-quality materials to be utilized, necessitating a materials property handbook chapter to support the engineering designs that utilize this armor material [9]. There is a critical issue that the fusion community needs to rapidly address to have a reliable and reproducible armor material we can use for design. Presently, to the knowledge of the author, there is no reproducible supplier of high-quality tungsten. This is a significant statement considering tungsten's proposed use in fusion for decades [20,21]. The result of this is that the materials performance of all tungsten presently produced by key manufacturers cannot be included in any materials property handbook or used for design, as the properties vary significantly between different suppliers, different product forms and even batch to batch of the same product from the same supplier [18]. A long-term effort is required here and is underway in EUROfusion, linking with colleagues from Japan via the Broader Approach [22], to work with manufacturers and develop a reproducible and high-quality tungsten with consistent materials performance from batch to batch of manufacture. While this effort is underway, it will be several years until this material is being reliably produced and we hold sufficient materials data (including irradiation performance) to incorporate it into a materials property handbook chapter. This is a key focus for the Materials work package in EUROfusion.

Owing to the timeframes of EUROfusion design, an interim materials property handbook chapter has been produced to provide designers with a consistent set of preliminary armor material performance data. This has been developed from data sourced on a variety of different tungsten product forms and different manufacturers, with varying properties accordingly. Depending on the availability, the types of tungsten products used for allowable calculations were unambiguously indicated throughout the interim tungsten materials property handbook chapter, to highlight property variation between different products and allow the designers to select consistent sets of data. This interim handbook chapter was developed from the screened tungsten materials database, which was based on open literature and EUROfusion laboratory databases. There was insufficient materials data within the developed database to provide the full materials performance required for design, and presently designers must persist with limited understanding of the full range of anticipated performance, with irradiated tungsten materials data being particularly sparse.

Important progress has been made in the development of the structure of the armor materials handbook chapter on tungsten. This chapter differed from EUROFER97 with the inclusion of additional materials performance sections critical to armor, such as tritium retention, oxidation, hardness, plasma erosion and high heat flux performance [23]. These sections were included based on detailed discussions with all designers with interfaces with the armor material, to capture all the required performance data. The inclusion of plasma-interaction-specific properties highlighted potential difficulties, including a lack of consistent and accepted standards for how to record, review and summarize the materials performance.
As an example, there was no consistent method of recording or indicating acceptable high heat flux performance, negating any method of including the sporadically available data within the armor handbook chapter. To accommodate these issues the Materials work package developed standards that would capture all of the key information relating to high heat flux testing, and from this developed a new EU-DEMO standard for this data to be included in the handbook; the details of this example work can be found in [24]. There are thus three ongoing developments within EUROfusion surrounding the armor materials handbook chapter. First, the co-development with manufacturers, via the Broader Approach, of a reliable and reproducible high-quality tungsten which can form the baseline armor material to be considered for EU-DEMO. Second, the development of an interim handbook chapter based on (screened and reviewed) open data on various tungsten forms to allow designers consistent data for preliminary designs. Third, development of the handbook chapter for armor materials to include all required materials performance areas, including plasma-interaction-specific performance and the subsequent development of standard methods for exposing this data.

Copper Chrome Zirconium

Copper Chrome Zirconium (CuCrZr) is considered the baseline "heat sink" material for the plasma-facing targets of the EU-DEMO divertor. Within present EU-DEMO divertor designs the primary use of CuCrZr is as the water-cooled pipe connected to the tungsten armor to remove the heat from the target assembly [5]. While CuCrZr is a readily available industrial material with consistent and reproducible manufacturing processes, its materials performance is strongly affected by heat treatments [25]. Within EU-DEMO there remains an open question on the materials properties that are needed for design, or specifically the material condition that should be used to best represent the performance in operation. The different manufacturing procedures for the proposed divertor target assemblies themselves can result in different heat treatments for the CuCrZr that change its materials properties from the as-supplied material condition [26]. It is also recognized that under the operational conditions proposed for the target [5] there will be significant variation in CuCrZr materials properties, due primarily to irradiation and thermal effects. It is likely that the final CuCrZr material condition to be included may need to wait for a down-selection of the target manufacturing technique, to ensure the CuCrZr material condition represents that of the "as manufactured" target assembly condition, with subsequent thermal and irradiation aged effects acting upon this "correct" material condition for start of life of the component. Nevertheless, fundamental properties for the most likely failure mechanisms are urgently needed across a broad range of likely conditions. Owing to the timelines of EU-DEMO and to support pre-conceptual and conceptual designs, an interim materials handbook chapter on CuCrZr has been produced within the Materials work package of EUROfusion. The CuCrZr handbook chapter is based on screened and summarized open literature data that was collated into our database, inclusive of a range of different heat treatment conditions, on ITER-grade chemical composition CuCrZr [27], to provide the EU-DEMO designers with a set of self-consistent materials property data that can be used.
Further details of the interim CuCrZr handbook development and the included data can be found in the recent paper [25]. There are several ongoing efforts within EUROfusion related to the heat sink materials handbook. First, the development of an interim handbook chapter based on screened and reviewed data from the open literature. Second, the development of testing campaign requirements to ensure the ready development of a baseline materials handbook, once the material condition for the heat sink material for the divertor of the EU-DEMO design is determined. This campaign will include thermally and irradiation aged data to incorporate operational changes in the heat sink material.

Optical and Dielectric materials

Within the in-vessel components themselves there will be a range of functional materials associated with diagnostic and heating and current drive activities [28]. These include a wide range of optical and dielectric materials. The final materials that will be utilized for the various diagnostic and heating and current drive ports are still undetermined, with a range of materials under consideration. In many cases these materials may represent safety-critical systems, either as they form the only barrier from the plasma to beyond the vacuum vessel (such as port windows in Neutral Beam systems) [29,30], or as their performance may affect plasma control (such as mirrors used for Thomson scattering measurement systems), which itself may hold a safety role in an operational EU-DEMO reactor. It was thus considered important to hold high-quality and regulator-reviewable properties for these materials. Within EUROfusion the key initial task was determining the structure required for handbook chapters on these optical and dielectric materials, as the necessary materials performance metrics vary significantly from structural materials. Working closely with design teams and the materials teams researching these optical and dielectric materials, database collection processes were started, including development of templates to standardize and provide consistent data for specialized properties, such as optical transmittance. This database formed the basis of interim materials property handbook chapters on optical and dielectric materials that have been produced and released to the EU-DEMO design teams.

The interim materials property handbook chapters on optical and dielectric materials presently cover a wide range of potential materials. Materials data was acquired through dedicated screening and storage of open data, through direct contact with manufacturers and through collation of internal EUROfusion research into the materials. The sparse data on the effects of the operational conditions (e.g. gamma irradiation damage) have been actively sought and included. The properties included are still limited, yet provide the designers with a consistent set of data and collate all the available data into a single location. This represents a critical project-oriented step to ensure there is no "loss" of data or repeating of tests. It also holds a key role in supporting future development of these materials by highlighting where data is missing. The continued development of the materials database and handbook chapters for optical and dielectric materials remains a key activity within EUROfusion. This is realized via interactions with materials research and designer teams.
Once materials are down-selected by the design teams, dedicated materials testing campaigns can be developed using the interim handbooks as a starting point for planning and developments.

Challenges in data collection for EU-DEMO in-vessel components materials

There have been many papers that have highlighted the challenges associated with gathering materials properties data for fusion [9,18,31-35]. Given the significance of these challenges for the development of the EU-DEMO in-vessel components materials database and handbook, some, but certainly not all, of the key challenges are highlighted below. Importantly, the presently proposed strategy for EUROfusion to address these challenges is also overviewed.

Dealing with fusion spectrum irradiation

Neutron irradiation flux is significant in EU-DEMO in-vessel components (see Fig. 2) and has many significant effects on the properties of materials; irradiation aged materials properties must therefore be considered in the materials property handbooks to enable design of EU-DEMO [36]. The synergistic/in-pile effects of mechanical loads, neutron energy, fluence and temperature during neutron irradiation dramatically change the materials properties, differently to independent or sequential effects [37,38]. To provide accurate results, tested materials should be subject to the correct neutron fluence, at the correct temperature, while subjected to all other interfaces and loads. Without a working fusion reactor, with sufficient space to enable validation testing of new materials and components, gathering materials data that simulates all of these conditions is completely impractical. The currently proposed approach within EUROfusion is to gather sufficient data to allow engineering-sound and scientifically validated approximations of the materials performance, supplemented with sufficient safety factors to enable a conservative and structurally safe design. This is a common approach within engineering structures, but it still requires significant data to support the scientific cases for approximations of the full environmental operating conditions. EUROfusion is approaching this in several ways, including:

- Fission neutron irradiation tests to determine the effects of neutron damage to relevant dpa (displacement per atom) levels [36]. Fission irradiations will need to cover sufficient temperature ranges to highlight the strongly synergistic effects of temperature on the materials properties under neutron irradiation. It is also critical that relevant-fluence-level irradiation facilities are utilized to better match fusion irradiation effects. Where no microstructural changes are anticipated in the intermediary conditions, the "worst case scenarios" with the highest dpa levels at maximum and minimum operational temperatures can be utilized to reduce initial testing volumes.
- Selective "in-pile" fission neutron irradiation testing to elucidate synergistic loading effects under neutron irradiation.
- Gathering of fusion spectrum neutron irradiation test data. This is considered a critical step to ensure there are negligible, unpredictable variations between fission and fusion spectrum effects at the dpa levels anticipated for the first blanket and divertors in EU-DEMO [3]. A sparse data set should be possible from one or two tests within EU-DEMO timeframes via IFMIF or DONES systems [39].
The total volume of materials that can practically be neutron irradiated in fission materials test reactors or within IFMIF/DONES systems is very limited [40]. Thus, we must build up a scientifically valid justification based on sparse and incomplete data sets, while also qualifying and utilizing small-specimen test techniques to maximize the data gathered [40]. Improved management of incomplete and sparse data sets will be achieved by ensuring the engineering test data are supported by mechanistic modeling of the fundamental effects on materials, as is being considered in the IREMEV sub-project of EUROfusion's Materials work package [18], and by incorporating probabilistic statistics and Bayesian logic into our data processing, as proposed within the EU-Japan Broader Approach [22]. Thus, EUROfusion has a multifaceted approach to dealing with irradiation damage effects, including: urgent and critically important work to obtain substantial fission spectrum irradiation test data through high use of the limited (relevant-fluence-level) fission materials test reactors; development of predictive modeling to anticipate irradiation damage effects and variations between fission and fusion spectrum effects on material properties; careful planning and utilization of the initial fusion spectrum materials test facilities (such as IFMIF/DONES); and incorporation of Bayesian logic into our statistical treatment of the materials test data, to minimize uncertainty and ensure that the materials property handbooks are always conservative but can be readily improved as new data become available.

Dealing with the complex nature of in-vessel components

While it is beyond the scope of this paper to provide details of the in-vessel component design (readers are directed to [4-7]), there are some key factors that make the provision of accurate materials properties challenging, as highlighted below. Owing to the significant and localized heat output from the plasma radiation on the in-vessel components, there are dramatic thermal and neutron fluence gradients within the components. Thus, the in-vessel materials properties must cover a significant range and combination of temperatures and neutron fluences. Present in-vessel component designs propose a vast array of weld types and multi-material interfaces, including interaction with coolants and breeder materials. All of these different welds, joints and multi-material interfaces affect the materials properties. Future codification of the materials may require specific joints and multi-material interfaces to be tested and validated to enable engineering designs to account for these distinct areas. The novel nature of the fusion in-vessel components necessitates the development of new design rules [9,31]. New design rules may require validation with engineering materials test data; these new design rules may necessitate new or advanced materials properties to be acquired. As an example, true stress-true strain data [41] is a potentially important property to accommodate design beyond yield, yet this will require additional test data, often utilizing specialist facilities to ensure accurate collection. Although the host country is not determined, the EU-DEMO plant will likely be a nuclear licensed facility and in-vessel components will fall within regulations of pressurized equipment and of nuclear code compliance [42]. Code compliance generally has a strict legal definition and may impose stringent requirements on the amount and relevance of the materials data collected.
The costs, size, complexity and lack of relevant testing theaters for full-scale mock-ups of these in-vessel components also impose a design-by-analysis process, rather than a design-by-experiment process [43], for the EU-DEMO in-vessel components. Thus, sufficient materials test data is required to support the engineering design-by-analysis process in advance of component construction and operation. Generally, the complexity, the novelty of the materials and the integrity requirements of the in-vessel components impose a significant materials testing "volume" and often very specialized materials testing campaigns to gather the relevant data. The full spectrum of test data is not presently available or readily achievable. The vast test data needs are being accommodated pragmatically within EUROfusion by focusing testing on the key failure modes anticipated for design and/or design-limiting materials performance factors first. This is designed to enable confidence in engineering design in the preliminary stages. This pragmatic approach is supported with integrated views of the needs of DEMO, realized via strong interactions with component designers, safety specialists, etc. Long-term planning targets DEMO design-limiting areas, such as verification tests for inelastic design rules and irradiation modification effects. Joining, corrosion and interface effects that are not design-limiting are being considered only after a down-selection of concepts for the in-vessel components; this and many other efforts minimize the testing required, enabling focused work that ensures viable materials allowables are provided for the EU-DEMO designers in a timely manner.

Conclusions

Within the EUROfusion programme there has been, and remains, dedicated work on the development of a database and handbook for the in-vessel components: structural, armor, heat sink, optical and dielectric materials. Significant work has been placed into the development of the required infrastructure around the database and handbook. Within the EUROfusion programme there have been developed dedicated data templates, data storage mechanisms, data collection procedures, data screening procedures, and standardization of data for key fusion-specific materials properties (such as high heat flux performance). None of the key materials discussed have sufficient data to cover the anticipated EU-DEMO operational conditions. Owing to the challenges of the EU-DEMO in-vessel components' operational and environmental conditions, there is a vast gap in the available materials test data. To accommodate the timeline and project practicalities of the EU-DEMO programme, a pragmatic approach was developed. Initially, data was gathered, screened and disseminated in a standardized manner from the open literature and available existing databases to provide designers with as much early data as possible. Where key failure mechanisms from the design are identified, targeted testing is progressed to readily obtain these results. Tests on joints and materials interfaces, where possible, are postponed until down-selection of components to minimize testing requirements. To accommodate fusion neutron spectrum irradiation effects and mitigate the effects of potentially sparse data, analogous fission irradiation testing, predictive modeling and Bayesian logic are applied to support the determination of materials allowables to be utilized in design.
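To illustrate the last point, a minimal sketch of a conjugate Bayesian update for a single property from sparse data follows. The prior (e.g. from fission-irradiation tests) and the measurement scatter are assumed normal with known sigma; the numbers and the lower-bound convention are illustrative only, and this is not the actual EUROfusion statistical procedure.

import numpy as np

def posterior_mean_std(prior_mu, prior_sigma, data, meas_sigma):
    """Normal-normal conjugate update: posterior mean and standard
    deviation of the property's true mean after observing `data`."""
    n = len(data)
    precision = 1.0 / prior_sigma**2 + n / meas_sigma**2
    mu = (prior_mu / prior_sigma**2 + np.sum(data) / meas_sigma**2) / precision
    return mu, precision ** -0.5

# e.g. a yield-strength prior from fission data, updated with two
# hypothetical fusion-spectrum test results (values in MPa):
mu, sd = posterior_mean_std(prior_mu=450.0, prior_sigma=40.0,
                            data=np.array([415.0, 428.0]), meas_sigma=25.0)
allowable = mu - 1.645 * sd  # one conservative lower-bound convention

An allowable derived this way is conservative when data are sparse and tightens automatically as new test data arrive, matching the intent described above.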
Materials Handbook chapters or interim chapters have been developed for the key structural (EUROFER97), armor (tungsten), heat sink (CuCrZr) and optical and dielectric materials for the in-vessel components of EU-DEMO.

EUROFER97 is a reproducible, industrially manufacturable material with sufficient data to be included in the RCC-MRx nuclear code. There remain very significant data gaps to cover the operational conditions for EU-DEMO, but plans are developed to gather much of the data needed for the conceptual design.

Tungsten manufacturability has shown significant materials property variation, with no acceptable supply; thus there presently exists no "baseline" tungsten within EUROfusion. Significant work is required to obtain a reproducibly manufacturable tungsten of sufficient quality to form the baseline material for EU-DEMO. Work is ongoing, with the Broader Approach, to develop this material. An interim handbook based on varying tungsten types has been produced to provide consistent data for preliminary engineering design. A large testing campaign will be implemented once a baseline material is available.

CuCrZr is a readily available industrial material. However, its properties are affected by the manufacturing conditions for the Divertor target assembly. Thus, the final material condition to form the baseline testing, and upon which subsequent aged (thermal and irradiation) data should be gathered, is uncertain until a final manufacturing route is determined. An interim handbook chapter has been produced covering ITER-grade chemical composition CuCrZr with a range of thermomechanically processed conditions to provide the EU-DEMO team with consistent data to support preliminary design.

Optical and dielectric materials are critical to the diagnostics and heating and current drive ports. Significant work has been performed to develop templates and an interim handbook chapter that incorporates the required materials properties for these components. Given the uncertainty on the final materials that will be utilized, this interim handbook chapter contains a range of materials of interest to designers, to provide them with self-consistent data upon which to further develop. Given the potential safety criticality of these materials and components, it is important to hold high-quality, high-provenance and statistically significant data on these materials.

The developed materials databases, containing the raw test data of sufficient quality, feed into the materials property handbooks. The handbooks are utilized to derive materials allowables that enable the design-by-analysis process for the realization of the EU-DEMO design. Overall, there has been significant work on the development of the materials database and handbooks. These form the basis of the preliminary design of the EU-DEMO in-vessel components. Staged completion of the handbook for the design phases of EU-DEMO requires vast testing campaigns, supported by modeling, implemented in a timely manner. This work forms a critical part of the realization of the EU-DEMO project and our fusion-powered future. Yet significant work remains, within a very short timeframe, to realize the materials allowables for the operating conditions of EU-DEMO.
2020-05-28T09:15:31.153Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "8072f911551b02e07703f6b87a9db90e88e6cf1f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.fusengdes.2020.111668", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ec8be4d9ea063583a1a49bbac2cae856ee10fe6b", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Computer Science" ] }
247781063
pes2o/s2orc
v3-fos-license
Clinical characteristics and treatment outcomes of primary malignant melanoma of esophagus: a single center experience

Background: Primary malignant melanoma of esophagus (PMME) is an extremely rare disease with poor prognosis. We aimed to determine the clinical characteristics and treatment outcomes of patients with PMME.

Methods: We retrospectively reviewed 17 patients diagnosed with PMME at Samsung Medical Center between 2000 and 2020, with a median follow-up of 34 months. Survival outcomes were analyzed with the Kaplan-Meier method.

Results: Fifteen patients (88.2%) were male and the most common presenting symptom was dysphagia (9/17, 52.9%). On endoscopy, tumors were mass-forming in 15 patients (88.2%) and diffusely infiltrative in two patients (11.8%). Lesions were melanotic in 13 patients (76.5%) and amelanotic in four patients (23.5%). The most common tumor location was the lower esophagus (11/17, 64.7%). The disease was metastatic at the time of diagnosis in four patients (23.5%). As for treatment, 10 patients (58.8%) underwent surgery. In all 17 patients, the median overall survival was 10 months. In surgically treated patients, all patients experienced recurrence and the median disease-free survival was 4 months. There was no statistical difference in overall survival between patients with or without surgery. Patients with diffusely infiltrative tumor morphology had better overall survival compared to those with mass-forming tumor morphology (P = 0.048). Two patients who received immunotherapy as the first-line treatment without surgery showed overall survival of 34 and 18 months, respectively.

Conclusions: As radical resection for patients with PMME does not guarantee favorable treatment outcomes, a novel treatment strategy is required. Further large-scale studies are warranted to determine the efficacy of immunotherapy for patients with PMME.

inhibitor, the five-year overall survival rate was 36% and the median overall survival was 22.7 months in the combination immunotherapy group (Nivolumab plus Ipilimumab), which was better than those of either monotherapy group [8]. For PMME, there have been only a few small-sized studies on the outcomes of immunotherapy [9-11]. In the largest study, by Wang et al. [11] (n = 12), patients who received programmed death (PD)-1 inhibitors for PMME showed a mean progression-free survival of 15.6 months. More studies with consistent results are required to validate the efficacy of immunotherapy for PMME. In the present study, we reviewed the clinical and endoscopic features of 17 patients diagnosed with PMME in our institution and investigated their surgical and non-surgical outcomes.

Research design and study population

We retrospectively reviewed patients who were diagnosed with PMME between January 2000 and December 2020 at Samsung Medical Center. Only patients with histologic confirmation of malignant melanoma in either a biopsy or surgical specimen of the esophagus were included. Patients with concurrent melanoma, or a history of melanoma, in other sites (including skin) were excluded. The study protocol was approved by the Institutional Review Board (IRB) of Samsung Medical Center (approval number: 2021-09-030-001) and conducted in accordance with the guidelines of the Declaration of Helsinki. Because of the retrospective nature of the study, written patient consent was waived by the IRB.

Variables, data sources, and measurements

Clinicopathological data were extracted from the intranet database of Samsung Medical Center. Two board-certified gastroenterologists (T.S.K.
and B.H.M.) thoroughly reviewed the medical records and endoscopic findings. The gross findings were categorized into two patterns: mass-forming and diffusely infiltrative. Anatomical location was defined as upper (20-25 cm from the incisor teeth (IT)), middle (IT 25-30 cm), and lower (IT > 30 cm) esophagus [12]. Because there is no standardized method of tumor staging for PMME, we categorized the patients into three staging categories with regard to lymph node metastases (LNM) and distant metastases status: localized disease (no LNM, N0), node-positive disease (positive LNM, N+), and metastatic disease (M1) (adopted from Weiner et al. [3]). Surgical techniques were the same as those for patients with esophageal squamous cell carcinoma. A detailed description of the surgical techniques used in our institution is reported elsewhere [13]. The survival time was calculated from the date of PMME diagnosis to the date of death or to the last date of follow-up (cutoff date: July 31, 2021). In patients who were lost to follow-up, survival data were retrieved from the National Health Insurance System Database. The disease-free survival time for patients who underwent surgery was calculated from the date of surgery for PMME to the date of first recurrence noticed during routine surveillance by computed tomography or esophagogastroduodenoscopy. Chemotherapy responses were measured according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 [14]. In six patients, polymerase chain reaction (PCR) sequencing for BRAF mutation (exon 15) was performed. In three patients, immunohistochemical (IHC) staining for programmed death-ligand 1 (PD-L1) was performed, which was expressed as the tumor proportion score (TPS): the percentage of viable tumor cells showing partial or complete membrane staining for PD-L1.

Statistical analysis

Baseline clinicopathologic characteristics were summarized as mean ± standard deviation or frequency (percent). The Kaplan-Meier survival curve was plotted for the whole study population and the differences between patient groups were tested using a log-rank test. The median follow-up time was calculated using the reverse Kaplan-Meier method. Statistical significance was set at P < 0.05. All analyses were performed using SPSS version 25.0 (IBM SPSS Statistics for Windows, Version 25.0; Armonk, NY: IBM Corp.).

Treatments and outcomes

Among the 17 patients, 10 (58.8%) received surgery, five (29.4%) received chemotherapy or palliative care, and two (11.8%) were lost to follow-up without treatment. The outcomes of the 10 patients who underwent surgical treatment are summarized in Table 2. The majority (70%) of patients received the Ivor-Lewis operation. In the surgical specimens, the mean tumor size was 5.4 ± 2.8 cm. The invasion depth was limited to the submucosal layer in seven cases (70%), while the muscularis propria was invaded in three cases (30%). The resection margin was negative in all patients and LNM was identified in six patients (60%). Post-operative complications were noticed in two patients (20%). One patient had post-operative chylothorax and was successfully treated with thoracic duct ligation surgery (number 1 in Table 2). The other patient had transient vocal cord hypomobility, which gradually improved over 3 months with rehabilitative training (number 9 in Table 2). Two patients who survived longer than three years (patient numbers 1 and 2 in Table 2) did not have LNM.
With regard to adjuvant therapy, three patients received intravenous interferon-alpha (IFN-α) and two patients received adjuvant Pembrolizumab (the TPS of PD-L1 was 30% and 1% for patient numbers 3 and 6 in Table 2, respectively). Patients who received adjuvant IFN-α or adjuvant Pembrolizumab remained disease-free for 4, 4, and 1 month and for 4 and 3 months, respectively. Apart from one patient lost during follow-up (patient number 4 in Table 2), recurrence was noticed in all patients who received surgery. The anastomosis (33%) and peritoneum (33%) were the most common sites of recurrence. The Kaplan-Meier estimate of median recurrence-free survival of surgically treated patients was only 4 months. The outcomes of the five patients who did not undergo surgery are summarized in Table 3. Two patients who received immunotherapy as the first-line treatment without surgery showed overall survival of 34 and 18 months, respectively. One of them had distant LN and adrenal gland metastases at presentation and received Nivolumab for 24 months (3 mg/kg, biweekly) until disease progression (TPS for PD-L1 was 0%). This patient is still currently alive and undergoing a clinical trial. No immunotherapy-related adverse effects were reported in either patient. Two patients who received conventional chemotherapy and/or radiotherapy as the first-line treatment survived 10 and 5 months, respectively. One patient who received supportive care only, due to old age, died 6 months after diagnosis. The Kaplan-Meier curve for overall survival in all 17 patients is shown in Fig. 2A. The median follow-up time was 34.0 months (95% confidence interval (CI): 14.7-53.3 months). The median survival was 10 months (95% CI: 6.0-14.0 months) and the estimated probability of one-year and three-year survival was 35.3% and 29.4%, respectively. There was no statistical difference in overall survival between those who received surgery and those who did not (Fig. 2B). There was no statistically significant difference in overall survival between patients with localized, node-positive, and metastatic disease (Fig. 2C). Patients with diffusely infiltrative tumor morphology showed significantly better overall survival compared to patients with mass-forming tumor morphology (Fig. 2D, P = 0.048). There was no statistically significant difference in overall survival between patients who received immunotherapy at any point (adjuvant or palliative) during their treatment course (patient numbers 1, 3, and 6 in Table 2 and numbers 1, 2, and 5 in Table 3) and those who did not (Fig. 2E). No mutation was identified in exon 15 among the six patients who underwent PCR sequencing for BRAF.

Discussion

Because PMME is a rare disease entity, its clinical features and treatment outcomes have not been fully defined. In the present single-center retrospective cohort study, we analyzed the clinical characteristics and survival outcomes of 17 PMME patients. We found that the majority of PMME patients are male (15/17, 88.2%), mainly complain of dysphagia (9/17, 52.9%), and present with a large, darkly pigmented mass (13/17, 76.5%) in the lower esophagus (11/17, 64.7%). Although surgery was performed in 58.8% of cases (n = 10), no significant improvement of overall survival was found compared to those who underwent non-surgical treatments (n = 5) (P = 0.523). Having a diffusely infiltrative tumor morphology (n = 2) was significantly associated with better overall survival than mass-forming tumor morphology (n = 15) (P = 0.048).
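For readers wishing to reproduce this style of analysis, a minimal sketch using the Python lifelines package follows. The survival times and event indicators below are synthetic illustrations, not the study's patient-level data:

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic overall-survival times (months) and death indicators (1 = death)
surgery_os  = [10, 4, 40, 12, 8, 15, 6, 36, 9, 11]
surgery_evt = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
nonsurg_os  = [34, 18, 10, 5, 6]
nonsurg_evt = [0, 1, 1, 1, 1]

kmf = KaplanMeierFitter()
kmf.fit(surgery_os, event_observed=surgery_evt, label="surgery")
print("median OS, surgery group:", kmf.median_survival_time_)

# Log-rank comparison of the two groups
res = logrank_test(surgery_os, nonsurg_os,
                   event_observed_A=surgery_evt, event_observed_B=nonsurg_evt)
print("log-rank P =", res.p_value)

# Median follow-up by the reverse Kaplan-Meier method: censoring
# (still alive) is treated as the "event" by flipping the indicator.
kmf.fit(surgery_os + nonsurg_os,
        event_observed=[1 - e for e in surgery_evt + nonsurg_evt])
print("median follow-up:", kmf.median_survival_time_)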
Two patients who received immunotherapy as the first-line treatment without surgery showed overall survival of 34 and 18 months, respectively. PMME is notorious for its aggressive behavior. Sabanathan et al. [1] reported a five-year survival of 4.2% after radical surgical resection in a review of 139 cases reported worldwide. In a recent multicenter study from China with 70 PMME patients undergoing surgery, the median overall survival was 13.5 months and the median disease-free survival was 5.9 months [15]. In a previous study by Ahn et al. [2], which analyzed 19 South Korean PMME patients, the median overall survival was 12 months. In the present study, the estimated median overall survival was 10 months (95% CI: 6.0-14.0 months). Previous studies have shown conflicting results on the effect of surgery on survival outcome. While some studies have advocated surgery as a treatment of choice for either palliation or cure [1,4-7,16], relatively large-scale studies by Weiner et al. [3] (n = 56) and Cheung et al. [17] (n = 39) failed to show a significant association between surgery and prolonged overall survival. In the present study, whether or not the patient underwent surgery was not associated with overall survival (Fig. 2B, P = 0.523). We assume that this is mainly due to the extremely aggressive biology of PMME. Even in clinically localized disease, early systemic dissemination at the microscopic level could occur in PMME patients. In fact, all surgically treated patients experienced recurrence in our study. Consistently, there was no survival difference between patients with clinically localized disease (n = 8) and patients with node-positive (n = 5) or metastatic disease (n = 4) (Fig. 2C, P = 0.164). Furthermore, esophagectomy is known for its high risk of post-operative morbidities [18] and diminished quality of life after surgery [19]. Given the aggressive behavior of PMME and the equivocal efficacy of surgery, as well as the aforementioned post-operative morbidity and quality of life issues, further large-scale studies are required to determine the value of surgery as the first-line treatment modality for patients with PMME. To avoid possible bias and overcome the limitations of this study, it would be desirable if multivariate analysis could be performed in future studies, with adjustments for patients' age, performance status and adjuvant treatment. Ahn et al. [2] previously reported that, regarding gross tumor morphology, patients with flat pigmented pattern tumors showed significantly better overall survival compared to those with mass-forming pattern tumors. Consistent results were found in our study (Fig. 2D, P = 0.048). However, these results should be interpreted with caution because, in both studies, the number of patients with infiltrative morphology was very small. Interestingly, in a patient with diffusely infiltrative tumor morphology who underwent surgery (patient number 1 in Table 2), the pathologic tumor size was only 0.8 cm and the rest of the pigmented infiltration was benign melanosis. Given that PMME usually presents with a large mass, it is possible that the favorable outcomes of diffusely infiltrative type tumors could have been due to small tumor volume. The diagnosis of PMME can be especially challenging when the tumor is amelanotic. Amelanotic PMME can be pathologically suggested when there are no melanin granules inside the tumor cells but IHC staining is positive for human melanoma black 45 (HMB-45) or S-100 and negative for cytokeratin [20].
The prevalence of the amelanotic variant of PMME is estimated to be 10-25% [21]. In the present study, four cases (23.5%) were of the amelanotic subtype. Clinicians should be aware that not all melanomas are darkly pigmented and the pathologic diagnosis may change from poorly differentiated carcinoma to malignant melanoma after IHC investigations. The prognostic value of amelanotic gross appearance is unclear. In this study, there was no significant difference in overall survival between melanotic and amelanotic subtypes. Immunotherapy has been greatly successful in the treatment of cutaneous melanoma [22]. However, previous studies have reported lower response rates of immunotherapy for mucosal melanoma compared to those for cutaneous melanoma. In a pooled analysis of clinical trials by D'Angelo et al. [23], the median progression-free survival among patients who received Nivolumab monotherapy was 3.0 months and 6.2 months for mucosal and cutaneous melanoma, respectively. A combination of Nivolumab and Ipilimumab showed better outcomes, with a median progression-free survival of 5.9 months and 11.7 months for mucosal and cutaneous melanoma, respectively. In the present study, we identified two patients who received adjuvant Pembrolizumab after surgery. Although statistical analysis was not feasible due to the small number of cases, disease-free survival in patients who received adjuvant Pembrolizumab after surgery did not exceed the median disease-free survival of surgically treated patients not undergoing adjuvant immunotherapy (4 months). Notably, one patient with distant LN and adrenal gland metastases received 24 months of Nivolumab as first-line therapy and achieved long-term survival of 34 months (Table 3). As other recent case studies consistently report the effectiveness of immunotherapy for metastatic PMME [9,11], further large-scale studies are warranted to confirm the validity of immunotherapy for PMME. To date, it is unclear whether PD-L1 expression can be a predictive marker for immunotherapy response in mucosal melanoma [23,24]. In the present study, the patient with 30% PD-L1 expression showed comparable disease-free survival to the patient with 1% PD-L1 expression after adjuvant Pembrolizumab. In addition, the patient who remained progression-free for 24 months on Nivolumab monotherapy had 0% PD-L1 expression. Further studies are needed to clarify the potential role of PD-L1 as a predictive marker for immunotherapy response in patients with PMME. While BRAF mutation occurs in up to 50% of cutaneous melanomas [25], its incidence has been reported to be 4-12% in mucosal melanoma [26-28]. This difference may be attributed to the absence of ultraviolet light exposure in the carcinogenesis of mucosal melanoma. In the present study, six patients with PMME were tested for BRAF mutation, which was not found in any of them. Mucosal melanoma is generally considered to be chemotherapy-resistant [24,29]. However, PMME patients may benefit from novel therapeutic options such as the combination of immunotherapy with conventional chemotherapy [30]. In the present study, a V777L HER2 mutation was identified in patient number 1 in Table 2 through a next-generation sequencing study. Following Nivolumab and conventional chemotherapy, the patient received Trastuzumab-Deruxtecan, which is a monoclonal antibody-topoisomerase inhibitor conjugate, and showed at least 8 months of progression-free survival. Further studies are needed to diversify the treatment options for PMME patients.
There are evident limitations to this study. This was a retrospective study performed at a single tertiary referral center. As the number of cases was small, comprehensive comparative analyses were limited and conclusive statements could not be made.
Conclusions
PMME is a lethal disease with distinct clinical characteristics. As the treatment for PMME is not standardized and the efficacy of surgery is still controversial, further large-scale studies are required regarding novel treatment strategies, such as immunotherapy, for patients with PMME.
2022-03-30T13:18:47.153Z
2022-03-29T00:00:00.000
{ "year": 2022, "sha1": "952382259561f8309f947f048217e8973dfabf8c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "a395dc1e131d9a2bce1732e2d6c4bb157474bc11", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
84686303
pes2o/s2orc
v3-fos-license
Effect of acetazolamide on stable carbon isotope fractionation in Chlamydomonas reinhardtii and Chlorella vulgaris
The effect of extracellular carbonic anhydrase (CAex) on stable carbon isotope fractionation in algae is still unclear. The stable carbon isotope composition and algal growth in the presence and absence of the membrane-impermeable CA inhibitor acetazolamide were compared in Chlamydomonas reinhardtii and Chlorella vulgaris. The CAex of both algal species contributed about 9‰ of the stable carbon isotope fractionation and exhibited a dosage effect. Therefore, in vivo evidence that CAex leads to a larger carbon isotope fractionation in algae is presented.
The isotopic composition of sedimented algal material is an indicator of paleoenvironmental conditions because of the well-documented effect of CO2 concentration on marine algal carbon fractionation [1][2][3]. However, use of existing models for prediction of algal carbon isotope fractionation revealed a large deviation in some aquatic ecosystems [4]. This deviation, if not considered in prediction models, would affect the precision of predictions of paleoenvironmental CO2 concentrations based on the carbon isotope composition. The deviation from the model-predicted isotope fractionation results not only from environmental factors, but also from some important physiological factors. Extracellular carbonic anhydrase action might be one of the main physiological processes that lead to the deviation.
Carbonic anhydrase (CA; EC 4.2.1.1), a zinc-containing metalloenzyme, catalyzes the reversible interconversion between bicarbonate (HCO3−) and CO2. The uncatalyzed, slow interconversion between CO2 and HCO3− produced about 10‰ of stable carbon isotope fractionation, whereas the interconversion in vitro catalyzed by CA had only 1.1‰ fractionation [5,6]. The present study examines the effects of extracellular CA (CAex) on carbon isotope fractionation by comparison of the stable carbon isotope composition and algal growth with and without the membrane-impermeable CA inhibitor acetazolamide (AZ) in Chlamydomonas reinhardtii and Chlorella vulgaris. Evidence in vivo showed that extracellular CA leads to higher algal carbon isotope fractionation.
Materials and methods
Chlamydomonas reinhardtii and Chlorella vulgaris samples were obtained from the Institute of Hydrobiology, Chinese Academy of Sciences. Both species were grown axenically in artificial freshwater soil extract (SE) medium. Cultures were incubated at 25.0 ± 1.0 °C under 150 μmol m−2 s−1 light intensity and a 16/8 h day/night cycle. Experiments were conducted with the following treatments.
Treatment 1: C. reinhardtii and C. vulgaris were grown in SE media that contained different NaHCO3 concentrations (0, 0.5, 2, 8, 16, or 20 mmol L−1) with or without AZ (10 mmol L−1). The cultures were treated for 12 d, with the first 8 d for expanding the culture and the last 4 d for strengthening the culture. The added NaHCO3, which has a δ13C value of −17.4‰ in the solid state and −16.6‰ in solution, was a tracer for the dissolved inorganic carbon (DIC) sources used by the algae.
Treatment 2: C. reinhardtii and C. vulgaris were grown in SE media with different AZ concentrations (0, 0.01, 0.10, 0.50, or 1.00 mmol L−1) and 0.5 mmol L−1 added NaHCO3. The cultures were treated for 14 d, with the first 10 d for expanding the culture and the last 4 d for strengthening the culture. The added NaHCO3 had the same δ13C value as that in Treatment 1.
All experimental treatments consisted of five replicates. Algal proteins were assayed using Coomassie brilliant blue. Aquamerck was used in the titration of bicarbonate concentrations in the media. The algal cultures were dried prior to analysis and were converted to CO2 at 800 °C in a quartz tube over copper oxide in an oxygen atmosphere. Water and oxygen were removed from the gas stream in a liquid N2 trap, and the CO2 was double distilled and collected into a sample tube. The CO2 sample was analyzed with an isotope ratio mass spectrometer (Finnigan MAT 252, Bremen, Germany). All isotopic compositions (δ13C) are expressed in per mille (‰) relative to the Pee Dee Belemnite standard according to Formula (1):
δ13C (‰) = (Rsample / Rstandard − 1) × 1000, (1)
where Rsample and Rstandard are the ratios of heavy to light isotopes (13C/12C) of the sample and the standard, respectively. The analytical precision was ±0.1‰.
Stable carbon isotope composition with and without acetazolamide
The CO2 and generated HCO3− concentrations varied with the amount of added HCO3−. The total DIC content increased with increasing added HCO3− concentration (Table 1). The δ13C_DIC and the proportion of HCO3− obtained from conversion of CO2 to total DIC were high at low added HCO3− concentrations (0-2.00 mmol L−1), and low at high added HCO3− concentrations (8.00-20.00 mmol L−1), regardless of the presence or absence of AZ (Table 1).
Table 1. Concentration of dissolved inorganic carbon and δ13C_DIC in original culture media of C. reinhardtii and C. vulgaris.
The pH in the presence of AZ was lower than that without AZ. Therefore, the CO2 concentrations in the culture media that contained AZ were higher than in the media lacking AZ. The stable carbon isotope composition and growth of C. reinhardtii and C. vulgaris varied with the total DIC with and without AZ (Figure 1). Protein content increased at low added HCO3− concentrations and decreased at high added HCO3− concentrations without AZ. However, in the presence of AZ, the protein content increased independent of HCO3− concentration. The stable carbon isotope composition without AZ was significantly different from that with AZ. The mean δ13C values of C. reinhardtii and C. vulgaris without AZ were about 9.1‰ and 11.4‰ more positive, respectively, than those with AZ at low added HCO3− concentrations (0-2.00 mmol L−1). The δ13C values of the two algal species were similar (about −25.5‰) at 8.00 mmol L−1 added HCO3−, regardless of the presence or absence of AZ. The δ13C values of C. reinhardtii and C. vulgaris without AZ were more negative than those with AZ at 16.00 and 20.00 mmol L−1 added HCO3−, respectively.
The stable carbon isotope composition in algae reflects the utilization of DIC [7]. Without AZ, the algal cells mainly utilized the HCO3− generated from the rapid interconversion of CO2 catalyzed by CAex at low added HCO3− concentrations (0-2.00 mmol L−1). The carbon isotope fractionation was very low (about 1.1‰) [6]. However, in the presence of AZ, the algal cells mainly used the HCO3− generated from the slow (uncatalyzed) interconversion of CO2 at low added HCO3− concentrations. The slow interconversion between CO2 and HCO3− produced about 10‰ of stable carbon isotope fractionation [5]. Therefore, δ13C values in the presence of AZ at low added HCO3− concentrations were about 9‰ lower than those in the absence of AZ, regardless of algal growth rate or cell size [8].
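For concreteness, the delta notation of Formula (1) is straightforward to compute. The sketch below uses a made-up isotope ratio; the default standard ratio is an assumed value for the PDB/VPDB 13C/12C reference (~0.0112372), not a number reported in this study:

def delta13C(r_sample, r_standard=0.0112372):
    """Return delta-13C in per mille relative to the standard (Formula (1))."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Example: a sample slightly depleted in 13C relative to the standard.
print(round(delta13C(0.0109), 1))  # about -30.0 per mille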
Algal growth rate and cell size may affect isotope fractionation [8]. The δ13C value is inversely correlated with algal growth rate or cell size [8]. The difference in the δ13C values of C. reinhardtii with and without AZ was approximately 9‰. This value could be regarded as the absolute contribution of CAex. However, differences in the δ13C values greater than 9‰ (mean 11.4‰) were observed in C. vulgaris. A small additional difference (mean 2.4‰) in the δ13C values was recorded in C. vulgaris above that produced by CAex. The additional difference might reflect the lower algal growth rate and smaller cell size when C. vulgaris was cultured in media that contained AZ. A linear or stoichiometric relationship existed between the δ13C values in the algal cells and the total DIC in culture media lacking AZ at low added HCO3− concentrations. In natural water bodies, the concentration of HCO3− is much lower than 2.0 mmol L−1 [9]. Thus, we also deduced that the difference in carbon isotopic fractionation is large between algae with high CAex activity and algae without CAex activity.
The δ13C value of added HCO3− was more negative than that of the control (0 mmol L−1 added HCO3−) (Table 1). At high concentrations of added HCO3− (8.00-20.00 mmol L−1), the algal cells mainly used the added HCO3− in the presence or absence of AZ. Therefore, the more negative the algal δ13C values, the more the algal cells used the added HCO3−, regardless of the growth restrictions. In medium lacking AZ, growth of the two algal species was inhibited by the DIC sources because of the high pH of the culture media. The HCO3− concentration was too high for algal growth under these conditions. The algal cells prioritized uptake of the light 12C isotope, which resulted in more negative δ13C values and higher carbon isotope fractionation. Furthermore, carbon isotope fractionation increased with increasing pH and HCO3− concentration in the medium. Thus, a linear relationship was observed between the δ13C values in algal cells and the total DIC in culture medium lacking AZ at high added HCO3− concentrations (8.00-20.00 mmol L−1). Growth of the two algal species was not inhibited by the DIC sources at high added HCO3− concentrations (8.00-20.00 mmol L−1) in the presence of AZ because of the moderate pH of the culture medium. The algal cells produced little carbon isotope fractionation during the growth period because of the unrestricted DIC. Therefore, the δ13C values of the algal cells with AZ were higher than those in medium lacking AZ. The algal growth rate and cell size of C. reinhardtii were higher than those of C. vulgaris at 16.00 and 20.00 mmol L−1 added HCO3−. Therefore, the δ13C values of C. reinhardtii were lower than those of C. vulgaris at 16.00 and 20.00 mmol L−1 added HCO3−.
Dosage effect of acetazolamide on the stable carbon isotope signature
Low AZ concentrations (0-0.1 mmol L−1) promoted growth of C. reinhardtii, whereas high AZ concentrations (0.5-1 mmol L−1) slightly inhibited growth. In the concentration range tested, AZ exhibited no significant effect on the growth of C. vulgaris. The effect of AZ on the stable carbon isotope fractionation of C. reinhardtii was similar to that of C. vulgaris (Figure 2).
The two algal species showed more positive δ13C values because of the slight inhibition of CAex activity by AZ at low concentrations, and more negative δ13C values because of the strong inhibition of CAex activity at high AZ concentrations. These results indicate that AZ has a dose-dependent effect on algal stable carbon isotope fractionation.
Conclusions
Extracellular carbonic anhydrase can significantly influence stable carbon isotope fractionation of C. reinhardtii and C. vulgaris under normal growth conditions. The CAex of C. reinhardtii and C. vulgaris contributes approximately 9‰ of the stable carbon isotope fractionation, which was derived from the difference between the smaller fractionation from the catalyzed conversion of CO2 to HCO3− and the larger fractionation from the uncatalyzed, slow interconversion. Moreover, CAex had a dose-dependent effect on algal stable carbon isotope fractionation, which could cause a large deviation in predicted paleoenvironmental CO2 concentrations based on the algal carbon isotope composition.
2019-03-21T13:07:25.562Z
2012-03-01T00:00:00.000
{ "year": 2012, "sha1": "875b9068f3196692d2c2cab78449c7fa3e7eeb6d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11434-011-4861-9.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "e3bf5aef133940a81a11c01242b18b34a2991a30", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
229216330
pes2o/s2orc
v3-fos-license
Socio Demographic Characteristics Of Pregnant Women Who Are Experiencing Nausea Vomiting In Rural Areas Of Banyumas Regency
Nausea and vomiting in pregnancy, commonly referred to as morning sickness, is a common complaint in the first trimester, although it can also occur in the second trimester or throughout pregnancy. The purpose of this study was to determine the characteristics of pregnant women who experience morning sickness in rural areas. Respondents in this study were pregnant women who experienced morning sickness during July-September 2018 in rural areas of Banyumas District, Central Java Province, Indonesia. This study used a quantitative descriptive design. In the univariate analysis of the 61 pregnant women involved, 77% were of low-risk maternal age, 39.3% had junior high school education, 82% were not working, 60% were pregnant in the first trimester, 61.7% were multigravida, 55.7% did not have a history of nausea and vomiting, and 73.8% were in the category of mild nausea and vomiting. It can be concluded that pregnant women who experience nausea and vomiting in rural areas are mostly of low-risk age, not working, multigravida, in early pregnancy, and experiencing mild nausea and vomiting.
Introduction
Pregnancy begins with conception, the meeting of egg and sperm. During pregnancy, physical and psychological changes occur in pregnant women as a form of adaptation to pregnancy. The changes that occur during pregnancy cause various discomforts, one of which is nausea and/or vomiting, known as morning sickness. Although referred to as morning sickness, symptoms of nausea and vomiting can occur throughout the day and are often early symptoms of pregnancy (1). The timing and duration of nausea and vomiting differ among pregnant women: some experience non-stop nausea with or without vomiting, and some experience it day or night, often lasting throughout the day. Morning sickness can begin to be felt at the beginning of the second week of pregnancy or at 8 to 12 weeks' gestation and disappear when gestational age reaches the twentieth week, but 20-30% of pregnant women continue to experience symptoms beyond 20 weeks of gestation, up until delivery (2,3).
The cause of morning sickness is still unclear, but its high frequency shows that nausea and vomiting are normal events in early pregnancy (4). Possible causes include increased levels of human chorionic gonadotropin (hCG) and estrogen, with thyroxine, prostaglandin E2, and prolactin as additional supporting factors (2). Emotional factors also contribute: mothers in the first trimester may experience mood swings, ambivalence, or rejection of the pregnancy (5,6). In several studies, the incidence of nausea and vomiting in pregnant women was associated with increased pregnancy hormones, older maternal age, type of work, level of education, smoking behavior, infant sex, and stress levels (2). In addition, reproductive history, such as increased gravidity, parity, and a history of abortion, is also said to increase the risk of nausea and vomiting. Chan et al. (2) also reported that long-term symptoms of nausea and vomiting, lasting longer than 4 months, are more common in younger women, women with multiple pregnancies, and multigravidas. Several studies have considered the demographic, maternal, and psychosocial factors that can cause nausea and vomiting in general.
Therefore, the aim of our study was to determine the prevalence and characteristic features of pregnant women who experience nausea and vomiting specifically in rural populations.
Research methods
This study received ethics approval from the Ethics Commission of the Faculty of Medicine, Unsoed. This study used a descriptive method with a quantitative approach. Data for this study were collected from July to October 2018 in rural populations in 3 sub-districts covering 6 working areas of health centers in Banyumas Regency. During this period, 61 pregnant women with nausea and vomiting who had their pregnancies examined at Baturraden I and II Health Centers, Sumbang I and II Health Centers, and Kembaran I and II Health Centers stated they were willing to become respondents. Demographic variables (such as maternal age, gestational age, gravidity, parity, history of nausea and vomiting, history of abortion, education, employment, and economic status), which have been reported to be associated with the incidence of nausea and vomiting in pregnant women (2,7,8), were collected using a questionnaire filled out by the respondents themselves. The severity of nausea and vomiting was measured using the Pregnancy-Unique Quantification of Emesis and Nausea (PUQE-24) scoring system developed by Ebrahimi, Maltepe, Bournissen, & Koren (9), which has been translated into Indonesian and used in previous research (10). Descriptive statistics were used to analyze each socio-demographic variable and the severity of nausea and vomiting.
Results
The results of the descriptive statistical analysis of the demographic variables and the severity of nausea and vomiting can be seen in Table 2 below. From Table 1, it can be seen that the majority of pregnant women who experience nausea and vomiting in rural areas are those aged between 20-35 years, the low-risk category: 47 respondents (77%), with 39.3% of respondents (24) having junior high school as their last education. The majority of mothers who experienced nausea and vomiting were also not working, 50 respondents (82%), and more than 50% of respondents had low income (34 respondents). Mothers who experienced nausea and vomiting were on average in the first trimester (60%) or second trimester (35%) of gestation, and the majority were multigravida (61.7%). More than half of the respondents (55.7%) had no history of previous nausea and vomiting, and 73.8% (45 respondents) experienced mild nausea and vomiting.
Discussion
Age is one of the risk factors for NVP: NVP occurs more often at a younger age (11), while increasing age is associated with a decrease in nausea and vomiting in pregnancy (12). The age of respondents in this study was mostly in the low-risk range. The results of this study do not differ from previous studies, in which the average age of pregnant women who experience NVP was in the range of 20-35 years. Kugahara and Ohashi (13) reported that the average age of pregnant women experiencing nausea and vomiting was 25.2 years, while the average age of pregnant women suffering from NVP reported by Matok et al. (14) was 29.7 years. In the Suwarni study (15), 80% of mothers were aged 21-35 years.
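As an aside on the severity measure used in the methods: the PUQE-24 instrument scores three items over the last 24 hours (duration of nausea, episodes of vomiting, episodes of retching), each coded 1-5, and the 3-15 total is commonly banded as mild (≤6), moderate (7-12), and severe (≥13). The following is a minimal sketch under that assumed banding, not the exact questionnaire wording used in this study:

def puqe24_total(nausea_item, vomit_item, retch_item):
    """Total PUQE-24 score from three item scores, each coded 1-5."""
    for item in (nausea_item, vomit_item, retch_item):
        if not 1 <= item <= 5:
            raise ValueError("each PUQE item must be scored 1-5")
    return nausea_item + vomit_item + retch_item

def puqe_category(total):
    """Map a 3-15 PUQE total to the commonly used severity bands."""
    if total <= 6:
        return "mild"
    if total <= 12:
        return "moderate"
    return "severe"

score = puqe24_total(2, 2, 1)
print(score, puqe_category(score))  # 5 mild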
In this study, most pregnant women who experienced nausea and vomiting were at 9-12 weeks' gestation. Lacroix's (16) study found that the majority of pregnant women, about 90%, experience nausea and vomiting by 8 weeks' gestation, with a peak at 11-13 weeks. At 14 weeks' gestation, 50% of pregnant women have started to vomit less, but 90% continue to experience nausea and vomiting until 22 weeks' gestation. Some studies have found a link between the production of hCG (human chorionic gonadotropin) and the incidence of nausea and vomiting. The Kugahara & Ohashi study (13) divided the gestational age of mothers experiencing morning sickness into 4-7 weeks, 8-11 weeks, 12-15 weeks, and 16-19 weeks, and reported that by 16-19 weeks vomiting had decreased. In this study, there were also third-trimester pregnant women who still experienced nausea and vomiting, amounting to 5%. The reported incidence of nausea and vomiting in the third trimester is significantly lower than in the first or second trimester of pregnancy, and with increasing gestational age the severity of nausea and vomiting diminishes. This is in accordance with Gadsby et al. (12), who found that symptoms of nausea and vomiting decrease after 20 weeks' gestation. In contrast, Chou et al. (12), who conducted a prospective longitudinal study of 91 pregnant women using the Index of Nausea, Vomiting, and Retching (INVR), reported that NVP in the second and third trimesters was significantly lower than in the first trimester, such that the average INVR value in the second trimester was not much different from that in the third trimester.
The highest education level of most respondents was junior high school. According to data from the Central Bureau of Statistics of Central Java Province (12), the average length of education of the population of Central Java is 7.27 years and of Banyumas Regency 7.4 years. The results of this study indicate that respondents had more education than the average population of Central Java, and especially of Banyumas Regency, but less than the average length of education of the Indonesian population, 8.32 years (17). The results of this study are in line with research (10) in which 40% of respondents were educated to junior high school level. This differs from the research of Suwarni (15) and Heitmann, Nordeng, Havnen, Solheimsnes, & Holst (18), in which the highest education level of respondents was in the higher education category. This difference may arise because the study areas of Suwarni (15) and Heitmann et al. (18) were located in urban areas, while this research was located in rural areas.
NVP is associated with low income levels and with mothers not working (16). In line with this, the present study found that the majority of respondents did not work and more than half had incomes below the regional minimum wage (UMR). The UMR of Banyumas Regency was Rp 1,589,000, and the majority of respondents' family incomes were below that figure. This suggests that more than 50% of the respondents were below the poverty line, in line with the 17.52% of the population of Banyumas Regency and 14.88% of the population of Central Java who are poor, with an income level of Rp 322,489 per month (17,19). However, the results of this study differ from the findings of Heitmann et al. (18), who reported that 80% of mothers who experienced NVP were working mothers and only 7.4% did not work. Piwko, Koren, Babashov, Vicente, & Einarson (20) reported that NVP causes an increased economic burden. The results of this study found that the majority of mothers with NVP did not work and had family incomes below the UMR.
This suggests that mothers who experience nausea and vomiting in rural areas face an even heavier economic burden. Pregnant women who have a history of nausea and vomiting in previous pregnancies are at greater risk of developing nausea and vomiting in subsequent pregnancies than those who do not have such a history. This statement is in line with the research of Trogstad, Stoltenberg, Magnus, Skjaerven, & Irgens (21), which states that 15.2% of mothers with a history of nausea and vomiting in previous pregnancies are at risk of experiencing nausea and vomiting in subsequent pregnancies. However, this statement is not in line with what was found in this study, where among the 61 mothers who experienced nausea and vomiting, more than 50% had never experienced nausea and vomiting before. Meanwhile, Fejzo, Macgibbon, Romero, Goodwin, & Mullin (22) studied 57 multigravida mothers, 81% of whom reported that they experienced severe nausea and vomiting in their second pregnancy. Likewise, in this study, of the 61 mothers who experienced nausea and vomiting, 61.7% were multigravida. However, a recent study conducted by Nurmi, Rautava, Gissler, Vahlberg, and Polo-Kantola (23) concluded that, in the majority of pregnant women with a history of nausea and vomiting, there is no recurrence in subsequent pregnancies. Even so, the risk of recurrence cannot be predicted for each individual pregnant woman.
The severity of NVP in pregnant women varies. In this study, the majority of respondents experienced mild NVP and none experienced severe NVP. This contrasts with the research of Heitmann et al. (18), who reported that, of a total of 712 respondents, the majority (61.66%) experienced moderate NVP, 29.5% experienced severe NVP, and only 0.09% experienced mild NVP. In line with Heitmann's study, the average PUQE score in the Matok et al. (14) study was 9, which means that the average respondent experienced moderate nausea and vomiting. Both studies used the PUQE-24 score to categorize NVP.
Conclusion
The conclusion that can be drawn from this study is that mothers who experience nausea and vomiting in rural areas mostly have low education and low economic status. This makes their burden of life heavier, so appropriate treatment is needed to overcome nausea and vomiting in pregnant women in rural areas.
2020-11-26T09:04:23.237Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "94465741fde5ff34f69938529a9a08b9aac2e35a", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2020/14/shsconf_icore2020_01003.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d30f0e9ee81d30c3137d6f297ea86df9b2437892", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
248255297
pes2o/s2orc
v3-fos-license
Anaerobic Digestate from Biogas Plants—Nuisance Waste or Valuable Product?
Biogas production in waste-to-energy plants will support the decarbonization of the energy sector and enhance the EU's energy transformation efforts. Digestates (DG) formed during the anaerobic digestion of organic wastes contain large amounts of nutrients. Their use for plant fertilization allows for diversifying and increasing the economic efficiency of farming activities. However, to avoid regional production surpluses, processing technologies allowing the acquisition of products that can be transported over long distances are required. This study therefore aimed at determining the effect of applied methods of DG treatment on the chemical composition of the resulting products and their effect on the yields and chemical composition of plants. The following digestate-based products (DGBPs) were tested: two different digestates (DGs), their liquid (LF) and solid fractions (SF), pellets from DGs (PDG), and pellets from SFs (PSF). Results from the experiment show that during SF/LF separation of DGs, >80% of nitrogen and 87% of potassium flows to LFs, whereas >60% of phosphorus and 70% of magnesium flows to SFs. The highest yields were obtained using untreated DGs and LFs. The application of DGs and LFs was not associated with a leaching of nutrients to the environment (apparent nutrient recovery from these products exceeded 100%). Pelletized DG and SF forms can be used as slow-release fertilizer, although their production leads to significant nitrogen losses (>95%) by ammonia volatilization.
Introduction
The European Union (EU) has promoted a waste-to-energy (WTE) initiative to minimize waste and greenhouse gas (GHG) emissions, and increase renewable energy production [1]. Examples of WTE technology are thermal treatment (incineration), pyrolysis, gasification, mechanical biological treatment, the biological drying process as a source of refuse-derived fuel or solid-recovered fuel, and anaerobic digestion (AD) [1]. Among the technologies mentioned, AD is very promising and has many advantages. AD is a biological process for converting organic waste into biogas [2], which has recently become a promising source of renewable energy. The key advantage of AD is that it can handle a wide range of organic waste forms, especially wastes with high moisture content (60-99%) [1]. These types of waste are particularly difficult to manage and recover energy from with the use of other technologies. In biogas plants, manure, slurry, agricultural residues, energy crops, by-products from the agri-food industry or wastewater treatment plants, and other organic wastes are subject to anaerobic digestion. Most biogas production in the EU (76%) comes from plants grouped under the term "methanation of non-hazardous waste or raw plant matter ('other biogas')" [3]. Biogas is currently most often utilized in Combined Heat and Power (CHP) units consisting of gas engines co-generating heat and electricity.
Various technologies for increasing digestate value currently exist. However, most emerging technologies require significant financial investments and are thus not always economically viable. Due to these and other technical challenges, the predominant treatment of DG is still its use as fertilizer. The use of DG for crop nutrition benefits from the rapid increase in mineral fertilizer prices, which raises the cost of cultivation and, consequently, food prices. Farmers are therefore dependent on alternative sources of nutrients for their crops.
This increases the demand for DG and the products resulting from DG treatment. Accordingly, there is still a need for research on the fertilizer efficiency of the different products obtained from DG. Most publications on biogas production focus on evaluating the effectiveness of anaerobic digestion, ignoring the issue of digestate management. However, the search for methods for the rational use of digestate is as important as the search for methods to increase the efficiency of biogas production. Due to the considerable amount of available digestate, its treatment must be optimized to avoid negative effects on the environment, such as excessive ammonia emission, leaching of N and P to waterbodies, or an increase in the content of heavy metals in plants. This study therefore aimed to determine the effect of different methods of DG treatment on the chemical composition of the resulting products, and their resulting effect on the yields and chemical composition of plants. Additionally, the article presents a methodology for calculating the flow of nutrients to the liquid and solid fractions during the separation of raw digestate, which should be considered a significant added value of this work. This is especially important from an engineering point of view. Estimating the chemical composition of the LF of the DG is important at the stage of selecting the technology and designing the biogas plant. The LF can be returned to the fermenter in order to dilute the solid substrates. The inflow of an excessive amount of potassium ions contained in the LF of the DG may lead to salinity and inhibition of the biological process of methane fermentation. The calculation formulas proposed in this work allow assessment of the chemical composition of the LF without laboratory analysis. This enables quick balancing and selection of the volume of the LF that can be returned to the fermenter, and the volume that should be utilized by other methods.
Collection of Digestate-Based Products
The study was conducted using digestate obtained from anaerobic digestion conducted in two fermenters with a volume of 140 L. In the first fermenter (F1), stillage (rye) and maize silage were used as substrates, and in the second (F2), pig slurry and maize silage were used. A detailed description of the fermentation and substrates is presented in [23]. The obtained digestate samples were separated into the solid and liquid fractions. The solid fraction was dried on an oil drying floor up to 85% dry weight and pelletized on a 30 kW matrix pelletizer (Testmer, Reguły, Poland) (Figure 1). The obtained pellets were approximately 10-20 mm long and 0.6 mm in diameter (Figure 2). Because the partition of nutrients during digestate separation affects the fertilizer value of the obtained products, the study also involved drying and pelletizing whole digestate.
Pot Experiment
The pot experiment was conducted in an experimental greenhouse of the Warsaw University of Life Sciences. Soil samples (0-25 cm soil layer) were collected in Skierniewice (51°96'48" N, 20°15'92" E). The research covered soil fertilized with nitrogen, phosphorus, and potassium fertilizers without liming since 1923. The soil can be described as a Luvisol (FAO 2006). The pots were filled with 15 kg of soil and mixed with different DGBPs. DGBPs were applied in a dose corresponding to 170 kg N ha−1 (0.68 g N pot−1). Doses of particular nutrients and heavy metals introduced to the soil with DGBPs are presented in Table 1. The fertilizer effect of the analyzed DGBPs was compared to that on control objects, which were not fertilized. The experiment was arranged as a completely randomized design with four replications. The location of the pots was randomized daily. The test plant was maize of the Bosman cultivar cultivated for green forage. The pots were irrigated with distilled water up to a constant moisture at 60% water-filled pore space. Water was applied to the entire surface of the pots. The experiment was conducted in controlled growth conditions that included a day/night cycle of 16/8 h, with a day/night temperature of 25/19 °C and artificial lighting to complement daylight. After the maize biomass harvest, samples were weighed before and after drying (in an oven set at 60 °C) to determine their fresh and dry matter.
Estimation of the Distribution of DM, FM, and Nutrients Flowing from Digestates into Solid Fraction (SF) and Liquid Fraction (LF)
The distribution of dry matter (DM), fresh matter (FM), and nutrients flowing from digestates into the solid fraction (SF) and liquid fraction (LF) was calculated by mass balance according to the following formulas:
FMdstSF = (DGDM − LFDM) / (SFDM − LFDM) × 100%
FMdstLF = 100% − FMdstSF
DMdstSF = FMdstSF × SFDM / DGDM
DMdstLF = 100% − DMdstSF
where: DMdstLF — distribution of DM (dry matter) into LF; DMdstSF — distribution of DM into SF; FMdst(LF or SF) — distribution of FM into LF or SF; Ndst(LF or SF) — nutrient flow into LF or SF, obtained by weighting the nutrient content (Ncont) of each fraction by its FM share; SFDM — DM content in SF; DGDM — DM content in DG; LFDM — DM content in LF.
Estimation of Nutrient Use Efficiency
Apparent fertilizer nutrient recovery (ANR) was calculated according to the formula by Cavalli et al. [24]:
ANR (%) = (nutrient uptake in FO − nutrient uptake in CTR) / nutrient dose applied × 100
where: FO — fertilized object; CTR — control object.
Analytical Procedures
All analytical tests were carried out in a laboratory belonging to the Division of Agricultural and Environmental Chemistry, Agricultural Institute, Warsaw University of Life Sciences-SGGW. Sampled DGBPs were dried at 60 °C using a PREMED drier (Marki, Poland) to estimate Total Solids content (TS). The dried and ground plant material and DGBPs were mineralized in HNO3.
Statistical Analysis
One-way analysis of variance (ANOVA) was carried out to determine statistically significant differences between treatments (at p < 0.05). The mean values were compared in a Tukey's (HSD) multiple-comparison test. Relationships between the nutrient dose and maize yield, and the macronutrient and heavy metal content in maize, were evaluated using multiple regression with backward selection of variables. Statistical analyses were carried out using Statistica PL 13.3 software (Tulsa, OK, USA).
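A minimal numeric sketch of the two calculations above, assuming the mass-balance reading of the separation formulas; the DM and nutrient values are illustrative placeholders, not measurements from this study:

def fm_share_to_sf(dm_dg, dm_lf, dm_sf):
    """Fresh-matter fraction routed to the solid fraction, from a DM mass balance."""
    return (dm_dg - dm_lf) / (dm_sf - dm_lf)

def nutrient_share_to_sf(fm_sf, n_sf, fm_lf, n_lf):
    """Share of a nutrient flowing to SF: FM shares weighted by nutrient content."""
    to_sf = fm_sf * n_sf
    return to_sf / (to_sf + fm_lf * n_lf)

def anr_percent(uptake_fo, uptake_ctr, dose):
    """Apparent nutrient recovery, %: (fertilized minus control) over dose applied."""
    return (uptake_fo - uptake_ctr) / dose * 100.0

# Illustrative DM contents (%) of digestate, liquid and solid fractions.
fm_sf = fm_share_to_sf(dm_dg=5.4, dm_lf=3.1, dm_sf=16.0)
fm_lf = 1.0 - fm_sf
print(round(fm_sf, 3))  # ~0.178 of fresh matter goes to SF

# Illustrative N contents per kg FM (g); most soluble N stays in the LF.
print(round(nutrient_share_to_sf(fm_sf, 4.0, fm_lf, 4.6), 2))  # N share to SF

# Illustrative N uptakes (g pot-1) against the 0.68 g N pot-1 dose.
print(round(anr_percent(uptake_fo=0.75, uptake_ctr=0.25, dose=0.68), 1))  # ~73.5%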
Chemical Characteristics of Digestate and Products Obtained from Its Treatment
All analyzed products obtained from DGs were characterized by an alkaline reaction (pH above 7.0). Digestates not subject to processing (DG1, DG2) and the liquid fractions (LF1, LF2) were characterized by low content of total solids (TS) (approximately 5.4% and 3.1%, respectively) and relatively low concentrations of nutrients per 1 kg FM (fresh matter) (Tables 2 and 3). Due to their higher TS content, the remaining products contained higher amounts of nutrients (mean TS content in SF1 and SF2 was approximately 16.0%, and in pellets more than 95%). DG1 and DG2 were characterized by low content of C (on average approximately 20 g C kg−1 FM). The obtained data showed that during LF/SF separation, the SFs mainly become enriched in carbon (C) (Table 4). The highest C content was determined in pellets obtained from DGs and SFs (on average approximately 332 g C kg−1 FM).
NT content in DG1 and DG2 was approximately 4.85 and 3.35 g NT kg−1 FM, respectively. This indicates that the type of organic materials used for anaerobic digestion affects the chemical composition of the resulting digestate [25]. More than 80% of nitrogen flows from DG to LF, while only 20% flows to the solid fraction. This shows that, in DG, the dominant form of nitrogen was soluble ionic forms (NH4+-N). The reported results are in agreement with other studies [26,27]. In the analyzed digestates (DG1, DG2), the share of NH4+-N in NT reached an average of approximately 70%. As confirmed by other authors [28], this suggests that digestate has a very high fertilizer potential with a high amount of plant-available N. Mechanical separation caused an increase in the NH4+-N/NT ratio in the liquid fraction (Table 2), with an average increase of approximately 97%, and a decrease in the solid fraction that averaged approximately 56%. The lowest NH4+-N/NT ratio was measured in the pellets, averaging approximately 6.7% (Table 2). The comparison of the NH4+-N content in PDG and PSF with that in DG and SF indicates a decrease of about 96% in the PDG and PSF. This suggests intensive losses of ammonia with water vapor in the course of the drying that precedes the pelletizing process. The alkaline reaction of the digestate-based products (DG and SF) could have favored the release of NH3. The volatilization of NH3 during the processing of digestate into pellets was addressed in the study by Valentinuzzi et al. [29]. Ammonia negatively affects human health and leads to air quality degradation. NH3 emission and its further transformations constitute a significant indirect N2O emission pathway in agricultural systems [30]. Drying of DG and SF may thus contribute to global warming. According to Pan et al. [31], NH3 emission results in 0.1-0.16 million tons of indirect N2O-N emission per year. A potential solution may be the application of acid scrubbers that bind the released ammonia in the course of processing digestate-based products. Another practice to minimize NH3 emissions is decreasing the pH of SF by adding acid. Such solutions have been successfully used to reduce NH3 emissions from slurry [32].
The rate of nutrient release from organic matter depends on its susceptibility to mineralization, determined by the ratio of carbon to nitrogen compounds (C:N). Soil N immobilization after anaerobic digestate application has been previously reported for products with a C:N ratio exceeding 25-30 [27]. All the analyzed digestate-based products were characterized by a narrow C:N ratio (ranging from 2.5 for LF1 to 24.2 for SF2) (Table 2).
Thus, their mineralization in the soil and release of plant nutrients was fast. However, Valentinuzzi et al. [29] reported that the C:N ratio is not an accurate indicator for predicting N mineralization in soil treated with anaerobic digestates. According to the authors, such products contain organic matter with lower biodegradability in the soil. In the case of a very narrow C:N ratio, however, higher nitrogen losses are probable, as previously observed by Möller and Müller [33]. According to Sosulski et al. [34], the magnitude of nitrogen losses through leaching corresponds with the fertilization system and was highest in the mineral-organic system. The application of mineral forms of nitrogen decreased the C:N ratio, increasing the leaching of nitrogen.
Phosphorus is a depleting resource. Therefore, great attention is paid to the search for alternative sources [35]. Compared with the P contents of wastewater or urine [36], the P content in digestate is relatively high. The N:P ratio is an important indicator for the assessment of the fertilizing properties of digestate-based products. Low N:P ratios (i.e., ≤2) in digestate may indicate P deficiencies that should be supplemented with mineral fertilizers containing P [29]. A high excess of P in relation to N may lead to an increased risk of run-off or leaching of P from soil to surface water bodies. Among the analyzed forms of digestate, in DGs and LFs, the N:P ratio was considerably higher than 2. Due to their lower P content, fertilization with the remaining forms, i.e., SFs, PSFs, and PDGs, did not increase the risk of phosphorus losses from the soil. In the analyzed digestates DG1 and DG2, P content reached 0.44 and 0.81 g kg−1 FM, respectively (Table 3). During separation of SF/LF, more phosphorus was supplied to the SFs (approximately 57% from DG1 and 69% from DG2, Table 4). Literature data [12] indicate that, during SF/LF separation, only 30% of the total amount of phosphorus flows to the SF. Our results further show a high content of phosphorus in pellets obtained from DGs (9.35 and 12.52 g kg−1 FM for PDG1 and PDG2, respectively). Due to the partitioning of P between SF and LF, pellets obtained from SFs contained less P than pellets from DGs (PDG). Pellets PSF1 contained approximately 8% less P than PDG1. In PSF2, the P content was approximately 15% lower than in PDG2.
Differences in potassium (K) content in the analyzed pellets were even more evident than the differences in phosphorus content. K content in PSF1 was more than 44% lower than in PDG1, and in PSF2 approximately 34% lower than in PDG2 (Table 3). In digestate, potassium primarily occurs in an unbound ionic form that, during separation, mainly flows to the liquid fraction. In our study, 87% of the K contained in DG1 flowed to LF1 (Table 4). Such large flows of K to LF may rule out returning LF to the fermenter in order to dilute the solid substrates. Such an engineering solution has been proposed [37] but, as the conducted research shows, it may lead to excessive salinity and inhibition of methane fermentation. Potassium recovery from LF is difficult, because this nutrient forms soluble salts that cannot be precipitated from solution. Moreover, membrane technologies can be used only to a limited extent [38]. More advanced treatment methods are too expensive considering the amount of LF produced in a biogas plant [37]. Hence, it can be concluded that LF, as a nitrogen- and potassium-rich, liquid, fast-acting fertilizer, is the best eco-friendly and cost-effective solution.
Mg and Ca contents in the analyzed digestate-based products were lower than the contents of the remaining macroelements (Table 3). Magnesium content was the lowest in the LFs (averaging 0.07 g kg−1 FM) and, as expected, the highest in pellets (from 5.1 g kg−1 FM in PDG1 to 10.7 g kg−1 FM in PDG2). During LF/SF separation, more magnesium flows to the SF (approximately 73% from DG1 and 87% from DG2). The opposite dependency was observed in the case of Ca. During separation, more Ca flows from the DGs to the LFs (Table 4). Due to this, no considerable differences were recorded between Ca content in DGs and LFs.
The results indicate that the various forms of digestate may be a valuable source of nutrients for plants. The potential of digestate to harm the environment and human health, however, is a matter of concern [29]. An important indicator used to assess the agronomic quality of digestates is the content of heavy metals. Contents of heavy metals (HMs) in the analyzed digestate-based products were low (Table 3). The lowest content of HMs was determined in the LFs and DGs. Mean content of HMs in the LFs (averaging LF1 and LF2) was approximately 15.2 mg Zn, 3.7 mg Cu, and 6.6 mg Mn kg−1 FM. Mean content of HMs in the DGs (averaging DG1 and DG2) was 22.2 mg Zn, 6.4 mg Cu, and 9.5 mg Mn kg−1 FM. The contents of Zn and Mn in the SFs were more than twice as high, and of Cu more than three times as high, as in the DGs. The obtained results correspond with the scientific literature. For example, Tambone et al. [12] report for DG a Zn content of 13.5 mg kg−1 FM and Cu of 4.2 mg kg−1 FM; for the LF, 10.1 mg Zn kg−1 FM and 3.0 mg Cu kg−1 FM; and for the SF, 69.9 mg Zn kg−1 FM and 22.1 mg Cu kg−1 FM. In our study, during SF/LF separation, more Zn and Mn flowed from DG to LF (Table 4), while Cu was distributed evenly between SF and LF. The highest content of HMs was observed in pellets of both SFs and DGs.
Crop Yields
The results from this study showed that the use of digestate increased maize yields, and the form of digestate was a factor determining their size. Such fertilization effects have been observed in previous research studies with maize and other plants [39,40] (Table 5). The literature provides considerable data on the fertilizer value of digestates. Significant yield potential of digestate has also been demonstrated by Szymańska et al. [41]. According to Riva et al. [28], digestate application resulted in a maize yield as high as that obtained by using urea. Meanwhile, Greenberg et al. [42] reported that the use of digestate from AD resulted in lower aboveground crop biomass production than the application of mineral fertilizer. Lošák et al. [43] reported that the yield potential of digestate is higher when it is used in combination with mineral phosphate fertilizers. The average yield of maize in our study ranged from 252.75 g FM pot−1 on the control object to 447.50 g FM pot−1 on the object treated with DG2. Maize plants treated with DGBPs gave considerably higher yields than the control. However, considerably lower maize yields were obtained when fertilizing with the pelletized forms of DGs and SFs. This suggests that this form of digestate is rather suitable as a slow-nutrient-release (mid- and long-term) organic fertilizer. According to Dahlin et al. [44], pellets from digestate should find application in the private garden sector.
A considerably greater (short-term) yield-generating effect was observed after fertilization with PDG1 and PDG2 than with digestate solid fraction pellets (PSF1 and PSF2). It appears unjustified to dry digestate for the purpose of retaining nutrients that easily flow to the liquid fraction during DG separation. The difference in the yields of maize between PSF1 and PSF2, and between PDG1 and PDG2, averaged approximately 38 g FM pot−1 (11%). Relatively high crop yields were obtained on soils treated with unprocessed digestate DG2 and the liquid fraction obtained from that DG (LF2). On average, the maize yields obtained under these treatments exceeded the control yield by 75%. In summary, the study results highlight that, irrespective of the substrates used for the production of biogas, an evidently better yield-generating effect is provided by unprocessed digestate and the liquid and solid fractions of digestate than by pellets of SFs and DGs. Regression analysis (Table 6) showed that, among the applied nutrients, only the dose of NH4+-N had a statistically significant relationship with maize yield, indicating that yields mainly benefitted from this nutrient. This confirms that the different forms of DGs containing an active form of nitrogen are a suitable alternative to mineral nitrogen fertilizers. The results suggest that digestate processing techniques should especially consider the retention of mineral N in the fertilizer mass. Heavy metal (HM) contents in the tested products had no significant effect on yields.
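The multiple regression with backward selection of variables behind Table 6 can be sketched as an iterative ordinary-least-squares fit that drops the least significant predictor until all remaining p-values clear a threshold. A minimal illustration with statsmodels, assuming that package and using made-up columns standing in for nutrient doses (not the study's data):

import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_select(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Drop the predictor with the largest p-value until all are below alpha."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return model, cols
        cols.remove(worst)
    return None, []

# Toy data: yield responds to the NH4-N dose; the other doses are noise here.
rng = np.random.default_rng(0)
X = pd.DataFrame({"NH4_N": rng.uniform(0, 0.7, 40),
                  "P": rng.uniform(0, 0.5, 40),
                  "K": rng.uniform(0, 1.0, 40)})
y = 250 + 280 * X["NH4_N"] + rng.normal(0, 15, 40)
model, kept = backward_select(X, y)
print("retained predictors:", kept)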
Chemical Composition of Crops
The application of digestates and digestate-based products affects the chemical composition of crops [33]. The lowest nitrogen content was found in plants growing on the control object (14.57 g N kg−1 DM) (Table 5). Nitrogen content in plants growing on objects fertilized with LFs was more than 11% higher than on the control object. Higher nitrogen content was determined in plants fertilized with DGs and SFs than in those fertilized with other forms of digestate; nitrogen content in plants on these objects was 17% and 25% higher, respectively, in comparison with plants from the control object. Fertilization with digestate pellets increased nitrogen content in the plants to the lowest degree (by approximately 8% in comparison with the control). Similar to the nitrogen results, the phosphorus content in plants was lowest for the control (1.40 g P kg−1 DM). The highest P content was determined in plants fertilized with pellets (PDG1, PDG2, PSF2, and PSF1); it was approximately twice as high as on the control object. P content in plants on the DG, LF, and SF objects was significantly higher (by approximately 33-50%) than on the control object. As expected, the mean potassium content (20.31 g K kg−1 DM) in maize was higher than that of nitrogen, phosphorus, calcium, and magnesium (16.60 g N, 2.33 g P, 2.13 g Ca, and 0.94 g Mg kg−1 DM, respectively). Potassium contents strongly depended on the form of the applied digestate (Table 5). The lowest potassium content was determined in plants growing in the soil fertilized with DG2 (14.75 g K kg−1 DM). Potassium content in maize on objects fertilized with DGs, LFs, and SFs was significantly lower than on the control object. Fertilization with PSFs and PDGs considerably increased potassium content in plants, by approximately 22-31% in comparison with the control object. The magnesium content in plants varied from 0.73 to 1.22 g Mg kg−1 DM. Only fertilization with DG2, LF1, and SF2 significantly increased Mg content in maize. On the remaining experimental objects, Mg content in maize was close to that on the control object. The calcium content in plants varied from 1.40 to 3.12 g Ca kg−1 DM. Only on objects fertilized with LFs was a significantly higher Ca content in maize determined in comparison with the control object.
The use of digestate and digestate-based products for fertilization raises concerns about deterioration of biomass quality, especially in the context of heavy metal (HM) content [45,46]. Among the analyzed HMs, Mn amounts in plants were highest. The average content of manganese in plants was higher than the zinc and copper contents (52.80 mg Mn, 17.34 mg Zn, and 2.51 mg Cu kg−1 DM, respectively). On all objects treated with DGBPs, Zn content in maize was significantly lower than the content of that HM in maize sampled from the control object (Table 5). Moreover, Mn content in maize on the majority of fertilized objects was lower than on the control object. Only in maize fertilized with DG2 and SF2 was the Mn content approximately similar to that in plants growing on the control object. Copper content in plants varied from 1.96 mg Cu kg−1 DM on object DG1 to 3.22 mg Cu kg−1 DM on object PDG2. Only the application of pellets obtained from DGs (PDG1, PDG2) significantly increased Cu content in maize in comparison with the control. On the remaining objects, Cu content in plants did not significantly differ from, or was even significantly lower than, that in plants from the control object (Table 5). In summary, the study results show that fertilization with digestates and digestate-based products mostly decreased the content of manganese, zinc, and copper in fertilized plants in comparison with the control object. This may be caused by the chelating effect of the organic matter contained in the tested products, which decreases the bioavailability of HMs for plants. Only fertilization with pellets from unprocessed digestate increased copper content in plants. Our experiment suggests that, among the analyzed macronutrients (N, P, K, Mg, Ca) and heavy metals (Zn, Cu, Mn), only the concentrations of Mg and Mn in plants did not show a significant linear relation with the dose of these components provided by the different forms of digestate (Table 7).
Apparent Macronutrients and Heavy Metals Recovery
The apparent macronutrient recovery (ANR) by plants from the tested products depended on their form and the type of nutrient (Figure 3). The highest ANR values were determined on objects fertilized with LFs. This results from the fact that this digestate fraction primarily contains soluble forms of nutrients readily available to plants. For the majority of macronutrients, ANR on these objects considerably exceeded 100%. This points to intensive uptake of nutrients from soil resources by the greater mass of plants than that obtained on the control object. High ANR values were also recorded on objects fertilized with DGs (approximately 100% or higher). It can therefore be concluded that the fertilizer use of DGs and LFs will not be associated with leaching of nutrients to the environment. However, the results of this pot experiment require further validation in field experiments.
Considerably lower ANR was recorded on objects fertilized with pellets, i.e., PDGs and PSFs, particularly in reference to Ca and Mg. On these objects, the highest values were reached by apparent potassium recovery (approximately 90%). This was similar to that obtained on objects fertilized with DGs. Apparent N recovery reached 73.3% and 53.4%, respectively, for the PDG and PSF treatments. This suggests that maize uses the nitrogen contained in these products very efficiently. Corréa et al. [47] obtained only 30 to 35% apparent nitrogen recovery of urine-N in grass cultivation. P recovery from fertilizers is usually low. It is one of the causes of its accumulation in the soil and of run-off, or leaching of P from the soil to surface waterbodies. In the conducted experiment, very high values of apparent P recovery from PDGs (45%) and PSFs (40%) were obtained. In studies conducted by Sarvi et al. [48], apparent P recovery was lower than in our research, reaching approximately 23%. Apparent Zn and Cu recovery from different forms of digestate was very low (Figure 4). Only on objects fertilized with LFs did the AHMsR value exceed 10%. Only apparent Mn recovery was high, particularly on objects fertilized with LFs and DGs.
Conclusions

Biogas production has become more popular and regionally concentrated in recent decades, creating areas with high digestate surpluses compared with crop lands and pastures. The situation creates a need for the development of digestate processing technologies that allow the acquisition of valuable digestate-based products. Different processing technologies can be employed to produce nutrient-rich products. Mechanical separation of DGs leads to the separation of fresh matter and nutrients into the liquid and solid fractions. More than 80% of nitrogen and 87% of potassium flows from DGs to the LFs, whereas more than 60% of phosphorus and 70% of magnesium flows to the SFs. All tested DGBPs were relatively valuable by-products that should be used as fertilizers due to their richness in plant-available nutrients. The non-treated digestate (DG) and liquid fraction (LF) may have the advantage of delivering nutrients to plants more rapidly than the pelletized forms (PDGs and PSFs). The nutrients applied in the form of DGs and LFs were fully consumed by the maize (apparent nutrient recovery exceeded 100%). This means that fertilization with these products does not lead to losses of soil nutrients. Pelletized forms of digestate can be applied as a slow-release organic fertilizer. This type of fertilizer has recently been promoted due to its lower negative impact on the natural environment. However, the conducted study showed that its production could lead to significant nitrogen losses (more than 95%) by ammonia volatilization.
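The separation mass balance summarized above translates directly into a partition table. A small sketch, assuming the liquid-fraction shares quoted in this study (the phosphorus and magnesium LF shares are taken as complements of the stated solid-fraction figures):

```python
# Nutrient partition between the liquid (LF) and solid (SF) fractions after
# mechanical separation of digestate, using the shares quoted above
# (P and Mg are given in the text as SF shares; LF receives the remainder).
lf_share = {"N": 0.80, "K": 0.87, "P": 1.0 - 0.60, "Mg": 1.0 - 0.70}

for nutrient, to_lf in lf_share.items():
    print(f"{nutrient}: {to_lf:>4.0%} to LF, {1.0 - to_lf:>4.0%} to SF")
```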
2022-04-20T15:14:31.810Z
2022-04-17T00:00:00.000
{ "year": 2022, "sha1": "9bb20e5582439a7098adfd2d7e0cfaf681a02118", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/8/4052/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d6cc574740647510f75b9878b736558f57881216", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
87414149
pes2o/s2orc
v3-fos-license
Preharvest Peroxyacetic Acid Sprays Slow Decay and Extend Shelf Life of Strawberries

Strawberry is an important fruit crop in Florida. Yearly losses can be attributed to pre- and postharvest decay incited by Botrytis cinerea P. Micheli ex Pers. and postharvest decay resulting primarily from Rhizopus stolonifer (Ehrenb. ex Fr.) Vuillemin. In this study, the sanitizer peroxyacetic acid (100 mL·L−1) was sprayed on flowers and developing strawberries 1, 2, and 3 d preharvest. Most of the time, fruit sprayed 3 d before harvest had significantly less decay than fruit sprayed 1 d preharvest or not sprayed when stored at 18 °C. Strawberries sprayed in the field with peroxyacetic acid and then coated postharvest with a 1% chitosan coating had reduced decay compared with fruit only treated preharvest with peroxyacetic acid (PAA) for up to 12 days in storage. Sensitivity of B. cinerea hyphae and conidia to PAA was shown by the presence of a zone of inhibition using the disc assay method.

The United States is the largest producer of strawberries (U.S. Dept. of Agriculture, 2005), with Florida second only to California in strawberry production. Strawberries have high levels of antioxidants (Wu et al., 2004) and are under increasing demand by consumers. Strawberries are extremely fragile and perishable, necessitating minimal handling after harvest (Mitcham and Mitchell, 2002). For this reason, strawberries are harvested and packed in the field directly into retail clamshell containers that are delivered to the supermarket. Any treatment to reduce decay of strawberries would best be done as a preharvest operation to fit the current industrial harvesting and handling practices. Postharvest decay treatments to strawberries would only be accepted by the industry if decay reduction and the resulting shelf life extension were very significant, justifying a change in the current harvesting and handling operation. However, postharvest treatments could increase shelf life of processed strawberries for the cut fruit industry. In most cases, any postharvest handling of strawberries leads to injury, which provides increased opportunity for wound pathogens and enhances decay.

Strawberry losses resulting from diseases are often difficult to quantify because of plant cultivar and cultural practices, which vary with locality, handling, storage, and marketing (Maas, 1980). Fungi are significant pre- and postharvest decay organisms for strawberries. Botrytis berry rot (causal organism, Botrytis cinerea) causes both pre- and postharvest disease. It often initiates infection in the field at the flowering or young fruit stage, often remaining latent until postharvest (Blacharski et al., 2001; Maas, 1980; U.S. Dept. of Agriculture, 2005). Botrytis is a facultative saprophyte producing a repeating cycle of asexual spores on senescent tissues and diseased flowers or fruit that are dispersed to young plant tissues by rain, wind, or insects (Blacharski et al., 2001; Maas, 1980).

More important as postharvest than preharvest pathogens are Rhizopus stolonifer and Mucor spp., Zygomycetes commonly found in soil. These fungi are wound parasites and can become established on ripe fruit within 12 h (Maas, 1980). Although preharvest applications of fungicides have been shown to increase yields and decrease postharvest decay incited by B. cinerea, R.
stolonifer and related opportunistic organisms are not well controlled by preharvest fungicidal sprays. Most fungicides are residual on the fruit/plant, and because they are still present, fungi can acquire resistance to them, rendering them ineffective (Maas, 1980; Maas and Smith, 1972). Additionally, synthetic fungicides are not acceptable for the organic market.

After harvest, refrigeration is most commonly used to slow decay in strawberries and maintain quality (El Ghaouth et al., 1991; Maas, 1980; Nunes et al., 2002). Most fungicides cannot maintain strawberry quality without the aid of refrigeration (Blacharski et al., 2001). In addition to preharvest treatments, postharvest applications of films and coatings such as chitosan act as antimicrobial agents while maintaining fruit quality (El Ghaouth et al., 1991, 1992, 1997; U.S. Dept. of Agriculture, 2005). For organic fruit, use of acidic vapors, food additives, and water dips offers some protection from decay (Karabulut et al., 2004; Park et al., 2005; Sholberg et al., 2000).

While postharvest surface treatments may delay decay, keeping spores from developing on plant tissues while in the field is most efficacious (Blacharski et al., 2001; Maas, 1980; Maas and Smith, 1972; U.S. Dept. of Agriculture, 2005). Our primary objective was to lengthen shelf life of strawberries by using the nonresidual commercial disinfectant peroxyacetic acid (PAA) as an antimicrobial preharvest spray to reduce postharvest decay. Postharvest coatings were also applied to the fruit surface to enhance the antimicrobial control of the preharvest application of PAA. This disinfectant/sanitizer is soon to be approved for the organic market and has been shown to be effective against postharvest decay when applied postharvest on mango and citrus (Narciso, 2005; Narciso and Plotto, 2005).

Experimental strawberry plants, Fragaria ×ananassa Duchesne, variety 'Strawberry Festival', were located at the Florida Strawberry Growers Research and Education Center in Dover, Fla. Plants were in a commercial field in a double-row bed with 30.5-cm spacing between rows. Water and fertilizer were provided through drip tape after initial overhead irrigation after transplanting. The rows were ≈74 m long and each row was divided into six blocks with 54 to 57 plants in each block. A buffer area of ≈0.61 m at the start and the end of each row was used to better isolate experimental plants from open areas. Studies on strawberry plants in this field began in Jan. 2006.

Spraying. Before spraying, ripe strawberries were harvested from all plants in the experimental areas including control (nonsprayed) blocks. This left only flowers and very young strawberries, which would result in fruit that were synchronized in ripening. Commercial PAA (OxiDate; BioSafe Systems, Glastonbury, Conn.)
was mixed on-site (100 mL·L−1) in 8-L hand-sprayers (Chapin, Batavia, N.Y.). Spraying took place 3 d, 2 d, and 1 d (3S, 2S, and 1S) before harvesting the fruit from the experimental rows (e.g., 3S fruit were harvested the morning of the fourth day after spraying, leaving plants in "contact" with the spray for 3 d). Three days before the first harvest, the first block of each of two experimental rows was sprayed (3S) with 14 L of PAA per block area using a heavy mist setting and completely covering all surfaces and all parts of the plant. Plants were not resprayed during the course of each experiment. Handheld plastic barriers were used to prevent spray from drifting to unsprayed areas. The next day, the spray protocol was repeated, and again on the third day. Three days after the initial spray, all the ripe strawberries in the experimental spray and nonsprayed areas were harvested. The strawberries were picked with gloved hands and placed directly into polyethylene terephthalate 325-mL vented clamshell containers (Pactiv Corp., Lake Forest, Ill.), 10 fruit per container, six to 15 containers per treatment, similar to commercial operations. The clamshells were packed in plastic crates and taken back to the Citrus and Subtropical Products Laboratory in Winter Haven. The strawberries were stored at 5 °C overnight to remove field heat and then moved to an 18 °C storage room with 95% relative humidity (RH). Temperature and humidity were monitored by dataloggers (Dickson Pro Series; Dickson, Addison, Ill.). Decay was evaluated every few days; strawberries were considered decayed when 30% or more of the surface was covered with lesions or there was visible mycelium. This process of spraying and evaluating decay was repeated three times in Jan., Feb., and Mar. 2006.

Determination of microbial load on immature fruit. A study of the effect of PAA on the microbial load of developing fruit was made concurrently with the Mar. 2006 spray experiment. On the first day of spraying (3 d before harvest), immature strawberry samples (green to white color stage) were randomly picked from each of the three experimental blocks (spray areas) and placed into sterile WhirlPak bags (Nasco, Modesto, Calif.), five fruit per bag, four bags per experimental block. The bags were placed in a cooler and taken back to the laboratory, where they were weighed. After weighing, 99 mL of sterile phosphate buffer (pH 7.2) was added to each bag, and the strawberries and the buffer were manually agitated for 2 min to remove the microflora from the surface of the fruit. The buffer was analyzed for the presence of any microorganisms removed from fruit surfaces by using the protocol described by Narciso and Plotto (2005).

Weather. Averages of weather parameters during the duration of each study, such as air temperatures (60 cm above the surface of the soil), precipitation, wind speeds, and solar radiation, were evaluated to better understand differences in field results. Weather data were obtained from the Florida Automated Weather Service, Dover Station (State of Florida, 2006).
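The rinse-and-plate assay above yields colony counts that are back-calculated to a per-gram surface load. The plating details are in the cited protocol (Narciso and Plotto, 2005), so the sketch below only illustrates the standard dilution arithmetic, with made-up numbers:

```python
def cfu_per_gram(colonies, plated_ml, buffer_ml, dilution_factor, sample_g):
    """Back-calculate surface microbial load from a rinse-and-plate assay.

    `colonies` grew from `plated_ml` of a rinse diluted `dilution_factor`-fold;
    the whole sample (mass `sample_g`) was rinsed in `buffer_ml` of buffer
    (99 mL of phosphate buffer in this study).
    """
    cfu_in_buffer = colonies * dilution_factor * (buffer_ml / plated_ml)
    return cfu_in_buffer / sample_g

# Hypothetical numbers: 42 colonies from 0.1 mL of a 10-fold dilution,
# five fruit weighing 60 g in total, rinsed in 99 mL of buffer.
print(f"{cfu_per_gram(42, 0.1, 99.0, 10, 60.0):.2e} CFU/g")
```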
Postharvest treatments. In Mar. 2006, experimental blocks were sprayed 3 d preharvest with a 100 mL·L−1 solution of PAA following the protocol previously described. On harvest day, ≈300 strawberries from plants sprayed 3 d earlier were picked and placed in a clean container. Three hundred fruit were also picked from the corresponding no-spray (control) group of plants and placed in a separate container. Strawberries were taken back to the laboratory, sorted, and stored at 5 °C until treated.

Previously sprayed and nonsprayed stored strawberries were divided into four groups of 10 per treatment. Postharvest treatments were manually sprayed onto fruit using 250-mL misters (Fisher Scientific, Atlanta). There were five treatments applied to strawberries harvested from sprayed and nonsprayed sections of the field: no treatment, distilled water, 50 mL·L−1 PAA (a formulation of PAA rated for postharvest use; StorOx; BioSafe Systems, Glastonbury, Conn.), chitosan (0.1% in 0.5% glacial acetic acid; France-Chitine, Orange, France), and sodium propionate (0.5%; Avocado Research Chemicals, Ltd., Lancashire, U.K.). The strawberries were spread on plastic mesh (commercial mesh size 0.9 × 1.3 cm) stretched between 30.5 × 30.5-cm polyvinyl chloride frames to allow the treatments to drain and the fruit to dry. After drying, strawberries were placed into containers as described previously, 10 fruit per container. The fruit were stored at 18 °C at 95% RH. Decay was logged as previously described.

Determination of pathogen sensitivity to peroxyacetic acid and chitosan. To test the effect of PAA on B. cinerea and R. stolonifer spores, the disc assay method was used. Spores were collected from plates of B. cinerea or R. stolonifer. Organisms were grown on potato dextrose agar for 5 to 7 d at 25 °C. Spores were removed from the colony surface with a solution of sterile water and 0.1% Tween 20 while gently rubbing the plate surface with a sterile glass rod. Spores were filtered through three layers of cheesecloth and adjusted to ≈3.0 × 10⁵ spores/mL with a hemocytometer (Hausser Scientific, Horsham, Penn.). Two hundred fifty microliters of inoculum of either B. cinerea or R. stolonifer was placed on the surface of potato dextrose agar plates and evenly spread with a sterile glass rod. Four sterile filter paper discs (10.5 mm) (Ace Glass, Vineland, N.J.) were placed in a container with a solution of 100 mL·L−1 PAA and swirled for 30 s. The discs were drained, removed with sterile forceps, and placed on the surface of the inoculated plates. Plates were incubated at 25 °C for 10 to 14 d.

Determination of sensitivity of Botrytis to chitosan coating. The effect of chitosan on growth of B. cinerea was determined using the same method described previously. The discs were placed in 0.1% chitosan in 0.5% glacial acetic acid for 30 s and placed on plates coated with the B. cinerea inoculum. The chitosan buffer was also tested.

Statistical analysis. The Wilcoxon rank-sum test for difference in medians and equal variance (t test) and the Kruskal-Wallis test were used to determine significance between decay rates of the different spray groups and the average decay of the nonspray group. Tests were based on data distribution (Number Cruncher Statistical System, Kaysville, Utah; and SAS System Software Version 9.1; SAS Institute, Cary, N.C.) with P ≤ 0.05 designated as significance of difference.
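The two nonparametric tests named above are available in standard statistical libraries. A minimal sketch with SciPy, using made-up per-clamshell decay percentages in place of the actual counts:

```python
import numpy as np
from scipy import stats

# Hypothetical per-clamshell decay percentages (10 fruit per clamshell);
# the real data are the January-March counts summarized in Figs. 1 and 2.
decay_3s = np.array([10, 20, 10, 30, 20, 10])   # sprayed 3 d preharvest
decay_2s = np.array([20, 20, 30, 30, 40, 20])   # sprayed 2 d preharvest
decay_1s = np.array([40, 50, 40, 60, 50, 40])   # sprayed 1 d preharvest
decay_ns = np.array([40, 50, 60, 40, 50, 70])   # not sprayed

# Wilcoxon rank-sum test: difference in medians between one spray group
# and the nonsprayed control.
stat, p = stats.ranksums(decay_3s, decay_ns)
print(f"3S vs NS: rank-sum p = {p:.4f}")        # significant if p <= 0.05

# Kruskal-Wallis test across all four groups at once.
h, p_kw = stats.kruskal(decay_3s, decay_2s, decay_1s, decay_ns)
print(f"Kruskal-Wallis p = {p_kw:.4f}")
```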
Results and Discussion

Designations are S for sprayed and NS for nonsprayed strawberries. The 3S fruit were sprayed 3 d before harvest, 2S fruit were sprayed 2 d before harvest, and 1S fruit were sprayed the day before harvest.

In January, two experimental groups were picked within 3 d of each other and designated as harvest 1 and 2. Decay rates over the postharvest storage period (days after harvest) are summarized for the two January harvests. For harvest 1, 16 d postharvest, 3S strawberries had significantly (Figs. 1 and 2A) less decay (20%) than NS strawberries (50%), whereas 2S and 1S strawberries had 35% and 63% decay, respectively, and were not significantly different from the average NS decay rate of 50%. However, 17 d after harvest, both 3S and 2S strawberries had significantly less decay (37% and 42%, respectively) than the NS group (60%). Twenty days postharvest, all sprayed groups had decay that was not different from the NS group. Data from harvest 2 in January showed similar results. Thirteen days postharvest, 3S and 2S spray strawberries had significantly (Fig. 1) less decay (12% and 21%, respectively) than the 1S (43%) or the NS group (48%). Seventeen days postharvest, only the 3S group had less decay than the NS group. For both January trials, those strawberries sprayed 3 d (3S) before harvest had slightly less decay than the 2S group and significantly slower rates of decay than the 1S and NS groups.

For the combined January data, the 2S and 3S strawberries exhibited less decay for 16 to 20 d in storage compared with the 1S and NS fruit, with the 3S group being most resistant to postharvest decay. Weather parameters for these trials showed temperatures at 60 cm above the surface of the soil with an average high of 22.6 °C and average low temperature of 5.9 °C, with negligible rain and intermittent sun (163 W·m−2).

In February, there were two experimental groups picked from different sectors of the commercial field: Expt. 1 contained interior blocks protected from open spaces and Expt. 2 contained exterior blocks at the edge of the field. Decay rates over the postharvest storage period (days after harvest) are summarized for the two field locations for February in Figure 2B. Data for Expt. 1 (blocks from the interior of the field) showed no significant difference in decay between sprayed and nonsprayed strawberries until 13 d after harvest (Fig. 1). After 13 d, the 3S and 2S groups had less decay (67% and 48%, respectively) than the 1S (80%) and the NS (81%) groups (Fig. 1). After 17 d, all groups had decay greater than 90%.

For the combined February data, the 2S and 3S strawberries exhibited less decay for 9 and 13 d in storage compared with the 1S and NS fruit. Weather data for this time period showed temperatures with an average high of 20.0 °C and average low of 16.0 °C at 60 cm above the soil surface, with negligible rain and intermittent sun (159 W·m−2). Wind speeds, as would have affected the exterior block, were between 0.9 and 1.8 m·s−1. Although the actual decay rates were different between the January and February experimental groups, the strawberries that were sprayed 3 d before harvest had the slowest decay rates, followed by the 2S group. Those strawberries sprayed the day before harvest did not show any difference in decay rates when compared with the nonsprayed group.

Data for March show decay rates for all three spray groups (3S, 2S, and 1S) were significantly less than those of the NS group (Figs.
1 and 2C). Nine days postharvest, percent decay of 3S, 2S, and 1S was 9%, 10%, and 13%, respectively, whereas the NS group was 28%. At 17 d postharvest, decay for the sprayed fruit was still less (3S = 80%, 2S = 82%, and 1S = 85%) than the NS group (94%). Temperatures for March at 60 cm above the soil surface ranged from an average high of 25 °C to a low of 10 °C. One cold night (4.6 °C) offset a general increase in temperatures. Precipitation was minimal but there were several sunny days (244 W·m−2).

In summary, data for the months of January and February show that strawberries sprayed 3 d preharvest had reduced decay when compared with strawberries sprayed 1 d preharvest or not sprayed, and all sprayed treatments had reduced decay compared with those not sprayed in March.

Microflora on immature fruit. To understand what seemed to be a residual effect of PAA on strawberry decay organisms, green fruit were assessed daily for surface microflora populations. Data show a continuing decline in microbial populations after the initial spray when compared with the initial nonsprayed plants (Fig. 3). Microorganisms on the surface of the immature strawberries were significantly reduced in the 3S, 2S, and 1S groups up to 3 d postspray (Fig. 3).

PAA is volatile, breaking down to release oxygen and acetic acid, but as a compound it is not residual on fruit surfaces. The data from the immature strawberry study suggest that it continues to reduce microbial populations after initial application. This would indicate that over time, the number of organisms on the fruit surfaces decreases as a result of cell death when exposed to PAA. Sublethal cells would be unable to make repairs while remaining in the now-acidified environment of the strawberry surface. Immature strawberries showed a continued decline in the microbial population after spraying, which corresponds with the ripe strawberry studies. Strawberries sprayed 3 or 2 d preharvest had significantly lower rates of decay than fruit sprayed just before harvest or not sprayed, likely as a result of a reduction in microorganisms and, subsequently, their growth.

Determination of pathogen sensitivity to peroxyacetic acid. Evidence of sensitivity of B. cinerea hyphae and conidia to PAA was shown by the presence of a zone of inhibition (≈1 cm) around each of the discs after 5 d growth. After 10 d, the inhibition area was still obvious, although B. cinerea hyphae were beginning to move closer to the discs (Fig. 4). R. stolonifer was not as sensitive as B. cinerea to the presence of PAA. After 10 d, R. stolonifer growth in PAA plates was almost as dense as in the control. The indication that PAA had any effect on R. stolonifer was decreased sporulation over parts of the plate that had exposure to PAA (Fig. 5). The disc assay study served as an indicator of the possible reduction of growth of B. cinerea and R. stolonifer by PAA when applied on strawberries in the field.
Postharvest treatments. To determine if postharvest antidecay treatments could enhance the decay reduction obtained with the preharvest PAA sprays, strawberries sprayed with PAA 3 d preharvest and strawberries from corresponding nonsprayed blocks were harvested, brought to the laboratory, and treated with postharvest antidecay compounds or coatings, including a lower (approved for postharvest application) concentration of PAA (Table 1). The preharvest PAA-sprayed strawberries with the postharvest spray treatment had generally less decay than nonsprayed fruit with postharvest treatments, except for sodium propionate, after 12 d (Table 1). This was significant for "no postharvest treatment" on day 6; the postharvest water treatment on day 12; and the postharvest chitosan treatment on days 6 to 12.

Chitosan coating on presprayed fruit significantly reduced decay (17.5% decay) for 8 d longer than the control (no preharvest spray or postharvest treatment, 62.5% decay) (Table 1). Potato dextrose plates containing 10-d B. cinerea cultures and filter discs with chitosan or its buffer showed no difference in fungal growth when compared with the control plates (Fig. 4), indicating that chitosan did not have a direct effect on the pathogen but may have protected the fruit by eliciting a plant defense response (Kendra and Hadwiger, 1984). Other studies have reported chitosan to damage fungal hyphae (El Ghaouth et al., 1997).

Studies by other workers have also shown that chitosan is effective in extending the shelf life of strawberries (El Ghaouth et al., 1991, 1992; Park et al., 2005). Data in Table 1 show that on some strawberries, exposure to PAA preharvest before further treatment reduced postharvest decay. PAA reduces microflora populations on the fruit. As an additional postharvest treatment, however, PAA has no effect or may even be damaging to ripe strawberries. If trichomes of the strawberries were damaged, it would result in increased infection. The high acidity of the combined pre- and postharvest PAA treatments may have damaged these structures. Discs with PAA in B. cinerea plates maintained areas of reduced or no growth (zones of inhibition) even after 10 d (Fig. 4).

Significance of results. All strawberries in these experiments were held at abusive temperatures (warmer than commercial storage) (Mitcham and Mitchell, 2002) to accelerate decay and simulate possible temperature abuse in transit or in consumer kitchens. Many studies have shown that cooling after harvest and in storage is important for extending shelf life (El Ghaouth et al., 1991; Maas, 1980; Nunes et al., 2002). In this study, at temperatures above the storage optimum, strawberries sprayed 3 d preharvest had significantly less decay than strawberries sprayed 1 d before harvest or nonsprayed at some point of time in storage. The majority of decay in the stored fruit in this study was incited by B. cinerea, followed by R. stolonifer. These organisms are the most problematic postharvest pathogens on strawberries (Blacharski et al., 2001; Bristow, 1986; Maas, 1980; Maas and Smith, 1972). Studies suggest that B. cinerea gains entrance into the strawberries in the field, remains latent, and causes decay after harvest (Bristow, 1986; Maas, 1980). Suggested controls include preharvest fungicide sprays at the prebloom, flowering, or young fruit stage (Blacharski et al., 2001; Maas, 1980; Maas and Smith, 1972). The prophylactic activity of the fungicide decreases the spores of B. cinerea that can invade young tissues. R.
stolonifer is more difficult to control with field sprays because it is a wound pathogen and ripe fruit offer a good substrate (Maas, 1980). PAA is a strong oxidizer and reduces microbial spore populations on fruit surfaces (Narciso, 2005). When PAA was sprayed on flowers and young fruit, spore numbers were reduced on surfaces, so fewer spores germinated and infected young tissue, reducing decay in fruits that were allowed to ripen (Fig. 3). Plants that were sprayed 3 d preharvest had only flowers and very young fruit (all ripe strawberries were harvested before the initial spraying). Our storage data show that decay was generally significantly reduced when PAA was sprayed on flowers and young fruit when compared with PAA sprays on ripe (1S groups) or nonsprayed strawberries (Figs. 1 and 2). Differences in the onset of decay for storage studies from January through March could be attributed to changes in disease pressure in the field and to the aging strawberry plants stressed by the increase in nighttime temperatures.

Other studies have also shown that B. cinerea and R. stolonifer spread in storage with fruit-to-fruit contact (Maas, 1980). In our clamshells, we found disease development on one or two stored strawberries that spread from the point of contact until all fruit in the clamshell were involved. A preharvest treatment to reduce spores and a postharvest antimicrobial treatment to reduce in-storage spread of decay organisms would seem an ideal system to lengthen shelf life of these fragile fruit. At a 100 mL·L−1 solution, PAA was not phytotoxic on leaves, flowers, or fruit. Pollinating insects were not deterred from flowers just sprayed (data not shown). Because U.S. Environmental Protection Agency field allowances for PAA are higher than what we used in this study, future work will involve testing increasing concentrations of preharvest PAA applications for better postharvest decay control. Times of applications, as well as assessing ripening strawberries for both their microbial loads and the effectiveness of PAA on these loads, will be further studied. Peroxyacetic acid could be used to complement other methods of decay control presently in use, reducing the use of fungicides in the field if they were alternated with PAA sprays.

Studies with postharvest treatments on strawberries previously sprayed with PAA showed variable results. In most cases, the addition of a coating or surface treatment did not lengthen storage time of the fruit, with the exception of the chitosan coating. The activity of the PAA and surface treatments needs further analysis to identify combined pre- and postharvest treatments that will most effectively maintain strawberry quality.

Fig. 1. Chart showing significance between decay rates in sprayed and control (unsprayed) strawberries during 17 to 24 d in storage. NS means no significance and X shows significance between the sprayed and nonsprayed populations of the same day: 3S = sprayed 3 d preharvest; 2S = sprayed 2 d preharvest; 1S = sprayed 1 d preharvest; NS = not sprayed.

Fig. 2. Decay rates of strawberries in clamshells stored at 18 °C for up to 24 d: 3S (sprayed 3 d preharvest); 2S (sprayed 2 d preharvest); 1S (sprayed 1 d preharvest); NS = not sprayed. Data ± SE (if error bars are not visible, they are under the symbol) are a summary of duplicated experiments in (A) January for two harvest dates (20 and 23 Jan. 2006) and (B) 24 Feb.
2006 for two areas of the field, and (C) a single experiment on 10 Mar. 2006. Six to 12, 11 to 15, and 14 to 17 clamshells with 10 strawberries each for January, February, and March, respectively, were counted per treatment in each experiment.
2019-03-31T13:46:17.380Z
2007-06-01T00:00:00.000
{ "year": 2007, "sha1": "19aef62d0e9fdaf110416e45c9bd9b66dd983f7d", "oa_license": null, "oa_url": "https://journals.ashs.org/downloadpdf/journals/hortsci/42/3/article-p617.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "54e8c0748e597309f6a88346abd2d5e92eb31369", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
202142913
pes2o/s2orc
v3-fos-license
Source separation and localisation via tensor decomposition for distributed arrays

This study focuses on the problem of power spectra separation and localisation of multiple sources using distributed arrays. First, the array structure and signal model are discussed. By cross-correlating the multi-channel received signals in the time domain, a third-order tensor is constructed. Then, utilising the multi-dimensional characteristic, the tensor is decomposed to separate the array manifold matrix and the power spectra matrix through the alternating least squares (ALS) method. Finally, the sources are located using the relative x-y plane relationship between the distributed arrays and the directions of arrival (DOA), which can be estimated by spectrum analysis of each column of the array manifold matrix. The effectiveness and superiority of the proposed method are demonstrated by simulation results.

Introduction

Source separation and localisation are widely used to solve the problem of spectrum management in cognitive radio and to localise the active emitter in electronic warfare and radar electronic countermeasures [1-4]. Spectrum sensing is the first step towards situation awareness. The radio frequency environment can be mapped out for highly efficient reuse in space and time by identifying the power spectra and locations of the active emitters, including enemy radars or jammers. There is a lot of literature on spectrum separation. A parallel frequency bin detection method is proposed in [1]. Exploiting the frequency-domain sparsity, compressive sensing is used in [2,3]. Most works address reconstructing the Fourier spectrum of the received signals, but in cognitive radio and passive localisation applications only the power spectrum (PS) is necessary. Thus, there is no reason to reconstruct the time-domain signals. Utilising the fact that the PS is the Fourier transform of the autocorrelation function, a finite number of autocorrelation lags can be estimated using received array signals. In [4], the separation of multiple received mixed spectra is treated as a non-negative matrix factorisation (NMF) problem. However, NMF is not unique in general; hence, the identification of the mixed spectra cannot be guaranteed. Moreover, most existing works consider a single receiving station, which only gives the direction of arrival (DOA) of each source, not the x-y plane location.

In this paper, we configure a distributed network with multiple stations to separate and localise multiple sources by estimating the PS and DOAs (with respect to each station). The array structure and signal model are given in Section 2. In Section 3, first, a third-order tensor is constructed by cross-correlating the multi-channel received signals in the time domain. Then, utilising the multi-dimensional characteristic, the tensor is decomposed to separate the array manifold matrix and the power spectra matrix through the alternating least squares (ALS) method. Finally, the sources are localised using the relative x-y plane relationship of the distributed arrays and the DOAs. Simulations are reported in Section 4. We conclude in Section 5.

The contribution of this paper lies in formulating the problem of joint power spectra separation and localisation of multiple sources as a problem of tensor decomposition, which benefits from cross-correlating the multi-channel received signals in the time domain. The identification of both the spectrum and the DOAs of each source can be guaranteed, as the tensor model is unique under fairly mild conditions.
Array configuration and signal model

Consider the distributed network configuration given in Fig. 1. There are P widely separated stations with index p = 1, 2, …, P; each station has N closely spaced sub-arrays with index n = 1, 2, …, N; and each sub-array is a uniform linear array (ULA) with M receiving antennas. Assuming that there are K uncorrelated narrowband sources in the far field, the received signal of the nth sub-array in the pth station is

x^(p)(n, t) = ∑_{k=1}^{K} β_{n,k}^(p) a(θ_{n,k}^(p)) s_k(t) + z^(p)(n, t),  (1)

where x^(p)(n, t) ∈ C^(M×1); β_{n,k}^(p) is the path loss and phase shift from the kth source to the nth sub-array in the pth station; a(θ_{n,k}^(p)) = [1, e^(j2πd sin(θ_{n,k}^(p))/λ), …, e^(j2π(M−1)d sin(θ_{n,k}^(p))/λ)]^T is the received steering vector of the kth source to the nth sub-array in the pth station; θ_{n,k}^(p) is the DOA of the kth source to the nth sub-array in the pth station; d denotes the distance between neighbouring receiving antennas; λ is the wavelength; s_k(t) is the narrowband transmitted signal of the kth source; and z^(p)(n, t) is the noise term. The N sub-arrays in each station are closely spaced, which means θ_{n,k}^(p) can be replaced by θ_k^(p). Then (1) can be written as

x^(p)(n, t) = ∑_{k=1}^{K} β_{n,k}^(p) a(θ_k^(p)) s_k(t) + z^(p)(n, t).  (2)

Our aim is to estimate the power spectra and the DOAs of all K sources, denoted by {P_k(f), θ_k^(p)} with k = 1, 2, …, K and p = 1, 2, …, P.

Temporal correlation

To estimate the power spectra and the DOAs of all K sources, we propose to formulate the problem in the temporal correlation domain. The K sources are uncorrelated, so the cross-correlation of the m_1th and m_2th antennas of the nth sub-array in the pth station is

r_{m1,m2}^(p)(n, τ) = E{[x^(p)(n, t)]_{m1} [x^(p)(n, t − τ)]_{m2}^*}.  (3)

Assuming that the noise at each antenna is white Gaussian, both temporally and spatially, with zero mean and variance σ², i.e. z^(p)(n, t) ∼ N(0, σ²) for all p and n, (3) can be expressed as

r_{m1,m2}^(p)(n, τ) = ∑_{k=1}^{K} (β_{n,k}^(p))² e^(j2π(m1−m2)d sin(θ_k^(p))/λ) r_k(τ) + σ² δ(τ) δ(m1 − m2),  (4)

where r_k(τ) is the autocorrelation of the kth source signal and δ(x) denotes the Kronecker delta function, i.e. δ(x) = 1 when x = 0 and δ(x) = 0 otherwise. Applying the discrete-time Fourier transform to (4) replaces r_k(τ) by the power spectrum P_k(f) of each source, giving (5). The frequency axis in (5) can be discretised into F bins, yielding G_n = [G_n^(1); ⋯; G_n^(P)] ∈ C^(PQ×F), with Q the number of correlation lags, and g_n = vec(G_n). Stacking g_n for n = 1, 2, …, N one after another, we obtain the data matrix H, where A = [A^(1); A^(2); ⋯; A^(P)] ∈ C^(PQ×K) is the array manifold matrix; B = [β_1, β_2, …, β_K] ∈ R^(NP×K) with β_k = vec(B_k), B_k being the received path loss matrix with (β_{n,k}^(p))² on the nth row and the pth column; and Z ∈ C^(FQP×NP) is the noise term.

Parameter matrix estimation via tensor decomposition

Definition 1: The n-mode matrix unfolding of an N-order tensor X ∈ C^(I_1×I_2×⋯×I_N), denoted by X_n ∈ C^(I_n×(I_{n+1}⋯I_N I_1⋯I_{n−1})), contains the (i_1, i_2, …, i_N)th element of X at the position (i_n, j), where the column index j is determined by the remaining indices. Conversely, given X_n, X can be reconstructed. Each third-order tensor can be recognised as a data cube, and its matrix unfoldings consist of the slices of the cube along different directions. Using Definition 1, a third-order tensor H can be formed from H in (7) [5]. The signal tensor H and its matrix unfoldings are given in Fig. 2. The minimisation problem in (9) can be solved via the ALS method [7], which successively estimates one of the three factor matrices, assuming the other two are known, in the least squares (LS) sense: given Â and B̂, update Ŝ (10); given Ŝ and B̂, update Â (11); given Â and Ŝ, update B̂ (12). In (10)-(12), the pseudo-inverse of the Khatri-Rao product is calculated as given in [5]. It can be seen that the estimated matrix Ŝ contains the power spectra of the sources, and Â contains the DOAs.

Remark 1: Use line search to accelerate the convergence of ALS. Mostly, ALS needs a large number of iterations before converging. The slowness in convergence can be due to the large size of the tensor, or to bad starting values.
Line search is an effective solution proposed to cope with the problem of slow convergence. Some line search methods can be used to speed up ALS so that it reaches the global minimum very quickly. Refer to [8,9] for details.

DOA estimation and source localisation

Let â_k = [â_{1,k}; â_{2,k}; ⋯; â_{P,k}] be the kth column of Â, where â_{p,k} = [α^(−(M−1)), …, α^0, …, α^(M−1)]^T ∈ C^(Q×1) and α = e^(j2πd sin(θ_{p,k})/λ). The DOA of the kth source with respect to the pth station can be estimated via spectrum analysis of â_{p,k}. In this paper, root-MUSIC [10], one of the classical spectrum analysis methods, is used. Calculate the noise subspace U of the covariance matrix R = â_{p,k} â_{p,k}^H via eigendecomposition. Then the qth coefficient of the polynomial equals the sum of the (q − Q)th diagonal of G = UU^H, with q = 1, 2, …, (2Q − 1). Then we have θ_k^(p) = asin[ρλ/(2dπ)], where ρ is the root of the polynomial that is nearest to and inside the unit circle. The (x, y) location of the kth source can be estimated using the relative triangular relationship of the P stations and the DOAs θ_k^(p), where p = 1, 2, …, P.

Steps of the proposed algorithm

The proposed algorithm mainly includes three steps: (i) from the temporal correlation domain, obtain the signal matrix H; (ii) utilising the multi-dimensional structure, construct the tensor H and decompose it by ALS; and (iii) estimate the DOAs from the manifold matrix and localise the sources. The outline of the proposed algorithm is summarised in Algorithm 1 (Fig. 3).

Simulations

In this section, we use computer simulations to demonstrate the effectiveness of the proposed algorithm. Case 1: Fig. 4 shows the estimated power spectra of the sources by the proposed method. The results are obtained by averaging 50 trials, to enable easy visual assessment of estimate variance, including leakage from one source to the others. We see that the proposed method identifies the power spectra of all three sources fairly well. Case 2: Fig. 5 shows the accuracy of the estimated power spectra and localisation versus SNR using the proposed method under different numbers of stations, P = 2, 3, 5. Under P = 5, the (x, y) locations of the stations are (−20, 0) km, (−10, 0) km, (0, 0) km, (10, 0) km, and (20, 0) km. Under P = 3 and P = 2, the first 3 and 2 stations are used, respectively. The other parameters are the same as in Case 1. The number of Monte-Carlo trials is 1000. The root-mean-square error (RMSE) of the estimated power spectra, denoted RMSE_PS, and the RMSE of the estimated (x, y) location, denoted RMSE_x-y, are adopted as the performance measures. It can be seen in Fig. 5 that the proposed method can jointly separate the PS and localise the (x, y) positions of multiple sources. RMSE_PS and RMSE_x-y both decrease as the SNR increases. The proposed method yields a reasonable RMSE_PS even at lower SNR.

Conclusion

The problem of joint power spectra separation and localisation of multiple sources for distributed arrays has been considered in this paper. Utilising the temporal correlation domain, this problem is formulated as a third-order tensor decomposition problem. The tensor and its matrix unfoldings are discussed. The ALS method is used to decompose the tensor, which yields the manifold of each array and the power spectra matrix. The DOAs of each source with respect to all arrays can be estimated by applying the root-MUSIC method to the estimated array manifold. Simulations illustrated the accuracy and efficacy of the proposed techniques.
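To make the decomposition step concrete, here is a minimal, self-contained CP/ALS routine for a generic third-order tensor in NumPy. It follows the textbook CP model with the unfolding conventions spelled out in the comments, not the exact H construction of the paper, and all dimensions are made up:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker (Khatri-Rao) product of B (J x K) and C (L x K)."""
    J, K = B.shape
    L, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * L, K)

def cp_als(X, K, n_iter=200):
    """Rank-K CP decomposition of a third-order tensor X (I x J x L) via ALS.

    Model: X[i, j, l] = sum_k A[i, k] * B[j, k] * C[l, k].
    Each factor is updated in the least-squares sense with the other two fixed.
    """
    I, J, L = X.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((dim, K)) for dim in (I, J, L))
    X1 = X.reshape(I, J * L)                       # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * L)    # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(L, I * J)    # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Quick self-check on a synthetic rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((dim, 3)) for dim in (8, 6, 5))
X = np.einsum('ik,jk,lk->ijl', A0, B0, C0)
A, B, C = cp_als(X, K=3)
Xhat = np.einsum('ik,jk,lk->ijl', A, B, C)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))  # ~0, up to ALS accuracy
```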
2019-09-10T09:09:33.679Z
2019-07-10T00:00:00.000
{ "year": 2019, "sha1": "6421c0756089014db130bdcde55f62557be71495", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1049/joe.2019.0045", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "39db8331ca93c359d342c0553db678ba626ded01", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
239768399
pes2o/s2orc
v3-fos-license
Ultra-relativistic electron beams deflection by quasi-mosaic crystals

This paper provides an explanation of the key effects behind the deflection of ultra-relativistic electron beams by means of oriented 'quasi-mosaic' Bent Crystals (qmBC). It is demonstrated that accounting for the specific geometry of the qmBC and its orientation with respect to a collimated electron beam, the beam size and emittance is essential for an accurate quantitative description of experimental results on beam deflection by such crystals. In an exemplary case study, a detailed analysis of the recent experiment at the SLAC facility is presented. The methodology developed has enabled us to understand the peculiarities in the measured distributions of the deflected electrons. This achievement constitutes important progress in the efforts towards the practical realization of novel gamma-ray crystal-based light sources and puts new challenges for theory and experiment in this research area.

In recent years significant efforts of the research and technological communities have been devoted to the design and practical realization of novel gamma-ray Crystal-based Light Sources (CLS) that can be set up by exposing oriented linear, bent or periodically bent crystals to beams of ultrarelativistic positrons or electrons [1,2]. The brilliance of radiation emitted in a crystalline-undulator LS by available beams in the photon energy range 10⁰-10¹ MeV, being inaccessible to conventional synchrotrons, undulators and XFELs, greatly exceeds that of laser-Compton scattering LSs and is higher than predicted in the Gamma Factory proposal to CERN [3]. Manufacturing of CLSs will have significant impact on many research areas in physics, chemistry, biology, material science, technology and medicine, being a subject of the current European projects 'N-LIGHT' [4] and TECHNO-CLS [5]. So far, oriented crystals exposed to beams of charged particles have already been utilised in a number of applications for beam manipulation, such as steering, bending, extraction and focusing; see [2,6] and references therein. These and other newly emerging applications in this research area require high-quality crystals (bent or periodically bent) and collimated beams of charged ultrarelativistic particles of different energies. Construction of novel CLSs is a challenging task involving a broad range of correlated research and technological activities [1,2]. During the last decade a number of papers have been published in high-impact journals [7-13] on channeling and channeling-radiation experiments with bent crystals at different facilities (SLAC, CERN, MAMI).
This paper reports on important progress in this field, providing an explanation of the key effects arising in the deflection of ultrarelativistic electron and positron beams by means of oriented 'quasi-mosaic' Bent Crystals (qmBC). It is demonstrated that accounting for the specific geometry of a qmBC and its orientation with respect to a collimated beam of projectile particles, the beam size and emittance is essential for the quantitative description of the experimental results on beam deflection by such crystals.

Manufacturing of crystals of different desired geometry is an important technological task in the context of their applications in the gamma-ray CLSs and the aforementioned experiments. A systematic review of different technologies exploited for manufacturing of crystals of different type, geometry, size, quality, etc. is given in [1,2,6]. A short summary of several relevant approaches that have been utilized to produce bent crystals is provided in the Supplemental Material (SM). High-quality qmBC structures with desirable and fully controllable parameters have been manufactured for the aforementioned channeling experiments by the following means [14-16]. When a moment of force is applied to a crystalline material, some secondary curvatures may arise within the solid [17]. A well-known secondary deformation is the anticlastic curvature with radius R_a that occurs in a medium subjected to two moments. In particular, it occurs in the perpendicular direction with respect to the primary curvature. When the two curvatures are combined, the deformed crystal acquires the shape of a saddle. In contrast to an amorphous medium, physical properties of crystals may be strongly anisotropic. Another type of deformation caused by anisotropic effects is the 'quasi-mosaic' (QM) curvature [18,19]. QM bent crystals belong to a class of bent crystals featuring two curvatures of two orthogonal crystallographic planes.

In order to understand the effects arising during channeling of charged particles through a qmBC one should consider the geometry of such a crystal and its orientation with respect to an incident beam. This geometry is shown in Fig. 1. For the sake of clarity, the case of planar channeling is addressed below. Consider a crystal whose planes, which are parallel to the (xy) plane, experience anticlastic bending with the curvature radius R_a. The center O of the curvature lies on the z axis, which runs through the crystal center. The QM bending deforms the crystal planes parallel to the (xz) plane. In what follows it is assumed that R_a and the QM bending radius R_qm greatly exceed the crystal thickness L. These conditions were met for the qmBC samples used in the experiments [7-13]. The QM bending angle is defined as follows:

θ_qm = L/R_qm.  (1)

FIG. 1. Geometry of the anticlastic and QM bending of a crystal plate of thickness L and its orientation with respect to an incident beam (shaded rectangle). The crystal thickness and the anticlastic R_a and QM R_qm radii shown in the picture are scaled to meet the values indicated in Refs. [8,14]. In the experiment [8] the y direction was chosen along the ⟨111⟩ axis. Further explanations are given in the text.

To start with, let us assume an ideally collimated narrow beam (i.e. one of zero divergence and zero beam size in the y direction, σ_φ, σ → 0) incident on the crystal along the z direction. For planar channeling, the beam size and divergence in the x direction do not play an important role and thus are not considered below.
At the crystal entrance, the angle θ_e between the beam direction and a tangent line to the QM bent plane depends on the beam displacement h along the y axis:

θ_e(h) = (h_0 − h)/R_a,  (2)

where

h_0 = R_a θ_qm/2  (3)

is the displacement for which the entrance angle θ_e = 0, i.e. the tangent line is parallel to the z axis. The probability of a particle being accepted into the channeling mode becomes significant if θ_e does not exceed Lindhard's critical angle θ_L. Then, using (2), one finds the maximum value of ∆h = h − h_0,

∆h_max = R_a θ_L,  (4)

so that the channeling condition is met for the particles with h within the interval h_0 ± ∆h_max. At the crystal exit, the angle θ_s between the tangent line and the beam direction is related to h via

θ_s(h) = θ_qm − θ_e(h) = θ_qm/2 + h/R_a.  (5)

Hence, the projectiles that are accepted at y = h and channel through the whole crystal are deflected by an angle lying within the interval θ_s(h) ± θ_L. The particles that enter having ∆h < 0 can experience either volume capture or volume reflection [20,21] in the crystal. The geometry analysis for these regimes is given in the SM. The particles that enter with ∆h > ∆h_max are neither accepted nor experience volume reflection, but undergo multiple scattering, which becomes closer to the scattering in an amorphous medium as ∆h increases.

Consider now a Gaussian beam, with width σ > 0 and divergence σ_φ > 0, that is incident on the crystal being centered at y = h. For a beam centered at h, most of its particles enter the crystal having transverse coordinates lying within the interval from h − σ to h + σ and the corresponding incident angles θ_e. Therefore, the distribution of deflected particles becomes a superposition of the different propagation scenarios discussed above. Below in the paper we demonstrate that it is important to know the values of σ and σ_φ, as well as of R_a, quite accurately in order to interpret results of the experiments on beam propagation through oriented qmBC crystals.

In what follows we focus on the analysis of the experiment at SLAC [8], although the physics discussed and the conclusions drawn are applicable to the other aforementioned experiments with oriented qmBC. In the experiment, a 60 µm thick Si(111) qmBC was exposed to a 6.3 GeV electron beam. To deduce the values of σ and σ_φ one can rely on the following description provided in the cited paper: (i) ". . . a beam width of < 150 µm (1σ) in the vertical and horizontal plane", and (ii) "The beam divergence was inferred . . . to be less than 10 µrad". The QM bending radius of the (111) planes was quoted as R_qm = 15 cm. It was mentioned that some measures had been taken "to reduce the anticlastic deformation", although the explicit value of R_a was not indicated. Indirectly, one can estimate R_a based on the data presented in [14]. This paper, cited in Ref. [8], discusses the QM bending of Si(211), i.e. it refers to a different geometry in which the (111) planes experience the anticlastic bending rather than the QM one. For this geometry the value R_a = 366 cm at the centre of the sample was measured. In our simulations we considered R_a as a parameter varied within the interval 100-300 cm. Using the aforementioned value of R_qm in (1) one finds θ_qm = 400 µrad. Fixing R_a and taking into account that for a 6.3 GeV electron Lindhard's critical angle is 80 µrad [8], one calculates h_0 and the maximum displacement ∆h_max.
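The geometry above reduces to a few lines of arithmetic. A minimal sketch with the numbers quoted for the SLAC experiment; the closed-form expressions are the reconstructed Eqs. (1)-(5) above, and R_a = 300 cm is one of the values scanned in the simulations, not a measured quantity:

```python
# Parameters quoted for the SLAC experiment [8]
L_crystal = 60e-6   # crystal thickness, m
R_qm      = 0.15    # quasi-mosaic bending radius, m
R_a       = 3.0     # anticlastic radius, m (assumed value, 300 cm)
theta_L   = 80e-6   # Lindhard critical angle for 6.3 GeV electrons, rad

theta_qm = L_crystal / R_qm       # QM bending angle, Eq. (1): 400 urad
h0       = R_a * theta_qm / 2.0   # offset with zero entrance angle, Eq. (3)
dh_max   = R_a * theta_L          # half-width of acceptance window, Eq. (4)

def theta_e(h):
    """Entrance angle between the beam and the tangent to the bent plane, Eq. (2)."""
    return (h0 - h) / R_a

def theta_s(h):
    """Deflection angle of a particle channeled from entrance offset h, Eq. (5)."""
    return theta_qm - theta_e(h)

print(f"theta_qm = {theta_qm*1e6:.0f} urad, h0 = {h0*1e6:.0f} um, "
      f"dh_max = {dh_max*1e6:.0f} um")
for h in (600e-6, 675e-6):        # beam-center offsets discussed in the text
    print(f"h = {h*1e6:.0f} um -> theta_s = {theta_s(h)*1e6:.0f} urad")
# -> 400 urad at h = h0 = 600 um; 425 urad at h = 675 um,
#    close to the measured peak position of ca 0.44 mrad
```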
Numerical modeling of channeling and related phenomena beyond the continuous-potential framework can be carried out by means of the multi-purpose software package MBN Explorer [24-26] and a supplementary special multitask software toolkit MBN Studio [27]. MBN Explorer was originally developed as a universal computer program to allow multiscale simulations of structure and dynamics of molecular systems. MBN Explorer simulates the motion of relativistic projectiles along with dynamical simulations of the crystalline environment [25]. The computation accounts for the interaction of projectiles with separate atoms of the environment, whereas a variety of implemented interatomic potentials supports rigorous simulations of various media. An overview of the results on channeling and radiation of charged particles in linear, bent and periodically bent crystals simulated by means of MBN Explorer can be found in [1,2,6,26]. To model propagation of particles through qmBCs, further development of the algorithm for the atomistic simulations of the crystalline media has been performed in this work. The implemented algorithm enabled simulations of a qmBC defined through a transformation of the unperturbed crystalline medium by three curvatures (primary, anticlastic and QM), positioning of the qmBC with respect to the beam direction, and the relativistic molecular dynamics in such an environment. The results reported below have been obtained by means of this newly implemented algorithm.

FIG. 2. Open circles with error bars stand for the experimental data [8]. Both dependences are normalized to unit area. Diamonds represent the DYNECHARM++ [28] simulations as they are shown in figure 3 in [8].

The main outcome of the numerical analysis carried out in this Letter in connection with the SLAC experiment is shown in Figure 2, which compares the current simulations with the experimentally measured intensity of the deflected electron beam as well as with the result of the DYNECHARM++ simulations. The latter intensities were obtained by digitizing the data, which are presented in arbitrary units in Fig. 3 in [8], followed by subtraction of the background (ca 1.4 a.u.). The resulting experimental values were rescaled to provide unit area within the interval −0.3 … 0.55 mrad of the deflection angle. The ratio experiment-to-DYNECHARM++ was kept as in the original figure. The simulated and measured angular distributions have the characteristic pattern of two well-pronounced peaks interlinked by an intermediate region. The left peak in the vicinity of θ_s = 0 describes a fraction of particles propagating through the qmBC in the forward direction. These particles experience multiple scattering, resulting in broadening of the initial distribution of the beam particles. A small shift of the peak towards negative angles is due to the volume reflection of the particles from the bent planes. As discussed in the SM, this effect becomes more pronounced at entrance points within the region −h_0 < h < h_0. The right peak is formed by the particles accepted into the channeling regime at the entrance and deflected to the angle θ_s(h) according to Eq. (5). Our simulations have shown that the position of the channeling peak is determined by the value of h corresponding to the beam center at the entrance point, and the width of the peak is determined by the distribution of θ_e(h) for the particles of the beam and by Lindhard's angle.
The peak is also influenced by the dechanneling process, which is responsible for the formation of the distribution of the deflected particles in the region between the two peaks. As mentioned, the angular distribution is very sensitive to the choice of the beam size σ, bending radius R_a and the entrance coordinate h. The current simulations presented in Fig. 2 correspond to a particular set of these parameters: σ = 75 µm, R_a = 300 cm and h = 675 µm. It has been established that these values provide close agreement with the experimentally measured distribution. We note that in Ref. [8] the exact value of σ has been specified, whereas the values of R_a and h, as well as their impact on the profile of the distribution, have not been mentioned at all. The same refers to the results of the DYNECHARM++ simulations. Figures 3 and 4 illustrate the impact of variation of σ, R_a and h on the angular distribution. The symbols with error bars stand for the experimental data obtained as described above.

FIG. 3. Intensity (normalized to unit area) of the deflected beam. Left panel: R_a = 100, 200, 300 cm with h = h_0 = 200, 400, 600 µm, respectively; right panel: R_a = 100, 200, 300 cm with h = 240, 480, 720 µm, respectively.

Figure 3 shows the distribution for a beam with σ = 150 µm incident on the crystal bent with different anticlastic radii as indicated. In the left panel, each simulation refers to the beam centered at h = h_0, and thus most of the accepted particles are deflected by the angle θ_qm, resulting in the channeling peak centered at about 0.40 mrad, which is less than in the experiment (ca 0.44 mrad). The peak intensity increases with R_a, in accordance with the geometrical arguments discussed above. Indeed, for R_a = 100 cm the maximum displacement ∆h_max = 80 µm is nearly two times less than σ, resulting in a small fraction of accepted particles. Since ∆h_max ∝ R_a (see Eq. (4) and Fig. S2 in SM), for R_a = 300 cm the value of ∆h_max exceeds σ, leading to higher intensity. The qmBC geometry also provides a qualitative explanation of the changes occurring to the left peak. For the smallest radius, the inequality ∆h_max < σ suggests that a large number of particles enters the crystal having the transverse coordinate either (i) larger than h_0 + ∆h_max, or (ii) lower than h_0 − ∆h_max. The former particles contribute mainly to the amorphous-like distribution, whereas the latter ones can undergo volume reflection, giving rise to the intensity at θ_s < 0. As R_a increases, the numbers of particles of both types decrease, making the peak narrower and less intense.

Aiming at bringing the channeling peak position closer to the measured one, another run of simulations has been performed with the same values of σ and R_a but a different set of initial coordinates of the beam center. The distributions shown in Fig. 3 right refer to h > h_0 that correspond to θ_s = 0.44 mrad for each R_a indicated. It is seen that although the channeling peaks are shifted to the right, they simultaneously lose intensity. Apart from this, the left peaks become more powerful, being centered at θ_s = 0 due to the increase in the number of particles moving in the forward direction at the expense of the volume-reflected ones. All these modifications can be explained in terms of the qmBC geometry. The two panels in Fig. 4 correspond to two sets of R_a and σ. In each panel, the simulations have been performed for different values of the beam center h at the entrance. Vertical lines in Fig.
The two panels in Fig. 4 correspond to two sets of Ra and σ. In each panel, the simulations have been performed for different values of the beam center h at the entrance. Vertical lines in Fig. S1 in the SM allow one to compare the indicated h values with the boundaries h0 and h0 + ∆hmax. The left panel presents a case study in which ∆hmax = 160 µm is comparable to the beam size, so that for any entrance point within [h0, h0 + ∆hmax] a large fraction of the particles is not accepted, resulting in a noticeable decrease of the right peak. The curve with h = 400 µm corresponds to the case h = h0, when half of the beam enters the crystal with ∆h < 0. In this domain volume reflection can occur, shifting the main maximum towards negative angles. As h increases, the numbers of both channeled and volume-reflected particles decrease, leading to a shift of both maxima to the right as well as to a change in their heights. At h = 600 µm, which corresponds to ∆h > ∆hmax, most of the beam particles do not comply with the channeling condition but experience multiple scattering as in an amorphous medium. As a result, the main peak becomes more intense, being centered at θs = 0. To increase the channeling fraction one can rely on a larger value of the anticlastic radius and on a narrower beam. For Ra = 300 cm (Fig. 4, right), the quantities h0 and ∆hmax are 600 and 240 µm, respectively. The latter value, together with the reduced beam size (σ = 75 µm), suggests that a much bigger fraction of the particles can be accepted provided the condition 0 < ∆h < ∆hmax − σ is met. The best agreement with the experiment has been found for h = 675 µm (open circles). This dependence is shown in Fig. 2 in the form of a histogram.

The quantitative analysis of the angular distribution of ultrarelativistic electrons deflected by oriented qmBCs presented in our paper demonstrates good agreement with the experimental data reported in [8]. It has been achieved by accounting for (i) the specific geometry of such crystals and their orientation with respect to the projectile beam, and (ii) the realistic beam size and divergence. Remaining discrepancies can be attributed to the uncertainty in the concrete values of the beam characteristics and of the entrance coordinate h of the beam center, as well as to effects not included in the current simulations (e.g., quantum effects in multiple scattering in crystals [29]). It is highly desirable that such information be provided when presenting experimental data, since it allows for independent and unambiguous theoretical and computational validation. Another important issue concerns the accurate measurement and computational analysis of the characteristics of the radiation that accompanies the passage of ultra-relativistic projectiles through oriented crystals. Such knowledge is essential for better planning of accelerator-based experiments and for the full interpretation of their results.

The work was supported in part by the DFG Grant (Project No. 413220201) and by the H2020 RISE-NLIGHT project (GA 872196). We acknowledge helpful discussions with Andrea Mazzolari, Vincenzo Guidi, Hartmut Backe and Werner Lauth. The Frankfurt Center for Scientific Computing (CSC) is acknowledged for providing computer facilities.

[Figure 4 (left panel) legend: intensity (normalized to S = 1); experiment; h = 400, 480, 600 µm; Ra = 200 cm, σ = 150 µm.]
2021-10-26T01:17:02.173Z
2021-10-25T00:00:00.000
{ "year": 2021, "sha1": "b7c837ea7d996e05cc05ff3e420f02572f2c6a7c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2110.12959", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b7c837ea7d996e05cc05ff3e420f02572f2c6a7c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235641501
pes2o/s2orc
v3-fos-license
Metrology applications using off-axis digital holography microscopy

Off-axis digital holography microscopy (DHM) systems have evolved during the last two decades from research to commercial instrumentation. They are used in many research laboratories and production facilities as metrology instruments in a large variety of applications including dimensional, surface topography, birefringence, oxide pattern thickness, and vibration characterization. The unique non-scanning, quasi-instantaneous acquisition specificity of DHM opens new 4D metrology possibilities for observation of non-static scenes, operation in noisy environments, high-throughput screening, and for providing fast feedback during manufacturing processes using artificial intelligence for decision making. These aspects are discussed and illustrated in this paper with the presentation of several applications to technical samples.

First scientific publications and evolution of off-axis digital holography microscopy (DHM)

The first simultaneous reconstructions of both quantitative phase and intensity maps out of a single hologram acquired in off-axis configuration were demonstrated in two seminal publications by Cuche et al in 1999 [1,2]. The principle of this full-field imaging method is shown for a lens-less optical setup, as well as for reflection and transmission microscopy. The authors foresaw, in the conclusions of these papers, many commercial applications in both bio-imaging and material sciences. More than 20 years later, the importance of DHM is recognized in particular by dedicated sessions at several major international scientific conferences and by an important increase in the number of scientific publications. A search on Google Scholar for the exact expression 'digital holographic microscopy' lists more than 1000 publications in 2020 against only 11 in 1999. The present paper focuses on applications. It reports a few of the latest cutting-edge metrology uses of commercial DHM systems, implemented at industrial and academic laboratories for R&D purposes, and in production facilities for quality control. It focuses on technical objects and does not include applications to biological samples or tomographic setups [3][4][5][6][7][8][9], as most of their applications are demonstrated on bio-samples as well. It concentrates on concrete applications fitting and underlining DHM's unique specificities. It does not discuss new off-axis DHM approaches [10][11][12][13][14][15][16], or the reconstruction methods and the diversity of setups; these are covered by review papers, for instance [17][18][19][20].

DHM metrology specificities

In the field of material sciences, DHM systems are essentially three-dimensional (3D) optical profilometers. For measurement of static and quasi-still objects at the scale of a few seconds, they are in direct competition with other 3D optical profilometer technologies. In particular, they share a large common range of measurable samples with scanning white light interferometers (SWLI) and confocal microscopes [21]. These latter systems were already mature and well established on the market at the time of the early DHM developments, making the commercial spread of the latter more difficult. They are used mostly for shape and surface topography (roughness) characterization of static samples and scenes.
Nevertheless, DHM has made its way to the market by exploiting one of its essential differentiators with respect to these alternative systems: information is grabbed quasi-instantaneously, with a single camera frame. It does not require any scanning, whereas alternative techniques require a lateral, a vertical, and/or a phase scanning mechanism. A first advantage of this specificity is that DHM compares sample heights with a precise and perfectly stabilized wavelength, rather than with a scanning distance, minimizing the sources of measurement non-linearities and calibration drifts. A full metrological evaluation of DHM is out of the scope of this paper, but several metrology evaluation elements are provided in section 3. A second advantage is that multiple pieces of information can be multiplexed in a single hologram, again without scanning. Indeed, DHM makes it possible to record not only single-wavelength information in a hologram, but also simultaneous information at several wavelengths [17], or at several polarizations [22][23][24][25]. Section 3 shows how multiplexing information at several wavelengths enlarges the absolute measurable height range to dimensions larger than a single wavelength. In section 4, multiple-wavelength information is exploited for transforming phase and intensity maps into geometrical maps. Indeed, if this transformation is straightforward in the case of homogeneous and reflective samples, it is not the case for measurements of non-homogeneous samples, or of transparent thin dielectric structured layers deposited on a reflective substrate. Multiplexing several polarization recordings provides characterization of material birefringence, which can be expected by design, inherent to the material, or induced by stress. This will be illustrated in section 5 with meta-surface characterization by DHM. The third and perhaps most recognized advantage of the absence of a scanning mechanism is that with DHM, holograms are acquired, and therefore 3D topographies are measured, at camera frame rate. Moreover, only a very short time duration is necessary to acquire information, enabling quasi-instantaneous measurements. This time duration is equal either to the camera exposure time, or to the sample illumination duration when using a pulsed light source. As in classical photography, this avoids or reduces image blurring by 'freezing' the object (or photographer) movements during the image acquisition time. Dynamical 3D measurement will be discussed in section 6. These last two specificities, i.e. quasi-instantaneous acquisition and camera-rate acquisition, have opened a whole new range of applications not possible using alternative 3D profilometer technologies. As developed at the end of this section, they are indeed necessary in many situations intrinsic either to the sample, to the experimental configuration, or to the environmental conditions in which the sample is placed. Samples cannot always be stopped to be measured. This is the case, for instance, for on-line quality control, for very fast scanning of large areas, and for dynamical tribology [30][31][32][33]. Measurements then need to be captured 'on the fly', while the sample is in motion. To avoid the related blurring, image distortion, or any loss of resolution, the measurement needs to be quasi-instantaneous. In particular, the displacement of the sample during the measurement time needs to be smaller than the lateral resolution.
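The 'displacement smaller than the lateral resolution' criterion directly bounds the sample speed that a given acquisition time can tolerate; a tiny sketch with hypothetical numbers:

```python
def max_speed_m_per_s(lateral_resolution_um: float, acquisition_time_s: float) -> float:
    """Upper bound on the sample speed such that the displacement during
    one acquisition stays below the lateral resolution (the criterion
    stated in the text; the example values below are hypothetical)."""
    return lateral_resolution_um * 1e-6 / acquisition_time_s

# E.g., ~1 um lateral resolution and a 100 us acquisition time:
print(max_speed_m_per_s(1.0, 100e-6))   # -> 0.01 m/s, i.e. 1 cm/s
```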
Mechanical vibrations are unavoidable in many measurement environments, especially in clean rooms and manufacturing facilities. The other main source of environmental disturbance is air turbulence produced by temperature gradients, which results in local inhomogeneities of the air refractive index and hence in measurement distortions. These are particularly relevant when measuring at cryogenic temperature [49], when measuring material phase transitions at high temperature, or within a heated tribometer [30]. With DHM, the potential blurring effect of both contributions is minimized as long as the acquisition time is short compared to the disturbance time scale. Moreover, in the case of turbulence, averaging over a time sequence of acquisitions minimizes or even suppresses measurement distortions. These three aspects are illustrated in section 6 with several 4D metrology (3D + time) DHM application examples. The last section of this paper concerns the applications of measurement and analysis of vibrations, which are essential for the characterization of MEMS, crystals, and many micro-devices [38][39][40][41][42][43][44][45][46][47][48]. In this field, DHM competes with laser Doppler vibrometers (LDV) [50]. As in the comparison with alternative 3D optical profilometers, LDV is a scanning technology: it does not measure surface topography, but the vibration velocity at a single spot on the surface of a sample. Displacements are retrieved from the velocity measurement by time integration. The full surface is then characterized by scanning the sample in both lateral directions. DHM measures time sequences of 3D topographies and precisely extracts displacement (or vibration) maps from them. Indeed, each successive acquisition measures simultaneously over the full field of view, providing typically a million data points at each time point. Displacement velocities can then be calculated by time differentiation, preventing the measurement from suffering the drift linked with an integration procedure. This unrivaled wealth of information reveals the MEMS response to any excitation signal very quickly and efficiently, with unprecedented detail.

Basis of DHM metrology

In the nineteenth century, James Clerk Maxwell was one of the first to suggest using the wavelength as a natural gauge for length. In 1960, the metre in the International System of Units (SI) was redefined with reference to the wavelength of a krypton-86 emission line (605.78 nm). The use of a precise wavelength as a reference for the measurement of lengths is well established as the ideal measurement method. As a non-scanning technology, DHM refers purely to wavelengths for height measurements. By using ultra-stable interferometric filters to select a precise wavelength band of a relatively broad-spectrum laser source, DHM operating wavelengths are precisely controlled and perfectly stable compared to other sources of noise. The measured height values do not depend on any scanning calibration, precise positioning, absence of long-term drift, repeatability of an interferometric piezo-controller, or any motorized displacement. Indeed, it has been demonstrated in [51] that the height measurement precision is only limited by the signal-to-noise ratio (SNR) of the hologram. The latter depends on the camera specifications, as well as on the transmittance or reflectance properties of the sample, which affect the exposure time and consequently the acquisition SNR.
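For a homogeneous reflective sample, the conversion from the reconstructed phase map to surface height in a reflection configuration follows the standard double-pass relation h = λφ/(4π), with an unambiguous range of λ/2 at a single wavelength. A minimal sketch (the relation is textbook interferometry, not a vendor-specific algorithm):

```python
import numpy as np

def phase_to_height_nm(phase_rad: np.ndarray, wavelength_nm: float) -> np.ndarray:
    """Reflection DHM: the probe beam travels the height step twice,
    so h = lambda * phi / (4 * pi); unambiguous range lambda / 2."""
    return wavelength_nm * phase_rad / (4.0 * np.pi)

# Sanity check with the 179 nm certified step discussed below, at 666 nm:
phi = 4.0 * np.pi * 179.0 / 666.0                    # expected phase difference (rad)
print(phase_to_height_nm(np.array([phi]), 666.0))    # -> [179.]
```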
Single wavelength measurements

To illustrate the high repeatability of DHM, figure 1 shows the measurement of a VLSI Model SHS-1800 QC standard. The certified mean step height is 179 nm and its expanded uncertainty is 2 nm. Using a DHM R2200 by Lyncée Tec SA [52], operating in single-wavelength mode at 666 nm and equipped with a 2× microscope objective, the mean step height and its uncertainty are calculated following metrology practices [21]. A sequence of 50 holograms is acquired, providing 50 topography maps. The mean step height is evaluated individually for each acquisition with relation (1), i.e. as the difference between the mean height over the step area and the average of the mean heights over the two reference areas, where area1, area2, and area3 are drawn in figure 1. The mean step height is obtained by averaging the 50 measured heights. The measurement precision is determined by calculating the standard deviation of the 50 mean height measurements. The measured step height is 179.15 ± 0.03 nm. It lies well within the range of the step certification and shows the high accuracy of DHM.

[Figure 3 caption: Topographic measurement of a VLSI certified step (4.463 ± 0.059 µm) using the DHM R2200 [52] operating at three wavelengths (λ1 = 666 nm, λ2 = 794 nm, λ3 = 675 nm), equipped with a 2× microscope objective. Grey levels encode surface height. The step height and repeatability evaluated from the three colored areas using equation (1) and two sequences of 50 dual-wavelength holograms are 4473.73 ± 0.05 nm.]

Multiple wavelength measurements

The unambiguous vertical measurement range of single-wavelength DHM is limited when an unwrapping procedure cannot be applied. Combining several wavelengths creates synthetic wavelengths (a long beating period), which increases this unambiguous measurement range [17,53]. In the latter reference, measurements from a dual-wavelength DHM R2100 [52] operating at 680 nm and 760 nm are compared with SWLI measurements, showing the similarity of both measurements in terms of value, as shown in figure 2. The difference between the two results lies mainly in the fact that the DHM data were acquired almost instantaneously, while the SWLI data required a scan. Simply using a synthetic wavelength provides a larger vertical measurement range, but consequently decreases the measurement accuracy. Nevertheless, by properly combining the information at different wavelengths, a multiple-wavelength measurement has the same vertical resolution as when operating at a single wavelength [54]. The procedure can be generalized to a larger number of wavelengths. For instance, on the DHM R2200 [47] used in figure 3, the combination of the information at three wavelengths (λ1 = 666 nm, λ2 = 794 nm, λ3 = 675 nm) allows the computation of phase images at two synthetic wavelengths Λ1 and Λ2, where each pair of single wavelengths λa and λb beats at Λ = λaλb/|λa − λb|. Using the procedure defined in [54], the topography within the range corresponding to the largest synthetic wavelength (Λ1) is computed with a resolution similar to a single-wavelength measurement, so large vertical steps can be measured by DHM with very high precision. The information at more than two wavelengths can be acquired in the same hologram, but in practice, for many sample geometries, cross-talk between the information at different wavelengths produces artifacts. Therefore, to the detriment of quasi-instantaneous acquisition, it is often preferable to limit the multiplexing to two wavelengths in a single hologram. Measurements acquired at three (or four) wavelengths then necessitate two holograms.
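The synthetic-wavelength construction can be made concrete with the three wavelengths quoted above. For a pair λa, λb the beat wavelength is Λ = λaλb/|λa − λb|; which pair corresponds to Λ1 or Λ2 is our reading of the text, so treat the labels as illustrative:

```python
def synthetic_wavelength_nm(lam_a: float, lam_b: float) -> float:
    """Beat (synthetic) wavelength of two single wavelengths:
    Lambda = lam_a * lam_b / |lam_a - lam_b|.  The unambiguous height
    range grows accordingly (Lambda/2 in reflection)."""
    return lam_a * lam_b / abs(lam_a - lam_b)

l1, l2, l3 = 666.0, 794.0, 675.0              # wavelengths quoted for the DHM R2200
print(synthetic_wavelength_nm(l1, l2) / 1e3)  # ~4.1 um
print(synthetic_wavelength_nm(l1, l3) / 1e3)  # ~50 um, comfortably covering a ~4.5 um step
```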
Figure 3 presents the measurement of a VLSI Model SHS-4.5 QC standard, with a certified mean step height of 4.469 µm and an expanded uncertainty of 0.059 µm. As for the single-wavelength example above, the measurement precision is evaluated from the acquisition of 50 measurements. In this case, two sequences of 50 holograms are necessary, the first using λ1 and λ2, and the second using λ1 and λ3. They are acquired using a DHM R2200 equipped with a 2× microscope objective. The measured mean height and standard deviation determined using equation (1) are 4473.73 ± 0.05 nm. The measurement precision on this ≈4.47 µm step is similar to that obtained on the 179 nm step presented in figure 1. It illustrates that multiple-wavelength measurements have a resolution similar to a single-wavelength measurement, but for a much higher step. Here again, the accuracy of the measurement is well within the standard certification values. The standard deviation is very small compared to the step height. Achieving such high resolution on a relatively large step is unique in 3D optical profilometry, as a long scanning range is generally synonymous with larger uncertainty. DHM keeps an interferometric resolution over large ranges.

Measurements with vertical range larger than the depth of focus

As the vertical range increases, the sample height often exceeds the objective depth of focus. DHM is not an infinite-focus technology in the same sense as confocal and SWLI technologies, which slice optical sections of sharp focus. Instead, as DHM captures the complex wavefront, the wave field can be propagated numerically, and the focus can be made sharp at any location on the surface of the sample, providing an infinite focus, or extended depth of focus (EDOF) [55][56][57][58][59]. An example of a micro-lens sample measurement using a DHM R1000 [52] operating at λ = 682.5 nm, mounted with a 50×, 0.75 NA objective, is shown in figure 4. The data are validated by a comparison with AFM measurements. When applying EDOF, both are in very good agreement.
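The numerical refocusing on which EDOF relies is commonly implemented with the angular-spectrum method: the captured complex wavefront is propagated by an arbitrary distance in the Fourier domain. The sketch below is a generic textbook implementation with hypothetical sampling parameters, not the algorithm of a specific instrument.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a complex wavefront by dz (same length unit as the
    wavelength and the pixel pitch dx) using the angular-spectrum method.
    Evanescent components are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                   # spatial frequencies
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.zeros((ny, nx), dtype=complex)           # transfer function
    prop = arg > 0.0                                # propagating components
    H[prop] = np.exp(1j * 2.0 * np.pi / wavelength * np.sqrt(arg[prop]) * dz)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Refocus a placeholder 512x512 wavefront by +20 um (lengths in um):
u0 = np.ones((512, 512), dtype=complex)
u1 = angular_spectrum_propagate(u0, 0.6825, 0.2, 20.0)
```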
Thin transparent layers metrology

Hologram reconstruction provides phase and intensity maps. Metrology generally expects geometrical information on the surface topography. For homogeneous samples, the conversion of a phase map into a topography is straightforward. But when the sample material is not unique, or not homogeneous, the transformation from optical to geometrical information is no longer straightforward. In the case of layers of semi-transparent materials and reflective layers, often encountered in micro-technology, multiple reflections occur at the different interfaces. Nevertheless, the resulting intensity and phase maps depend on a limited number of parameters: the refractive indices and thicknesses of the different layers. Using several wavelengths, the problem is similar to spectral reflectometry, where the reflected amplitude versus wavelength spectrum is exploited to determine film thicknesses by fitting procedures. Usually, the measurement is performed on a laser spot, whose size defines the lateral resolution. It can be scanned laterally point by point to provide a 2D map. A reflectometry measurement using the same basic principle can also be applied in DHM by measuring reflected wavefronts (both amplitude and phase) at different wavelengths in order to evaluate the thicknesses of the different layers.

The principle is demonstrated in [60,61] for SiO2 patterns deposited on a Si wafer, for SIMS characterization with three layers (Au-SiO2-Si), and for thin water surfaces. Figure 5 shows, for the first two cases, that measurements using a mechanical profilometer and a DHM R2200 [52] are identical. Such a comparison is not possible for the water example, as the nature of the sample does not allow the use of a mechanical method.

Polarization metrology

Liquid crystal displays, optical telecommunication devices, MOEMS, and photonic crystals are a few examples of samples for which birefringence is a central and key property that needs to be characterized. Birefringence analysis is also used for force and stress analysis during manufacturing. It can be measured using DHM, both in transmission and reflection configurations [22][23][24][25]. DHM has been used in particular for the characterization of meta-surfaces [62,63]. In figure 6, the polarization of the reference arm of a DHM R1000 with a 100× magnification is adjusted to left-handed circular polarization to interfere only with the wavefront diffused by the meta-surface. Indeed, a meta-surface, or meta-device, is a substrate structured with subwavelength-scale patterns in the horizontal dimensions, which modulate the behavior of electromagnetic waves in 3D space. As many of these devices are active, or as the processes investigated happen quickly, the ability of DHM to record information at several polarizations instantaneously is essential for birefringence measurements.

4D metrology

The three 4D metrology examples of this section illustrate the need for in-situ, controlled-environment, and real-time measurements.

In-situ and real-time chemical etching

In-situ monitoring and control of etching processes is an important need in micro- and nano-structuring of materials and thin films. Conventional methods are mainly laser end-point detection and optical spectrometers. Both of them monitor the thickness or composition of etched layers and materials, but neither provides in-situ, real-time 3D topography measurement of the etching process with sub-micron lateral resolution. In figure 7, a metallic sample is coated with a polymer resist patterned with two trenches of different widths. It is placed in a liquid electrolyte and a current is applied to perform electrochemical etching. The measurements are performed with a DHM R-2200 [52] through the transparent window of the etching chamber during the process. The objective has a magnification of 20× and a working distance of 10.8 mm. Both etching depth and surface roughness are monitored with interferometric resolution, in real time and in situ, by DHM, without the need to stop the process and take the sample out of the etching chamber [29]. Although the DHM used for this application has a camera operating at 195 frames per second, the time scale of this application is relatively slow; however, the presence of bubbles and material gradients associated with the etching process requires a short acquisition time of 100 µs.

Ball-on-disk vacuum tribometer with real-time and in-situ measurement of the wear track by digital holographic microscopy

Ball-on-disk tribometers are test instruments designed for precise and repeatable wear testing. Continuous monitoring of the wear track is essential to detect when important events, such as material removal, happen. At their core, both this application and the etching monitoring above reduce to tracking statistics over regions of interest in a time sequence of height maps, as sketched below.
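A minimal sketch of such ROI tracking over a time sequence of height maps (the ROI coordinates, map shapes, and synthetic frames are hypothetical; a real pipeline would add drift correction and masking of bubbles or debris):

```python
import numpy as np

def track_roi(height_maps, roi, reference_roi):
    """Report, per frame, the mean depth of a region of interest relative
    to a reference region, plus its RMS roughness.
    height_maps: iterable of 2D arrays (heights, e.g. in nm);
    roi, reference_roi: (row_slice, col_slice) tuples."""
    depths, roughness = [], []
    for z in height_maps:
        z_roi, z_ref = z[roi], z[reference_roi]
        depths.append(float(np.mean(z_ref) - np.mean(z_roi)))  # etch/wear depth
        roughness.append(float(np.std(z_roi)))                 # RMS roughness
    return depths, roughness

# Synthetic frames: a trench deepening by 5 nm per frame
frames = [np.zeros((100, 100)) for _ in range(3)]
for k, z in enumerate(frames):
    z[40:60, :] = -5.0 * (k + 1)
d, r = track_roi(frames, (slice(40, 60), slice(None)), (slice(0, 20), slice(None)))
print(d)   # [5.0, 10.0, 15.0]
```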
DHM has been combined with a ball-on-disk vacuum tribometer and enables real-time and in-situ measurements of the evolution of the wear track under various temperatures (up to 800 °C), vacuum, and atmospheric conditions (figure 8). It measures at 34 frames per second, with an acquisition time of 100 µs. The system can operate up to a linear speed of 10 cm s−1 with a 40× magnification objective. This objective has a working distance of 30 mm, providing enough clearance to perform the measurement through an optical port. The port is coated to reflect IR, to prevent any damage to the objective when the sample is heated to 800 °C. The system was tested and validated by correlating the wear track measured by the DHM with characterizations using SEM and confocal microscopes [30,31]. It solves the problem of taking the sample out of the chamber and replacing it in the exact same position relative to the ball-on-disk.

Surface topography measurements simultaneous with laser texturing

Interference lithography enables complex surface structuring of azobenzene-containing films for the creation of surface relief patterns with varying heights. To understand and control their formation dynamics and response to different types of light fields, a lithography setup has been combined with a DHM-R2100 [10,47]. It enables real-time, in-situ observation and control of surface-relief grating formation on azobenzene-containing films, as shown in figure 9. The DHM measurements, performed at 195 frames per second, with an acquisition time of 0.5 ms and a 20× objective with a 3 mm working distance, have been validated using an atomic force microscope. The applications of this section illustrate how real-time topography measurements can be exploited to give quasi-instantaneous feedback to the control system of a process, be it etching, mechanical ablation, laser structuring, polishing, or many other micro- or nano-manufacturing processes. This may include artificial intelligence for taking decisions and adjusting the process parameters, such as time duration, laser beam intensity and shape, or the intensity of an electrical or mechanical action, among others. Despite the evolution of technology, cameras have, and will always have, a limited maximum acquisition frame rate, especially when an imaging resolution of at least one million pixels is expected. A second limitation lies in the exposure time, which necessarily decreases as the frame rate increases. To preserve the same amount of light collected at each acquisition independently of the acquisition frequency, the illumination intensity must be increased in inverse proportion to the camera shutter duration. Eventually, the laser power cannot be increased above a given threshold without risking sample damage. When the movement of a sample can be repeated identically over time, the pulsed-laser stroboscopic synchronization technique provides a solution to these technical limitations. This approach enables the use of standard, non-high-speed cameras with higher imaging quality than high-speed sensors, and of low-power lasers, compatible with reasonably priced tabletop DHM systems. The acquisition principle is shown in figure 10. The excitation signal represented in this example is a so-called burst, composed of two periods of a sine wave followed by a constant-voltage period. This signal is repeated over time.
Laser pulse trains are precisely synchronized with this signal, with well-controlled delays applied for the successive hologram acquisitions. The number of samples per excitation period can in this way be precisely controlled. There is no need for a high-speed camera, and the integration of multiple laser pulses for each sample ensures optimal hologram illumination without the need for a high-end, powerful laser source that might harm samples. The output of a stroboscopic acquisition scheme is a time sequence of 3D topographies that can be exploited as is, or further processed to investigate time and frequency responses. Traditional Bode plots and Fourier transforms were used by the author of this paper in [47] to analyze the response (displacements) of an individual small area of the surface of the analyzed samples. This approach is generalized in this section by replacing individual-area analysis by the calculation of full-field amplitude and phase maps. These representations are interesting because they are also the ones used by the finite element simulation programs used to predict the properties of MEMS. Phase and amplitude vibration maps are extracted from the time sequences of topographies. The amplitude map displays, for each pixel, the difference between the minimum and the maximum of the vibration, and the phase map displays the phase difference between the excitation signal and the response of the device. This is illustrated in figure 11 for a micro-mirror excited by a sine waveform at 19 and 491 kHz. According to the Nyquist-Shannon theorem, and assuming sinusoidal vibrations of the sample, three samples per period are sufficient to extract the vibration maps in the example of figure 11. The vibration time sequence can be restored from the two vibration maps, i.e. the phase and amplitude of the vibration for each pixel. With this approach and representation, vibrations are characterized with an optimal number of recordings and a minimal need for data storage. For investigating the frequency response of a microsystem, a first solution consists of exciting it with a sine waveform, sampling a full excitation period, and sweeping the excitation frequency step by step over the relevant frequency range. The frequency resolution is given by the sweeping step. An alternative to this approach is to excite the system with a waveform encompassing multiple frequencies (chirp, burst, transient), and to perform a discrete Fourier transform (DFT) of the time sequence to retrieve the microsystem response in the frequency domain. The frequency resolution depends on the number of samples per period and on the maximum sampling frequency [47]. This second solution is illustrated in figure 12 for an ultrasonic transducer with a membrane diameter of 120 µm, measured with a DHM R2100 [52] and a 20× water-immersion objective with a working distance of 3.5 mm. It generalizes the conventional single-point frequency analysis into mega-pixel vibration amplitude maps calculated for each DFT bin/channel. The applications presented in this section show that off-axis DHM provides a very efficient analysis both in the time and in the frequency domain. Time sequences of topographies are measured at precise phases of the MEMS excitation signal, and enable the calculation of amplitude and phase vibration maps. Data are determined over the full field of view, providing mega-pixel digital resolution and diffraction-limited optical resolution.
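The per-pixel extraction of amplitude and phase vibration maps from a stroboscopic time sequence can be sketched with a DFT along the time axis. Note that the routine below returns the peak amplitude; the 'maximum minus minimum' map described above is twice this value for a sinusoid. The stack dimensions, rates, and synthetic test are hypothetical.

```python
import numpy as np

def vibration_maps(topo_stack, excitation_freq, sample_rate):
    """topo_stack: (n_samples, ny, nx) heights sampled at equally spaced
    phases of the excitation. Returns (amplitude, phase) maps at the DFT
    bin closest to excitation_freq."""
    n = topo_stack.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    spec = np.fft.rfft(topo_stack, axis=0)               # DFT along time
    k = int(np.argmin(np.abs(freqs - excitation_freq)))  # bin of interest
    amplitude = 2.0 * np.abs(spec[k]) / n                # peak amplitude per pixel
    phase = np.angle(spec[k])                            # phase vs. excitation
    return amplitude, phase

# Synthetic check: 8 samples per period of a 19 kHz, 3 nm vibration
fs, f0, n = 8 * 19e3, 19e3, 8
t = np.arange(n) / fs
stack = 3.0 * np.sin(2 * np.pi * f0 * t)[:, None, None] * np.ones((n, 4, 4))
amp, ph = vibration_maps(stack, f0, fs)
print(amp[0, 0])   # ~3.0
```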
Such specifications enable the investigation of complex structures and resonant modes that are difficult to address using an LDV scanning system measuring a limited number of spots over a grid.

Conclusions

Off-axis DHM technology is no longer a research topic in itself. The latest developments lie mainly in technological improvements of electro-optical components, such as faster and higher-quality cameras. DHM has been consolidated into commercial systems, which are used daily for research in academic and industrial laboratories and for quality control in production environments. Their functionalities cover a large range of metrology applications, including dimensioning, calibration, polarization, and semi-transparent layer investigations. With their 4D ability, they provide new insight into many phenomena previously investigated only by performing end-point measurements. The evolution of off-axis DHM now lies in the new metrological applications developed by DHM users. The latest trends include fast interpretation of measurements to provide feedback to manufacturing and material processing, involving artificial intelligence to control manufacturing and processing tools, machine learning for decision making, and the integration of complementary measurement modalities.

Data availability statement

The data that support the findings of this study are available upon reasonable request from the authors.
2021-06-26T20:02:10.941Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "3591e6ac9ced1995d27c703fb252cb091391b976", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2515-7647/ac0957", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "3591e6ac9ced1995d27c703fb252cb091391b976", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
14849251
pes2o/s2orc
v3-fos-license
Subcutaneous Immunotherapy Improves the Symptomatology of Allergic Rhinitis

Introduction The relevance of allergic rhinitis is unquestionable. This condition affects people's quality of life, and its incidence has increased over recent years. Objective Thus, this study aims to analyze the effectiveness of subcutaneous injectable immunotherapy in cases of nasal itching, sneezing, rhinorrhea, and nasal congestion in allergic rhinitis patients. Methods In the present study, the same researcher analyzed the records of 281 patients. Furthermore, the researchers identified allergens through puncture cutaneous tests using standardized extracts containing acari, fungi, pet hair, flower pollen, and feathers. Then, the patients underwent treatment with subcutaneous specific immunotherapy, using four vaccine vials for desensitization, associated with environmental hygiene. The authors analyzed nasal itching, sneezing, rhinorrhea, and nasal congestion throughout the treatment, and assigned each a score ranging from zero (0), meaning absence of the symptom, to three (3), for severe cases. The symptoms were statistically compared at the beginning, during, and after treatment. Results In this study, the authors analyzed the case distribution according to age and the evolution of symptomatology according to the scores, comparing all phases of treatment. The average score for the entire population studied was 2.08 before treatment and 0.44 at the end. These results represent an overall improvement of ∼79% in the symptomatology of allergic rhinitis in the studied population. Conclusion Subcutaneous immunotherapy as a treatment for allergic rhinitis led to a reduction in all symptoms studied, improving the quality of life of patients and proving itself an important therapeutic tool for this pathological condition.

The Allergic Rhinitis and its Impact on Asthma (ARIA) project classifies allergic rhinitis as a risk factor for the development of asthma, alerting to its impact on quality of life and its high social costs. [5][6][7] The ARIA project also proposed a new classification of allergic rhinitis severity, replacing the terms perennial and seasonal rhinitis with mild, moderate, or severe intensity, persistent or intermittent. In the United States, it is estimated that 30 million people suffer from allergic rhinitis, causing high absenteeism, which corresponds to more than 3.8 billion dollars per year in financial costs. 3,8 In addition, there is evidence that allergic rhinitis is frequently undertreated, mainly in its moderate and severe/intense persistent forms. 6,9 The management of patients with allergic rhinitis involves proper pharmacological therapies, including allergen immunotherapy. 8,10,11 Subcutaneous injection with allergen-specific immunotherapy (SIT) is indicated for patients with refractory symptoms, being considered the only treatment capable of modifying the course of allergic rhinitis and asthma. However, less than 5% of allergic patients have undergone immunotherapy, mainly due to the long treatment duration and allergic side effects, which demonstrates the complexity of this therapy. Moreover, different authors show that the actual beneficial effects and safety of immunotherapy remain unclear. 2,[11][12][13][14][15] One option for such cases could be the use of interleukin 5. This cytokine relates to the suppression of allergen synthesis, demonstrating the possible clinical efficiency of immunotherapy. 14,16
Thus, the use of this therapy in respiratory allergies can be an attempt at inactivation of allergen-specific Th1 and Th2 cells, decreasing the production of IgE by B lymphocytes and modulating the immune response. 10

Objective

Therefore, the aim of this retrospective study is to analyze the effectiveness of an injectable immunotherapy in cases of nasal itching, sneezing, rhinorrhea, and nasal congestion in allergic rhinitis patients.

Materials and Methods

In the current study, the authors analyzed 281 patient records, independent of season, at the beginning and end of treatment. The patients were attended over 11 years, were of both genders, aged 3 to 69 years, had a clinical diagnosis of allergic rhinitis with associated bronchial asthma in some cases, and had no other apparent allergic etiologies. The researchers diagnosed patients with positive puncture cutaneous tests, using standardized extracts containing acari, fungi, pet hair, flower pollen, and feathers. After diagnosis, the patients received specific desensitizing vaccines of Alergofar® (purified allergens, Rio de Janeiro, RJ, Brazil) at a private practice in the city of Jundiaí, São Paulo State, Brazil. The study was approved by the Ethics Committee of the Faculty of Medicine of Jundiaí (process number 127/2007, Jundiaí, São Paulo, Brazil). The identity of all patients was preserved. The allergic rhinitis symptoms analyzed in this study were: itching, sneezing, watery rhinorrhea, and nasal congestion. The same researcher and examiner, in the same office, quantified these conditions according to signs and symptoms proposed by some authors, and modified for this report, throughout the entire study period. The scoring was as follows: zero (0) = absence of symptom; 1 = mild symptoms: occasional itching and sneezing, nasal rhinorrhea and/or secretion sensation in the throat, and/or occasional nasal congestion; 2 = moderate symptoms: itching and sneezing several times per day, rhinorrhea several times per day and/or frequent throat clearing, and nasal congestion with buccal breathing; 3 = severe/intense symptoms: itching and sneezing interfering with daily activities, constant nasal rhinorrhea with coughing and/or speech alteration, and buccal breathing with interference with sleep and damage to the sense of smell due to nasal congestion. The researchers obtained five mean scores per symptom for each patient: at the beginning of treatment, and at the end of the first, second, third, and fourth vaccine vial. Any subsequent booster treatments were disregarded. The researcher performed skin prick tests on the forearm of all patients. Equipment for orotracheal intubation and ventilation was always available. In this analysis, the authors observed patients' reactions to house dust mites (Dermatophagoides farinae, Dermatophagoides pteronyssinus, Blomia tropicalis, Aleuroglyphus ovatus, Suidasia pontificiae, and Tyrophagus putrescentiae), fungi/spores, pet hair, flower pollen, wool, and feathers. Histamine was used as a positive control and the response to saline solution (0.9%) as a negative control. Any other forms were defined as positive responses. The responses in relation to histamine were also classified as mild, moderate, and severe/intense, similar to those described in the literature. 17
Patients were included according to the following inclusion criteria: 1) age over 3 years; 2) clinical symptoms compatible with allergic rhinitis/asthma; 3) disease that had not been responsive to conventional treatments, including environmental control; 4) positive skin tests; 5) possibility of having received specific desensitization treatment; 6) vaccines received of the same origin; 7) underwent only subcutaneous treatment; 8) use of four vials of allergen extracts re-suspended in aluminum hydroxide at increasing concentrations. The study's exclusion criteria were: 1) younger than 3 years old; 2) patients with uncertain diagnosis (with mild allergic rhinitis); 3) good response to conventional treatments; 4) discontinued treatment; 5) patients who did not attend the clinical visits; 6) patients hypersensitive to the vaccine components; 7) rhinitis due to other causes. The sample can be considered representative of the studied population, as it takes into account similar socio-economic levels of good standing, good housing conditions, access to health services, and appropriate nutrition. All treated patients received detailed written recommendations for environmental control and hygiene, a dye-free diet, and an acaricidal solution containing benzyl benzoate to control acari, all standardized to avoid influence over the outcome. During treatment, patients were not allowed to use drugs such as steroidal anti-inflammatories, acetylsalicylic acid, antihistamines, oral decongestants, or corticosteroids, except in cases of acute episodes or when prescribed and monitored by the main researcher. All patients received instructions to report the use of any medication during therapy and answered questions concerning this in the periodic reassessment visits. The applied vaccine was always Alergofar® (Rio de Janeiro, Brazil). The total period of treatment was 14 months. The first vial contained a weak concentration of allergens (0.008 skin reactivity units [SRU]) administered at intervals of 7 days (8 increasing doses of 0.1 to 0.8 ml). The second vial contained a medium concentration of allergens (0.08 SRU) applied at intervals of 10 days (8 increasing doses of 0.1 to 0.8 ml). The third vial contained a strong concentration of allergens (0.8 SRU) and was administered at intervals of 14 days (8 increasing doses of 0.1 to 1.0 ml). The fourth vial contained an extra-strong concentration (8 SRU) and was administered at intervals of 21 days, divided into 9 doses (0.1, 0.2, 0.3, 0.5, 0.6, 0.8, 1.0, 1.0, and 1.0 ml). The patients were consistently monitored for 15-30 minutes after each administration. 18 They underwent reassessment after the end of each vaccine vial. In case of an acute episode of rhinitis exacerbation, the researchers administered oral antihistamines for a few days. According to the literature, this common approach does not alter the results or the evaluation of treatment efficacy. Moreover, for control purposes, the researchers always evaluated the patients after administering this drug. 19

Statistical Analysis

The authors compared results statistically during the entire treatment and reported the means, medians, and value ranges. They applied the Wilcoxon test to evaluate the difference between the symptom scores (nasal itching, sneezing, rhinorrhea, and nasal congestion) before, during, and after vaccine therapy. A significance level of 5% was adopted. Data were analyzed using the SAS 9.1 software (USA).
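As a hedged illustration of the statistical comparison: paired, ordinal symptom scores analyzed with the Wilcoxon signed-rank test. The paper used SAS 9.1; SciPy implements the same test, and the score arrays below are synthetic stand-ins, not the study data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
before = rng.integers(1, 4, size=281)                          # scores 1-3 per patient
after = np.clip(before - rng.integers(1, 3, size=281), 0, 3)   # improved scores

stat, p = wilcoxon(before, after)       # paired signed-rank test
print(f"W = {stat:.1f}, p = {p:.2e}")   # p < 0.05 -> significant change
```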
Results

The population studied comprised 281 patients, including 167 (59.4%) males and 114 (40.6%) females, totaling 8,992 applications performed. There was no significant difference in relation to gender. Ages ranged from 3 to 69 years, with a mean of 17.4 ± 11.7 years. Approximately 50% of the sample was younger, and over 50% older, than the median of 14.4 years, as seen in ►Table 1. In the results, it is also possible to observe the incidence of each symptom of allergic rhinitis at four levels of intensity in the population studied (n = 281) before treatment with specific desensitizing vaccines. ►Fig. 1 shows the mean symptom scores before treatment. The overall mean score corresponds to the sum of all individual symptom scores divided by the number of patients studied (n = 281), and then divided by four, which represents the number of symptoms evaluated during each stage of desensitization treatment. The mean scores at the end of vaccine therapy are shown in ►Figs. 2, 3, 4, and 5, respectively. ►Table 2 summarizes the mean score of each symptom of allergic rhinitis before treatment and at the end of immunotherapy. The authors observed significant differences in the four symptoms studied between the beginning and the end of immunotherapy (p < 0.05; Wilcoxon test). With respect to itching, significant differences (p < 0.05) were found in all stages of treatment, except between the second and third vial (p = 0.225). The mean initial score (1.89 ± 1.20) was significantly higher than the final score (0.35 ± 0.69; p < 0.001). There were also significant differences (p < 0.05) pertaining to sneezing in all stages of treatment, except between the second and third vial (p = 0.196). The mean initial score (2.27 ± 0.97) was significantly higher than the final score (0.51 ± 0.78; p < 0.001). Rhinorrhea scores also differed significantly (p < 0.05) between all stages of treatment, except between the first and second vial (p = 0.347) and between the second and third vial (p = 0.2154), but the mean initial score (1.84 ± 1.15) was significantly higher than the final score (0.37 ± 0.68; p < 0.001). The scores obtained for nasal congestion also differed significantly (p < 0.05) in all stages, except between the first and second vial (p = 0.658) and between the second and third vial (p = 0.327). The mean initial score (2.41 ± 0.97) was significantly higher than the final score (0.54 ± 0.85; p < 0.001). The comparison of the total score combining the four symptoms showed significant differences (p < 0.05) in all stages of treatment, with the mean initial score (8.41 ± 2.63) being higher than the final score (1.75 ± 2.03; p < 0.001).

Discussion

In the present study, the researchers did not observe significant differences in relation to gender. The mean age of the patients was 17.4 ± 11.7 years (range 3-69 years), with ∼50% of the sample younger, and over 50% older, than the median of 14.4 years. The majority of patients were children and adolescents. According to the literature, immunotherapy for allergic rhino-conjunctivitis and allergic asthma is more effective in children and young adults than in older adults. 10 The researchers used standardized diagnostic and therapeutic procedures for all patients and analyzed the records, ensuring the study's confidentiality and criteria. Skin tests are recognized as effective and precise tools for the etiological diagnosis of allergic rhinitis. 5,10,14-17
Confirming this, a study that included 117 patients with persistent rhinitis demonstrated positive reactions to Dermatophagoides farinae (78%), Dermatophagoides pteronyssinus (75%), and Blomia tropicalis (77%). 20 These tests must be interpreted 15 to 20 minutes after puncture, an interval that should not be exceeded, since skin reactions tend to fade over time. 17 Anergic patients, or those under the effect of some medications, such as systemic decongestants, cold medicines, and antihistamines, may show negative responses to all allergens tested, including histamine. Systemic or topical corticosteroids do not alter the result of these skin tests. In addition, in applying these tests, the use of physiological saline is recognized as a negative control and must be compared with all the allergens tested. 17 Lastly, desensitization treatment has been, and should always be, indicated for patients with symptoms refractory to conventional treatments, in combination with environmental hygiene to reduce exposure to the allergens. 2 In the present study, an acaricidal solution containing benzyl benzoate was prescribed for environmental hygiene, to reduce the population of mites, in accordance with the literature. As for desensitizing vaccines, they do not interact with systemic and topical antihistamines, disodium cromoglycate, or corticosteroids, because they are not conventional drugs but extracts of allergens. Furthermore, there are no restrictions on subsequent complementary surgeries, such as the correction of anatomical deformities of the nasal septum and/or hypertrophy of the nasal conchae. 10,14,15 In general, allergen immunotherapy consists of the treatment of allergic disease through the administration of gradually increasing doses of allergen. Currently, this is considered a more efficient form of immune tolerance induction, compared to that described in 1911. 21 This study reports vaccine concentrations in SRU (skin reactivity units), a standard unit considered ideal for this purpose. The first vial of vaccine contained a weak concentration of allergens (0.008 SRU), the second a medium concentration (0.08 SRU), the third a strong concentration (0.8 SRU), and the fourth a very strong concentration (8 SRU). The researchers recorded alterations in symptoms at the end of each vaccine vial, excluding sporadic doses. 22 The equivalence of SRU/milliliter, micrograms/milliliter (µg/ml), and International Units (IU) allows for comparison with other studies, as follows: 1) weak concentration: 0.008 SRU = 0.00625 µg = 0.01 IU; 2) medium: 0.08 SRU = 0.0625 µg = 0.1 IU; 3) strong: 0.8 SRU = 0.625 µg = 1 IU; 4) very strong: 8 SRU = 6.25 µg = 10 IU. According to international standards, the minimum concentration at the end of treatment must be 4 IU/ml, equivalent to 2.5 µg/ml. In the present study, the researchers used 2.5 times this minimum concentration recommended at the end of treatment. This treatment should be applied subcutaneously; intradermal or intramuscular applications are inadequate and can reduce the efficacy of desensitization treatment. In this respect, one study proposed the injection of allergens in smaller doses into the lymph nodes as a short-term treatment. 13 These factors are important in subcutaneous immunotherapy (SCIT), 23 as is the quality of the allergen extract 24 and its time of action.
However, the duration of the allergen effects is mainly related to individual characteristics, similar to those described in the literature, which shows rates ranging from 0-50%. 25 Nonetheless, most studies consider this allergy therapy safe, despite some reports of a potential risk of anaphylaxis, 12 episodes of asthma, urticaria, angioedema, 13 and erythema multiforme. 26 A prospective, multicenter, placebo-controlled trial was conducted in patients submitted to a depigmented allergen extract. The patients received four injections of increasing doses at weekly intervals, followed by additional monthly doses, totaling 5,923 doses. In this case, five patients presented local reactions and 27 presented systemic reactions. 27 Some researchers also suggest reducing the dose in cases of local or systemic reaction 18 and excluding asthmatic patients, since they are particularly vulnerable to adverse reactions. 19 In the present study, no such reactions were observed in the sample. The present study, however, did not exclude asthmatic patients; in fact, it included 63 patients with this condition. The authors did exclude one patient because he presented bronchospasm after each dose applied, even at higher dilutions. The responsible researcher and an experienced nurse applied the injections and, in accordance with the literature, consistently had intubation and ventilation equipment available. 19 In the present study, the patients were controlled and monitored for 15 to 30 minutes after each dose administration to detect immediate adverse reactions. No systemic reactions occurred after 8,992 applications; only some mild local reactions were observed, which did not require interventions, indicating the high tolerability and safety of the treatment.

[Fig. 6 caption: Incidence of each symptom of allergic rhinitis at four levels of intensity in the population studied (n = 281) before treatment with specific desensitizing vaccines.]

In contrast, other studies show the occurrence of reactions after treatment, as well as the need for frequent drug intervention in 0.13% of cases. 2 In the present results (►Fig. 6), most of the patients studied had severe symptoms, mainly sneezing and nasal congestion, followed by itching and rhinorrhea (►Fig. 1). After the first dose, nasal congestion was the symptom with the greatest reduction (►Fig. 2), followed by rhinorrhea and nasal congestion after the second dose (►Fig. 3), whereas after the third dose the authors observed improvement of all symptoms (►Fig. 4). Final data on the improvement of symptoms were demonstrated after the last vaccine dose (►Fig. 5). These findings indicate two important qualitative moments in symptom improvement during this immunotherapy: one after the first dose and the other after the fourth. Similarly, other studies have shown improvement of symptoms after this treatment. 2,7 Immunotherapy has also been used to treat different cases, leading to reduced symptoms and a reduced need for medications, besides a substantial improvement in quality of life. It is indicated for patients who cannot avoid exposure to allergens and in situations where pharmacologic therapy has not rendered positive results. Specific immunotherapy to treat allergic rhinitis in elderly patients was efficient and had no collateral effects. In addition to the clinical benefit, there was also improvement in the cutaneous test. 2,19,22,[28][29][30][31]
Moreover, with respect to the controversy about the season in which a study is initiated or conducted, this cannot be considered a bias factor in the evaluation of symptoms, because all the patients included in the present report were followed continuously during treatment, in contrast, for example, with a seasonal study of 120 patients concretely allergic to grass and rye pollen. 32 Finally, ►Table 2 shows the comparison of mean scores before and after treatment, also demonstrated by ►Figs. 1 and 5. The authors calculated the mean score over the four main rhinitis symptoms by dividing the summed individual scores by four, the maximum score per symptom being three. This resulted in a score of 2.086 at the beginning of treatment and 0.440 after the last vaccine dose, which corresponds to an overall symptom improvement of ∼79% in patients with allergic rhinitis with or without asthma. The authors also obtained intermediate scores during treatment, demonstrating the progressive improvement of symptoms. Significant differences (p < 0.05) were observed for all comparisons performed. The mean initial score (8.41 ± 2.63) was higher than the final score (1.75 ± 2.03) (p < 0.001). Thus, the study shows that specific immunotherapy is a relevant approach for blocking the progression of rhinitis and asthma, mainly in selected cases. 4,18,33

Conclusion

Subcutaneous immunotherapy demonstrated efficacy in decreasing the symptoms of itching, sneezing, rhinorrhea, and nasal congestion in patients with allergic rhinitis, proving to be an important therapeutic tool against this pathological condition.
2016-05-04T20:20:58.661Z
2015-10-07T00:00:00.000
{ "year": 2015, "sha1": "508c76e3d281b26a109a06551038c733d79a9ddc", "oa_license": "CCBYNCND", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0035-1564437.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f03ae2576b8c5f440d6dd45b4820840c42b9a7f2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4426769
pes2o/s2orc
v3-fos-license
An asymmetric explosion as the origin of spectral evolution diversity in type Ia supernovae

Type Ia supernovae (SNe Ia) form an observationally uniform class of stellar explosions, in that more luminous objects have smaller decline rates. This one-parameter behavior allows SNe Ia to be calibrated as cosmological 'standard candles', and led to the discovery of an accelerating Universe. Recent investigations, however, have revealed that the true nature of SNe Ia is more complicated. Theoretically, it has been suggested that the initial thermonuclear sparks are ignited at an offset from the centre of the white-dwarf (WD) progenitor, possibly as a result of convection before the explosion. Observationally, the diversity seen in the spectral evolution of SNe Ia beyond the luminosity decline-rate relation is an unresolved issue. Here we report that the spectral diversity is a consequence of the random directions from which an asymmetric explosion is viewed. Our findings suggest that the spectral evolution diversity is no longer a concern in using SNe Ia as cosmological standard candles. Furthermore, this indicates that ignition at an offset from the centre is a generic feature of SNe Ia.

When a carbon-oxygen WD reaches a critical limit known as the Chandrasekhar mass (∼1.38 M⊙), its central density and temperature increase to a point where a thermonuclear runaway is initiated. The thermonuclear sparks give birth to a subsonic deflagration flame, which at some point may make a transition to a supersonic detonation wave that leads to the complete disruption of the WD 11,12. The thermalization of γ-rays produced by the decay of freshly synthesized radioactive 56Ni powers the transient source, known as an SN Ia 13,14. The relationship between the luminosity and the decline-rate parameter (Δm15(B), the difference between the B-band magnitude at peak and that measured 15 days later) is interpreted to be linked to the amount of newly synthesized 56Ni (refs. 15,16). SNe Ia displaying a nearly identical photometric evolution can exhibit appreciably different expansion velocity gradients (v̇Si) as inferred from the Si II λ6355 absorption feature 10, thus raising a nagging concern regarding the 'one-parameter' description.

Late-phase nebular spectra can be used to trace the distribution of the inner ejecta 19. Beginning roughly half a year after the explosion, as the ejecta expand, their density decreases to the point where photons freely escape. Photons originating from the near/far side of the ejecta are detected at a shorter/longer (blue-shifted/red-shifted) wavelength because of Doppler shifts. For SNe Ia, emission lines related to [Fe II] λ7155 and [Ni II] λ7378 are particularly useful, as they are formed in the ashes of the deflagration flame 19. These lines show diversity in their central wavelengths (blue-shifted in some SNe Ia and red-shifted in others; see Fig. 1c), which provides evidence that the deflagration ashes, and therefore the initial sparks, are on average located off-centre. The wavelength shift can be translated into a line-of-sight velocity (vneb) of the deflagration ashes.

Figure 2 shows a comparison between v̇Si and vneb for 20 SNe Ia. Details regarding the data are provided in SI §1. Although the diversities in these observables were discovered independently, Figure 2 shows that they are closely correlated. This finding strongly indicates that high velocity gradient (HVG) and low velocity gradient (LVG) SNe do not have intrinsic differences, but that this diversity arises solely from a viewing-angle effect. Figure 3 shows a schematic picture. If viewed from the direction of the off-centre initial sparks, an SN Ia appears as an LVG event at early phases and shows blue-shifts in the late-time emission lines. If viewed from the opposite direction, it appears as an HVG event, and shows red-shifts at late phases. The number of HVG SNe is ∼35% (ref. 10) of the total number of HVG and LVG SNe. To explain this, the angle to the observer at which an SN changes its appearance from an LVG to an HVG must be ∼105-110°, measured relative to the direction between the centre and the initial sparks.
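The quoted numbers can be checked with a one-line solid-angle calculation: for isotropically distributed viewing directions, the fraction with angle θ > θc from the offset direction is (1 + cos θc)/2. The sketch below is our arithmetic under that assumption, not a computation from the paper.

```python
from math import acos, cos, degrees, radians

def hvg_fraction(theta_c_deg: float) -> float:
    """Solid-angle fraction of viewing directions beyond theta_c from the
    offset (initial-spark) direction, i.e. those seen as HVG events."""
    return 0.5 * (1.0 + cos(radians(theta_c_deg)))

def transition_angle_deg(f_hvg: float) -> float:
    """Invert the relation: the LVG-to-HVG transition angle implied by an
    observed HVG fraction."""
    return degrees(acos(2.0 * f_hvg - 1.0))

print(hvg_fraction(105), hvg_fraction(110))   # ~0.37 and ~0.33
print(transition_angle_deg(0.35))             # ~107.5 deg, i.e. within 105-110 deg
```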
The corresponding ranges of v_neb (in km s−1) for the LVG and HVG cases are shown as arrows in Fig. 2, and provide a good match to the observations. Figure 4a shows an example of a hydrodynamic model in which the thermonuclear sparks were ignited off-centre in a Chandrasekhar-mass WD 6 (an alternative way of introducing global asymmetries is double detonations in sub-Chandrasekhar-mass WDs 20). Although this model has not been fine-tuned to reproduce the present finding, it does have the required generic features. The density distribution is shallow and extends to high velocity in the direction opposite to the initial sparks (Fig. 4b). Initially, the photosphere is at high velocity if viewed from this direction, as the region at the outer, highest velocities is still opaque. Later on, the photosphere recedes inwards faster in this opposite direction, owing to the shallower density gradient. As a result, the SN looks like an LVG SN if viewed from the offset direction, but like an HVG SN from the opposite direction (Fig. 4c), as in our proposed picture (Fig. 3). Our finding provides not only strong support for the asymmetric explosion as a generic feature, but also constraints on the still-debated deflagration-to-detonation transition. In this particular simulation, the change in appearance (as an HVG or an LVG SN) takes place rather abruptly around a viewing direction of ∼140°. Owing to the offset ignition, the deflagration flame propagates asymmetrically and forms an off-centre, shell-like region of high-density deflagration ash. The detonation is ignited at an offset following the deflagration, but tries to expand almost isotropically. However, the angle between 0° and 140° is covered by the deflagration ash, into which the strong detonation wave cannot propagate efficiently.
Competing Interests: The authors declare that they have no competing financial interests. Correspondence: Correspondence and requests for materials should be addressed to K.M. (email: keiichi.maeda@ipmu.jp).
Figure 1 caption (fragment): Comparison ... This region is rich in stable 58Ni with a small amount of radioactive 56Ni.
Supplementary Information - Supernova Sample and Notes for Individual SNe: In faint and bright SNe Ia, the [Fe II] λ7155 and [Ni II] λ7378 lines are broad and mutually blended. These indicate that the ejecta structure (density, temperature, and composition) of faint and bright SNe Ia is intrinsically different from that of normal ones. This is a reason why our sample is mainly composed of normal SNe Ia. Luckily, omitting a large fraction of faint/bright SNe Ia is not important for our present analysis, since we are interested in the spectral diversity beyond the one-parameter ∆m15(B) description, which is a problem only in normal SNe Ia. For normal SNe Ia, we categorize HVG SNe and LVG SNe according to v_Si, with the division line at 70 km s−1 day−1. According to its v_Si, SN 2004dt is an HVG. However, its peculiar observational features suggest that it is an outlier, and the origins of its high velocity gradient and negative v_neb are likely different from those of the other HVGs (see Fig. 2 caption).
One of the peculiar features of SN 2004dt appears in its late-time spectra (Supplementary Fig. 1).
Other Observational Constraints: In abundance 'tomography' 16,21,66−68, a temporal sequence of spectra of an individual SN Ia is used to infer the distribution of different elements through the SN ejecta, assuming the density structure of a spherically symmetric explosion model 13. From this type of analysis, it has been indicated that the abundance distribution is generally a function of ∆m15(B). The difference between the HVG and LVG SNe, not related to ∆m15(B), is mainly in the extent of the Si-rich layer 16 and in the photospheric velocity 66, which are explained by our proposed scenario (main text). On the other hand, the spatial extent of the 56Ni-rich region does not seem to depend on whether it is an HVG or an LVG SN 16. This could provide a constraint on the ejecta asymmetry. In the offset explosion model, the spatial extent of the 56Ni region, as well as the density structure at the outer edge of that region, are not sensitive to the direction, despite the initially large asymmetry in the ignition 6. This stems from the nature of the propagation of the detonation wave as described in the main text; unlike the deflagration flame, the detonation tries to expand isotropically, producing roughly spherically distributed 56Ni. This region is not sensitively affected by the existence of the deflagration ashes, which is essential in determining the structure of the Si-rich region. As a result, the spatial extent of the 56Ni-rich region is mainly controlled by the different amounts of 56Ni produced in the explosion, and the viewing-angle dependence could add some diversity at most as a secondary effect 6; this is consistent with the observational indications 16. The asymmetric distribution of the outermost layer may imprint its signature in polarization measurements, which may be linked to the velocity gradient 69. The polarization of the Si II line is correlated with ∆m15(B) 22, but only for LVG SNe (Supplementary Fig. 2). HVG SNe generally show larger polarization than LVG SNe 70,71, but they clearly do not follow this trend. A global one-sided asymmetry as in the present interpretation would produce relatively low continuum polarization and relatively high line polarization at Si II λ6355 72. In our proposed scenario, the global asymmetry is of a smaller degree than in an extremely asymmetric model producing Si II polarization of ∼1% 72; thus this is likely not a major contributor to the observed polarization. Alternatively, it has been suggested, based on the correlation between the Si II polarization and ∆m15(B), that the observed Si II polarization could be a measure of the thickness of the outer layer above the 56Ni-rich region, in which local inhomogeneity, e.g., a few relatively dense blobs, is assumed to be a source of polarization. In this interpretation, the large Si II polarization in HVG SNe could be a consequence of an extended outer layer in the direction opposite to the initial sparks.
Viewing-Angle Effect on the Light Curve: It has been suggested that if the ejecta are asymmetric, ∆m15(B) is dependent on the direction to the observer 5. A question for this interpretation is whether any indication of such an effect is seen in the data. Supplementary Fig. 3 shows the comparison between ∆m15(B) and v_neb for SNe Ia; there is no clear (but perhaps a marginal) correlation between them.
According to the prediction 5 of the viewing-angle effect on ∆m15(B) for models similar to the one shown in the main text, the observed ∆m15(B) could vary by ∼0.4 mag depending on the direction to the observer (shown schematically in Supplementary Fig. 3). This is smaller than the intrinsic variation in ∆m15(B) for different M(56Ni), and thus such an effect is difficult to notice in Supplementary Fig. 3, consistent with the low correlation in the present data. Any marginal correlation between ∆m15(B) and v_neb may already hint that such an effect is indeed there, but a larger number of SNe Ia is necessary to test this possibility with statistical significance. (Supplementary Fig. 3 caption fragment: The lines correspond to three hypothesized explosion configurations which are mutually different in M(56Ni) and therefore in the intrinsic luminosity.)
2010-06-30T15:25:46.000Z
2010-06-30T00:00:00.000
{ "year": 2010, "sha1": "ae7b7b71a3278a62f2173699a14d78a2aca9a1d4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1006.5888", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ae7b7b71a3278a62f2173699a14d78a2aca9a1d4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
57445700
pes2o/s2orc
v3-fos-license
Management of cut-throat injuries
Zafarullah Beigh, Rauf Ahmad
Introduction
Neck injuries are potentially dangerous and require emergency treatment. The location of the injury can predict risk and management. Open or incised injuries, or those resembling incised injuries, in the neck inflicted by sharp objects such as razors, knives, or broken bottle pieces or glasses, which may be superficial or penetrating in nature, may be described as 'cut-throat injuries' (CTIs) [1-3]. These may result from accident, homicide, or a suicide attempt. CTIs are potentially life threatening because of the many vital structures in this area. There may be a possibility of severe hemorrhage from damaged major blood vessels, air embolism, or airway obstruction. The common causes of CTIs in this part of the world are suicide attempts. Family problems, psychiatric illness, unemployment, and poverty may be the triggering factors in suicide attempts. The motives for homicide may include land-related disputes, sex-related crimes, familial disharmony, etc. Exposed hypopharynx and/or larynx, hemorrhage, shock, and asphyxia from aspirated blood are the common causes of death following a CTI. It is known that appropriate measures can save lives in the majority of cases [3]. Prevention of these complications depends on immediate resuscitation by securing the airway by tracheostomy or intubation. The value of tracheostomy in the management of CTI has been highlighted in the literature [4,5]. Prompt control of external hemorrhage, blood replacement, and prompt intervention or operative treatment should be performed when indicated. All patients who have attempted suicide should undergo a psychiatric evaluation. This is because the act of suicide is a sign of an underlying mental illness and there may be a possibility of a second attempt. Victims of homicidal CTIs need psychological support to overcome the trauma to their psyche, which may remain long after the neck wounds have healed [6].
Assessment of patients with CTI begins with the ABCs of resuscitation, that is, checking the airway and evaluating the patient's breathing and circulation. Resuscitation should be commenced immediately. When victims present to hospital, the anesthesiologist secures an uncompromised airway and ensures that the patient is breathing, and the otorhinolaryngologist assesses the injury and surgically repairs the severed tissues with the aim of restoring breathing, swallowing, and phonation. The psychiatrist provides adequate care and supervision during and after the surgical repair of severed tissues.
Materials and methods
This retrospective study was carried out in the Department of Otorhinolaryngology and Head Neck Surgery, Government Medical College, Srinagar, J&K, India, and included 26 patients with CTIs who were brought to our department for treatment. Informed consent was obtained from the relatives of all patients for this study. The study was approved by the institutional ethics committee. All patients were resuscitated; depending on the condition of the patient, tracheostomy was performed when required, and blood transfusion was administered in patients who had severe bleeding. After stabilizing the vitals, the wound was examined and, depending on its condition, primary or secondary repair was performed. Subsequently, the cause of the CTI was enquired into. Patients who had made suicide attempts were referred to the psychiatrist for evaluation.
Results
The results are displayed in Tables 1-5.
Discussion
CTIs are reported scarcely in the medical literature. CTIs and associated deaths are not uncommon in our society. There are reports in the medical literature of CTIs from West Africa on the complications and principles of management of such wounds, with an emphasis on the forensic implications [2]. An article on open neck injuries stressed surgical airway problems [5]. In our study, 26 CTI patients were brought to the Department of ENT and Head Neck Surgery, Government Medical College, Srinagar, for treatment. Aich et al. [7] studied 67 cut-throat cases; 47 were males and 20 were females, between 7 and 73 years of age (mean 28.82 ± 11.38 years). The majority of victims were young adults [41 (61.19%)] between 21 and 30 years of age, 52 (77.61%) were from a rural community, and 53 (79.10%) belonged to the low socioeconomic class. In our study of 26 patients, the majority were males (88%) from rural areas (61%), and patients in the age group 36-50 years were most vulnerable, similar to the results of the above-mentioned study. Adoga et al. [8] published a case series of three patients with CTIs; all three of these patients had attempted suicide. In terms of the cause of injury in our study, attempted suicide was the cause in 58%, homicide in 38%, and accident in 3% of patients. Mohanty et al. [9] studied 588 suicide victims; financial burden (37%) and marital disharmony (35%) were the principal reasons for suicide attempts. In our study, the causes of suicide were psychiatric illness, unemployment, and family troubles, in agreement with the above study. The causes of homicidal injuries were land-related disputes and sex-related crimes. One patient had an accidental CTI because of a fall on broken glass. Modi and Pandy [10] observed that in India, suicidal wounds of the throat are rare. In contrast, CTIs were reported to be caused by suicide attempts in the majority of cases in western studies [11,12]. In our study, males with CTI outnumbered females. As CTI is a major neck injury, most of the victims were sent to the nearest available medical facilities as early as possible, and the majority were referred to the tertiary hospital for appropriate intervention within 24 h. Poor communication, inadequate first-aid knowledge and facilities, and lack of skilled manpower in peripheral centers were responsible for delayed presentation to hospital. Very few had been managed properly outside. A number of victims presented with an open wound and active bleeding. Onotai and Ibekwe [13] concluded that CTIs require a multidisciplinary approach and can be managed with a better prognosis if patients present early to the hospital and receive prompt attention. In our study, the majority of patients (65%) had injury in the center of the neck, and a cut was the most common type of injury. Five patients had only skin and soft tissue injury; 19 patients had skin, soft tissue, and larynx/pharynx injury; one patient had injury to the skin, soft tissue, and a major vessel (external jugular vein); and one patient had injury to the skin, soft tissue, larynx/pharynx, and major vessels (left common carotid and internal jugular vein). Primary repair of the wound was performed in 24 patients, of whom nine required tracheostomy in view of upper respiratory obstruction. Secondary repair of the wound was performed in two patients; both had a necrotic wound and upper respiratory obstruction, and tracheostomy was performed in both. In all patients who had attempted suicide, psychiatric evaluation was sought, because the act of suicide is a sign of an underlying mental illness and there may be a possibility of a second attempt. A study reported 25% of patients as having made a second attempt at suicide [6]. Nock et al. [14] concluded that mental disorders predict suicidal behaviors similarly in both developed and developing countries. Our study showed that 66% of attempted suicide cases had some form of psychiatric ailment; 33% had major depression, and two patients had schizophrenia, including one with a history of three suicide attempts. (Fig. 1: Suicidal cut-throat injury. Cut-throat injury during repair.) Venkatachalam et al. [15] reported a case of a penetrating cervical tracheal injury because of 'chain snatching' in a young female. The patient presented to the Emergency Department with a bleeding neck wound. Orotracheal intubation was performed after resuscitation, revealing a transected trachea. There was no injury to the major vessels or nerves; thus, the wound was debrided and closed in layers, and a tracheostomy tube was placed through the transected trachea. Postoperatively, the patient was ventilated for 72 h, after which she recovered completely.
In our study, 20 patients out of 26 achieved full recovery without any permanent defect, four patients recovered with some permanent defect (three had hoarseness of voice and one had upper airway stenosis/web formation), and two patients died. One was a 15-year-old male with a homicidal stab injury on the left side of the neck, with injury to the left common carotid artery and the internal jugular vein; 6 U of blood were transfused and the major vessels were ligated. He developed hemiplegia and hypotension in the postoperative period, subsequently went into shock, and died on the second day of admission to the hospital. The second patient who died was a 50-year-old man with a cut in the center of the neck sustained in a suicide attempt; he died of cardiac arrest on the second day of admission. Hospital stay was prolonged in tracheostomized patients and in patients who needed additional wound care (Table 4).
2019-01-23T16:49:48.208Z
2014-07-01T00:00:00.000
{ "year": 2014, "sha1": "9daf2e89a4fcfb39720a1c6155df31d99e0616ce", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4103/1012-5574.138493", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ddf48ee0ad4d39fb8774bb9a6f6c1909b43b30fc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248542307
pes2o/s2orc
v3-fos-license
Development of a Genomic Instability-Derived lncRNAs-Based Risk Signature as a Predictor of Prognosis for Endometrial Cancer Endometrial cancer (EC) ranks fourth in the incidence rate among the most frequent gynaecological malignancies reported in developed countries. Approximately 280,000 endometrial cancer cases are reported worldwide every year. Genomic instability and mutation are characteristic features of human malignancies such as endometrial cancer. Studies have established that the majority of genomic mutations in human malignancies are found in the chromosomal regions that do not code for proteins. In addition, the majority of the transcriptional products of these regions are long non-coding RNAs (lncRNAs). In this study, 78 lncRNA genes were identified on the basis of patients' mutation counts. These lncRNAs were then investigated to determine their relationship with genomic instability through hierarchical cluster analysis, mutation analysis, and differential analysis of driving genes responsible for genomic instability. The prognostic value of these lncRNAs was also assessed in patients with EC, and a risk factor score formula composed of 15 lncRNAs was constructed. We identified this formula as the genome instability-derived lncRNA-based gene signature (GILncSig), which stratified patients into high- and low-risk groups with significantly different outcomes. GILncSig was further validated in multiple independent patient cohorts as a prognostic factor independent of other clinicopathological features such as stage and grade, with respect to overall survival. We observed that a high risk score is often associated with an unfavourable prognosis in patients with EC.
1. Introduction
Endometrial cancer (EC), which is the most frequently reported gynaecological malignancy, ranks fourth in terms of the incidence rate in developed countries. Approximately 280,000 cases are reported worldwide every year [1]. EC mainly affects postmenopausal women, and spikes in the incidence rate are observed in women aged between 55 and 65 years [1]. Clinically, 80% of patients with EC present with abnormal vaginal bleeding, which facilitates early diagnosis and treatment and has led to an improvement in the 5-year survival rate of EC patients [2]. However, 20% of cases present with metastasis to the pelvic cavity and lymph nodes, and about 10% of cases present with distant metastasis at diagnosis [3]. The prognosis varies according to the stage of EC. The 5-year survival rate of EC patients at stage I is 80%-90%, but it declines to about 20% in EC patients at stage IV [4]. Hence, novel strategies are warranted to assess the prognosis of patients with EC and evaluate the clinical outcomes. Genomic instability and mutation are common characteristics of human malignancies [5]. Genomic changes occur through several pathways, such as single or minority nucleotide mutations and acquisition or loss of a whole chromosome, probably leading to abnormal division, multi-nucleation, and trimeric mitosis [6,7]. Different types of human malignancy exhibit different somatic mutation spectra, corresponding to different numbers of gene mutations, indicating tissue-specific or cell-specific tumourigenic mechanisms [8,9]. In addition, as an evolutionary marker of human malignancy, genomic instability occurs mainly due to the mutation of DNA repair genes, which in turn promotes the progression of human malignancy and has been regarded as a key prognostic factor [10-12].
Hence, intensive study of the molecular features of genomic instability in various types of malignancies, and investigation of their clinical significance, are essential. Many genomic mutations in human malignancies are found in the chromosomal regions that do not code for proteins, and a majority of the transcriptional products of these regions are long non-coding RNAs (lncRNAs) [13]. Evidence accumulated during the past few decades suggests the involvement of lncRNAs in gene regulation, proliferative capability, migratory behaviour, and genome stability. These multi-functional regulatory activities make lncRNAs valuable signature factors for human malignancies [14]. Notably, lncRNAs associated with genetic changes can promote tumour growth and affect genomic stability. For instance, the novel lncRNA CCAT2, containing the rs6983267 SNP, whose expression level is abnormally high in microsatellite-stable (MSS) colorectal cancer, has been shown to promote cancer progression, metastatic behaviour, and chromosomal instability [15]. Another study of somatic copy number alterations (SCNAs) of lncRNA genes showed that such genomic changes frequently target tumourigenic lncRNAs [16]. In addition, cancer-related lncRNAs have been shown to contribute to increased genome instability and malignant behavior [17]. Conversely, some lncRNAs, including NORAD, CUPID1, CUPID2, and DDSR1, facilitate the repair of DNA damage and promote genome stability [18-20]. Although lncRNAs play a key role in the regulation of genome stability, the clinical significance and underlying mechanisms of lncRNAs related to genomic instability (GILncRNAs) in EC are not completely understood. In this study, we retrieved the lncRNA data and mutation data of patients with EC from The Cancer Genome Atlas (TCGA) database. In addition, we assessed the prognostic value of the established GILncSig associated with genomic instability in EC. It is hypothesized that GILncSig has the potential to be utilized as a prognosis predictor in patients with EC. Overall, this study intended to assess the value of GILncSig as an independent prognostic predictor and provide an alternative assessment of genomic instability and malignancy-related mortality risk.
Data retrieval and handling
The transcriptional profiles, clinical data, and somatic mutation profiles of patients with EC were obtained from the TCGA database (https://portal.gdc.cancer.gov/). The expression levels of lncRNAs and mRNAs in EC samples were extracted from the transcriptional data. The lncRNAs were extracted from the expression profile, the expression values of lncRNAs with the same symbol were averaged, and genes whose expression levels fell below the 30% threshold were removed. Then, we integrated the expression data and mutation data to obtain the intersecting samples. Finally, an expression matrix of 499 samples and 3527 lncRNAs was obtained for subsequent analysis.
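A minimal pandas sketch of this preprocessing is given below; the file names, the column layout, and the reading of the 30% expression filter are assumptions made for illustration, not details taken from the paper.

```python
import pandas as pd

# Hypothetical inputs: rows = lncRNA symbols, columns = sample barcodes.
expr = pd.read_csv("EC_lncRNA_expression.csv", index_col=0)
mut = pd.read_csv("EC_somatic_mutations.csv")  # one row per mutation, with a 'sample' column

# 1) Average the expression values of lncRNAs sharing the same symbol.
expr = expr.groupby(expr.index).mean()

# 2) Drop lncRNAs detected (expression > 0) in fewer than 30% of samples
#    (one plausible reading of the 30% filter described in the text).
detected_frac = (expr > 0).mean(axis=1)
expr = expr.loc[detected_frac >= 0.30]

# 3) Keep only samples present in both the expression and mutation data.
shared = expr.columns.intersection(mut["sample"].unique())
expr = expr[shared]
print(expr.shape)  # the paper reports 499 samples x 3527 lncRNAs at this stage
```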
Screening of lncRNAs Related to Genome Instability
To identify genome instability-associated lncRNAs, we applied a mutator hypothesis-derived computational frame combining lncRNA expression profiles and somatic mutation profiles of the tumour genomes: (i) the cumulative number of somatic mutations for each patient was computed; (ii) patients were ranked in decreasing order of the cumulative number of somatic mutations; (iii) the top 25% of patients were defined as the genomic unstable (GU)-like group, and the bottom 25% were defined as the genomic stable (GS)-like group; (iv) expression profiles of lncRNAs in the GU group and GS group were compared using significance analysis with the 'limma' package of R software to identify differentially expressed GILncRNAs, where the threshold was |logFC| >= log 1.3 and P < 0.05.
Construction of the lncRNA-mRNA network and functional enrichment of mRNA
Based on the interaction data from the RNAInter database (http://www.rna-society.org/raid/download.html), Cytoscape was used for visualisation to extract the mRNAs interacting with GILncRNAs. Furthermore, the functional enrichment of the interacting mRNAs was analysed, and the clusterProfiler package was utilised for the pathway enrichment analysis. We utilised org.Hs.eg.db to convert gene names and GOplot and ggplot2 to visualise the pathways.
Hierarchical Clustering based on GILncRNAs
According to the expression of the GILncRNAs in all the samples, the ConsensusClusterPlus package of R software was utilized to cluster the samples in an unsupervised analysis. The clustering method used was K-means, and the distance function utilized was Euclidean. The somatic mutation counts of the two sample sub-types were compared; the group with the higher counts was called the GU-like group, whereas the other was called the GS-like group, and the two sub-types based on the stability of the genome were finally determined. Survival of the two sub-types was analysed using the 'survival' and 'survminer' packages of R software, and the Kaplan-Meier curve was drawn. The heat map of GILncRNA expression in the two sub-types was drawn using the ComplexHeatmap R package.
Establishment of GILncRNAs-Based Prognostic Analysis Methods
The samples were allocated into a training set and a testing set (the ratio of samples in the training set and testing set was 7:3). The chi-square test was used to ensure that no bias was present in the division of the training and testing data sets. Then, the 'survival' and 'survminer' packages of R software were utilized to conduct the univariate Cox regression analysis. LncRNAs with Cox P < 0.05 were considered candidate genes with prognostic value. Then, the least absolute shrinkage and selection operator (LASSO) regression algorithm was utilized to screen the candidate GILncRNAs. The LASSO Cox regression was then used to select variables for constructing the signature and to provide coefficients. The risk score was calculated using the following formula: risk score = expression level of lncRNA1 × β1 + expression level of lncRNA2 × β2 + … + expression level of lncRNAn × βn, where the risk score is a measure of the prognosis of patients with EC, and β is the regression coefficient for each variable. The risk score of each patient was calculated according to the risk characteristics, and the patients were then divided into two groups (high-risk and low-risk) based on the risk score. We utilized the Kaplan-Meier method to plot the survival curves of patients in the two groups.
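To make the risk-score construction concrete, here is a small sketch of the β-weighted sum and the median split used for stratification; apart from the two coefficients quoted later in the Results, all coefficients and expression values below are placeholders.

```python
import pandas as pd

# LASSO-Cox coefficients: the first two values appear in the Results;
# the third lncRNA and its coefficient are invented placeholders.
beta = pd.Series({"AF131215.9": 0.331, "RP3-443C4.2": -0.119, "LINC01224": 0.200})

# Toy expression matrix (samples x signature lncRNAs), values invented.
expr = pd.DataFrame(
    {"AF131215.9": [1.2, 0.3, 2.1],
     "RP3-443C4.2": [0.5, 1.8, 0.2],
     "LINC01224": [0.9, 0.1, 1.5]},
    index=["pt1", "pt2", "pt3"],
)

# risk score = sum_i beta_i * expression_i, then split at the median score.
risk_score = expr.mul(beta, axis=1).sum(axis=1)
group = (risk_score > risk_score.median()).map({True: "high-risk", False: "low-risk"})
print(pd.concat([risk_score.rename("score"), group.rename("group")], axis=1))
```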
Furthermore, the log-rank test was utilised to assess differences in the survival of patients, with P < 0.05 considered significant. Finally, the GILncSig risk model was applied to the testing set and the whole TCGA set to assess its performance.
Prognosis Prediction and Clinical Stratification Analysis
To examine the potential role of GILncSig as a predictor independent of other crucial clinicopathological parameters, univariate and multivariate Cox regression analyses were conducted using the 'survival' package of R software. A P value of <0.05 was considered to signify statistical significance. Then, a clinical stratification analysis was performed to evaluate the value of GILncSig for predicting prognosis in patients with EC. According to clinical parameters, the patients in The Cancer Genome Atlas (TCGA) were divided into subgroups by age (<60 and ≥60 years) and disease course (stage I-II and stage III-IV). Based on the median value of the GILncSig score, cases in each clinical subgroup were further allocated into two groups (high-risk and low-risk). We then performed the Kaplan-Meier analysis and log-rank test to analyse the survival rates.
Establishment and Verification of a Nomogram Scoring System
Nomograms were used to display the results of the Cox regression directly. According to the regression coefficients of all the independent variables, the scoring standard was set and the total score of each patient was calculated; then, the probability of each patient's prognosis at a given time was calculated using the conversion function between the score and the prognosis probability. The nomograms were mainly drawn using the 'rms' and 'survival' packages of R software. Firstly, the Cox proportional hazards regression model was constructed with the cph function, and then the Survival function was utilized to calculate the survival probability. Finally, the nomogram function was utilized to construct the nomograms, which were shown as plots, and the calibration curve and time-dependent ROC prediction curve were assessed.
Statistical Analysis
The chi-square test and Mann-Whitney U test were utilized to assess differences in the categorical and quantitative data, respectively. A two-tailed P value of <0.05 denoted statistical significance. R version 4.0.2 (Institute for Statistics and Mathematics, Vienna, Austria) was used for visualisation and statistical analysis.
Identification of Genome Instability-Related lncRNAs
Of the 499 samples, the 130 EC patients with the highest mutation counts were assigned to the GU-like group, whereas the 125 patients with the lowest mutation counts were assigned to the GS-like group (Fig. 1A). Then, the differentially expressed genes (DEGs) of the two groups were detected, and 78 lncRNAs were found, with 32 lncRNAs up-regulated and 46 lncRNAs down-regulated (Fig. 1B). To determine whether the differentially expressed lncRNAs reflected the genomic instability of the patients, we performed an unsupervised hierarchical clustering assay on the 78 lncRNAs. All 499 cases were divided into two groups with a significant difference in their mutation counts (Fig. 1C). Next, we explored the potential function of the GILncRNAs through co-expression analysis and GO enrichment analysis. The lncRNA-mRNA co-expression network was used to show the relationship between lncRNAs and mRNAs (Fig. 1D). A total of 43 pairs of interacting GILncRNAs and mRNAs were identified, indicating that GILncRNAs are tightly correlated with the regulation of mRNA expression.
GO analysis of GILncRNA-associated genes revealed that the differentially expressed lncRNAs and their interacting mRNAs in this network are significantly associated with Bcl-2 homology (BH) domain binding and death domain binding in the molecular function (MF) category, as well as mitotic cell cycle regulation in the biological process category (Fig. 1E). All the aforementioned functions are believed to be associated with genome stability. Based on the KEGG pathway analysis of the lncRNA-related protein-coding genes (PCGs), the 39 most enriched pathways were identified, and most of them were found to be related to genome stability factors such as cell cycle regulators and malignancies (Fig. 1F). Collectively, these results suggested that the 78 differentially expressed lncRNAs are associated with genome stability. In addition, the expression levels of these lncRNAs might compromise cellular genome stability by disrupting the equilibrium of the lncRNA-associated PCG modulatory network, thus tampering with the regular repair pathways for genomic damage and causing increased genome instability.
Hierarchical Clustering based on GILncRNAs
Based on the GILncRNAs, the 499 EC patients were divided into two groups through unsupervised clustering (154 patients in Cluster 1 and 345 patients in Cluster 2). We defined the group with a high mutation number as GU-like and the other group as GS-like. As shown in Figures 2A and 2B, the number of mutations in Cluster 2 appeared to be significantly higher than that in Cluster 1 (P = 1.6e-07). Hence, Cluster 2 was defined as the GU-like group, and Cluster 1 was defined as the GS-like group. Then, the survival of the two subtypes was analysed. The survival curves revealed remarkable differences, with the GU-like group showing poorer prognosis than the GS-like group (P = 0.0014). These results indicated that genome instability is strongly correlated with patient survival.
Screening of the GILncSig and Predictability Evaluation
The 499 EC cases were randomly allocated into a training group and a test group at a ratio of 7:3. A total of 22 lncRNAs that were tightly associated with the survival rates in the training set were identified. Of these 22 lncRNAs, 7 were protective factors, whereas 15 were risk factors (Fig. 3A). Furthermore, the 22 prognosis-related lncRNAs identified through univariate Cox regression were selected for the LASSO regression. To construct the best model, the minimum lambda value (lambda.min) was selected through cross-validation, and then the 15 most significant of the 22 lncRNAs were selected to construct a prognostic risk score model (P < 0.05, Figures 3B and 3C). According to the optimised model, the following formula was utilised to calculate the risk score: Risk score = 0.331 × AF131215.9 − 0.119 × RP3-443C4.2 − … LncRNAs with negative coefficients were considered protective factors, whose up-regulation correlated with better outcomes. According to the calculated risk score, cases with scores greater than the median were categorised as the high-risk group, whereas cases with scores less than or equal to the median were categorised as the low-risk group. The results revealed that cases in the low-risk group had a better prognosis than those in the high-risk group (Fig. 4A). The area under the curve (AUC) values of the ROC curves in the training set for the 1-year, 3-year, and 5-year survival prediction of the risk scores were 0.828, 0.811, and 0.837, respectively (Fig. 4B).
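A note on the evaluation: proper time-dependent ROC analysis must account for censoring, but at a fixed horizon with complete follow-up it reduces to an ordinary binary AUC. The sketch below illustrates that reduced case with invented toy data, not the study's.

```python
from sklearn.metrics import roc_auc_score

# Toy data: 1 = died within the 3-year horizon, 0 = survived past it.
# Patients censored before the horizon would need to be excluded or handled
# with a censoring-aware estimator (e.g. inverse-probability-of-censoring weights).
event_by_3y = [1, 0, 0, 1, 1, 0, 0, 0]
risk_score = [2.4, 0.5, 1.1, 0.6, 1.9, 0.7, 0.2, 1.0]

print(f"3-year AUC = {roc_auc_score(event_by_3y, risk_score):.3f}")  # 0.800 here
```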
To verify the accuracy of predicting the survival rate using risk scores, we calculated the risk scores of the test set and the whole TCGA set and plotted the ROC curves. In the test set, the survival time of the low-risk group was observed to be longer than that of the high-risk group (Fig. 4C). The AUC values of the ROC curves in the test set for the 1-year, 3-year, and 5-year survival prediction of the risk scores were 0.719, 0.683, and 0.670, respectively (Fig. 4D). The results were similar in the entire TCGA dataset, which confirmed that patients with EC in the low-risk group exhibit significantly longer survival (Figure 4E). The time-dependent ROC curve analysis of the GILncSig in the entire TCGA set yielded AUCs for the 1-year, 3-year, and 5-year survival prediction of the risk score of 0.790, 0.771, and 0.786, respectively (Fig. 4F). All these findings suggested that the risk score has strong predictive significance for survival.
Risk scores are associated with clinical features
Based on the calculated risk scores, a correlation analysis with clinical features was performed. The risk scores in the GS-like/GU-like subgroups were found to differ significantly, with the risk scores being higher in the group with genomic instability (Fig. 5A). The risk scores were distributed differently across the various stages of EC and were higher in the stage III-IV group (Fig. 5B). Additionally, the risk scores were distributed differently in patients with different grades and were higher in the G3 + high group than in the G1 + G2 group (Fig. 5C). In addition, the risk score was distributed differently across the age groups, and patients aged less than 60 years tended to have higher risk scores. However, no significant difference was observed between the BMI groups (Fig. 5D-E). Altogether, these findings verified the efficacy of GILncSig in predicting the prognosis of patients with EC.
Assessment of the Independent Prognostic Value of GILncSig
To examine the independent prognostic value of GILncSig, univariate and multivariate Cox regression analyses were performed on all the patients, and factors such as age, disease course, and GILncSig were included. The univariate analysis revealed that GILncSig, tumour stage, tumour grade, clustering, and age were significantly associated with overall survival (P < 0.01) (Fig. 6A). However, the correlation between BMI and overall survival was not significant. Multivariate Cox regression analysis revealed that the risk score and disease course were significantly correlated with the survival rate (Fig. 6B). The results revealed that the overall survival of the low-risk group was higher than that of the high-risk group (Fig. 6A-H). Taken together, these findings indicated that the predictive value of GILncSig for prognosis can be considered independent of other clinicopathological parameters.
Establishment and Verification of a Nomogram for Prognosis Prediction in EC
To validate the prognostic significance of the multi-lncRNA signature, we performed multivariate Cox regression analysis, applying the limma R package to assess the accuracy of the risk score, combined GILncSig with prognostic factors including age, stage, grade, and survival rate, and then constructed a statistical nomogram model. The accuracy was verified through the calibration curve. As shown in Fig. 7A and Fig. 7B, the AUC of the ROC for 3-year survival predictions was 0.771.
The 1-year, 2-year, 3-year, and 5-year survival predictions revealed great consistency between the actual and predicted survival rates in the three data sets (Fig. 7C-F). Overall, these results suggested that the prediction efficacy of the nomogram was enhanced. To show the top 20 mutated genes in the GU-like group and the GS-like group, the cumulative number of somatic mutations per patient was calculated and sorted in decreasing order. The somatic mutation count of PTEN was the highest in both groups, while the number of missense mutations in PIK3CA was the highest in both groups (Fig. 8A-B). A high tumour mutational burden (TMB) consistently predicts benefit from immune checkpoint blockade (ICB) therapy. Our results show an obvious difference in the level of TMB between the two groups, as well as in the stromal and immune scores (Fig. 8C). Taken together, the GILncSig correlated with the genomic mutation rate in EC and can act as an evaluation model of the degree of genome instability.
4. Discussion
Genomic instability is a crucial factor that contributes to the acquisition of various cancer-related characteristics. Persistent mutations drive tumourigenesis, cancer progression, and resistance to treatment [21]. Research has demonstrated that abnormal transcriptional and epigenetic regulation affects genome stability [22]. Studies have investigated mRNA and miRNA markers to determine the extent of genomic instability in cancerous tissues [23]. In the past decade, lncRNA expression changes have been shown to promote tumour development and progression and hence can be used as new tumour biomarkers [24,25]. LncRNAs have also been reported to play key roles in EC progression [26]. Additionally, lncRNAs and genomic instability exhibit a close relationship. Recent advances in exploring the functional mechanisms of lncRNAs revealed that some lncRNAs, such as NORAD and GUARDIN, are essential for genomic stability. Nevertheless, the relationship between genomic instability-related lncRNAs and human EC remains to be fully elucidated. Hence, we proposed a GILncSig and examined its prognostic significance in EC. In this study, the EC patients were grouped according to their gene mutation numbers, and an analysis to screen the differentially expressed genes was performed. Following the multivariate Cox regression analysis, the independent prognostic factors, apart from the risk score, were stratified. Among the signature GILncRNAs, PRR34-AS1, FGF14-AS2, GLIS3-AS1, RP11-440D17.3, LINC01224, and AF131215.9 were identified as risk factors for patient prognosis, whereas AC144831.1, HOXB-AS3, ATP2A1-AS1, MIR210HG, LBX2-AS1, AC092580.4, RP11-760H22.2, and RP3-443C4.2 were identified as protective factors associated with better survival. Among these risk factors, the lncRNA PRR34-AS1 has been reported to aggravate the progression of hepatocellular carcinoma [27], GLIS3-AS1 has been found to correlate with the poor prognosis of intraductal papillary mucinous neoplasms [28], and LINC01224 is reported to modulate malignant transformation in colorectal cancer, gastric cancer, ovarian cancer, and hepatocellular carcinoma [29-32]. However, FGF14-AS2 functions as a favourable prognostic biomarker in various human malignancies, including breast cancer and colorectal cancer [33,37-39].
In this study, MIR210HG was identified as a protective factor, although it has been reported to promote tumour progression in endometrial cancer, non-small cell lung cancer, triple-negative breast cancer, cervical cancer, colorectal cancer, and hepatocellular carcinoma [34-41]. Moreover, LBX2-AS1 has been identified as an unfavourable prognostic biomarker in colorectal cancer, ovarian cancer, glioma, and gastric cancer [42,43]. The other lncRNAs, namely RP11-440D17.3, AF131215.9, AC144831.1, HOXB-AS3, ATP2A1-AS1, AC092580.4, RP11-760H22.2, and RP3-443C4.2, were studied for the first time in this research. Nevertheless, more studies are warranted to explore their functions in EC prognosis. In this study, we found 78 lncRNAs by screening the expression of lncRNAs among cases with different mutation numbers. These lncRNAs were confirmed to be correlated with genomic instability, which was verified through hierarchical cluster analysis, mutation counts, and differential analysis of driving genes responsible for genomic instability. The prognostic value of the 78 lncRNAs was then assessed, and a risk factor score formula composed of 15 lncRNAs was constructed. GILncSig was confirmed as an independent prognostic predictor; patients with a high risk score were found to often have an unfavourable prognosis. Taken together, GILncSig, as a genome instability-derived lncRNA-based gene signature, was shown to stratify patients into high-risk and low-risk groups with significantly different outcomes and was validated as a prognostic factor independent of other patient characteristics. Additionally, we found a remarkable correlation between the risk score in patients with EC and the tumour mutation pattern, with a high risk score correlating with high mutation counts as well as genomic instability. Notably, in different clinical subgroups, risk scores markedly correlated with EC prognosis. These results indicated that the risk factors identified in this study could be promising markers for prognosis prediction and genomic instability in patients. Finally, a nomogram combining the risk factors with tumour staging was constructed in the training set, which further improved the performance and accuracy of the prediction model. Although we identified GILncSig as a factor for predicting prognosis in EC, our study still has some limitations. Firstly, we only used the data in the TCGA EC database; therefore, more independent data sets are needed for further verification. Secondly, RP11-440D17.3, AF131215.9, AC144831.1, HOXB-AS3, ATP2A1-AS1, AC092580.4, RP11-760H22.2, and RP3-443C4.2, which are associated with genomic instability and related to the prognosis of EC, were reported here for the first time. Therefore, further studies are required to clarify their roles in EC. Thirdly, more biological experiments are warranted to verify and investigate the mechanism of GILncSig in genome stability. Currently, our results are being validated in clinical trials, and our conclusions will be verified in follow-up studies.
2022-05-07T06:23:12.381Z
2022-04-11T00:00:00.000
{ "year": 2022, "sha1": "7310a68c3051313016ae615baf1db8ea22572e5b", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "caa762a964eb1b47e34f1ef66a72b64cbdd9522f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249084138
pes2o/s2orc
v3-fos-license
Medical Students’ Knowledge and Attitude Towards Artificial Intelligence: An Online Survey This study aimed to investigate the attitudes of Jordanian medical students regarding Artificial Intelligence (AI) and Machine Learning (ML), and to estimate the level of knowledge and understanding of the effects of AI among medical students. Nine hundred medical students from six universities in Jordan participated in this survey. The participants were asked to fill out an electronic pre-validated questionnaire built with Google Forms, and the forms were distributed via social media. The questionnaire included Likert-scale and dichotomous questions. 89% of the students believed in the importance of AI in the medical field, and 71.4% believed in the benefit of teaching AI for the medical career. 47% of the students had an understanding of the basic principles of AI, and 68.4% believed that it should be mandatory for medical students to receive teaching on AI. Statistically, students who received teaching/training in AI were more likely to consider radiology as a career given the advancement in AI (p < 0.001). Medical curricula should keep pace with these advancements; adding courses and training related to artificial intelligence and machine learning to the study plan should be considered.
INTRODUCTION
Artificial intelligence (AI) simulates human attitudes and abilities using computers and technology by teaching machines how humans think, behave, and react in different situations [1]. AI has a significant effect on the development of many sectors, including economics, manufacturing, education, and health [2]. AI is improving quickly, and its application in the field of medicine is increasing [2]. It is becoming increasingly popular in many medical fields, including ophthalmology [3], dermatology [4], and pathology [5]. Machine learning (ML) is a division of AI concerned with teaching machines to classify different medical illnesses based on images, sounds, or any other data source [6]. ML can also help in diagnosis, finding treatment options, smart health records, and many other applications [1]. AI and ML will play an essential part in enhancing medicine in the future [2] and will support the future needs of medicine by analyzing the enormous amounts and various forms of data that patients and healthcare institutions record at every moment [7]. In the near future, doctors can be expected to encounter patients in very different health care settings compared with the present, and hence medical education must evolve [6]. Ubiquitous and digitalized health care systems allow both doctors and patients to obtain biomedical information easily [7]. In addition, advanced medical technologies will lead doctors to encounter a growing number of less active patients with chronic conditions and comorbidities owing to extended life spans [8]. Exponentially expanding medical knowledge requires doctors to update, not merely review, what they know and to select the best information from an excess of alternatives. AI can reduce the burden on doctors amid the accompanying disruption of digitized information and can improve their capacity to analyze [9]. Consequently, the non-analytical, humanistic aspect of medicine will be more emphasized, since it is difficult to replace with technology.
Hence, collaboration between physicians and machines has the greatest potential to improve clinical decision-making and patient health outcomes [10,11]. Moreover, it is essential to have medical education about AI and ML because future physicians will deal with patients in health care settings different from the present ones [3,4]. In the United Kingdom, a national governmental review is focusing on the use of new digital health approaches, including artificial intelligence [6]. However, the actual level of knowledge and training among medical staff remains unknown in many countries, including Jordan [7]. In Canada, medical students were less keen to consider radiology as a career because of the fear of being replaced by robots [8], while in Germany only around one third believed that AI would replace radiologists [9]. Students need courses that introduce the importance of AI in their work and show how it can save time and effort [9]. Furthermore, having knowledge about AI may be beneficial for discovering actual disease rates and finding suitable, powerful medications for difficult diseases [2,3]. The aim of this study is to estimate the level of knowledge about AI and ML among medical students in Jordanian universities. Additionally, this study investigates the different views about artificial intelligence, whether artificial intelligence will influence students' careers, and whether AI is considered a positive addition to this field or a threat that could replace physicians.
METHODS
A total of nine hundred participants out of 9200 medical students from six universities in Jordan answered the questionnaire. The six universities were Al-Balqa' Applied University, Hashemite University, Jordan University, Jordan University of Science and Technology, Mutah University, and Yarmouk University. A validated and previously published electronic questionnaire [9], built with Google Forms and published online, was used in this work and was sent to participants through social media applications (medical students' groups on Facebook, WhatsApp, and Instagram). Students were required to fill in their university student email addresses, and only those with validated university email addresses were able to complete the questionnaire. Ethical approval was obtained from the University of Jordan (number 2020/300), and all experimental protocols were approved by the institutional review board (IRB) committee of Jordan University Hospital. Written informed consent was confirmed with individual participants at the beginning of the survey. The responses were subsequently anonymized to protect privacy and confidentiality, and participants were informed about this. The questions were constructed and phrased to be easily understood. Moreover, a few questions were added at the beginning, covering basic information about the participants, their current specialization, their interest in a future specialization, and their year of study. The definitions of artificial intelligence and machine learning were explained at the start of the questionnaire as follows: artificial intelligence is a technology that enables a machine to simulate human reactions, and machine learning is a type of artificial intelligence which allows machines to automatically learn from past data without programming.
The rest of the survey contained 21 five-point Likert questions (1 = strongly disagree to 5 = strongly agree), whereby participants rated their agreement with a presented statement related to their current attitudes towards AI, their career intentions towards radiology, their current understanding of AI, and their confidence in using AI tools in a routine and critical manner following graduation. Dichotomous questions were used to determine whether participants had received training on AI and whether this teaching formed a compulsory part of their curriculum. Statistical analysis was performed using the IBM SPSS program, version 26. The analysis included frequencies, percentages, charts, cross-tabulations, Likert-scale relationships, chi-square tests, and Wilcoxon rank-sum tests. Simple descriptive statistics were presented as percentages. Comparisons were made to find relationships between receiving teaching/training in AI/ML and views on the importance of AI/ML in the healthcare sector and on considering radiology as a future career. A p-value of 0.05 or less was considered significant; otherwise, the result was considered non-significant.
RESULTS
A total of 900 responses were received from medical students with a mean age of 21.34 ± 2.43 years. They were from the six universities with an accredited program for teaching medicine in Jordan. The sample comprised males (52.2%, n = 470) and females (47.8%, n = 430). Students from various years of study were included: 31.8% (n = 286) were from the 6th year, 7.8% (n = 70) from the 5th year, and the rest were equally distributed among the remaining years of study. The majority of the students were interested in surgery (16.4%, n = 147), followed by internal medicine (13%, n = 117); the rest of the percentages are shown in Fig. (1) below. Most of the participants (77.4%, n = 697) believed that artificial intelligence would play an important role in health care, with only a minority who did not agree or were neutral. The majority (around 85%, n = 765) of respondents had heard about artificial intelligence or machine learning, and around half of the participants (51.8%, n = 466) had read articles about artificial intelligence or machine learning in the last 2 years. Concerning attendance of courses about artificial intelligence/machine learning and data science in the last 5 years, the majority had never attended any course (78.4%, n = 706), followed by participants who had attended one course only (11.6%, n = 104). The courses attended were personal efforts of the students, as courses related to AI/ML were not part of any medical school curriculum. With regard to the question on the importance of AI in healthcare, an overwhelming majority of the respondents, 89% (n = 801), selected strongly agree or agree, whilst 1.8% (n = 16) selected disagree or strongly disagree, and the remaining 9.2% (n = 83) were neutral. About the likelihood of having a career in radiology given the advancement of AI, since radiology is one of the first medical divisions to use ML/AI, nearly half of the respondents, 47% (n = 423), selected strongly agree or agree, whilst 17% (n = 153) selected disagree or strongly disagree, and the remaining 36% (n = 324) were neutral. More than half of the respondents believed that AI would be replacing specialists in the future: 55.8% (n = 502) selected strongly agree or agree, whilst 25% (n = 225) selected disagree or strongly disagree, and the remaining 19.2% (n = 173) were neutral.
Regarding the question about understanding the basic principles of AI, most students had basic knowledge to a variable degree: about half of the respondents, 47% (n = 423), selected strongly agree or agree, whilst 25% (n = 225) selected disagree or strongly disagree, and the remaining 28% (n = 252) were neutral. With regard to being comfortable with the nomenclature related to AI, nearly half of the respondents, 45.5% (n = 410), selected strongly agree or agree, whilst 16% (n = 144) selected disagree or strongly disagree, and the remaining 38.5% (n = 346) were neutral. Fig. (2) shows the responses for the group of questions on AI understanding and importance. With regard to understanding AI limitations, the majority of the respondents, 63.4% (n = 571), selected strongly agree or agree, whilst 15.6% (n = 140) selected disagree or strongly disagree, and the remaining 21% (n = 189) were neutral. About the benefits of teaching AI in the medical career, the majority of the respondents, 71.4% (n = 643), selected strongly agree or agree, whilst 7.8% (n = 70) selected disagree or strongly disagree, and the remaining 20.8% (n = 187) were neutral. A majority of students believed that there is a mandatory need for medical students to receive teaching in AI: around two-thirds of the respondents, 68.4% (n = 616), selected strongly agree or agree, whilst 10.6% (n = 95) selected disagree or strongly disagree, and the remaining 21% (n = 189) were neutral. Regarding confidence in using basic healthcare AI tools if required, most respondents, 61.6% (n = 554), selected strongly agree or agree, compared to 19.6% (n = 176) who selected disagree or strongly disagree; the remaining 18.9% (n = 170) were neutral. When asked whether they were likely to have a better understanding of the methods used to assess healthcare AI algorithm performance after graduation, over half of the respondents, 56.2% (n = 506), selected strongly agree or agree, whilst 17.8% (n = 160) selected disagree or strongly disagree, and the remaining 26% (n = 234) were neutral. Regarding possessing the knowledge needed to work with AI in routine clinical practice after graduation, just over half of the respondents, 54% (n = 486), were comfortable with using AI, selecting strongly agree or agree; however, 22% (n = 198) selected disagree or strongly disagree, and the remaining 24% (n = 216) were neutral. Furthermore, more than 60% of participants believed that students should receive teaching about artificial intelligence. Fig. (3) shows the responses for the group of questions on combining AI with healthcare and the need for courses during the medical years of study.
Fig. (2). Summary of questions relating to Artificial Intelligence understanding and importance.
With regard to the participants' opinion on whether AI/ML will drastically change and revolutionize the medical field within 10 years, the majority of the respondents, 78.8% (n = 709), believed that it would, choosing strongly agree or agree, whilst 8.8% (n = 79) selected disagree or strongly disagree, and the remaining 12.4% (n = 112) were neutral. Regarding the belief that AI/ML will be replacing doctors in the future, most of the respondents, 65.8% (n = 592), disagreed, selecting strongly disagree or disagree, whilst 19.4% (n = 175) selected agree or strongly agree, and the remaining 14.7% (n = 133) were neutral.
Regarding whether AI/ML discouraged participants from going into medicine, the majority of the respondents, 60% (n = 540), disagreed, selecting strongly disagree or disagree, whilst 18.8% (n = 169) selected agree or strongly agree, and the remaining 21.2% (n = 191) were neutral. Regarding the statement that AI/ML will eventually replace human doctors, the majority of the respondents, 69.6% (n = 626), selected strongly disagree or disagree, whilst 17.2% (n = 155) selected agree or strongly agree, and the remaining 13.2% (n = 119) were neutral. When asked about improving their performance through advanced personal AI/ML knowledge, more than half of the respondents, 60.6% (n = 545), agreed, selecting strongly agree or agree, whilst 15.4% (n = 139) selected disagree or strongly disagree, and the remaining 24% (n = 216) were neutral. About the demand for AI/ML education in medical school or residency, the majority of the respondents, 62.8% (n = 565), selected strongly agree or agree, whilst 5.8% (n = 52) selected disagree or strongly disagree, and the remaining 31.4% (n = 283) were neutral. Fig. (4) shows the responses to the questions relating AI to medical study and its effects on future learning. In the rating-scale questions, about half of the students (51.6%, n = 464) rated the question about being self-perceived novices in AI/ML with 3 or more. About planning to advance their AI/ML knowledge to improve their performance as future physicians, the majority were keen on doing so: 33.8% of participants (n = 304) rated the question 4, 28% (n = 252) rated it 3, 15.6% (n = 140) rated it 5, 12.2% (n = 110) rated it 2, and 10.4% (n = 94) rated it 1. On how enthusiastic students were to be involved in AI/ML research, the majority were interested and gave the question a high rating: 41.8% (n = 376) gave 4 or 5 points, followed by 30.4% (n = 274) who rated it 3, and the rest (27.8%, n = 250) rated it 1 or 2. Regarding the importance of integrating educational material about AI/ML into the medical school curriculum, the majority gave a high rating: 45.8% (n = 412) gave 4 or 5 points, followed by 28.2% (n = 254) who rated it 3, and the rest (26%, n = 234) rated it 1 or 2. With regard to the importance of clinical skills and knowledge training about AI/ML in residency, the majority believed in its importance and gave a high rating: 42.2% (n = 390) gave 4 or 5 points, followed by 28.8% (n = 260) who rated it 3, and the rest (27.8%, n = 250) rated it 1 or 2. Fig. (5) shows the summary of the assessment questions about Artificial Intelligence/Machine Learning, future knowledge acquisition and advancements. However, only 18.4% (n = 166) of the students had received teaching/training courses in AI, while the majority (81.6%, n = 734) had never taken AI courses. Of the 166 students who had taken AI courses, 50% (n = 83) thought that teaching/training on AI/ML should be a compulsory part of their medical degree. Students who had received teaching/training in AI rated the courses as extremely useful (20.6%, n = 34), very useful (39.1%, n = 65), or somewhat useful (29.3%, n = 48); however, 7.8% (n = 13) did not find them useful and 3.2% (n = 6) did not find them useful at all. Students who received teaching/training in AI were more likely to agree on the important role of AI in healthcare (Wilcoxon test, p = 0.026).
Students who received teaching/training in AI were more likely to consider radiology as a future career, given the advancement in AI (Wilcoxon test, p < 0.001).
Fig. (4). Responses to the questions relating AI to medical study and its effects on future learning.
DISCUSSION
Artificial intelligence and machine learning will have a vital role in the future of different medical fields [1], and most medical students recognize the importance of AI and ML in improving medical applications [8,9]. This study aimed to assess the knowledge of medical students in Jordanian universities about machine learning: 89% of the participants had previously heard about artificial intelligence and its importance (77.4%), and around half understood its basic principles (47%) and were familiar and comfortable with its nomenclature (45.5%). However, most of them had not read articles or attended courses related to artificial intelligence. This could explain the fact that about half of the participants believed that AI will replace physicians in the future. Education about the positive aspects of machine learning may help shape correct perceptions and give students and health care practitioners the ability to apply artificial intelligence selectively, with methods to avoid the negative aspects of this application [2]. In our cohort, around half of the students would not choose radiology as a future career. The negative influence on radiology recruitment secondary to advancement in AI was previously shown by Gong et al. in a Canadian cohort [8], where one-sixth of medical students who were interested in radiology would not choose to study radiology as a future career. The Canadian study identified the main reason as students' belief that AI will replace radiologists in the future [8], a misconception which is also common in our cohort. Similar results were found in a survey of United Kingdom medical students [7]. On the other hand, a separate study by Pinto dos Santos et al. demonstrated that in their cohort the majority of students (83%) did not believe that AI would replace radiologists [9]. This belief may be the result of popular media and famous figures from the computer sciences who suggested that radiologists will not be replaced by machines in the future because of AI [10]. We believe that proper education about the potential uses of AI and its limitations should be presented to students in a clear way to avoid this misconception. In our cohort, most of the participants had an overall good level of knowledge about AI and ML. This was demonstrated by three basic questions about understanding the principles of AI, familiarity with the associated nomenclature, and basic understanding of the current limitations of AI. However, most of them had not read about or attended any related scientific activity, and none of them had training as part of their official studies. A similar survey conducted on UK students demonstrated that less than half of students had some understanding of AI [7]. Pinto dos Santos et al. [9] showed that German students had an overall low level of knowledge about AI, with students stating that they acquired it from mainstream media rather than university teaching. They also highlighted that students who were more knowledgeable about AI were less afraid of working with technology, which is similar to our finding [9].
It is interesting to note that most medical students in our cohort believed that AI should be compulsorily integrated into the medical school curriculum, and a similar majority believed it would be beneficial for their future. Our survey found a lack of teaching material concerning AI/ML in Jordanian medical schools, despite the fact that most students believed in its importance; a similar finding was reported by Sit et al. in UK medical schools [7]. Furthermore, students who received training in AI/ML were significantly more likely to believe in the importance of the role of AI/ML in the future of medicine. It is widely accepted that AI will likely have an integral part in the future practice of many medical fields [1-5]. As a result, it is inevitable that AI and other digital tools will be integrated into clinical practice, regardless of specialty. A study from Korea suggested that Korean doctors and medical students believe in the role of AI in the medical field (83%, 558/669). Additionally, the majority of physicians surveyed thought that AI will not replace medical staff in the future (64.6%, 432/669) [11], which is consistent with our results. It is vital to equip our future physicians with sufficient knowledge, and future doctors must possess the ability to use computers and software in a way that supports evidence-based medicine [7,8].
LIMITATIONS
This work has limitations. First, participants were not asked background questions about AI, and each student may have had a different understanding of AI. Second, there is a possible selection bias, as students who were more motivated and who might hold more positive attitudes completed the questionnaire. However, the responses came from students across different years of study, and the total number of participants was higher than in previous similar works.
CONCLUSION
Medical students in Jordan are prepared to accept AI applications in clinical medicine and healthcare systems, and they show little resistance to AI in medical education. It is important to provide these students with training courses about AI and ML to support their full engagement with medical study. Students are aware of AI and ML, which emphasizes that new generations care about technological advancement in the medical field. In conclusion, it is important that medical students receive a good education about AI and ML advances in the medical field and see their applications and the new trends in diagnosis and treatment. A need emerges to link medical concepts with advancements in technology. AI and ML will pave the way for new and modern healthcare trends. Therefore, it is essential to create more viable curricula, and medical teachers and clinical instructors ought to take these attitudes into consideration.
AUTHORS' CONTRIBUTIONS
All authors have equally contributed to collecting the data, analyzing the data, reviewing the literature, writing the first draft and revising and approving the final draft.
LIST OF ABBREVIATIONS
AI = Artificial Intelligence
ML = Machine Learning
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
The manuscript has been approved by the Institutional Review Board of the Jordan University Hospital, Amman, Jordan (number 300/2020), and all participants agreed to participate. All methods were performed in accordance with the relevant guidelines and regulations. Informed consent was confirmed with individual participants at the start of the survey.
HUMAN AND ANIMAL RIGHTS
No animals were used in this research. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1975 Declaration of Helsinki, as revised in 2013.
CONSENT FOR PUBLICATION
Informed consent was obtained from all participants.
STANDARDS OF REPORTING
STROBE guidelines were followed.
AVAILABILITY OF DATA AND MATERIALS
All data are available upon request from the corresponding author.
FUNDING
None.
2022-05-27T15:10:18.405Z
2022-05-24T00:00:00.000
{ "year": 2022, "sha1": "5049350e623a6d612cdac37b1c0ec2e8e0c68942", "oa_license": "CCBY", "oa_url": "https://openpublichealthjournal.com/VOLUME/15/ELOCATOR/e187494452203290/PDF/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2b761316e0ea5e977083b083c2afd7a33945ad89", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
235356784
pes2o/s2orc
v3-fos-license
ShenLing BaiZhu San alleviates Ulcerative Colitis in Rats by Regulating Gut Microbiota
Daxing Gu, Shanshan Zhou, Lili Yao, Ying Tan, Xingzi Chi, Dayou Shi, Shining Guo, Cui Liu (liuc@scau.edu.cn), South China Agricultural University College of Veterinary Medicine
Background
Gut microbiota is considered to be a critical factor in driving ulcerative colitis (UC) [1,2,3], which is characterized by an abnormal microbiota leading to disruption of the flora balance, decreasing the complexity of the intestinal microbial ecosystem [4]. At the same time, UC is thought to be caused by an imbalance between the intestinal microbiota and mucosal immunity [5]. In UC patients, the composition and functional diversity of the intestinal microbiota and the stability of intestinal bacteria are reportedly destroyed [6]. Furthermore, specific Firmicutes decreased, yet Bacteroides and facultative anaerobic bacteria increased [7]. Studies have shown that traditional Chinese medicine can markedly modulate the composition of the gut microbiota and the gut microenvironment [8,9,10]. Meanwhile, the gut microbiota is essential for the metabolism of traditional Chinese medicine in vivo [11,12]. Shenling Baizhu San (SLBZS), originating from the Song Dynasty "Taiping Huimin Mixing Agent", is composed of Panax Ginseng, Poria cocos, Atractylodes macrocephala, Dioscorea opposita, Dolichos Lablab, Semen Nelumbinis, Semen Coicis, Fructus Amomi, Platycodon grandiflorus, and Glycyrrhiza uralensis Fisch, and is used for weakness of the spleen and stomach [13]. Modern pharmacological studies have revealed that many components of SLBZS have anti-inflammatory activities. Ginseng polysaccharides, one of the constituents of Panax Ginseng, improved intestinal metabolism and absorption of ginsenosides [14]. Furthermore, ginsenoside Rg1, also one of the main constituents of Panax ginseng, and its metabolites could inhibit colitis [15]. 16α-hydroxytrametenolic acid from Poria cocos improved intestinal barrier function [16]. Yam polysaccharide from Dioscorea opposita reduced inflammation in the rat model of colitis induced by TNBS [17]. Moreover, it has been reported that SLBZS, applied as a whole formula, can treat UC significantly [13]. Besides, research shows that SLBZS could regulate the pathogenesis of UC through reduction of inflammatory cytokines, inhibition of pyroptosis and protection of colonic barrier integrity [18,19]. Currently, there is no uniform conclusion on the effect of UC on the gut microbiota, and how the traditional Chinese medicine SLBZS treats UC has not been theoretically explained. Using 2,4,6-trinitrobenzene sulfonic acid (TNBS) to establish a UC model is a common modelling method [20]. In this study, TNBS was used to induce the UC model to evaluate the efficacy and safety of SLBZS in the treatment of UC. Fecal samples were then collected after 7 days to identify changes in the structure and diversity of the gut microbiota in response to SLBZS treatment for the alleviation of UC.
Furthermore, the levels of serum inflammatory factors and the activity of antioxidant enzymes were measured, and the pathological changes of the colon in UC rats were observed.
Materials and Methods
Preparation of TNBS and SLBZS. TNBS was purchased from Sigma-Aldrich Co., Ltd. TNBS was dissolved in 50% ethanol solution to prepare a 5% TNBS ethanol solution before use. The traditional Chinese medicine prescription SLBZS, purchased from Beijing Tongrentang (Lot no. 16101034), comprises Panax Ginseng, Poria cocos, Atractylodes macrocephala, Dioscorea opposita, Dolichos Lablab, Semen Nelumbinis, Semen Coicis, Fructus Amomi, Platycodon grandiflorus, and Glycyrrhiza uralensis Fisch, and was dissolved in distilled water before use.
Animals. Male SD rats (80-100 g) were purchased from the Center of Experimental Animals of Southern Medical University (approval number: SCXK 2016-0041). The rats were housed in plastic cages with the ambient temperature controlled at 22-24 °C, a 12-hour light cycle, and free access to drinking water and food. A mesh floor and white paper made it easy to observe stool specimens, and the bedding was replaced every day during the test. All experimental procedures of this study were approved by the Animal Ethics Committee of South China Agricultural University (Guangzhou). In this experiment, after 5 days of adaptive feeding, 40 male rats were randomly divided into a normal control group (CON), a model group (TNBS), a low-dose group (TNBS-L), a medium-dose group (TNBS-M) and a high-dose group (TNBS-H) (n = 8). For model building with TNBS, rats were weighed one day before modeling and fasted, with access to water, for 12 hours before modeling. Under isoflurane anesthesia, rats in all groups except the CON group were given an enema of 2.5 ml/kg TNBS according to their weight to induce UC, while rats in the CON group were given an enema of an equal volume of physiological saline. The test period was 10 days (observation for two days after modeling, treatment for 8 days). After modeling, rats in the CON and TNBS groups were intragastrically administered 2 ml of physiological saline, rats in the TNBS-L group 2 ml (0.1 g) of SLBZS, rats in the TNBS-M group 2 ml (0.2 g) of SLBZS, and rats in the TNBS-H group 2 ml (0.3 g) of SLBZS. On the 9th day, feces were immediately frozen in liquid nitrogen and stored at -70 °C. On the last day of the experiment, blood samples were collected from the abdominal aorta of anesthetized rats, and serum was then separated.
16S rRNA gene sequence analysis of gut microbiota in fecal samples. Total fecal DNA was extracted using the TIANamp Stool DNA Kit (Beijing, DP328). The extracted DNA was checked by 0.8% agarose gel electrophoresis and quantified with an ultraviolet spectrophotometer. The 16S rRNA V3-V4 region of the selected DNA was then amplified. The 16S rRNA V3-V4 region-specific primers for PCR amplification were 338F (5'-barcode + ACTCCTACGGGAGGCAGCA-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3'). The PCR reaction system (25 µL) was as follows: 0.25 µL Q5 high-fidelity DNA polymerase, 5 µL Reaction Buffer (5×), 5 µL High GC Buffer (5×), 2 µL dNTPs (10 mM), 2 µL template DNA, 1 µL of each primer, and 8.75 µL double-distilled water. The PCR reaction conditions were an initial 98 °C for 30 s, followed by 25 cycles of denaturation at 98 °C for 30 s, annealing at 50 °C for 30 s and extension at 72 °C for 30 s.
The PCR amplification products were identified by electrophoresis, and the amplified products were then recovered and purified using the Axygen DNA Gel Recovery and Purification Kit. The products were sequenced on the Illumina MiSeq sequencing platform.
Bioinformatics and statistical analysis. QIIME was used for Operational Taxonomic Unit (OTU) classification and identification [21,22]. R software was used to draw rarefaction curves and calculate alpha diversity indices, including the Chao1 estimator and the Shannon diversity index. Principal Component Analysis (PCA) and weighted and unweighted Nonmetric Multidimensional Scaling (NMDS) analyses based on UniFrac were carried out for community composition structure at the genus level with R software [23,24]. Relative abundances were analyzed at two taxonomic levels, the phylum and the genus. All data obtained in this study were processed statistically and are presented as means ± SE. Analysis of variance was used for multiple comparisons. SPSS Statistics 20.0 for Windows was used, and P < 0.05 was considered a significant difference.
Histological observation of the colon. The collected colon tissue was fixed in 10% formalin, dehydrated through graded ethanol, embedded in paraffin, and stained with hematoxylin and eosin (HE); for TUNEL staining, sections were washed with PBS, incubated in proteinase K and TUNEL solution, labeled with DAPI, and then observed and imaged under a fluorescence microscope.
Results
Histological changes of colon tissue in each group after treatment. Diarrhea, slight rectal prolapse, and slight colonic swelling were observed in TNBS-induced rats. After administration of SLBZS, the colonic lesions in UC rats were improved. The details are shown in the Supplementary materials (Fig. S1). The results illustrated normal histological features in the CON group, but a large number of infiltrated inflammatory cells and a blurred structure of each layer in the TNBS group; the structure of each layer in the TNBS-H group was clearer and more intact compared to the TNBS, TNBS-L and TNBS-M groups (Fig. 1a). More apoptotic cells were observed in the TNBS group than in the SLBZS groups (Fig. 1b); SLBZS reversed these changes.
Change of the structure of the whole intestinal microbiota of each group after treatment. After high-throughput sequencing, 1,707,839 effective sequences were obtained, including 307,951 in the CON group, 302,534 in the TNBS group, 383,561 in the TNBS-L group, 377,902 in the TNBS-M group and 335,891 in the TNBS-H group. The QIIME software performed OTU partitioning on these sequences based on 97% sequence similarity. Beta diversity analyses, including PCA and weighted and unweighted NMDS based on UniFrac, were used to analyze the similarity of the gut microbiota among different samples. PCA and NMDS analysis revealed (Fig. 2a-c) that the structure of the gut microbiota in the TNBS group differed from the CON group. However, after administration of SLBZS, the structure of the intestinal microbiota in the SLBZS groups was similar to the CON group, particularly in the TNBS-M and TNBS-H groups, which showed that administration of SLBZS could restore the intestinal microbiota structure of UC rats. The Chao1 and Shannon curves (Fig. 2d,e) indicated that the curves tended to flatten when the sequencing depth was greater than 15,000, showing that the sequencing depth was sufficient to reflect the species diversity and basically captured all species in the sample. The alpha diversity indices (Fig. 2f,g)
showed that the Shannon diversity index in the TNBS-L group was lower than in the CON and TNBS groups (P < 0.05), and the Chao1 estimator in the SLBZS groups was higher than in the CON and TNBS groups (P > 0.05).
Taxonomic composition of communities at phylum and genus levels after treatment. The three typical microbiota at the phylum level were Firmicutes, Bacteroidetes and Proteobacteria (Fig. 3a). SLBZS treatment could increase the relative abundance of Firmicutes and Proteobacteria and reduce Bacteroidetes in UC rats (Fig. 3b). At the genus level (Fig. 3c), 6 of 110 genera were typically different after SLBZS treatment. In the TNBS-L and TNBS-H groups, the relative abundance of Prevotella increased to the normal level, while Bilophila, Bacteroides and Helicobacter decreased compared to the TNBS group. In the TNBS-M group, the relative abundance of Oscillospira was close to the normal level (Fig. 3d).
ELISA test for serum inflammatory factors and antioxidant enzymes after treatment. The ELISA test (Fig. 4) showed that the low dose of SLBZS treatment could significantly reduce the level of IL-6 (P < 0.05), and SLBZS treatment could reduce the heightened activity of MPO induced by UC (P < 0.05). The SOD activity of the TNBS-L and TNBS-M groups was elevated compared to the TNBS group (P > 0.05). The CAT activity of the TNBS-L and TNBS-H groups was elevated compared to the TNBS group (P > 0.05).
Discussion
Ulcerative colitis is a chronic inflammatory disease of the colon with an unclear mechanism. Generally, it has been believed that its pathogenesis involves epithelial barrier defects, dysregulated immune responses, and disorder of the intestinal microbiota. In TNBS-induced ulcerative colitis, UC rats present with diarrhea, ulceration of colon tissue, increased IL-6 levels and enhanced MPO activity in serum [25,26]. In this study, all UC model rats demonstrated clinical symptoms of diarrhea. Concurrently, overexpression of inflammatory factors and disturbance of the gut microbiota existed in UC model rats, indicating that the UC model was successfully established in this study. Colonic epithelial cells and the mucosal barrier are strongly related to the pathogenesis of UC. By inhibiting the apoptosis of colonic epithelial cells, mucosal ulceration and mucosal epithelial cell damage in UC rats can be improved [27]. As a famous formula for 900 years, SLBZS has been widely used in the treatment of gastrointestinal diseases. It has been reported that SLBZS might exhibit ameliorating effects against diarrhea by modulating intestinal absorption function as well as mucosal ultrastructure [28]. From the pathological sections of the colon and the changes of the colon in each group of the experiment, the high dose of SLBZS for UC had a certain recovery effect on intestinal villi detachment, the overall structural damage of the colon, inflammatory cell infiltration and apoptosis induction, which was beneficial for regulating the reabsorption capacity and mucosal barrier of the colon. The levels of inflammatory factors and the activities of antioxidant enzymes in rat serum were also ascertained. Previous studies delineated that overexpression of IL-6 could lead to a continuous inflammatory response and in turn promote inflammatory bowel disease [29]. IL-6 can promote inflammation by activating multiple target cells, including antigen-presenting cells and T cells [30]. MPO is mainly located in the azurophilic granules of neutrophils [31] and reflects the inflammatory state to some extent.
Studies have found that reactive oxygen species (ROS) are closely related to UC colonic mucosal tissue damage [32]. Although low levels of ROS are necessary for some physiological processes, excessive ROS are produced in UC patients [33]. SOD and CAT can remove ROS, prevent lipid peroxidation and maintain the stability of the cell membrane. In our study, we observed that the low dose of SLBZS treatment could decrease the levels of IL-6 and MPO (P < 0.05) and increase the activities of SOD and CAT (P > 0.05), while SLBZS treatment at all doses significantly decreased the level of MPO (P < 0.05) compared to the TNBS group. Our study showed that SLBZS could treat UC by inhibiting inflammation and improving antioxidant capacity. An abnormal microbial composition and reduced complexity of the intestinal microbial ecosystem are common features of ulcerative colitis [34]. To monitor the structural modulation of the gut microbiota during UC treatment with SLBZS, high-throughput sequencing analysis of 16S rRNA genes was performed in our study. As reflected by the Chao1 estimator and the Shannon diversity index, we found that SLBZS treatment reduced the Shannon diversity index but increased the Chao1 estimator, suggesting that SLBZS treated UC by reducing the diversity of the gut microbiota while increasing its richness. PCA and NMDS showed that SLBZS treatment could change the structure and composition of the microbiota, bringing the structure of the microbiota closer to the normal state than in the TNBS group. To further analyze differences in the structure of the gut microbiota after treatment, this study also carried out a comparative analysis of the gut microbiota at the phylum and genus levels of each group. We found that the relative abundance of Bilophila, Desulfovibrio and Bacteroides decreased in TNBS-L and TNBS-H compared to the TNBS group, while Oscillospira and Helicobacter increased in TNBS-M, and Prevotella increased in the TNBS-L and TNBS-H groups. Prevotella and Oscillospira are short-chain fatty acid (SCFA)-producing bacteria [35,36], and SCFAs, important nutrients of the colonic mucosa, support colonocyte proliferation and mucosal growth [37]. Undigested dietary fiber, protein and peptides can be fermented by the gut microbiota in the cecum and colon, resulting in the generation of SCFAs. SCFAs can induce intestinal epithelial cells to secrete IL-18, antimicrobial peptides and mucin, and upregulate the expression of tight junctions to regulate the integrity of the intestinal barrier [38]. Meanwhile, SCFAs can induce neutrophil migration and enhance phagocytosis [39]. Bacteroides is involved in metabolism and nutrient absorption in vivo [40], but promotes inflammation in inflammatory bowel disease [41]. Both Desulfovibrio and Bilophila are conditional pathogens, producing H2S from sulfate or sulfur-containing compounds in combination with H2, which has an important relationship with the inflammatory state of the intestinal epithelium (such as in UC) [42]. Studies have shown that animals with Helicobacter removed are more susceptible to colitis than the untreated group, suggesting that Helicobacter has potential protective effects for colitis patients [43]. Therefore, these results further indicate that the amelioration of UC using SLBZS may be mediated by the enrichment of beneficial bacteria that produce SCFAs to protect the colonic mucosa, and a reduction in bacteria such as Bacteroides, Desulfovibrio and Bilophila, to inhibit inflammation.
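To make the two alpha diversity indices discussed above concrete, the following minimal Python sketch computes the Shannon diversity index and the Chao1 richness estimator from a single sample's OTU count vector. In the actual study these values come from QIIME and R on the full OTU table; the counts below are invented purely for illustration.

import numpy as np

def shannon(counts):
    # Shannon diversity H = -sum(p_i * ln p_i) over nonzero proportions.
    # Natural log is used here; some tools use log base 2.
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def chao1(counts):
    counts = np.asarray(counts)
    s_obs = np.count_nonzero(counts)        # observed OTUs
    f1 = np.count_nonzero(counts == 1)      # singletons
    f2 = np.count_nonzero(counts == 2)      # doubletons
    # Bias-corrected Chao1; the (f2 + 1) term avoids division by zero.
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

otu_counts = [120, 85, 40, 7, 3, 2, 1, 1, 1, 0, 0]  # toy OTU counts
print(f"Shannon index: {shannon(otu_counts):.3f}")
print(f"Chao1 estimator: {chao1(otu_counts):.1f}")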
Conclusions
In summary, our study shows that the gut microbiota was structurally remodeled in UC rats after administration of SLBZS, with an increase in beneficial bacteria, such as Prevotella and Oscillospira, and a reduction in harmful bacteria, such as Desulfovibrio and Bilophila. In addition to remodeling the structure of the gut microbiota, SLBZS can inhibit inflammation and enhance antioxidant capacity. However, how the SLBZS-mediated changes in the gut microbiota contribute to the improvement of UC needs further study.
2020-10-28T19:21:37.442Z
2020-10-16T00:00:00.000
{ "year": 2020, "sha1": "4feb922aba3c928d9e9e8cac4238b516f0ceefbb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21203/rs.3.rs-90760/v1", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9b06ce10948680e6b053073760d12b7a4807f3c6", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
708139
pes2o/s2orc
v3-fos-license
Towards Measuring the Stop Mixing Angle at the LHC
We address the question of how to determine the stop mixing angle and its CP-violating phase at the LHC. As an observable we discuss ratios of branching ratios for different decay modes of the light stop $\tilde t_1$ to charginos and neutralinos. These observables can have a very strong dependence on the parameters of the stop sector. We discuss in detail the origin of these effects. Using various combinations of the ratios of branching ratios we argue that, depending on the scenario, the observable may be promising in exposing the light stop mass, the mixing angle and the CP phase. This will, however, require a good knowledge of the supersymmetric spectrum, which is likely to be achievable only in combination with results from a linear collider.
Introduction
The stop sector is often studied at the LHC. Taking the SPS1a′ scenario as an example, a large number of stops and sbottoms will appear in the gluino decay chain. Both, however, can give a similar experimental signature, and consequently one has to perform a simultaneous analysis of the sbottom and stop sectors. This leads to good constraints for the sbottom sector, but the constraints on the stop mixing angle are much weaker [5,17]. Another possible observable is the polarization of top quarks in the decay $\tilde t_1 \to \tilde\chi^0_1 t$. The information on the stop mixing angle can be extracted here from the forward-backward asymmetries in leptonic and hadronic top decays [18]. In this paper we focus our attention on the decays of the light top squark to charginos and neutralinos that are possible in a wide range of scenarios of the Minimal Supersymmetric Standard Model (MSSM):
$\tilde t_1 \to \tilde\chi^+_j\, b$,  (1)
$\tilde t_1 \to \tilde\chi^0_j\, t$.  (2)
The stop and sbottom decays have already been analyzed in the literature in some detail, including radiative corrections [19,20,21,22,23]. In this paper we propose a method to measure the properties of the stop sector using the decays Eqs. (1) and (2) simultaneously. We analyze three scenarios of the MSSM with different gaugino/higgsino composition of the charginos and neutralinos. We show that the branching ratios for these decays can be a sensitive probe of the mixing angle in the stop sector and also of the CP-violating phase. We use a model-independent approach, i.e. without assuming a particular structure for the stop mass matrix, and parametrize the stop interactions in terms of the mixing parameters $\cos\theta_{\tilde t}$ and $\varphi_{\tilde t}$. Since the absolute measurement of branching ratios is expected to be very difficult at the LHC, we propose to exploit another set of observables: ratios of branching ratios, cf. Refs. [5,17]. We argue that by looking at direct stop pair production and the subsequent decays, one can obtain good accuracy in the determination of the mass and the mixing parameters of stops. We briefly discuss possible experimental issues for these processes. Finally, a $\chi^2$ fit is performed to give a range for the expected parameter determination precision. The paper is organized as follows. In Section 2 we give a brief overview of the mixing and the couplings of the stop, chargino and neutralino sectors of the MSSM. In Section 3 we give analytic expressions for the decay widths of the light stop into charginos and neutralinos and analyze their dependence on the stop mixing parameters in the chosen scenarios. Section 4 explains in detail how to determine the stop mixing parameters using stop decays at the LHC for our benchmark models. Finally, we summarize our findings in Section 5.
2.1 Stop sector of the MSSM
In the Minimal Supersymmetric Standard Model the stop sector is defined by the mass matrix $\mathcal{M}_{\tilde t}$ in the basis of gauge eigenstates $(\tilde t_L, \tilde t_R)$. The $2 \times 2$ mass matrix depends on the soft scalar masses $M_{\tilde Q}$ and $M_{\tilde U}$, the supersymmetric higgsino mass parameter $\mu$, and the soft SUSY-breaking trilinear coupling $A_t$. It is given as
$\mathcal{M}^2_{\tilde t} = \begin{pmatrix} m^2_{LL} & m_t\, m^*_{LR} \\ m_t\, m_{LR} & m^2_{RR} \end{pmatrix}$,
where
$m^2_{LL} = M^2_{\tilde Q} + m^2_t + m^2_Z \cos 2\beta\, \big(\tfrac{1}{2} - \tfrac{2}{3}\sin^2\theta_W\big)$, $m^2_{RR} = M^2_{\tilde U} + m^2_t + \tfrac{2}{3}\, m^2_Z \cos 2\beta\, \sin^2\theta_W$, $m_{LR} = A_t - \mu^* \cot\beta$,
and $\tan\beta = v_2/v_1$ is the ratio of the vacuum expectation values of the two neutral Higgs fields which break the electroweak symmetry. From the above parameters only $\mu$ and $A_t$ can take complex values, thus yielding CP violation in the stop sector. The hermitian matrix $\mathcal{M}^2_{\tilde t}$ is diagonalized by a unitary matrix $R_{\tilde t}$ that rotates the gauge eigenstates, $\tilde t_L$ and $\tilde t_R$, into the mass eigenstates $\tilde t_1$ and $\tilde t_2$,
$R_{\tilde t}\, \mathcal{M}^2_{\tilde t}\, R_{\tilde t}^\dagger = \mathrm{diag}(m^2_{\tilde t_1}, m^2_{\tilde t_2})$,
where we choose the convention $m^2_{\tilde t_1} < m^2_{\tilde t_2}$ for the masses of $\tilde t_1$ and $\tilde t_2$. The matrix $R_{\tilde t}$ acts as follows,
$\begin{pmatrix} \tilde t_1 \\ \tilde t_2 \end{pmatrix} = R_{\tilde t} \begin{pmatrix} \tilde t_L \\ \tilde t_R \end{pmatrix}$, $R_{\tilde t} = \begin{pmatrix} \cos\theta_{\tilde t} & \sin\theta_{\tilde t}\, e^{-i\varphi_{\tilde t}} \\ -\sin\theta_{\tilde t}\, e^{i\varphi_{\tilde t}} & \cos\theta_{\tilde t} \end{pmatrix}$,
where $\theta_{\tilde t}$ and $\varphi_{\tilde t}$ are the mixing angle and the CP-violating phase of the stop sector, respectively. The masses are given by
$m^2_{\tilde t_{1,2}} = \tfrac{1}{2}\Big( m^2_{LL} + m^2_{RR} \mp \sqrt{(m^2_{LL} - m^2_{RR})^2 + 4\, m^2_t\, |m_{LR}|^2} \Big)$,
whereas for the mixing angle and the CP phase we have
$\tan 2\theta_{\tilde t} = \frac{2\, m_t\, |m_{LR}|}{m^2_{LL} - m^2_{RR}}$, $\varphi_{\tilde t} = \arg\big(A_t - \mu^* \cot\beta\big)$.
By convention we take $0 \le \theta_{\tilde t} < \pi$ and $0 \le \varphi_{\tilde t} < 2\pi$. It must be noted that $\varphi_{\tilde t}$ is an 'effective' phase and does not directly correspond to the phase of any single MSSM parameter. Instead, the phase has contributions from both $\phi_{A_t}$ and $\phi_\mu$. If $m^2_{LL} < m^2_{RR}$ then $\cos^2\theta_{\tilde t} > \tfrac{1}{2}$ and $\tilde t_1$ has a predominantly left gauge character. On the other hand, if $m^2_{LL} > m^2_{RR}$ then $\cos^2\theta_{\tilde t} < \tfrac{1}{2}$ and $\tilde t_1$ has a predominantly right gauge character.
2.2 Chargino mixing
In the MSSM, the mass matrix of the spin-1/2 partners of the charged gauge and charged Higgs bosons, $\tilde W^+$ and $\tilde H^+$, takes the form
$\mathcal{M}_C = \begin{pmatrix} M_2 & \sqrt{2}\, m_W \sin\beta \\ \sqrt{2}\, m_W \cos\beta & \mu \end{pmatrix}$,
where $M_2$ is the SU(2) gaugino mass parameter. By reparametrization of the fields, $M_2$ can be taken real and positive, while the higgsino mass parameter $\mu$ can be complex, see Eq. (7). Since the chargino mass matrix $\mathcal{M}_C$ is not symmetric, two different unitary matrices are needed to diagonalize it,
$U^* \mathcal{M}_C V^\dagger = \mathrm{diag}(m_{\tilde\chi^\pm_1}, m_{\tilde\chi^\pm_2})$.
The U and V matrices act on the left- and right-chiral $\psi_{L,R} = (\tilde W, \tilde H)_{L,R}$ two-component states, giving the two mass eigenstates $\tilde\chi^\pm_1$, $\tilde\chi^\pm_2$.
2.3 Neutralino mixing
In the MSSM, the four neutralinos $\tilde\chi^0_i$ (i = 1, 2, 3, 4) are mixtures of the neutral U(1) and SU(2) gauginos, $\tilde B$ and $\tilde W^3$, and the higgsinos, $\tilde H^0_1$ and $\tilde H^0_2$. The neutralino mass matrix in the $(\tilde B, \tilde W^3, \tilde H^0_1, \tilde H^0_2)$ basis,
$\mathcal{M}_N = \begin{pmatrix} M_1 & 0 & -m_Z c_\beta s_W & m_Z s_\beta s_W \\ 0 & M_2 & m_Z c_\beta c_W & -m_Z s_\beta c_W \\ -m_Z c_\beta s_W & m_Z c_\beta c_W & 0 & -\mu \\ m_Z s_\beta s_W & -m_Z s_\beta c_W & -\mu & 0 \end{pmatrix}$,
is built up from the fundamental SUSY parameters: the U(1) and SU(2) gaugino masses $M_1$ and $M_2$, the higgsino mass parameter $\mu$, and $\tan\beta = v_2/v_1$ ($c_\beta = \cos\beta$, $s_W = \sin\theta_W$, etc.). In addition to the $\mu$ parameter, a non-trivial CP phase can also be attributed to the $M_1$ parameter: $M_1 = |M_1|\, e^{i\Phi_1}$. Since the complex matrix $\mathcal{M}_N$ is symmetric, one unitary matrix N is sufficient to rotate the gauge eigenstate basis $(\tilde B, \tilde W^3, \tilde H^0_1, \tilde H^0_2)$ to the mass eigenstate basis of the Majorana fields $\tilde\chi^0_i$,
$N^* \mathcal{M}_N\, N^\dagger = \mathrm{diag}(m_{\tilde\chi^0_1}, \ldots, m_{\tilde\chi^0_4})$.
The masses $m_{\tilde\chi^0_i}$ (i = 1, 2, 3, 4) can be chosen real and positive by a suitable definition of the unitary matrix N.
2.4 Couplings of stops to charginos and neutralinos
We now give explicit formulae for the couplings relevant for the decays Eqs. (1) and (2) in the convention of Ref. [24]. In terms of two-component (Weyl) gauge eigenstates, the coupling between the stop, the top and the neutral gauginos/higgsinos is written with $e = g_2 s_W = g_1 c_W$, where $T^3 = \frac{1}{2}\tau^3$ is the SU(2) generator and $\tau^3$ is the Pauli matrix.
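As a quick numerical cross-check of the diagonalization described above, the following Python sketch builds a 2x2 hermitian stop mass matrix and extracts the masses, the mixing angle and the effective phase. All numerical values are illustrative placeholders (not one of the benchmark scenarios), and the sign convention of the extracted phase may differ from the paper's rotation convention.

import numpy as np

# Assumed, purely illustrative inputs [GeV^2 and GeV].
m_LL2, m_RR2 = 4.0e5, 3.2e5          # diagonal entries m_LL^2, m_RR^2
m_LR = 150.0 * np.exp(1j * 0.4)      # stand-in for (A_t - mu* cot(beta))
m_t = 173.0

M2 = np.array([[m_LL2,           m_t * np.conj(m_LR)],
               [m_t * m_LR,      m_RR2]])

# eigh handles complex hermitian matrices; eigenvalues come in ascending
# order, matching the convention m_t1^2 < m_t2^2.
eigvals, eigvecs = np.linalg.eigh(M2)
m_t1, m_t2 = np.sqrt(eigvals)

# Lighter eigenstate: fix the phase so the t_L component is real positive,
# then read off cos(theta_t) and the phase of the t_R component.
v1 = eigvecs[:, 0]
v1 = v1 * np.exp(-1j * np.angle(v1[0]))
cos_theta = v1[0].real
phi = np.angle(v1[1])   # equals +/- phi_t depending on the convention

print(f"m_t1 = {m_t1:.1f} GeV, m_t2 = {m_t2:.1f} GeV")
print(f"cos(theta_t) = {cos_theta:.3f}, phase = {phi:.3f} rad")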
After electroweak symmetry breaking, the couplings for the mass eigenstates $t$, $\tilde t_i$ and $\tilde\chi^0_j$ follow, with the top Yukawa coupling given by $Y_t = m_t/(\sqrt{2}\, m_W \sin\beta)$. We now see that the right squark couples only to the bino and the higgsino components of the neutralino. If the $\mu$ parameter is much larger than the gaugino mass parameters, then the light chargino $\tilde\chi^\pm_1$ and the light neutralinos $\tilde\chi^0_1$, $\tilde\chi^0_2$ are gauginos with small higgsino components. In this case the Yukawa term in Eq. (23) is negligible for stop decays into these states. On the other hand, as can be seen in Eqs. (22) and (23), left squarks couple both to the bino and the wino, however with the bino coupling suppressed by a factor 1/3 due to hypercharge. Therefore, having prior knowledge of the composition of the neutralinos, we can infer the structure of the stop sector by comparing the strength of the stop coupling to different neutralino states. Let us turn now to the coupling between the chargino, stop and bottom quark. The interaction Lagrangian in terms of gauge eigenstates can be written in Weyl notation; after electroweak symmetry breaking and rotation of the fields to their mass eigenstates, the couplings follow, with the bottom Yukawa coupling given by $Y_b = m_b/(\sqrt{2}\, m_W \cos\beta)$. The decay widths contain the kinematic triangle function
$\kappa(x, y, z) = \big( x^2 + y^2 + z^2 - 2xy - 2yz - 2zx \big)^{1/2}$
and the couplings $Q^\pm_{ij}$ given by Eqs. (27), (28). Substituting the explicit matrix elements of Eq. (9), we can make an expansion in terms of the stop mixing angle and the phase. We see explicitly that the dependence on the phase $\varphi_{\tilde t}$ appears only if there is a significant higgsino component ($U_{j2}$ or $V_{j2}$) in the chargino $\tilde\chi^+_j$ we are interested in. Analogously, for the decays to neutralinos we have the widths of Refs. [15,23], with $\kappa(x, y, z)$ given by Eq. (31) and the couplings $Q^0_{ij}$ by Eqs. (22), (23). An interesting feature of Eqs. (30) and (34) is the relative importance of the squared $|Q^L_{ij}|^2 + |Q^R_{ij}|^2$ terms and the left-right interference $\mathrm{Re}\, Q^L_{ij} Q^{R*}_{ij}$ terms. As they are multiplied by mass factors, this is going to be sensitive to the mass splitting between the stop and the $\tilde\chi^+_i b$, $\tilde\chi^0_i t$ pairs. If a given decay mode is close to its kinematic threshold (which will be the case for the heavier neutralinos), the second term becomes dominant, whereas far from the threshold the first term is usually much larger.
Discussion of typical mixing scenarios
In order to analyze the dependence of the decay widths and the branching ratios on the stop mixing angle, we consider three benchmark points of the MSSM. The first scenario is the well-known mSUGRA-inspired SPS1a′ parameter point [4]; in the following we will refer to it as Scenario A. A feature of mSUGRA scenarios is that the charginos and the neutralinos are to a large extent pure gaugino/higgsino states: the lightest neutralino is bino-like, the light chargino and the second neutralino are winos, and the heavy chargino and the heavy neutralinos are higgsino-like. Scenarios B and C are adopted from Ref. [25]. In Scenario B the wino mass parameter $M_2$ and the higgsino mass parameter $\mu$ are of a similar order, giving strong mixing between the wino and the higgsino components of the charginos and the neutralinos. This makes the determination of $\theta_{\tilde t}$ more difficult, since both the left and right couplings of Eqs. (21) and (26) come into play for any value of $\cos\theta_{\tilde t}$. On the other hand, this gives the possibility to study the dependence on the CP-violating phase $\varphi_{\tilde t}$, thanks to the last terms of Eqs. (32), (33), (35) and (36). Finally, Scenario C features a wino mass parameter two times larger than the $\mu$ parameter.
In this case the higgsino-like states will be lighter than the winos, with rather small mixing. In both cases, Scenarios B and C, the lightest supersymmetric particle $\tilde\chi^0_1$ is bino-like. In order to study the possible dependence of the branching ratios on the CP-violating phase in the last two scenarios, we introduce a CP phase for the stop trilinear coupling $A_t$. For all three scenarios we keep the values of the other parameters (i.e. the slepton and squark sectors) as in the SPS1a′ scenario. The values of the gaugino, higgsino and stop sector parameters are collected in Tab. 1, and the nominal values of the masses, mixing angles and branching ratios are listed in Tables 2 and 3. We now discuss the behaviour of the decay widths and the branching ratios with respect to the stop mixing angle and the CP phase in each of the scenarios.
Scenario A - mSUGRA
According to the discussion in Sec. 2.4, for Scenario A we expect that if $\tilde t_1$ is mainly a left stop (i.e. for $\cos\theta_{\tilde t} = \pm 1$) then it will dominantly couple to $\tilde\chi^+_1$ and $\tilde\chi^0_2$ (which are both winos), whereas the coupling to the bino-like $\tilde\chi^0_1$ is suppressed. On the other hand, there are contributions from Eqs. (22), (23), (27) and (28).
Figure 1: Decay widths (left column) and branching ratios (right column) for $\tilde t_1$ in Scenario A as a function of the stop mixing angle $\cos\theta_{\tilde t}$ (upper row) and the stop CP phase $\varphi_{\tilde t}$ (lower row). Black, red and blue lines are for the $\tilde\chi^+_1 b$, $\tilde\chi^0_1 t$ and $\tilde\chi^0_2 t$ final states, respectively.
On top of that, the decay $\tilde t_1 \to \tilde\chi^0_2 t$ is further suppressed by the phase space, since $m_{\tilde\chi^0_2} + m_t = 355$ GeV is only slightly below the light stop mass. As one can see, the decay widths change by an order of magnitude or more. Therefore they are a sensitive probe of the mixing between the left and right stop states. The upper right panel of Fig. 1 shows the dependence of the branching ratios on $\cos\theta_{\tilde t}$, which exhibit a similar behaviour to the decay widths. Although Scenario A does not contain CP phases, we include them here to analyze the sensitivity of the decay widths and the branching ratios. The respective plots can be seen in the lower row of Fig. 1. The most significant change is for the decay to a chargino and a bottom quark. This results from the third term of Eq. (32), which changes sign when varying $\varphi_{\tilde t}$ from 0 to $\pi$, giving destructive interference. Although the dependence on $\varphi_{\tilde t}$ is clearly visible, the constraints on this parameter, as we will see later, will be rather weak. The last discussed scenario features the hierarchy $M_1 < \mu < M_2$. Therefore the light chargino and the neutralinos $\tilde\chi^0_2$, $\tilde\chi^0_3$ are higgsino-like, with small mass differences between them. The lightest neutralino is bino-like as in the previous scenarios. The dependence of the decay widths on the stop mixing angle is shown in the left panel of Fig. 3. The difference in the decay to the chargino for left and right stops is a consequence of the $Y_t$ coupling for the right states in Eq. (25). A similar effect was seen in Scenario B; however, it is now more pronounced due to the higgsino nature of the light chargino $\tilde\chi^+_1$. We also observe an interesting exchange of the decay widths to the heavier neutralinos when the sign of $\cos\theta_{\tilde t}$ changes. This feature arises due to the $Y_t^2\, \mathrm{Re}\, N^{*2}_{j4}$ term in the second line of Eq. (36), which is enhanced both by the large top Yukawa coupling and the higgsino nature of the two neutralinos. Since the neutralinos $\tilde\chi^0_2$ and $\tilde\chi^0_3$ have opposite intrinsic CP parities, cf. Ref. [28], the entries in the neutralino mixing matrix that correspond to $\tilde\chi^0_3$ are purely imaginary.
Therefore the contribution has an opposite sign in the decay width and hence a different behaviour with respect to the sign of $\cos\theta_{\tilde t}$. A similar dependence of the decay widths to $\tilde\chi^0_2 t$ and $\tilde\chi^0_3 t$ on the sign of $\cos\varphi_{\tilde t}$ can be seen in the right panel of Fig. 3. Its origin is the same as in the case discussed above for $\cos\theta_{\tilde t}$. As before, the change in the width of the decay to $\tilde\chi^+_1 b$ is caused by the change in the sign of the last term of Eq. (32) with $\cos\varphi_{\tilde t}$, as $\varphi_{\tilde t}$ is varied from 0 to $\pi$. It is interesting to note that now the branching ratio for the decay to the chargino $\tilde\chi^+_1$ does not show a strong dependence on the phase $\varphi_{\tilde t}$ (as opposed to Scenario B). However, the dependence of the branching ratios for the decays to neutralinos is still well pronounced.
Ratios of branching ratios
As one can see in Figs. 1, 2 and 3, the decay widths can change by up to a few orders of magnitude depending on the stop mixing angle and the CP phase. In addition, the branching ratios are also very sensitive to these parameters. However, since the measurement of decay widths and branching ratios will be difficult at the LHC, we propose to analyze ratios of branching ratios. That means comparing the number of stops decaying to one final state with the number of stops decaying to another final state. With three decay modes possible, we can define the following ratios of branching ratios for each of the Scenarios A, B and C:
$R^{1b}_{1t} = \frac{\mathrm{BR}(\tilde t_1 \to \tilde\chi^+_1 b)}{\mathrm{BR}(\tilde t_1 \to \tilde\chi^0_1 t)}$, $R^{1b}_{2t} = \frac{\mathrm{BR}(\tilde t_1 \to \tilde\chi^+_1 b)}{\mathrm{BR}(\tilde t_1 \to \tilde\chi^0_2 t)}$, $R^{1t}_{2t} = \frac{\mathrm{BR}(\tilde t_1 \to \tilde\chi^0_1 t)}{\mathrm{BR}(\tilde t_1 \to \tilde\chi^0_2 t)}$. (37)
Figure 4 shows the above ratios of branching ratios in Scenario A as functions of $\cos\theta_{\tilde t}$ and the CP-violating phase $\varphi_{\tilde t}$. For Scenario B we have three additional combinations, $R^{1b}_{2b}$, $R^{1t}_{2b}$ and $R^{2t}_{2b}$, due to the decay $\tilde t_1 \to \tilde\chi^+_2 b$ being open. (38) For Scenario C, due to the decay $\tilde t_1 \to \tilde\chi^0_3 t$ being allowed, we have $R^{1b}_{3t}$, $R^{1t}_{3t}$ and $R^{2t}_{3t}$. (39) Because of the higgsino nature of the neutralinos $\tilde\chi^0_2$ and $\tilde\chi^0_3$, they are very close in mass and it might turn out that they are impossible to disentangle at the LHC. Therefore we define two additional ratios, $R^{1b}_{23t}$ and $R^{1t}_{23t}$, by combining the decay modes to $\tilde\chi^0_2 t$ and $\tilde\chi^0_3 t$. (40)
In our analysis we focus on direct stop production $pp \to \tilde t_1 \tilde t_1^*$ in order to have better control over the number of observed stops and to reduce the background due to bottom squarks. In the SPS1a′ scenario the cross section for this process amounts to 3.44 pb at next-to-leading order [10,27], whereas the total SUSY cross section is 60 pb. The cross sections for stop pair production in Scenarios B and C are given in Tab. This gives a relatively clean environment for the observation of direct light stop pair production. The possible final states arise from $pp \to \tilde t_1 \tilde t_1^*$ with each stop decaying via the chargino and neutralino channels of Eqs. (1) and (2). The production process itself can be tagged using a clean decay mode for one of the stops, for instance the decay to $\tilde\chi^0_2 t$ followed by a leptonic neutralino decay and a hadronic top decay. For an integrated luminosity of $L = 100\ \mathrm{fb}^{-1}$ we would have more than 300,000 stop pair production events. Assuming that on average 10% of the charginos and neutralinos decay to leptons in our scenarios [26], and taking into account the hadronic top branching ratio and a selection efficiency of 5%, cf. Ref. [18], one can expect more than 1000 stop events to be observed. Therefore, in our further analysis we will assume that 1000 events have been correctly identified, and we show that even with this amount of experimental data one can still obtain strong constraints on the stop mixing angle and the mass. The other important point we wish to emphasize concerns the branching ratios for the decays of the chargino $\tilde\chi^\pm_1$ and the neutralino $\tilde\chi^0_2$ into leptons.
Although one may expect that the related uncertainty will cancel out to some extent in the ratio $R^{1b}_{2t}$ (as in our scenarios $\tilde\chi^\pm_1$ and $\tilde\chi^0_2$ have a similar gaugino/higgsino composition), this is not true for the other ratios involving decays to the LSP. Because our focus here is on the stop sector, we will assume that the leptonic branching ratios of the charginos and neutralinos are known. However, as this would require better knowledge of the structure of the gaugino/higgsino sectors, it is possible that the measurements from the LHC would have to be supplemented by a linear collider experiment, where charginos and neutralinos can be measured with high precision. This would be an interesting example of LHC/ILC interplay [17], in particular for the scenarios where direct stop production is beyond the kinematical reach of the ILC. A large number of SUSY and SM backgrounds are expected for stop production at the LHC. The most severe Standard Model background, especially for the channels Eqs. (44)-(46), will be $t\bar t$ production. As shown in Ref. [18], for the process Eq. (44) this background can be effectively removed by using appropriate cuts. In any case, the key feature distinguishing the signal from the SM background will be the missing transverse energy, which is much larger for stop production due to the large energy carried by the LSPs. The most important SUSY background process is going to be gluino production with subsequent decays to stops or sbottoms. One important difference between the signal and these backgrounds is the number of b-jets. A signal event always results in exactly 2 b-jets, whereas SUSY backgrounds will typically have 4 b-jets, and this feature can be used to suppress them. Finally, we note that the signal process with a leptonic top decay, e.g. $\tilde\chi^0_1 t \to b\ell + E_{\mathrm{miss}}$, can give the same final state as the decay mode with charginos, i.e. $\tilde\chi^+_1 b \to b\ell + E_{\mathrm{miss}}$. However, we note that this complication does not affect the result of the fit, since it does not introduce any new unknown parameters. The fitted observables would be a linear combination of the original ones, Eq. (37), and the fit would rely on the same set of information. Hence, one can combine the above channels and actually enhance the signal. An important note is that it will not be sufficient to simply remove as much background as possible using the relevant cuts. We will also need to understand with a high degree of accuracy how each individual signal channel will be affected by the backgrounds. Understanding the background well is required because, for each channel we study, the number of background events contaminating the sample will be different. Therefore the pollution due to backgrounds will affect our ratios of branching ratios. The reconstruction efficiency, cuts and triggers will also have a different effect on each channel and will have to be well understood for our measurements to be accurate. We leave a detailed analysis of these effects for our different final states, and the additional uncertainties they may induce, for future work.
Determination of stop mass and mixing angle
In order to show the possible advantages of using ratios of branching ratios for the analysis of the stop sector, we first define the normalized ratios
$R^{ij}(\cos\theta_{\tilde t}) = \frac{R^i_j(\cos\theta_{\tilde t})}{R^i_j(\cos\theta^{\mathrm{nominal}}_{\tilde t})} - 1$, (47)
where $\theta^{\mathrm{nominal}}_{\tilde t}$ is the actual mixing angle in the given scenario and $i, j$ run over all possible channels, i.e. 1b, 1t, 2t etc., cf. Eqs. (37)-(40). According to this definition, $R^{ij}(\cos\theta^{\mathrm{nominal}}_{\tilde t}) \equiv 0$.
Furthermore, we assume that we have n = 1000 well-identified events of stop $\tilde t_1 \tilde t_1^*$ pair production. We then take the expected number of events in each decay mode to be $n_i = n \times \mathrm{BR}_i$. Note that $n = \sum_i n_i$ only if the decays to charginos and neutralinos are the only possible decay channels. However, for our method it is not necessary to measure all possible decay modes. The statistical error for $n_i$ is $\Delta^{\mathrm{stat}} n_i = \sqrt{n_i}$, and the resulting error for the ratios of branching ratios follows from standard error propagation. Before analyzing the expected accuracy of the determination of the stop sector parameters, let us study the possible influence of the gaugino/higgsino sector parameters, taking Scenario A as an example. Precise knowledge of the LSP mass and the mixing angles of the charginos and the neutralinos may only be accessible after results from a linear $e^+e^-$ collider are available. In Fig. 5 we show the dependence of the normalized ratios, Eq. (37), on the gaugino mass parameter $M_2$ and the mass of the LSP, $m_{\mathrm{LSP}} \equiv m_{\tilde\chi^0_1}$. In the first case we keep the mass differences $m_{\tilde\chi^0_2} - m_{\tilde\chi^0_1}$ and $m_{\tilde\chi^\pm_1} - m_{\tilde\chi^0_1}$ fixed, as these are expected to be measured with high precision at the LHC. As can be seen, the value of $R^{1b}_{1t}$ is very stable in both cases, whilst $R^{1b}_{2t}$ and $R^{1t}_{2t}$ exhibit an increase for larger values of $M_2$ and $m_{\mathrm{LSP}}$. This is because both $R^{1b}_{2t}$ and $R^{1t}_{2t}$ include the branching ratio for the decay to $\tilde\chi^0_2 t$, which is close to its kinematic threshold. Therefore, for an increasing $m_{\mathrm{LSP}}$ or $M_2$ (note that $M_2 \simeq m_{\tilde\chi^0_2}$ in Scenario A) we approach the point where this decay becomes impossible. The high sensitivity of the decay width near the threshold means that to use such a decay mode to determine the mixing angle, one would have to know the masses extremely precisely. In this case the ratio of branching ratios is no longer a good observable. Moreover, the branching ratio for such a decay usually becomes very small. In order to analyze the possible accuracy in extracting the mixing parameters of the stop sector, we start with the example of Scenario A. In Fig. 6 we show the behaviour of the normalized ratios of branching ratios, Eq. (47), near the nominal value of the mixing angle $\cos\theta^{\mathrm{nominal}}_{\tilde t}$. Using only one of the three possible ratios, the smallest error, and hence the best estimate, is obtained using the ratio $R^{1b}_{1t}$, which depends on the dominant decay modes $\tilde\chi^0_1 t$ and $\tilde\chi^+_1 b$. For the ratio $R^{1t}_{2t}$ the impact of the error is slightly larger due to the limited statistics. On the other hand, the ratio $R^{1b}_{2t}$ gives the weakest constraints because both $\tilde\chi^+_1$ and $\tilde\chi^0_2$ are winos, hence their couplings to $\tilde t_1$ follow a similar pattern. We assume here that the values of the other SUSY parameters, including the $\tilde t_1$ mass, are known. Using the information from all three possible decay modes, we can constrain not only the mixing angle $\cos\theta_{\tilde t}$ but also the mass of the light stop quark and the CP-violating phase $\varphi_{\tilde t}$. This can be done using a $\chi^2$ fit over the measured ratios, with the errors defined as above; the results are shown in Fig. 7. We find two minima of $\chi^2$ that fit the input data well. In order to resolve this two-fold ambiguity, additional observables will be needed. Assuming that we can pin down the correct solution, we get the following 1-$\sigma$ estimates of the two parameters for 1000 events: $m_{\tilde t_1} = 366^{+3}_{-2}$ GeV, $\cos\theta_{\tilde t} = 0.56 \pm 0.04$, i.e. $\theta_{\tilde t} = 0.98 \pm 0.05$. The better lower bound on the measured mass is a consequence of the earlier discussed small difference between $m_{\tilde t_1}$ and $m_{\tilde\chi^0_2} + m_t$.
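For intuition, the following Python sketch mimics this procedure on toy inputs: it forms the three ratios of Eq. (37) from assumed event counts, propagates Poisson errors, and scans a stand-in parametrization of the branching ratios over $\cos\theta_{\tilde t}$. The parametrization and all numbers are invented for illustration and are not the MSSM widths used in the paper; notably, the quadratic dependence on $\cos\theta_{\tilde t}$ reproduces the two-fold ambiguity mentioned above.

import numpy as np

n = 1000
br_true = {"1b": 0.55, "1t": 0.35, "2t": 0.10}       # assumed nominal BRs
counts = {k: n * v for k, v in br_true.items()}      # expected event counts

def ratio_and_error(ni, nj):
    r = ni / nj
    # Standard Poisson propagation: dR/R = sqrt(1/n_i + 1/n_j)
    return r, r * np.sqrt(1.0 / ni + 1.0 / nj)

def model_ratios(cos_theta):
    # Placeholder parametrization of BRs vs cos(theta_t), not the MSSM result.
    b1b = 0.3 + 0.5 * cos_theta**2
    b1t = 0.6 - 0.5 * cos_theta**2
    b2t = 1.0 - b1b - b1t
    return [b1b / b1t, b1b / b2t, b1t / b2t]

pairs = [("1b", "1t"), ("1b", "2t"), ("1t", "2t")]
meas = [ratio_and_error(counts[i], counts[j]) for i, j in pairs]

grid = np.linspace(-1, 1, 401)
chi2 = [sum(((m - r) / dr) ** 2 for (r, dr), m in zip(meas, model_ratios(c)))
        for c in grid]
best = grid[int(np.argmin(chi2))]
print(f"best-fit cos(theta_t) = {best:.3f}, chi2_min = {min(chi2):.2f}")
# The chi2 curve has two degenerate minima at +/- cos(theta_t),
# echoing the two-fold ambiguity found in the paper's fit.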
In the right panel of Fig. 7 we show the results of the $\chi^2$ fit to the mixing angle and the phase $\phi_{\tilde t}$. As expected, the sensitivity to the CP phase is poor and, taking into account the possible ambiguity in the mixing angle $\cos\theta_{\tilde t}$, the full range of phases remains allowed. The situation changes for Scenarios B and C. We now have six possible ratios in each case; for Scenario B: $R^{1b}_{1t}$, $R^{1b}_{2t}$, $R^{1t}_{2t}$, $R^{1b}_{2b}$, $R^{1t}_{2b}$, $R^{2t}_{2b}$, and for Scenario C: $R^{1b}_{1t}$, $R^{1b}_{2t}$, $R^{1t}_{2t}$, $R^{1b}_{3t}$, $R^{1t}_{3t}$, $R^{2t}_{3t}$. The results of the fit for n = 1000 events are shown in Figs. 8 and 9. We again consider two cases: fitting the mass $m_{\tilde t_1}$ together with the mixing angle $\cos\theta_{\tilde t}$, and fitting the mixing angle together with the CP-violating phase. In each case we assume that the value of the third parameter is known. Charginos and neutralinos now have a significant higgsino component and, as we saw in Figs. 2 and 3, the dependence on the mixing angle is much weaker. Therefore the constraints on the mixing angle and the mass are not as good as in the case of Scenario A. It is interesting to note that, in general, the results of the fit are better in Scenarios A and C (gaugino and higgsino, respectively) than in Scenario B (the mixed case). Consequently we conclude that a scenario with strong mixing between gauginos and higgsinos would be the most difficult to resolve. Analyzing both the mixing angle and the phase, we obtain four allowed regions. Nevertheless, only small regions are allowed for the CP phase, as our observables are more sensitive to it than in Scenario A. Branching ratios are CP-even observables and therefore cannot resolve ambiguities in the CP phase. This shows that for precise measurements in the stop sector one has to use other, CP-sensitive observables, like triple products of momenta [25,29]. Only such a combined analysis of CP-even and CP-odd observables can give an unambiguous determination of the stop sector parameters. Finally, we discuss the results in Scenario C when combining the decay modes $\tilde\chi^0_2 t$ and $\tilde\chi^0_3 t$, as the two close-in-mass higgsino-like neutralinos may be difficult to disentangle at the LHC. The fit is now performed to 5 ratios of branching ratios: $R^{1b}_{1t}$, $R^{1b}_{2t}$, $R^{1t}_{2t}$, $R^{1b}_{23t}$ and $R^{1t}_{23t}$, Eqs. (37) and (40). In Fig. 10 we show the fit to the stop mass and the mixing angle (left panel), and to the mixing angle and the CP phase (right panel). It turns out that we lose sensitivity to the elements of the stop mixing matrix. In such a case additional input, for example from a linear collider, would be needed in order to resolve the properties of the stop sector. Conclusions In this paper we have analyzed the stop sector of the Minimal Supersymmetric Standard Model. In particular, we have looked at the couplings and the decays of the supersymmetric top partners to the charginos and the neutralinos. As stops play an important role in the MSSM, it is crucial to measure their couplings and masses at future colliders in order to understand the underlying model. We have therefore proposed a promising method for the determination of the stop sector parameters at the LHC. A careful analysis of the couplings of scalar tops to electroweak gauginos and higgsinos shows a strong dependence on the mixing angle and the CP-violating phase of the stop sector. This effect arises due to the structure of the electroweak gauge couplings and the Yukawa couplings of the left and right stop states.
We have analyzed three benchmark scenarios with different structures of the gaugino and higgsino sectors, in which the light charginos and neutralinos had gaugino-like, higgsino-like or mixed composition. Analysis of the decay widths and the branching ratios has shown a strong relation between the stop mixing parameters and the decay pattern in each of the scenarios. Next, we have discussed a possible approach to determining the light stop mass, the mixing angle and the CP-violating phase at the Large Hadron Collider. Since stops will be produced in large numbers at this machine, one can hope to learn the stop properties from their decay pattern. As the branching ratios are going to be difficult to measure at the LHC, we propose to analyze the ratios of branching ratios for different decay modes. These observables inherit a strong dependence on the mixing angle from the stop decay widths and can therefore be a sensitive probe of the stop sector. Since they rely only on the relative numbers of stops decaying via various channels, many experimental uncertainties will cancel. In particular, one does not need to control all of the possible decay modes. In fact, as we have shown for the SPS1a′ parameter point, using only two decay modes can give good constraints on the stop mixing angle. Finally, we have performed $\chi^2$ fits to show that the ratios of branching ratios can give strong bounds on the parameters of the stop sector: the mass of $\tilde t_1$, the stop mixing angle $\cos\theta_{\tilde t}$ and the CP-violating phase $\phi_{\tilde t}$. The expected accuracy depends upon the scenario studied, but looks most promising for mSUGRA models. Application of this method will require the study of many possible final states. Therefore, good control of detector effects, like fake rates for leptons and b-jets, and of SM as well as SUSY backgrounds will be needed. It is clear that more detailed experimental studies are needed to assess the validity of the method and its possible accuracy. However, taking into account the importance of the stop sector for our understanding of the supersymmetric model, we think that such a study deserves further attention.
2011-01-06T13:17:54.000Z
2009-09-17T00:00:00.000
{ "year": 2009, "sha1": "f600dcd6bcc4469eefeb35fbf61cdc2717a4be2f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0909.3196", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f600dcd6bcc4469eefeb35fbf61cdc2717a4be2f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
196349946
pes2o/s2orc
v3-fos-license
Gender differences in adolescents’ perceptions toward dentists using the Japanese version of the dental beliefs survey: a cross-sectional survey Background While adult women show greater dental anxiety than adult men, few studies have examined gender differences in adolescent perceptions of dentists. Therefore, this cross-sectional study aimed to evaluate the gender differences in adolescents’ perceptions toward dentists by using the Japanese version of the Dental Beliefs Survey (DBS) and the factor structure of the DBS. Methods We conducted surveys at schools, and 957 Japanese adolescents (403 girls and 554 boys, aged between 13 and 15 years) participated in this study. To assess their confidence in dentists, participants were asked to complete the self-reported, 15-item Japanese version of the DBS. We performed a Welch’s t-test and a one-way analysis of variance to assess differences in DBS scores by gender and age. Factor analysis (principal components, varimax rotation) was used to assess the scale’s factor structure. Results A significant gender difference was observed in the DBS scores (P = 0.018), suggesting that boys exhibit greater negative perceptions toward the behavior of dentists than girls. However, there was no significant difference found among ages. The factor analysis yielded two factors: Factor 1, “trust” (seven items); and Factor 2, “lack of control” (five items). Notably, the factor structure differed according to gender. As such, by including only factors with eigenvalues above 1.0, the DBS for girls comprised “trust” (seven items) and “communication” (three items), while that for boys comprised “lack of control” (six items) and “belittlement” (six items). Conclusions This study identified two factors of differing strengths pertaining to the confidence of Japanese adolescents in dentists. Gender differences in perceptions toward dentists were observed. Accounting for these differences may improve the effectiveness of strategies to lower dental anxiety and foster positive dental beliefs in young patients. Background In young patients, distrust of dentists may often result in dental fear [1]. Negative beliefs about dentists have been shown to be strongly related to frequent cancellations and missed appointments [2]. Another study reported the role of psychological variables, such as being embarrassed by dental fear, that lead to avoidance, deterioration in dental health, and feelings of shame that culminate in reinforced avoidance [3]. However, little attention has been directed toward the subjective perceptions of adolescents regarding the behavior of their dentist. Beliefs about dentists have been assessed by a self-reported questionnaire, the Dental Beliefs Survey (DBS), developed by Milgrom [4]. The purpose of the DBS is to identify to what degree the patient perceives the behavior of the dental professional as contributing to his or her fear or anxiety, and the information obtained from the DBS is useful from both diagnostic and prescriptive standpoints. The survey questions are designed to help dental professionals tailor their approach to best address the specific concern of the patient [4], making it more useful than a standard fear or anxiety questionnaire, as it focuses on the effects of the dentist's specific behavior. Several international studies using the DBS have been conducted in various populations [5-7].
While Milgrom [4] suggested that the original DBS fits a four-factor structure, only a few studies have been conducted to assess its structure in different populations [8]. To the best of our knowledge, no epidemiological studies have been conducted to evaluate Japanese adolescents' confidence in dentists. Further, no study has assessed the factor structure of DBS results of the Japanese adolescent population. Thus, a study assessing dental beliefs in Japanese-speaking adolescents is required. Although adult women show greater dental anxiety than adult men [9,10], evidence regarding the differences in dental fear between boys and girls has been inconsistent. Some studies have reported that girls are more fearful [11,12], while others have indicated no significant difference between boys and girls [13,14]. Schienle et al. [15] studied gender differences in neural correlates of dental phobia. They indicated that, compared to male individuals with dental phobia, female individuals experience less cognitive control and show more avoidance behavior during treatment. Another study showed that, after the completion of the treatment, women remembered more pain and other negative experiences than men [16]. Considering the gender differences in perceptions toward dentists may improve the effectiveness of strategies for lowering dental anxiety in young patients. As such, this study aims to evaluate the confidence in dentists of Japanese-speaking adolescents using a Japanese translation of the DBS, examine the factor structure of the Japanese version of the DBS, and assess the gender differences. We expect girls to report more negative perceptions of dentists than boys and their factor structure of the DBS to differ from that of boys. Participants The participants of this study, individuals aged 13-15 years, were also participants in a school-based cross-sectional survey regarding temporomandibular disorders [17]. The present study included Japanese adolescents (13-15 years old) from a regional survey of 998 students from three junior high schools in Suginami, Tokyo. Of the 23 junior high schools we approached, the administrations of three consented to participate in this study. No schools with intellectually disabled or learning-disabled students were included. Participation in this survey was voluntary and anonymous. A questionnaire concerning perceptions toward dentists was distributed in class; the students were asked to respond to the questionnaire only if they were willing to complete the survey. Junior high schools in Japan are legally obligated to conduct annual oral checkups for school students. All data were collected during the schools' annual oral checkups, held between October and November 2011 at all three schools. Among the 998 students who participated in this study, 41 students missed one or more questionnaire items, and hence their data were excluded. Therefore, the final sample included data of 957 students (Table 1). This sample could be representative of the 10,100 students aged between 13 and 15 who attended junior high schools in Suginami during 2011. The study protocol was reviewed and approved by both the Ethics Committee at the Nippon Dental University School of Life Dentistry (NDU-T2011-21) and the local education authority and conformed to the guidelines of the Declaration of Helsinki. The students and their parents provided their informed consent prior to participation.
Measures The DBS consists of 15 items with a 5-point Likert scale (1: never, 2: a little, 3: somewhat, 4: often, 5: always) with scores ranging from 15 to 75, where 75 indicates maximal negative perception toward dentists. The DBS measures the subjective perceptions regarding the behavior of dentists and the way in which dental care is delivered; it also takes the aspect of control into account ( Table 2). Milgrom et al. [4] noted four separate dimensions of the DBS: "communication," "belittlement," "lack of control," and "trust." Five questions (1, 3, 4, 14, and 15) in the survey represent the dimension of "communication," which explores how well the patient thinks the dentist communicates. Three questions (6, 9, and 10) represent the dimension "belittlement," which examines the respondent's anticipation of how the dentist might view their fear. Three questions (5, 12, and 13) represent the dimension "lack of control," which examines the respondents' beliefs regarding their ability to control the situation while undergoing treatment. Finally, two questions (7 and 8) represent the dimension of "trust," which examines how skeptical the patient is of the dentist. Two questions (2 and 11) are not included in these four dimensions of the DBS; these questions concern fear that the dentist will not perform well due to other factors. To assess their confidence in dentists, participants were asked to complete the self-reported, 15-item Japanese version of the DBS. The questionnaire was originally developed in English, but, as Japanese was the common language among all participants, the questionnaire was translated into plain Japanese by the authors. To confirm that the English and Japanese questionnaires had the same content, the initial Japanese translation was retranslated into English by bilingual faculty members, and the contents of the original English and retranslated English versions were compared to ensure consistency. All versions were also analyzed and compared by the authors, and a final version was obtained. Additionally, to assess the equivalency of the translation, a preliminary study was conducted. The questionnaire was given to eight bilingual adult volunteers, twice under similar conditions. Each volunteer was randomly assigned one version of the DBS, either English or Japanese, and asked to complete it. The volunteers were then asked to complete the other version of the DBS on the following day and under similar conditions, without referring to the previous questionnaire. The Kappa coefficient was used to evaluate the equivalency of the language. Accordingly, 14 of the 15 items on the questionnaire obtained an average Kappa value of 0.61-with two items obtaining a score equal to 1, while others had a Kappa value between 0.40 and 0.77. While it was not possible to calculate the Kappa coefficient for one of the 15 items, seven of the eight volunteers responded with the same answers regarding this item in both versions. These results indicated good equivalency between the two versions of the questionnaire. To determine the repeatability of the Japanese version of the DBS, another preliminary study was conducted. The questionnaire was administered to 12 adult volunteers who did not participate in the former preliminary study, twice under similar conditions. The volunteers were asked to complete the Japanese version of the DBS questionnaire, and, a week later, they were asked to complete the same questionnaire under similar conditions, without referring to the previous one. 
We calculated the intraclass correlation coefficient (ICC), in which subjects and occasions were considered random factors [18]. The average ICC for all questionnaire items was 0.93, indicating that the repeatability of the questionnaire was sound. (A copy of the Japanese version of the DBS is available from the corresponding author for any interested researchers.) Statistical analysis Before performing analyses for group comparisons, the Levene test was used to assess the homogeneity of variance. Descriptive statistics and t-tests were used to compare age according to gender. As the Levene test revealed a significant difference in DBS scores between genders, Welch's t-test was used to compare the DBS scores on the basis of gender. A one-way analysis of variance was performed to assess the differences in DBS scores according to age. The significance level was set at P < 0.05. To assess the factor structure of the Japanese version of the DBS, factor analysis (principal components, varimax rotation) was employed. Factor analysis uses the correlation matrix between items on a scale to determine whether a subset of items is related in such a way as to suggest that they are measuring the general concept of interest. The principal components method extracts factors and retains the maximum amount of common variance possible in the first factor, while subsequent factors keep the maximum amount of the remaining common variance. Factors are always listed in descending order according to the amount of variation they explain; i.e., from the highest (first) factor to the lowest (last) factor. An eigenvalue indicates the amount of variance explained by each factor, and eigenvalues above 1.0 are considered strong enough to be retained. Within each factor, item loading was categorized as follows: > 0.70 excellent, > 0.63 very good, > 0.55 good, and > 0.45 fair [19]. The highest loading in a factor was taken into account for each item. The Kaiser-Meyer-Olkin measure (KMO) was used to determine sampling adequacy. A KMO value of 0.70 or above indicates that factor analysis can be performed [19]. Cronbach's alpha was used to test internal consistency. Reflecting the average intercorrelations of the items with each scale, Cronbach's alpha has been high in previous studies that used the DBS [20,21]. We chose to use Cronbach's alpha as a measure of reliability for subscales based on the results of the factor analysis. All analyses were performed using a statistical software package (IBM SPSS Statistics, version 21, IBM Japan, Tokyo, Japan).
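As a rough, self-contained illustration of the pipeline just described (Welch's t-test on the total score, principal-components extraction with the eigenvalue-above-1.0 criterion, varimax rotation, and Cronbach's alpha), the Python sketch below operates on randomly generated stand-in data. The scores, the gender flags, and the varimax routine (a standard Kaiser-style implementation) are illustrative assumptions; the study itself used SPSS.

import numpy as np
from scipy import stats

def cronbach_alpha(X):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

def varimax(L, tol=1e-6, max_iter=100):
    # varimax rotation of a loading matrix L (items x factors)
    p, k = L.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p))
        R = u @ vt
        d = s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
        d_old = d
    return L @ R

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(957, 15)).astype(float)  # stand-in 5-point item responses
girls = rng.random(957) < 0.421                            # stand-in gender flags

total = scores.sum(axis=1)
t, p = stats.ttest_ind(total[girls], total[~girls], equal_var=False)  # Welch's t-test

Rcorr = np.corrcoef(scores, rowvar=False)       # item correlation matrix
eigval, eigvec = np.linalg.eigh(Rcorr)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
nf = int((eigval > 1.0).sum())                  # retain factors with eigenvalues above 1.0
loadings = varimax(eigvec[:, :nf] * np.sqrt(eigval[:nf]))
print(f"Welch t = {t:.2f} (p = {p:.3f}); factors retained: {nf}; "
      f"alpha = {cronbach_alpha(scores):.2f}")

With real item responses, the per-gender analysis amounts to running the same extraction and rotation on scores[girls] and scores[~girls] separately.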
DBS score by age and gender The mean age of all participants was 14.1 ± 0.8 years. Girls comprised 42.1% of participants (mean age: 14.1 ± 0.8 years), while boys constituted 57.9% (mean age: 14.0 ± 0.8 years). No significant difference in age was found between girls and boys. The total score of the Japanese-language DBS ranged from 15 to 75, with a mean value of 21.3 ± 10.5. The total mean score was 20.3 ± 9.1 for girls and 21.9 ± 11.3 for boys. A significant gender difference was observed in the DBS scores (P = 0.018), suggesting that boys exhibit greater negative perceptions toward the behavior of dentists than girls. However, no significant difference among ages was found in the DBS scores (P = 0.65). The highest ranked items, in descending order, were: "don't feel comfortable asking questions," "don't feel I can stop for a rest," "make me feel guilty," and "worry if dentists are technically competent." However, the ranking of these items differed between girls and boys. The means and standard deviations in the DBS of girls, boys, and all participants are shown in Table 3. Factor analysis in all participants The KMO value was 0.96, and Cronbach's alpha for all 15 items was 0.95. The factor analysis yielded two factors with eigenvalues above 1.0, which collectively accounted for 65.8% of the variance. We labelled these factors as follows: Factor 1, "trust," which corresponded to items 1, 4, 5, 7, 8, 9, and 10; and Factor 2, "lack of control," which corresponded to items 11, 12, 13, 14, and 15 (Table 4). Cronbach's alpha was 0.92 for Factor 1 and 0.88 for Factor 2, which suggests that these factors have good reliability. Factor analysis in girls and boys The KMO value was 0.93 for girls and 0.96 for boys, and the factor structure differed according to gender. Thus, by including only factors with eigenvalues above 1.0, the DBS for girls comprised two factors: Factor 1, "trust" (seven items); and Factor 2, "communication" (three items) (Table 5). The DBS for boys comprised two different factors: Factor 1, "lack of control" (six items); and Factor 2, "belittlement" (six items) (Table 6). Discussion In this study, we obtained a mean DBS score of 21.3 ± 10.5. The score we obtained for Japanese adolescents was lower than those for Swedish [22], Singaporean [1], and Norwegian [5,6] adolescents. Klingberg et al. [23] have argued that particular cultural and social habits, along with the dental care system, may affect the development of dental fear in adolescents. Regarding the dental care system in Japan, annual oral checkups for junior high school students are conducted as per the School Health and Safety Act. Dentists employed by the school conduct oral examinations, which include checking for dental/periodontal conditions and the existence of temporomandibular disorder symptoms, malocclusion, and dental plaque/calculus by using dental mirrors under artificial light. These examinations are non-invasive and of a shorter duration than actual dental treatments. This regular event may contribute to the comparatively low DBS score. In terms of cultural habits, we did not examine the cultural relevance of the DBS questions for the Japanese adolescent sample. Generally, Japanese men are encouraged to remain silent about their emotions, which is a cultural habit that could contribute to our findings, since the DBS asks them about fear and anxiety, and this may be a study limitation. However, in the present study, boys exhibited significantly higher DBS scores than girls. Thus, we consider that the questions were appropriate for the Japanese cultural context and did not need modification. Given the narrow age range (13-15 years) in our study population, no relationship between age and level of dental fear was observed. As past studies have indicated that younger patients are more anxious than older patients [9,24], further study using broader age brackets is needed to confirm the correlation between age and the DBS scores. However, previous studies have also reported no significant differences between the DBS scores of female and male participants [5,21,22,25]. Only one study, on 13- to 15-year-old participants in a population-based sample, showed that boys reported higher DBS scores than girls [1]. Moreover, the DBS has been found to correlate positively with other fear scales, such as the Dental Anxiety Scale and the Dental Fear Survey [7,21,26].
In general populations, levels of dental anxiety were significantly higher in women compared to men [9,10]; thus, we expected that the DBS score would be higher in girls than boys. Contrary to this expectation, boys demonstrated a slightly but significantly higher DBS score than girls in our study. It has been found that participants who have been patients for many years may demonstrate positive attitudes toward their dentist, as well as a higher degree of satisfaction with the style of treatment [21]. In our study, 62 boys (11.2% of the total number of boys) and 30 girls (7.4% of the total number of girls) had never visited a dental clinic (data not shown). However, we were unable to investigate the routine dental visits of the participants, so further research is necessary in this respect. A larger sample composed of participants who routinely visit the dentist and who never visit the dentist would enable such analysis regarding the degree of confidence in dentists. The number of items included in the factor structure was nearly identical to that of the original version of the DBS. In our study, only two dimensions were identified. Factor 1 consisted of seven items (1, 4, 5, 7, 8, 9, and 10), which we labelled "trust" based on the content of items 7 and 8. Factor 2 consisted of five items (11, 12, 13, 14, and 15), which we labelled "lack of control" based on the content of items 12 and 13. These were intended to capture two distinct dimensions concerning patient attitudes and perceptions toward dental care among Japanese adolescents. Interestingly, the gender difference was not limited to the DBS score; the factor structure also differed by gender. Dental anxiety can be acquired after feeling a lack of control in the dental treatment situation [22]. The perception of having control over aversive stimuli has been shown to reduce the stressfulness of the event, while diminished feelings of control increase agitation during stressful situations [1]. Indeed, a study reported that children who perceived a lack of control at the dentist were 13.7 times more likely to report high fear and 15.9 times less likely to return to the dentist willingly [1]. In our study, the boys assumed that "lack of control" and "belittlement" were the most important factors for confidence in dentists. In contrast, the girls assumed that "trust" and "communication" were the most important factors, while "lack of control" was excluded. Lu et al. [27] described that Asian children may develop a greater fear of pain as a result of Asian parents guarding them from challenging situations. Moreover, [28]. These tendencies may hinder communication or cause a lack of control over aversive stimuli at the dental clinic. Indeed, the cultural background in Japan, where it is considered a virtue for men to remain silent in such circumstances, may impact this situation. Women are more likely to express their fears, whereas men may not express their fears as openly as women [24]. Therefore, boys may feel they do not have control over aversive stimuli at the dental clinic. These factors may shape the gender difference in the factor structure. Hence, for boys, strategies to enhance the patient's sense of control, such as making the patient more active in the treatment, signaling the beginning and end of procedures, and providing the opportunity to ask questions, are recommended in the dental environment [4].
In contrast, strategies that enhance the impression of sympathy for patients, such as listening to their concerns, explaining procedures thoroughly, and increasing their feeling of choice, are recommended for girls. Despite our findings, this study does have a limitation. The DBS has been revised and expanded to a 28-item version (the Revised DBS, R-DBS), reflecting increased understanding of the concerns of fearful patients [29]. Additionally, the R-DBS has been translated into a number of languages, and different language versions have been created [30-32]. However, we used the original DBS in this study because the R-DBS has many more questionnaire items than the original and would take more time to complete. As the present study was conducted with another survey regarding temporomandibular disorders, the questionnaire items were limited so participants could answer all items in the allotted time. Consequently, we cannot directly compare the findings of the present study, which uses the DBS score, to other studies using the R-DBS score. A future study using a Japanese translation of the R-DBS is needed to assess gender differences and compare Japanese participants' R-DBS scores with those of other populations. Conclusions This study identified the mean value of the scores of the Japanese-language DBS, as well as two factors of differing strengths pertaining to Japanese adolescents' confidence in dentists. Gender differences in perceptions toward dentists were also observed: boys regard "lack of control" and "belittlement" as important factors for confidence in dentists, whereas girls require "trust" and "communication." As suggested in this paper, accounting for these differences may improve the effectiveness of strategies intended to lower dental anxiety and foster positive dental beliefs in young patients.
2019-07-14T13:31:55.821Z
2019-07-12T00:00:00.000
{ "year": 2019, "sha1": "021596d79e62e67f88d87a1760eee74605f136e0", "oa_license": "CCBY", "oa_url": "https://bmcoralhealth.biomedcentral.com/track/pdf/10.1186/s12903-019-0845-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e0d620a133248398101c9ec543104d7e4c317b78", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
19112226
pes2o/s2orc
v3-fos-license
Imprints of climate forcings in global gridded temperature data Monthly near-surface temperature anomalies from several gridded data sets (GISTEMP, Berkeley Earth, MLOST, HadCRUT4, 20th Century Reanalysis) were investigated and compared with regard to the presence of components attributable to external climate forcings (associated with anthropogenic greenhouse gases, as well as solar and volcanic activity) and to major internal climate variability modes (El Niño/Southern Oscillation, North Atlantic Oscillation, Atlantic Multidecadal Oscillation, Pacific Decadal Oscillation and variability characterized by the Trans-Polar Index). Multiple linear regression was used to separate components related to individual explanatory variables in local monthly temperatures as well as in their global means, over the 1901–2010 period. Strong correlations of temperature and anthropogenic forcing were confirmed for most of the globe, whereas only weaker and mostly statistically insignificant connections to solar activity were indicated. Imprints of volcanic forcing were found to be largely insignificant in the local temperatures, in contrast to the clear volcanic signature in their global averages. Attention was also paid to the manifestations of short-term time shifts in the responses to the forcings, and to differences in the spatial fingerprints detected from individual temperature data sets. It is shown that although the resemblance of the response patterns is usually strong, some regional contrasts appear. Noteworthy differences from the other data sets were found especially for the 20th Century Reanalysis, particularly for the components attributable to anthropogenic forcing over land, but also in the response to volcanism and in some of the teleconnection patterns related to the internal climate variability modes. Introduction Temporal variability within the climate system results from a complex interaction of diverse processes, both exogenous and arising from internal climate dynamics. To identify and quantify the effects of individual climate-forming agents, two complementary approaches are typically employed (e.g., IPCC, 2013): numerical simulations based on general circulation models (GCMs) and statistical techniques. While the statistical methods do not offer the physical insight provided by the GCM-based simulations, they are potentially able to capture relations omitted or distorted within GCMs due to the need for simplified representation of the relevant physical processes. A number of authors have investigated the presence of relations between climate forcings and time series of climate variables by statistical means, often involving multivariable regression analysis or related techniques. The resulting studies typically show a strong link between temperature and anthropogenic forcing (e.g., Pasini et al., 2006; Lean and Rind, 2008; Schönwiese et al., 2010; Rohde et al., 2013b; Canty et al., 2013; Chylek et al., 2014b), although linear change with time is also often used to approximate the long-term temperature evolution (e.g., Foster and Rahmstorf, 2011; Gray et al., 2013; Zhou and Tung, 2013). The imprint of solar activity is usually quite weak in the near-surface temperature series (e.g., Lockwood, 2012, and references therein) and the spatial patterns of the eventual response tend to be quite complex (Lockwood, 2012; Gray et al., 2013; Hood et al., 2013; Xu and Powell, 2013). Major volcanic eruptions
typically manifest as temporary cooling in the globally averaged temperature, although its magnitude differs somewhat among individual temperature data sets as well as between ocean and land (Canty et al., 2013), and the geographic fingerprint of the temperature response is far from trivial (Stenchikov et al., 2006; Driscoll et al., 2012; Gray et al., 2013). Compared to the often pan-planetary reach of the external forcings, major manifestations of internal climate variability modes tend to be more localized, though sometimes with ample projection of weaker influences through teleconnections. Relatively well understood is the El Niño/Southern Oscillation (ENSO) system, dominating in the tropical Pacific, but also affecting various aspects of weather patterns in many regions across the globe and leaving a distinct imprint in globally averaged temperature as well (e.g., Trenberth et al., 2002). The effect of the North Atlantic Oscillation (NAO) is prominent particularly in the areas around the northern Atlantic (e.g., Hurrell et al., 2003). The northern Atlantic is also the primary area of activity of the Atlantic Multidecadal Oscillation (AMO), with potential imprints noticeable in local temperatures as well as their global means (e.g., Tung and Zhou, 2013; Zhou and Tung, 2013; Rohde et al., 2013b; Muller et al., 2013; Chylek et al., 2014b; van der Werf and Dolman, 2014; Rypdal, 2015). A related (pseudo)oscillatory system manifests in the northern Pacific in the form of the Pacific Decadal Oscillation (PDO: Zhang et al., 1997), although its direct link with global temperature seems to be less pronounced than AMO's (e.g., Canty et al., 2013). Other potentially influential variability modes can be identified in the climate system, though their exact mechanisms and effects are not always completely known. Selection and preparation of explanatory variables representing individual climate-forming factors is a critical part of statistical attribution analysis; more details on their choice and specific form in our tests are provided in Sect. 2.1.
Of the descriptors of the climate system, temperature-related characteristics are arguably the most intensely investigated. Over the recent years, various research groups have developed and gradually evolved data sets of near-surface global gridded temperature (including MLOST: Smith et al., 2008; GISTEMP: Hansen et al., 2010; HadCRUT4: Morice et al., 2012; Berkeley Earth: Rohde et al., 2013a, b), which now provide more than a century of mid-to-high resolution data for a substantial portion of the globe. In addition to these temperature analyses, created primarily by interpolation and/or averaging techniques, reanalysis data are also used to approximate past climate. Of particular interest regarding the longer-term variability is the 20th Century Reanalysis (20CR: Compo et al., 2011), currently providing global gridded data from the mid-19th century on. While all these data sets approximate the same historical evolution of the climate system and share much of their basic temporal variability on the pan-planetary scale (e.g., Hansen et al., 2010; Foster and Rahmstorf, 2011; Compo et al., 2013; Rohde et al., 2013b), the respective temperature fields do differ to some, regionally dependent, degree. In this paper, we aim to investigate and compare selected aspects of spatio-temporal variability in several gridded data sets of monthly temperature, introduced in Sect. 2.2, with emphasis on identification of temperature responses attributable to climate forcings and major modes of internal climate variability. Our methodology of attribution analysis is largely based on multiple linear regression, as detailed in Sect. 3. The basic match of temporal variability between the temperature data sets is quantified through linear correlations, with results shown in Sect. 4.1. Presence, magnitude and statistical significance of components attributable to individual explanatory variables in globally averaged temperatures are investigated in Sect. 4.2, including an analysis of potential time-delayed responses. An analysis of the geographical response patterns is then carried out in Sect. 4.3, followed by an assessment of local time-delayed responses in Sect. 4.4 and discussion of the results in Sect. 5. Only the key outcomes of our analysis are presented in the paper itself; additional materials are provided in the Supplement, particularly results derived for shorter sub-periods of the time series studied. Explanatory variables Although many of the statistical attribution studies pursue a similar goal and share much of their basic methodology, substantial diversity exists in the selection of the explanatory factors employed and their specific variants. Here, we used eight predictors with proven or reasonably suspected influence on climate on a global or continental scale, representing effects of various external forcings and climatic oscillations (Fig. 1).
Among the external influences on the climate system, the role of the greenhouse gases (GHGs) is relatively well understood (e.g., IPCC, 2013). Due to their positive contribution to radiative forcing, man-made GHGs are believed to be responsible for much of the near-surface global temperature rise during the later stages of the instrumental period. Anthropogenic influences on climate also manifest through the formation of various aerosols, including sulfates or black carbon, or by the production of tropospheric ozone, although the uncertainties regarding their direct and especially indirect impacts are still profound (e.g., Skeie et al., 2011; IPCC, 2013). Furthermore, due to the limited lifespan of the aerosols, their amounts are highly variable in time and space, unlike the concentrations of the relatively long-lived GHGs. From the perspective of statistical analysis, the often strong temporal correlation of the amounts of GHGs and aerosols is also problematic, making it difficult for a regression mapping to distinguish between their respective effects. For these reasons, anthropogenic aerosol forcings were not directly considered here, and the global CO2-equivalent GHG concentration was used as the sole anthropogenic predictor, in the version provided by Meinshausen et al. (2011; http://www.pik-potsdam.de/~mmalte/rcps/), interpolated onto monthly time resolution. Note that the temperature responses obtained with this GHG-only predictor would be virtually identical to those derived for total global anthropogenic forcing, as further discussed in Sect. 5. The global monthly series of stratospheric aerosol optical depth provided by NASA GISS at http://data.giss.nasa.gov/modelforce/strataer/ (Sato et al., 1993) was employed as a proxy for volcanic forcing. The effects of variable solar activity were characterized through monthly values of solar irradiance, based on the reconstruction by Wang et al. (2005) and obtained from http://climexp.knmi.nl/data/itsi_wls_mon.dat. Extension of the series beyond the year 2008 was done with the rescaled SORCE-TIM measurements from http://lasp.colorado.edu/home/sorce/data/tsi-data/ (Kopp et al., 2005). In addition to the external forcings tied to exogenous factors, temporal variability of the climate system is also shaped by various internal oscillations. The Southern Oscillation index (SOI), provided by CRU at http://www.cru.uea.ac.uk/cru/data/soi/ (Ropelewski and Jones, 1987), was used to characterize the phase of ENSO, the dominant variability mode in the tropical Pacific. The North Atlantic Oscillation (NAO) was represented by its index (NAOI) by Jones et al.
(1997), defined from the normalized pressure difference between Reykjavik and Gibraltar (CRU: http://www.cru.uea.ac.uk/cru/data/nao/). A great deal of attention has recently been devoted to the effects of the Atlantic Multidecadal Oscillation (AMO), a climatic mode possibly exhibiting a periodicity of about 70 years (Schlesinger and Ramankutty, 1994) and typically characterized by indices derived from north Atlantic SST (e.g., Enfield et al., 2001; Canty et al., 2013). The presence of AMO-synchronized components in temperature series has been demonstrated at both global (e.g., Canty et al., 2013; Rohde et al., 2013b; Zhou and Tung, 2013; Chylek et al., 2014b; Rypdal, 2015) and local (e.g., Enfield et al., 2001; Tung and Zhou, 2013; Chylek et al., 2014a; Mikšovský et al., 2014) scales, although discussion still continues regarding AMO's exact nature and the optimum way of its representation (Mann et al., 2014; Zanchettin et al., 2014; Lewis, 2014; Knudsen et al., 2014; Ting et al., 2014). In this analysis, AMO's phase has been characterized through a linearly detrended index (AMOI) based on the prevalent definition by Enfield et al. (2001) and downloaded from http://www.esrl.noaa.gov/psd/data/timeseries/AMO/. Note that a non-smoothed version of the index was used, involving both long-term and shorter-term SST variability in the northern Atlantic. An AMO- and ENSO-related phenomenon in the north Pacific area, the Pacific Decadal Oscillation (PDO; Zhang et al., 1997), is typically characterized through a series of the first principal component of north Pacific SST. Here, the variant calculated by KNMI Climate Explorer at http://climexp.knmi.nl/ from ERSST data was employed as a predictor, further referenced as PDOI. Lastly, to explore patterns of temperature variability in the southern extra-tropical regions, the Trans-Polar Index (TPI) was also used as an explanatory variable. The respective series, calculated as the normalized pressure difference between Hobart (Tasmania) and Stanley (Falkland Islands), is available from CRU at http://www.cru.uea.ac.uk/cru/data/tpi/ (Jones et al., 1999) for the 1895-2006 period. Beyond the year 2006, sea-level pressure data from the 20th Century Reanalysis were used to extend the CRU-supplied series.
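To illustrate the kind of preprocessing mentioned above, the short Python sketch below interpolates an annual concentration series onto a monthly axis (as done for the CO2-equivalent GHG predictor) and builds a normalized station-pressure-difference index of the NAOI/TPI type. All numerical inputs are made-up placeholders for the series downloadable from the URLs cited in the text, and the mid-year anchoring is an illustrative choice rather than the exact scheme used here.

import numpy as np

years = np.arange(1901, 2011)                 # annual anchor points
ghg_annual = 290.0 + 1.1 * (years - 1901)     # placeholder CO2-equivalent values

# monthly mid-points; annual values anchored at mid-year, endpoints clamped by np.interp
t_month = np.arange(1901 + 1.0 / 24.0, 2011, 1.0 / 12.0)
ghg_monthly = np.interp(t_month, years + 0.5, ghg_annual)

def pressure_index(p_a, p_b):
    # normalized pressure difference between two stations (NAOI/TPI style)
    d = p_a - p_b
    return (d - d.mean()) / d.std(ddof=1)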
Not all of the predictors here can be considered mutually independent, from either a physical or a statistical perspective. In Table 1, the formal similarity of the series of individual explanatory variables is illustrated through values of the Pearson correlation coefficient r, and the degree of collinearity is also quantified by the variance inflation factor for each predictor. The positive correlation between GHG amount and solar irradiance (r = 0.37 for our version of the predictors, over the 1901-2010 period) stems from the similarity of the long-term components of these signals (lower values in the early part of the 1901-2010 period, higher towards the end); a causal link over the time period studied here is unlikely, though. Noteworthy links can also be seen for PDO, which is considered to be partly driven by ENSO (Newman et al., 2003), resulting in anticorrelation of the PDOI and SOI series (r = −0.37). A relation also exists between PDOI and AMOI: although the connection is weak for synchronous series (r = 0.01), distinct time-delayed correlations exist (e.g., Zhang and Delworth, 2007; Wu et al., 2011). The correlation between AMOI and solar irradiance (r = 0.16) and volcanic aerosol optical depth (r = −0.27) may be an indication of possible external forcing of AMO (Knudsen et al., 2014); the similarity between the GHG and AMOI series (r = 0.22) may stem from the use of linear detrending in the calculation of AMOI (see Canty et al., 2013, for a broader discussion of the related matters). The anticorrelation between volcanic aerosol optical depth and SOI (r = −0.17) results mainly from the coincidence of some of the major volcanic events with the El Niño phases of ENSO. While the correlations within our set of predictors are mostly mild, there are some potential implications of this shared variability, as discussed in Sect. 5. Temperature data sets Monthly series of near-surface temperature on a (semi-)regular longitude-latitude grid from four temperature analyses and one reanalysis were studied: -GISTEMP of NASA's Goddard Institute for Space Studies, available at http://data.giss.nasa.gov/gistemp/ (Hansen et al., 2010). The data set provides temperatures since 1880; it was employed here in the version on a 2° × 2° grid, with 1200 km smoothing, using ERSSTv3b as the source of sea surface temperatures. Tests were also carried out with the version employing 250 km smoothing; however, due to substantially more limited data coverage, and just small differences between the resulting temperature response patterns, the outcomes for the 250 km variant are only provided as additional material in the Supplement (Fig. S5). -Temperature analysis of the Berkeley Earth group, obtained from http://berkeleyearth.org/data (Rohde et al., 2013a, b). While the data set is primarily created for land, a variant with coverage of oceanic areas by reinterpolated HadSST3 (Kennedy et al., 2011a, b) is also provided. We used this combined data set here; for brevity, it is referred to as BERK. The data are available at a spatial resolution of 1° × 1°, for years from 1850 on. -Merged Land-Ocean Surface Temperature Analysis (MLOST) by NOAA, from http://www.esrl.noaa.gov/psd/data/gridded/data.mlost.html (Smith et al., 2008), defined on a 5° × 5° grid. The respective global monthly series were obtained from the web pages of the individual research groups, with the exception of 20CR, for which the global average was calculated as a latitude-adjusted weighted mean from the gridded data, for the full globe or for the area between 60° S and 75° N (i.e., excluding the poleward-most regions with the most incomplete temperature coverage by the observational data sets). Table 1. Pearson correlation coefficient between series of individual predictors (Fig. 1) in the 1901-2010 period. The upper-right segment of the matrix contains values for the original concurrent series, the lower-left segment values for their time-shifted versions (as specified in Fig. 4's caption). The bottom-most row shows values of the variance inflation factor (VIF) for individual time-shifted predictors, calculated as 1/(1 − R²), where R² is the coefficient of determination obtained from regressing the given explanatory variable on the rest of the predictors. See Table S1 in the Supplement for correlations over the sub-periods 1901-1955 and 1956-2010.
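Two quantities from this section lend themselves to a compact illustration: the variance inflation factor defined in the Table 1 caption, and a latitude-adjusted weighted mean of the kind applied to the 20CR grids. A minimal numpy sketch, with the predictor matrix and the temperature field as hypothetical inputs:

import numpy as np

def vif(X):
    # VIF_k = 1 / (1 - R_k^2), with R_k^2 from regressing predictor k on the others
    n, p = X.shape
    out = np.empty(p)
    for k in range(p):
        y = X[:, k]
        Z = np.column_stack([np.ones(n), np.delete(X, k, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out[k] = 1.0 / (1.0 - r2)
    return out

def global_mean(field, lat):
    # cos(latitude)-weighted mean of a (lat, lon) anomaly field; NaNs are skipped
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones(field.shape[1])
    return np.nansum(field * w) / np.nansum(w * ~np.isnan(field))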
-Merged Land-Ocean Surface Temperature Analysis (MLOST) by NOAA, from http://www.esrl.noaa.gov/psd/data/gridded/data.mlost.html(Smith et al., 2008).Defined on a 5 The respective global monthly series were obtained from the web pages of the individual research groups, with the exception of 20CR, for which global average was calculated as a latitude-adjusted weighted mean from the gridded data for the full globe or for the area between 60 • S and 75 • N (i.e., excluding the poleward-most regions with the most incomplete temperature coverage by the observational data sets). Table 1.Pearson correlation coefficient between series of individual predictors (Fig. 1) in the 1901-2010 period.The upper-right segment of the matrix contains values for the original concurrent series, the lower-left segment values for their time-shifted versions (as specified in Fig. 4's caption).The bottom-most row shows values of the variance inflation factor (VIF) for individual time-shifted predictors, calculated as 1/(1-R 2 ), where R 2 is the coefficient of determination obtained from regression of the given explanatory variable on the rest of the predictors.See Table S1 in the Supplement for correlations over the sub-periods 1901-1955 and 1956-2010. Regression analysis setup Despite the inherently nonlinear and deterministically chaotic nature of the climate system, the interaction of external climate forcings in temperature signals can often be approximated quite well by a simple linear superposition (e.g., Shiogama et al., 2013).Even when effects of internal climatic oscillations are studied in the frame of multivariable statistical attribution analysis, nonlinearities are generally not dominant, albeit sometimes detectable (e.g., Pasini et al., 2006;Schönwiese et al., 2010;Mikšovský et al., 2014). Further considering the increased computational costs and more complicated interpretation for the nonlinear regression techniques, only multiple linear regression (MLR) was applied here to separate contributions from individual predictors, subject to a calibration procedure minimizing the sum of squared regression residuals. Although application of MLR-based mappings is quite straightforward in itself, potential challenges await when estimating the statistical significance of the regression coefficients, particularly due to non-Gaussianity and serial correlations in the data.For construction of the confidence intervals in Sect.4.2, bootstrapping was used.Since the basic form of bootstrap (resampling data for individual months as fully independent cases) does not account for autocorrelation structures in the data, which cannot be ignored in the monthly temperatures (e.g., lag-1-month autocorrelations in the regression residuals ranged between 0.32 and 0.61 for different versions of globally averaged temperature), moving-block bootstrap was used (e.g., Fitzenberger, 1998). 
In an effort to alleviate the high computational costs of the full bootstrap, an alternative approach to the assessment of statistical significance was also explored: Monte Carlo-style tests designed to estimate thresholds of the regression coefficients consistent with the null hypothesis of the absence of regressor-related component(s) in the regressand. Our experiments have shown that the effect of autocorrelation structures on the coefficient thresholds is approximated quite well by the predictor-specific expansion factor $\sqrt{(1 + a_p a_r)/(1 - a_p a_r)}$, with $a_p$ and $a_r$ representing the AR(1) autoregressive parameters for the predictor series and for the series of the regression residuals, respectively. This factor resembles the one occasionally employed in the estimation of the statistical significance of correlations between series with an AR(1)-type autocorrelation structure (e.g., Bretherton et al., 1999); its use allows for a numerically inexpensive approximation of statistical significance, provided that the structure of the regression residuals conforms to an AR(1) model. While such an assumption is not completely valid for the temperature data (e.g., Foster and Rahmstorf, 2011), the results obtained proved to be close to those from the moving-block bootstrap, with noticeable differences only appearing in the presence of the strongest residual autocorrelations. These predictor-specific inflation factors (applied to the coefficient significance thresholds derived for predictand data free of serial correlations) were therefore used to approximate the significance of the regression coefficients in the tests involving gridded temperature data in Sects. 4.3 and 4.4. The analysis has been carried out over the 1901-2010 period, chosen as a compromise between maximizing the length of the signals studied and the limited availability and reliability of data for the earlier parts of the instrumental period. Additional results for the first and second half of the target period are provided in the Supplement. To facilitate comparison of the contributions from individual explanatory variables, both mutually and with temperature variability itself, the outcomes of the regression analysis are presented in the form of temperature responses to pre-selected characteristic variations of individual predictors, illustrated in Fig. 1 and specified in its caption. To limit biases due to incompleteness of the temperature series in some locations and data sets, only results for predictands with less than 10 % of missing values are shown.
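A minimal sketch of the two significance tools described in this section, ordinary least squares combined with a moving-block bootstrap and the AR(1)-based inflation factor, is given below in Python. The block length, bootstrap size and all inputs are illustrative choices, not the settings behind the published figures.

import numpy as np

def mlr(X, y):
    # least-squares fit with intercept; returns coefficients and residuals
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta, y - Z @ beta

def block_bootstrap_ci(X, y, block=24, n_boot=1000, seed=0):
    # moving-block bootstrap: resample contiguous blocks to keep autocorrelation
    rng = np.random.default_rng(seed)
    n = len(y)
    betas = []
    for _ in range(n_boot):
        starts = rng.integers(0, n - block + 1, size=int(np.ceil(n / block)))
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        betas.append(mlr(X[idx], y[idx])[0])
    return np.percentile(betas, [0.5, 99.5], axis=0)   # 99 % confidence bounds

def ar1(x):
    # lag-1 autocorrelation estimate of a centered series
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

def inflation_factor(predictor, residuals):
    # ((1 + a_p a_r) / (1 - a_p a_r))^(1/2) expansion of significance thresholds
    ap, ar = ar1(predictor), ar1(residuals)
    return np.sqrt((1 + ap * ar) / (1 - ap * ar))

The bootstrap route is the more general of the two; the inflation factor trades generality for speed, which is why the text reserves it for the gridded, per-cell tests.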
Local temperature correlations Ideally, all the temperature data sets should follow the same, historical, trajectory of the climate system. In reality, differences appear among individual representatives of the climatic past, due to variations in the structure of the source data and the specifics of their processing. While we obviously cannot make a comparison against a perfect embodiment of the past states of the atmosphere, the existing temperature approximations can be compared mutually, to assess which regions and/or periods exhibit a higher degree of match (signaling lower uncertainty due to the data set choice), and where stronger contrasts emerge. The basic structure of these differences is illustrated in Figs. 2 and S1 (in the Supplement) through pairwise Pearson correlations (r) between monthly series of temperature anomalies from different data sets. Unsurprisingly, a vast majority of locations exhibit positive correlations, for any data set couple, but the magnitude of this link varies substantially among different regions. Over continents, a particularly good match is indicated for Europe and (especially eastern) North America, regions with a high density of reliable observations spanning the entire target period. On the other hand, in central Africa, central South America and south-east Asia, the resemblance of the temperature series is weakened. The mismatch is also more noticeable when only the first half of the analysis period is considered (Fig. S1). The 1956-2010 period then shows generally higher correlations, though it should be noted that the presence of a stronger long-term trend in the later 20th century, largely shared by all the data sets and most locations, amplifies the values of correlations in this sub-period. The above-specified general tendencies in the regional correlation patterns also hold for the relation between the analysis-type data sets and 20CR (bottom row in Fig. 2): a relatively good match of the temperature anomalies in Europe and the eastern US contrasts with more profound differences in the tropical parts of Africa and much of South America. The question remains whether the disparities detected can be attributed to misrepresentation of any specific source(s) of temperature variability, an issue that is further investigated in the following sections. Forcing imprints in global mean temperature Much of the existing research on temperature variability and its attribution by statistical means focuses on globally averaged data. Aside from limiting the number of signals to be analyzed (and thus allowing for a more detailed examination of each of them), the world-wide averaging suppresses regional variations and allows factors associated with global-reaching forcings to become more reliably detectable. On the other hand, effects contributing responses of opposite sign in different regions (such as ENSO or NAO) may be obscured in a pan-planetary representation. In this section, global and global land temperature signals are investigated for the presence of the imprints of individual internal and external forcing factors. It has been shown on various occasions that responses in climate variables (including temperature) are not necessarily perfectly synchronized with the variables representing the climate forcings, and time-offset relations may manifest (e.g., Canty et al., 2013, and references therein). In Fig. 3, this is illustrated via application of MLR mappings with individual predictors offset by t ranging between −24 and +24 months. Results from the full range of t are shown for all predictors, to illustrate the fact that regression analysis may indicate formal links even in the absence of physically meaningful dependencies (such as the connections between temperature and volcanic forcing for highly negative t).
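The time-offset analysis can be sketched as follows: one predictor column is shifted by t months, the MLR is refitted, and the coefficient of the shifted predictor is recorded for each lag. The function names and the handling of edge months below are illustrative assumptions, written in plain numpy.

import numpy as np

def shift(x, t):
    # offset a predictor by t months; positive t means temperature lags it
    out = np.full(len(x), np.nan)
    if t >= 0:
        out[t:] = x[:len(x) - t]
    else:
        out[:t] = x[-t:]
    return out

def lag_scan(X, y, col, lags=range(-24, 25)):
    # refit the MLR for each offset of one predictor and record its coefficient
    result = {}
    for t in lags:
        Xs = X.copy()
        Xs[:, col] = shift(X[:, col], t)
        ok = ~np.isnan(Xs).any(axis=1) & ~np.isnan(y)
        Z = np.column_stack([np.ones(ok.sum()), Xs[ok]])
        beta, *_ = np.linalg.lstsq(Z, y[ok], rcond=None)
        result[t] = beta[1 + col]
    return result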
For GHG concentration, the lack of short-term variability results in near-invariance of the temperature response. Some t-related variability is indicated for the solar irradiance influence, though the dependence seems largely governed by irregular fluctuations and no distinct extremum appears. A delayed response is clearly noticeable in the component associated with volcanic activity: a distinct, though rather flat, maximum of anticorrelation between about 5 and 10 months is indicated for all the analysis-type data sets. In the case of SOI, the strongest response occurs for time lags between approximately 0 and 6 months. The effect of NAOI, on the other hand, is generally instantaneous. The response of global temperature to AMOI and PDOI also shows a maximum at, or close to, t = 0. For TPI, the imprint in the global temperature series is weak regardless of the predictor's shift. All four analysis-type data sets exhibit a high degree of similarity of the features in the globally averaged series. On the other hand, some noteworthy distinctions appear for 20CR. Most notably, the volcanism response curve is similar in shape to the ones characterizing the observational data, but shifted towards positive values. Furthermore, the NAO response peaks at +1 month instead of t = 0, and a weaker-than-observed connection to GHG is indicated over land. These differences can be partly ascribed to the specifics of the calculation of mean temperature for the observational data sets, particularly the variable level of data coverage for the observed data. However, different spatial response patterns are also likely responsible, as shown in Sect. 4.3. To facilitate mutual comparability of the results, and also considering that the physical links between predictors and temperature should be the same for all data sets, a unified set of time shifts was employed for the tests in Sects. 4.2 and 4.3. A lead time of +1 month was used with the solar irradiance, as previously done by Lean and Rind (2008) or Canty et al. (2013), although very similar outcomes would have been obtained with t = 0, too. The time shift was set to +2 months for SOI, the same as in Canty et al.'s setup, and volcanic forcing was used with t = +7 months (close to Lean and Rind's and Canty et al.'s shift of +6 months). The rest of the predictors entered the regression mappings without a time offset, due to either just a small difference compared to a setup with t = 0, or the absence of a distinct, physically justified extremum within the analyzed range of time delays. In Fig. 4, the results of the analysis are shown in the form of temperature responses to the characteristic variations of the predictors, with their 99 % confidence intervals generated by moving-block bootstrap. The regression fits of the individual temperature series are also visualized in Fig. S4. Our analysis suggests the GHG-attributed rise in global temperature to be approximately 0.8 °C over the 1901-2010 period, within the range usually associated with anthropogenic forcing (IPCC, 2013). Over land, values between 1.05 and 1.2 °C were obtained. The response of global temperature to volcanic forcing is clear, statistically significant and of similar magnitude in all analysis-type data sets: a drop of 0.36 to 0.44 °C in global land temperature is indicated for a Mt. Pinatubo-sized event, slightly stronger than the values reported by Canty et al. (2013). The response range is lowered to about 0.16 to 0.19 °C when the oceanic areas are included, close to Canty et al.'s results. As already shown in Fig.
As already shown in Fig. 3, 20CR temperature behaves in a somewhat different fashion, with a smaller, statistically insignificant temperature response. A look at the results for individual sub-intervals reveals that this positive bias may stem from the relations indicated for the first half of the 20th century (which, however, contains just a very limited set of volcanic events, the strongest of them - the Novarupta eruption of 1912 - being extratropical and thus atypical regarding its world-wide effects). For the 1956-2010 period, the 20CR global volcanic response is more in line with the behavior of the observational data sets.

While our results show the well-known tendency towards higher global temperature anomalies during the El Niño phases of ENSO (e.g., Trenberth et al., 2002), the respective components tested close to the threshold of statistical significance at α = 0.01. A response of comparable magnitude was found for NAO, with a positive link indicated between all temperature signals and NAOI, though, again, at rather low levels of statistical significance in most cases.

Conforming to several previous studies concerned with the association between global temperature and AMO (e.g., Rohde et al., 2013b; Zhou and Tung, 2013; Chylek et al., 2014b) and using a similar (i.e., linearly detrended) version of its index, our results suggest a formally strong link between detrended mean North Atlantic temperature and its global counterpart, distinct for land-based temperatures as well. The question remains, however, of how representative AMOI really is of internal variability in the climate system, as further discussed in Sect. 5.

The imprint of PDO in global temperature is quite clear and, for our combination of predictors, actually about as strong as SO's. It should be considered, though, that the SOI and PDOI series are not independent and, as predictors, they partly compete for the same variability component in the temperature signals. When included alone among the explanatory variables (i.e., either SOI or PDOI, but not both), the respective responses are generally strengthened, as is their statistical significance. Considering that SOI and PDOI are only partly collinear and that their temperature response patterns do differ in many regions (Sect. 4.3), both were included as formally independent predictors in our analysis.

The final predictor considered in our setup, TPI, does not project much influence upon global temperature, though the respective component is borderline statistically significant for some of the data sets. Just as in the case of SOI, NAOI or PDOI, the relatively weak global response can be traced to the presence of mutually opposite contributions from different regions, as demonstrated in the next section.

Forcing imprints in local temperatures

Even a clear and strong presence of a component associated with a particular forcing factor in globally averaged temperature does not automatically imply its universal relevance on a local scale. Conversely, locally dominant factors may be marginal in a global perspective. Here, we present an overview of geographic patterns of temperature response to external and internal forcing, for the set of eight predictors identical to that in Sect. 4.2. Only results for the data sets with mostly complete data coverage in the 1901-2010 period (GISTEMP, BERK, 20CR) are shown (Fig. 5); see the Supplement (Fig. S5) for the full set of results including MLOST and HadCRUT4.
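The per-gridpoint maps of Fig. 5 follow from fitting the same MLR independently at every grid cell and rescaling each coefficient by the predictor's characteristic change. Below is a hedged sketch assuming a (time, lat, lon) anomaly array; the coverage threshold and names are illustrative choices, not the paper's settings.

```python
import numpy as np

def response_maps(temp, X, char_change):
    """Per-gridpoint MLR responses; temp: (time, lat, lon), X: (time, n_pred)."""
    t, nlat, nlon = temp.shape
    A = np.column_stack([np.ones(t), X])
    maps = np.full((X.shape[1], nlat, nlon), np.nan)
    for i in range(nlat):
        for j in range(nlon):
            y = temp[:, i, j]
            ok = ~np.isnan(y)               # gridded data may be incomplete
            if ok.sum() < 10 * A.shape[1]:  # skip poorly covered cells
                continue
            coef, *_ = np.linalg.lstsq(A[ok], y[ok], rcond=None)
            maps[:, i, j] = coef[1:] * char_change
    return maps
```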
While a positive correlation between GHG concentration and temperature is typical for most regions of the world, the strength of the component formally attributed to greenhouse gases (or, more generally, to anthropogenic forcing) varies substantially, and insignificant links or even anticorrelations appear in some smaller areas. Most prominently, the oceanic region south of Greenland, known for a negative temperature trend since 1901 (e.g., IPCC, 2013), displays a high contrast to the rest of the world. A relatively good match between the analysis-type data sets is found in most regions. However, notable differences between the gridded observations and 20CR appear in a few geographically limited locations. Aside from mild contrasts in some oceanic regions (particularly the central and eastern equatorial Pacific), distinctly negative temperature responses appear over land in the eastern Mediterranean, central South America and Texas. On the other hand, the warming response over northern China is overestimated in 20CR. A similar pattern of discrepancy between the observed data and 20CR has already been reported and discussed by Compo et al. (2013) in their analysis of linear trends in the temperature series for 1901-2010, with various potential explanations suggested. Generally, although long-term components (whether expressed by match with anthropogenic forcing, or by linear trends) in 20CR are characterized consistently with the analysis-type data in many regions, their representativeness cannot be assumed universally.

The local temperature responses to solar irradiance are arranged in a complex pattern, encompassing both positive and negative links, combining in a near-neutral contribution to the global land average. Statistically significant responses are rarely indicated, and the influence of solar variability therefore seems largely inconclusive at a local scale (Figs. 5b, S5b). Nonetheless, the sign and magnitude of the links appear to be similar across individual data sets, including 20CR. The results for the oceanic areas reveal that the main contributions to the borderline significant link between global temperature and irradiance come from southern extratropical areas and the northern Pacific. The response patterns shown by Lean (2010), Zhou and Tung (2010) or Gray et al. (2013) do differ somewhat from our results; however, direct comparison is problematic due to distinctions between the time periods analyzed as well as the detection methodology employed. The outcomes for the 1901-1955 and 1956-2010 sub-periods (Fig. S6) suggest some degree of stability of the response patterns, though with enough differences to explain the mismatch in contributions to globally averaged land temperature (Sect. 4.2). Overall, our analysis confirms that solar activity does not leave a strong, unambiguous imprint in lower tropospheric temperature.
While the cooling effect of volcanic forcing was clearly apparent in global mean temperature, its local influence is less ubiquitous (Figs. 5c, S5c). Regions with a negative response do slightly prevail in the observational data sets, but positive contributions are detected in several areas, too. Only a few locations show statistically significant responses of either sign. The pattern revealed bears a basic resemblance to the ones shown by Lean and Rind (2008) and Lean (2010), with post-eruption cooling indicated in North America and warming over northern Asia. Some differences emerge, however, emphasizing the sensitivity of the forcing response patterns to analysis details such as the specific choice of the predictor(s) or the time period considered. In 20CR, positive responses are more numerous and stronger in magnitude, pushing the global mean volcanism-attributed signal towards positive values and statistical non-significance. This tendency is noticeable especially during the first half of the analysis period (Fig. S6), although it should be noted again that the relative lack of global-reaching volcanic events renders the results rather uncertain for the 1901-1955 period.

The canonical pattern of temperature response associated with SO/ENSO activity (e.g., Trenberth et al., 2002; Lean and Rind, 2008; Gray et al., 2013) also emerged in our analysis, including the teleconnections extending beyond the tropical Pacific region (Figs. 5d, S5d). While some minor differences exist among individual data sets, the resemblance of the respective patterns is high; some minor exceptions are found for 20CR over land, such as a weaker projection of SOI influence over eastern Africa. The effect of the North Atlantic Oscillation, too, is shown very clearly for its primary area of influence.

Unlike the multipolar geographical responses associated with SO and NAO, the regression coefficients between AMOI and local temperature are predominantly positive worldwide, and significant connections extend across the globe (Figs. 5f, S5f). This largely unidirectional link, previously pointed out through correlation analysis by Muller et al.
(2013), results in a much stronger AMO-correlated component in global temperature. On the other hand, it also raises the question of what exactly the relation between temperatures worldwide and those in the northern Atlantic is (beyond the obvious fact that Atlantic SST is one of the components averaged into global temperature, and thus not completely independent). While many of the recent studies employed the (linearly detrended) AMO index in the role of an independent explanatory variable, arguments have been made for the use of different forms of the index (see Canty et al., 2013, and the references therein) or questioning the nature of AMO itself (e.g., Booth et al., 2012; Mann et al., 2014). In our analysis, we focused rather on formal connections in the data studied and the mutual (in)consistency of various data sets; the issue of the exact physical nature and stability of AMO was not central. The imprint of AMOI is similar across individual data sets; noticeable differences appear especially over central and eastern Eurasia.

PDO's influence pattern shows both positive and negative connections, strongest in the Pacific area (e.g., Deser et al., 2010), but with some significant teleconnections extending to more distant regions as well (including Africa or Scandinavia). PDO's imprint in 20CR is relatively close to that in the analysis-type data; differences appear especially over parts of Africa (Figs. 5g, S5g).

The relation between temperature and TPI manifests in a semi-regular pattern of alternating positive and negative sectors over the southern oceans and nearby continents, though only in the segments near South America and Australia do the relations test as statistically significant (Figs. 5h, S5h). The 20CR-based response resembles the observational pattern in shape, but is generally stronger magnitude-wise.

Delayed responses in local temperatures

The homogeneously timed predictors employed in Sect. 4.3 do provide a robust basis for an assessment of the superposition of their effects in globally averaged temperature, but overlook the possibility of geographically dependent delays. To reveal the characteristic patterns of locally specific asynchronous responses to the explanatory variables, regression analysis of local temperature was also carried out with individual predictors shifted in time by t ranging between -24 and +24 months. Figures 6 and 7 summarize the outcomes by displaying the strongest local temperature response detected, along with the corresponding t. Note that the statistical significance thresholds have been calculated to account for the fact that the strongest response within the -24 to +24 months range is used. As a result, they are generally higher (i.e., a stronger response is required to be deemed significant at the given significance level) than in the setup with fixed t in Sect. 4.3. Only the three data sets with the least missing values - GISTEMP, BERK and 20CR - were analyzed in this case.

For the GHG amount, the results exhibit little sensitivity within our time window, and the magnitude of temperature responses is virtually identical to the t = 0 setup, due to the absence of short-term variations in the predictor series. Likewise, the strongest responses to solar forcing are quite similar to the ones for the pre-set delay of 1 month (Fig. 5b),
while the maximum seems to be rather randomly positioned, arguably reflecting the stochastic components in the time series. For volcanism, even with the variable time delay option, still only a handful of grid points show a significant response, and the pattern of time delays associated with maximum-strength components does not show any distinct regularity.

The spatiotemporal variability of the temperature response to ENSO phase is well known (e.g., Trenberth et al., 2002) and reflected in our results as well: the occurrence of the strongest temperature response leads SOI by a few months in the eastern equatorial Pacific, whereas largely concurrent variability is indicated for the western Pacific. In the Indian Ocean, the strongest temperature response lags a few months behind SOI, and a delay of 6 to 8 months is indicated around south-east Asia as well as in northern Australia. 20CR reproduces these patterns quite well over the oceans, but noticeable differences appear for teleconnections over land, most notably in less consistently expressed links to Africa and the southern part of South America.

The strongest statistically significant temperature responses to NAO are instantaneous in most areas, or delayed by 1 month (mostly over the northern Atlantic). The pattern detected from the observational data sets is reproduced quite well in 20CR, with the most notable exception again being the breakdown of the transcontinental teleconnection over eastern Asia and the appearance of a link to southern Africa. The reason for the temporal shift of the NAO-attributed signal in 20CR global temperature (Fig. 3) therefore does not seem to be a misrepresentation of the timing of the local temperature responses. Rather, it can be traced to the perturbed balance between the opposite-in-sign responses from different regions (note especially the overly negative contribution from northern Africa). Though these deviations are relatively small, they vary for different t, enough to alter the relatively weak globally averaged signal and bring forth a spurious delay in the global response.

There is a distinct connection between the AMO index and local temperature in many regions of the world even without a time shift (Fig. 5f), but the timing of the maximum strength of this association varies distinctly within our ±24 months testing range. Concurrence is indicated in much of the northern Atlantic, a delay of 2 to 5 months in the northern part of the Indian Ocean and adjacent land, and around 4 to 10 months in a large portion of the western equatorial Pacific. On the other hand, in the eastern and northern part of the Pacific, temperatures at -12 to -6 months show the strongest association with AMOI. 20CR reproduces this pattern with only minor differences; more distinctions appear over land, especially in southern Asia. Similar behavior is also indicated for PDO: quite a realistic representation of the delayed responses over oceans and areas adjacent to the northern Pacific by 20CR breaks down somewhat for more remote land areas (most notably Africa), though some of the teleconnections seem to be maintained quite well (Scandinavia).

Finally, in the case of TPI, the results indicate concurrence of the oscillations, or a delay of 1 month, for most locations with a statistically significant response. The pattern is reproduced quite well by 20CR, though the magnitude of the temperature variations is somewhat exaggerated again.
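The selection step behind Figs. 6-7 can be sketched as below: scan the ±24-month offsets for one predictor at one grid cell and keep the lag with the largest-magnitude coefficient. As noted above, the significance threshold must then account for this maximization (e.g., by bootstrapping the maximal statistic rather than a single-lag one); the function itself is an illustrative sketch under assumed inputs.

```python
import numpy as np

def strongest_lag(y, X, k, lags=range(-24, 25)):
    """Return (t, response) with the largest |coefficient| of predictor k
    when it is shifted by t months (others fixed at t = 0)."""
    n, best = len(y), (0, 0.0)
    for t in lags:
        Xs = X.copy()
        Xs[:, k] = np.roll(X[:, k], t)           # t > 0: predictor leads
        sl = slice(max(t, 0), n + min(t, 0))     # drop wrapped-around samples
        A = np.column_stack([np.ones(sl.stop - sl.start), Xs[sl]])
        coef, *_ = np.linalg.lstsq(A, y[sl], rcond=None)
        if abs(coef[k + 1]) > abs(best[1]):
            best = (t, coef[k + 1])
    return best
```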
Discussion and conclusions

The primary objective of our analysis was twofold. Firstly, we aimed to provide a unified outlook on the local temperature responses associated with the activity of multiple climate-forming agents, exogenous and endogenous, and the way they combine in pan-planetary temperature signals. While various past studies already dealt with a similar kind of statistical attribution analysis, their scope was typically more focused, phenomenon- or region-wise, but also regarding the temperature data source. Our second objective therefore consisted in assessing the robustness of the attribution analysis results among several commonly employed representations of monthly temperature throughout the 20th and early 21st century. To this end, four observational temperature data sets and one reanalysis were studied through linear regression, extracting components synchronized with the temporal variability of eight predictors representing external climate forcings and internal variability modes.

The basic correlation analysis in Sect. 4.1 revealed the general geographical patterns of temperature (mis)match among different observational data sets. Unsurprisingly, the best agreement was found for regions with the best coverage by measurements (most notably Europe and eastern North America, where the Pearson correlations of monthly temperature anomalies typically exceeded 0.9), leaving relatively little room for uncertainty in the gridded data. Regions with sparser observations, such as the interiors of Africa or South America, exhibited more disparity, and coverage by the gridded data was often incomplete in these locations. Of even greater interest was the resemblance between the analysis-type data sets and the 20th Century Reanalysis (20CR). Since 20CR does not directly utilize the temperature measurements over land, greater deviations from "reality" may be expected, especially for the continental areas. While the correlation analysis indeed indicated a somewhat loosened relation to the analysis-type data, the match was still quite good in most regions, with the poorest agreement again found in Africa and South America. Major differences between the temperature anomaly series were seldom observed over oceans (the most notable exception being the higher latitudes of the southern hemisphere). Since all the data sets (including 20CR) employ sea surface temperature as inputs, temperatures there are tied more closely to the historical trajectory of the climate system, and any contrasts can be largely ascribed to differences among individual SST representations (assessed in detail by Yasunaka and Hanawa, 2011).

While the correlation analysis pointed out the basic patterns of differences between individual data sets, the question remains how much these can affect the outcomes of the attribution analysis. The match among the GHG-attributed temperature changes was generally strong in most locations, but certain smaller regions were highlighted in 20CR where this trend-like component diverged substantially from the analysis-type data. These local discrepancies, previously pointed out by Compo et al.
(2013), also somewhat decrease the magnitude of the GHG-attributed component in the global land temperature for 20CR. Furthermore, when drawing conclusions from the results presented, it is essential to consider the limitations of the statistical approach to attribution analysis. First of all, even formally statistically significant connections are not proof of physically meaningful relations, as the regression analysis only seeks formal similarities among the time series, unable to verify the causality of the links. For the attribution of the temperature trends to GHGs, this is particularly critical. Although the significance level is generally high for the GHG-related regression coefficients, it would be equally high for any explanatory signal of similar structure (including a plain linear trend). While it is physically justified to associate the increase in GHGs with warming tendencies, there are other potential anthropogenic forcing factors sharing a similar temporal evolution, yet intentionally omitted in our analysis. Various man-generated aerosols can contribute to either local warming (e.g., black carbon) or cooling (e.g., sulfate aerosols; see, e.g., Skeie et al., 2011). In many areas, the temporal progression of aerosol-related predictors closely mimics that of GHG concentration (for instance, the Pearson correlation between GHG concentration and regional SO2 emissions is over 0.5 in most of the world and often exceeds 0.9 locally, based on the SO2 data by Smith et al., 2011). Our GHG-based predictor should therefore be considered an approximate (and simplified) characterization of anthropogenic forcing in general, rather than of greenhouse gases alone. Note also that very similar values of temperature response would have been obtained for a predictor representing total global anthropogenic forcing rather than GHGs alone, due to the very high temporal correlation of the respective series (exceeding 0.99 over our analysis period when using the forcing data by Meinshausen et al., 2011) and due to the fact that the responses are scaled here by the end-to-end increase in the predictor series. Naturally, this near-invariance in the given statistical setup should not be interpreted as equivalence of the respective forcings in a physical sense. A more accurate view of the issue could perhaps be gained by an analysis employing location-specific descriptors of anthropogenic activity, but the challenges attached (such as the high collinearity of the anthropogenic predictors, limiting the ability of the regression mappings to distinguish among their effects) make such a task less suitable to approach by purely statistical means. General circulation models may represent a more suitable tool for capturing the related links, even though the associated uncertainties are still substantial (e.g., IPCC, 2013). This also applies to the evaluation of other complex aspects of the climate system dynamics, such as effects of long-term memory or climatic feedbacks, intentionally omitted in our simplified regression-based analytical frame.
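The collinearity concern raised here can be made concrete with a standard diagnostic such as the variance inflation factor (VIF), sketched below. The paper itself does not report VIFs; this is purely an illustrative check under assumed inputs.

```python
import numpy as np

def vif(X):
    """VIF_k = 1 / (1 - R^2) from regressing predictor k on the others;
    large values flag predictors whose effects the MLR cannot separate."""
    n, p = X.shape
    out = np.empty(p)
    for k in range(p):
        others = np.delete(X, k, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, X[:, k], rcond=None)
        resid = X[:, k] - A @ coef
        r2 = 1 - resid.var() / X[:, k].var()
        out[k] = 1.0 / max(1 - r2, 1e-12)
    return out
```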
Of the natural forcings, the imprints of solar activity seem to be represented in quite a similar manner by all the data sets studied, including 20CR. The component attributed to variations of solar irradiance (involving both the 11-year cycle and longer-term variability) was quite weak, in most individual regions as well as in globally averaged temperature. These results are largely consistent with previous assessments of the impacts of solar activity on temperature (e.g., Lockwood, 2012; Gray et al., 2013). Still, the spatial patterns of solar influence exhibit some degree of temporal stability, suggesting that even though the fingerprints detected largely do not test as statistically significant, they are not just an artifact of stochastic components in the temperature series.

An interesting contrast between the results for globally averaged temperature series and for their local counterparts was found in the case of the effects of volcanic activity. The well-known near-surface cooling following major volcanic eruptions was clear in all versions of globally averaged observed temperature, but a rather complex pattern emerged from the gridded temperature data. Post-eruption warming was indicated in several regions. There might be dynamical reasons for such behavior (e.g., Stenchikov et al., 2006; Driscoll et al., 2012), but the structures detected were quite ambiguous, exhibiting both poor temporal stability and low statistical significance (an uncertainty partly ascribable to the distinctiveness of individual volcanic events and their relatively brief periods of effect within the time frame of our analysis). Furthermore, aliasing of volcanic and ENSO activity (with major late-20th century eruptions coinciding with El Niño phases of ENSO) also needs to be considered when attributing the volcanic activity, as does the possibility of its influence on the AMO phase (Knudsen et al., 2014). Interpretational pitfalls aside, there was strong agreement between the observational data sets in their representation of the volcanism-attributed spatial pattern. The 20CR data showed a tendency toward more positive post-eruption temperature anomalies in several regions, resulting also in a more neutral response to volcanism in the globally averaged 20CR data (largely due to the anomalous response of 20CR-based global land temperature during the first half of our analysis period).

The temperature variability patterns related to the climate oscillations considered (SO, NAO, AMO, PDO, TPI) were generally captured similarly by individual data sets. This also applies to 20CR for the most part, though there seem to be some breakdowns in the representation of transcontinental and trans-oceanic teleconnections in the reanalysis data, most noticeable in the influence of NAO over eastern Asia, AMO over northern parts of Eurasia, or the weakened links to SO and PDO in parts of Africa. One might speculate that this distinction is rooted in the specific behavior of the reanalysis engine, distorting the complex mechanisms propagating the teleconnections. However, an unrealistic representation of the long-distance links by 20CR cannot be blamed automatically. Note that the differences detected are generally more prominent in the first half of the analysis period, and less striking (though still noticeable) during the later half-period (Fig. S6).
The reanalysis may thus simply struggle to recreate the observed patterns in regions where the assimilable data are rare and relatively unreliable, just as the procedures generating the analysis-type gridded data are burdened with increased errors when faced with a lack of reliable inputs. Neither of these data sources can thus be considered consistently superior, and increased attention to the effects of data uncertainty is needed when investigating climate variability in regions and periods with sparse observations. Keeping these limitations and specifics in mind, the 20th Century Reanalysis seems to provide a satisfactory approximation of past temperatures during the 20th and early 21st century, and thus a suitable tool for studies concerned with the validity of climate simulations.

Potential pitfalls related to the attribution of temperature changes to trend-like predictors were already discussed above, but even the interpretation of the components associated with faster-varying explanatory factors needs to be done with caution. Some of the internal climate oscillatory modes are interconnected, and their respective indices partly collinear. Variability assigned to a certain predictor does not, therefore, need to originate from the respective forcing factor alone; for instance, the relationship between SO/ENSO and PDO implies that the effects of the variability modes in the Pacific area cannot be entirely separated, on either the physical or the statistical level. The issue of interdependent predictors is not limited to pair-wise relationships: it has been shown that various variability modes in the climate system are intertwined in quite complex networks, with nontrivial time-delayed relations among oscillations in different regions (e.g., Wyatt et al., 2012). The intricacy of such structures becomes even more apparent when generalized links are studied, unrestricted to just the conventional variability modes (e.g., Hlinka et al., 2013, 2014a, b).
Caution is also needed when interpreting the outcomes of the tests of statistical significance. The AR(1) model of residual autocorrelations, assumed here when assessing the significance of predictors' connections to the gridded temperatures, provides a basic approximation of the short-term persistence. Often, such an approach seems sufficient, especially over land, where the residual autocorrelations generally rapidly approach zero. In other cases (particularly for tropical oceans and global averages encompassing oceanic areas), longer-term autocorrelations of various shapes appear in the residuals. Their presence is indicative of unaccounted-for components in the data, long-term memory and/or the presence of biases and inhomogeneities, potentially infesting temperature analyses and reanalyses alike (e.g., Cowtan and Way, 2014; Ferguson and Villarini, 2014). To further assess the validity of our significance tests, bootstrap-based estimates of statistical significance for the gridded temperature data were also implemented, using a variable-sized moving block, reflecting the magnitude of residual autocorrelation (Politis and White, 2004; Bravo and Godfrey, 2012). Little difference in the regression outcomes was found compared to the other test designs in this paper. Artifacts of the annual cycle were also often found in the residuals, traceable (at least in part) to a non-stationary representation of the seasonal variations (Foster and Rahmstorf, 2011). A treatment by inclusion of components approximating the 12-month periodicity among the predictors was attempted, but resulted in no major changes to the regression coefficients or their significance.

Another important aspect shaping the outcomes of the regression mappings is the choice of the explanatory variables. Most of the predictors applied here exist in alternative variants, differing in their definition or method of (re)construction. A sizable discussion could be devoted to the specifics of each of them. While we did not study this issue in such depth, partial experiments were carried out to assess the degree of variability of the analysis outcomes if alternative predictors were used. First, the robustness of the imprints of volcanic forcing was assessed, with the GISS aerosol optical depth (Sato et al., 1993) replaced by Crowley and Unterman's (2013) data. The resulting change to the global temperature response and the corresponding spatial fingerprints proved to be minor, generally smaller than the uncertainties associated with the regression coefficients themselves. The use of hemisphere-specific volcanic aerosol amounts instead of their global representation also induced just minor changes to the respective response patterns.
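Returning to the AR(1) residual model mentioned at the start of this section: a minimal version of such an autocorrelation-aware significance test shrinks the effective sample size by the lag-1 residual autocorrelation before the t-test on a coefficient. The sketch below is a simplified illustration of the idea, not the exact test used in the paper.

```python
import numpy as np
from scipy import stats

def ar1_adjusted_pvalue(y, X, k):
    """Two-sided p-value for predictor k's coefficient, with the degrees of
    freedom reduced according to AR(1) persistence in the residuals."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                 # effective sample size
    dof = max(n_eff - A.shape[1], 1)
    se = np.sqrt(np.sum(resid**2) / dof * np.linalg.inv(A.T @ A)[k + 1, k + 1])
    tstat = coef[k + 1] / se
    return 2 * stats.t.sf(abs(tstat), dof)
```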
Of the multiple definitions of the indices characterizing the climatic oscillations studied, we prioritized the forms not directly involving temperature itself, to avoid an explicit contribution of the temperature signal to the explanatory variables. This was not a problem for NAO and TPI, as their descriptors are derived from baric characteristics. In the case of ENSO, the pressure-based SOI was preferred over the SST-based NINO indices or the multivariate ENSO index. On the other hand, the usual forms of AMOI and PDOI are calculated from areal SSTs, and thus likely interrelated with the temperature signals. For PDOI, which exhibits a comparatively weaker correlation with globally averaged temperatures (at least partly due to the fact that PDOI is, by its definition, detrended by global sea-surface temperature), this issue seems less serious. However, it is still worthwhile to see how much the outcomes change when another version of the index is employed. Use of the PDO index from JISAO (http://research.jisao.washington.edu/pdo/PDO.latest) resulted in a generally weaker PDO imprint in global temperature (though still largely within the confidence intervals shown in Fig. 4), but a nonetheless very similar spatial response pattern (with the most notable distinction being a somewhat stronger negative link over northern China).

In the case of AMO, the issue of predictor selection and the interpretation of its effects is more critical. Our AMO index of choice (linearly detrended, as per the prevalent definition by Enfield et al., 2001) seems to be formally associated with a rather strong component in global temperature, as well as in local temperatures in various regions across the globe. While this may indeed suggest the existence of trans-planetary teleconnections involving AMO-related variability, there is a danger in an overly formalistic interpretation of the patterns detected. Firstly, several definitions of the AMO index exist, embodying different views of the phenomenon (see, e.g., Canty et al., 2013). Use of a differently defined AMOI affects the magnitude of the temperature response detected, and potentially also the strength of components tied to other predictors, including the volcanic activity or the long-term trends (Canty et al., 2013; van der Werf and Dolman, 2014). Some of our tests were therefore repeated for an AMOI series based on detrending the north Atlantic SST by global anthropogenic forcing, proposed by Canty et al.
(2013) to limit the aliasing of the anthropogenic long-term temperature trend and AMOI. Little impact on the outcomes of the attribution analysis resulted from such a change. Greater differences would likely arise from the application of an AMOI detrended by mean sea surface temperature (Trenberth and Shea, 2006) or global mean temperature (van Oldenborgh et al., 2009), although it has been argued that such a method of detrending removes part of the target signal (Canty et al., 2013). Secondly, the associations revealed do not directly provide a conclusion to the still disputed question of the existence and stability of AMO as a natural oscillatory phenomenon. The AMOI-related patterns have exhibited a relatively strong resemblance between the first and second half of the analysis period, especially over the oceanic areas. This suggests a fair degree of stability of the relations between north Atlantic SST and local temperature in more distant areas, but does not confirm the stationarity of AMO as such. It should also be considered that the 55-year-long sub-periods encompass less than one cycle of the approximately 70-year-long supposed main cycle of AMO, and that the relations detected are in large part due to synchronization of shorter-term variability in AMOI and temperature. Finally, the attribution of temperature components to AMOI may also be partly spurious due to aliasing with other predictors, or with explanatory factors omitted in our analysis setup. In particular, changes in the amounts of anthropogenic aerosols have been suggested as a cause for temperature variations in the northern Atlantic (Booth et al., 2012), though their responsibility for the bulk of the multidecadal variability has subsequently been disputed (Zhang et al., 2013). Possible forcing of AMO by combined natural forcings (volcanic and solar) has also been shown (Knudsen et al., 2014), while Ting et al. (2014) suggested AMO to be a product of natural multidecadal variability and anthropogenic forcing. Altogether, the question of AMO's nature and the degree of its influence remains open.

Finally, it should be accentuated once again that the issue of attribution of climate variability cannot be completely resolved by a statistical approach alone. Statistical solutions to this multifaceted problem therefore need to be considered alongside GCM-based simulations, conceptually more universal than purely statistical approaches, yet still only partly successful in reproducing the observed features of the climate system (IPCC, 2013). Our results hope to contribute to future efforts in this field: by showing the character and variability of temperature components formally attributable to various forcings across several data sets, their robustness (or lack thereof) was illustrated, providing a picture of the respective fingerprints, as well as supporting guidelines for the use of the respective data in the validation of climate models.

Data availability

Several publicly available data sets were employed in our analysis. The specific references and internet links to the individual data sources are given in the text; all their authors and providers have our gratitude.

The Supplement related to this article is available online at doi:10.5194/esd-7-231-2016-supplement.
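For concreteness, a linearly detrended AMO index in the spirit of the prevalent Enfield et al. (2001) definition can be sketched as follows; the region bounds, the assumption of a complete SST grid, and the input layout are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def amo_index(sst, lat, lon):
    """sst: (time, lat, lon) SST anomaly array (assumed gap-free here);
    returns the linearly detrended north Atlantic mean."""
    box_lat = (lat >= 0) & (lat <= 70)              # roughly 0-70 N
    box_lon = (lon >= -80) & (lon <= 0)             # roughly 80 W-0
    region = sst[:, box_lat][:, :, box_lon]
    w = np.cos(np.deg2rad(lat[box_lat]))[None, :, None]  # area weighting
    series = np.sum(region * w, axis=(1, 2)) / np.sum(
        w * np.ones_like(region), axis=(1, 2))
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    return series - (slope * t + intercept)         # linear detrending
```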
Figure 1. Time series of the explanatory variables employed in the attribution analysis. Bars to the right of individual panels illustrate the pre-selected characteristic variations of the predictors, used for calculation of the temperature responses: increase of CO2-equivalent GHG concentration between 1901 and 2010 (+141 ppm); increase of solar irradiance by 1 W m^-2; Mt. Pinatubo-sized volcanic eruption (aerosol optical depth +0.15); increase of SOI, NAOI, AMOI, PDOI and TPI by four times the standard deviation of the respective time series. Thicker, darker lines represent a 13-month moving average of the series.

Figure 2. Pair-wise Pearson correlation coefficients between local monthly temperature anomaly series from different data sets for the 1901-2010 period. See Fig. S1 for correlations during the 1901-1955 and 1956-2010 sub-periods.

Figure 3. Temperature responses (°C) to characteristic variations of the explanatory variables (specified in Fig. 1), obtained by multiple linear regression carried out with one predictor shifted in time by t, while keeping the others at t = 0.

Figure 4. Regression-estimated responses (°C) of global (blue) or global land (green) monthly temperature anomalies to pre-selected characteristic variations of individual explanatory variables (specified in Fig. 1). A time shift of +1 month (predictor leading temperature) was applied for solar irradiance, +7 months for volcanic aerosol amount, and +2 months for SOI. The boxes illustrate the 99 % confidence intervals, calculated by moving-block bootstrap (12-month block size). The 20CR-based results are shown for the series averaged over the 60° S to 75° N area. Obtained for the 1901-2010 period; see Figs. S2 and S3 for results over the 1901-1955 and 1956-2010 sub-periods, and Fig. S4 for visualization of individual temperature series and their regression-based fits.

Figure 5. Geographic patterns of regression-estimated contributions to local temperature (°C) from pre-selected characteristic changes of the explanatory variables (specified in Fig. 1). A time shift of +1 month (predictor leading temperature) was applied for solar irradiance, +7 months for volcanic aerosol amount, and +2 months for SOI. Areas with a response statistically significant at the 99 % level are highlighted by hatching. See Fig. S5 for results derived from the MLOST and HadCRUT4 data sets as well as from GISTEMP data with 250 km smoothing; Fig. S6 for results over the 1901-1955 and 1956-2010 sub-periods.

Figure 6. Geographic distribution of the predictor offset time t for which the strongest local temperature response was detected, within the ±24 month range. Positive values of t correspond to setups with the predictor leading temperature; only grid points with a response statistically significant at the 99 % level are shown. See Fig. 7 for the corresponding values of the temperature response.

Figure 7. Geographic distribution of the strongest temperature response (°C) to individual explanatory variables within the ±24 month range of the temporal offset of the predictor. Areas with the response statistically significant at the 99 % level are highlighted by hatching.
2018-05-07T18:09:46.218Z
2015-11-12T00:00:00.000
{ "year": 2015, "sha1": "85b3e6306045a6a943bb43db999bceca0452ea6b", "oa_license": "CCBY", "oa_url": "https://www.earth-syst-dynam.net/7/231/2016/esd-7-231-2016.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "58367021bee773f72760815b8717c027c36ffcaa", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
140712874
pes2o/s2orc
v3-fos-license
BSR versus Climate Change and Slides

We investigate the relationship between climate change and hydrate stability in two peri-Antarctic areas: the Antarctic Peninsula and South Chile. We consider these areas because the polar and subpolar areas are the most sensitive to global change. The zone where methane can be easily released by hydrate melting is the shallow water, that is, in proximity of the intersection between the BSR and the sea bottom. In order to simulate the effect of climate change on hydrate stability, we consider the following seven scenarios for both areas: present environmental conditions; sea bottom temperature increase/decrease of 1 °C; water depth increase/decrease of 100 m; sea bottom temperature and water depth increase/decrease of 1 °C and 100 m, respectively. On the basis of our results, we conclude that modeling is a useful tool to understand the effect of climate change on hydrate stability. Moreover, in these areas, where the sea bottom temperature is influenced by temperature increase, slides could be easily triggered by hydrate dissociation.

Introduction

Submarine slides are global phenomena that can occur on slopes that may be considerably less inclined than their terrestrial equivalents due to the presence of excess water [1, 2]. They can displace huge amounts of material over great distances. For example, the Storegga slide of Norway had a total run-out distance of about 800 km, with a total displaced volume of sediments estimated to be on the order of 5500 km^3 [3]. Slope failures occur when the downslope driving forces due to gravity and other factors exceed the resisting forces inherited from the sediment strength. In recent decades, a debate has developed regarding the relationship among gas hydrate, climate change, and slope stability. In fact, gas hydrates represent a significant geohazard that is of immediate importance to near- and offshore developments. Human activities and installations in regions of gas hydrate occurrence must take into account the presence of gas hydrate and deal with the consequences of its presence [4]. The hydrate stability zone in marine environments is a function of the water depth, the seafloor temperature and the geothermal gradient. Any changes to the temperature and/or pressure, both at the surface and in the area adjacent to the hydrate, affect the thickness of the stability zone. Although temperature and pressure are the main controls on the formation of gas hydrates and the thickness of the hydrate stability zone, other factors such as gas chemistry and gas availability will also alter the thickness and location of the hydrate stability zone [5]. Dissociation of hydrate may trigger the sudden release of large amounts of methane through the ocean into the atmosphere, leading to accelerated climate warming. Hydrate dissociation and gas release into the atmosphere have been proposed as a significant mechanism to explain the rapid and significant climate change during the Palaeocene-Eocene Thermal Maximum [6-9]. This hypothesis has been challenged by different studies, which suggest that methane from dissociating hydrate may never have reached the atmosphere [10, 11]. Alternatively, it has been proposed that methane release may follow, rather than lead, climate change [12]. The association between gas hydrates and submarine slope failure has been widely documented (e.g., [2, 3, 13-16]).
As is well known, gas hydrates have been found to be a significant constituent of seafloor sediment in many continental shelf-slope environments around the world [17, 18]. There are many examples of a possible connection between gas hydrates and submarine slope failures. Kvenvolden [9, 10] summarized slope failures on the continental slope and rise of the west coast of Africa, on the US Atlantic continental slope, in the fjords of British Columbia, and on the Alaskan Beaufort Sea continental margin. Several researchers have performed in-depth analyses of the Norwegian continental margin [19-23], and all have suggested that gas hydrates may have triggered one or more large submarine slides in this area. Other well-known examples of coincident gas hydrate distribution and slope failure include the Cape Fear slide on the continental slope and rise southeast of Cape Fear, North Carolina [15], the Humboldt slide zone near the Mendocino triple junction on the northern California continental margin [14], and the submarine slope failure offshore Fiordland [2]. In recent years, several authors (e.g., [22-24]) have investigated the relationship between gas hydrate dissociation and the increase of pore fluid pressure below the bottom simulating reflector (BSR), which is the seismic indication of the base of the gas hydrate stability zone. In fact, dissociation of gas hydrates at the BSR, in response to a change in the physical environment (i.e., temperature and/or pressure regime), can liberate excess gas and elevate the local pore fluid pressure in the sediment [24, 25]. The increase in pore fluid pressure has the effect of decreasing the effective normal stress on any assumed failure surface, so that less shear stress is required to initiate failure. Whether free gas liberation by gas hydrate dissociation can singularly cause a slide, rather than just being a contributing load or the final trigger, depends on various factors. These include the rate of dissociation, sediment permeability, depth below sea level, and depth below the seafloor [26]. For decomposing gas hydrates to be a widespread cause of slope failure, three criteria must be met [28]: (1) gas hydrates must not only be present, but must also be widespread; (2) slides must have originated in areas that are within the gas hydrate phase boundaries; (3) soils of low permeability must be common at the base of the hydrate zones (to permit the buildup of excess pore pressure that could lead to unstable slopes during sea-level falls). In this paper, we focus our attention on slides that occur in areas where the BSR is shallow and, consequently, where hydrate stability is influenced by climate change, that is, by temperature and pressure changes. In order to quantify the effects of gas hydrate dissociation, a numerical analysis has been undertaken in two areas: the Antarctic Peninsula and South Chile (Figure 1). The first area was chosen because the polar regions are more sensitive to climate change. In fact, climate change signals are particularly amplified in transition zones, such as the peri-Antarctic regions [29]. On the other hand, the high amount of gas hydrate present in South Chile represents an important geohazard related to the intense seismicity affecting the region [30, 31] and the shallow BSR locally present [32]. In this context, we plan to study the relationship between the shallow hydrate/BSR depth and pressure/temperature changes.
An example of the seismic data acquired in this area, with a clear BSR, is reported in Figure 2. In the prestack depth-migrated section, the presence of the base of the free gas reflector (BGR; [33]) is clearly detected. To better characterize the area where the BSR is very strong and continuous, another cruise was carried out during the austral summer 2003/2004 to acquire detailed bathymetric data (12 kHz acoustic frequency), subbottom profile data, two gravity cores, and seismic data with a short hydrophone streamer (600 m) [29]. The multibeam bathymetric data, collected using a Reson multibeam echo sounding system (Reson SeaBat 8150), cover an area of about 4,500 km^2 [29, 34]. The data were calibrated using water column velocity profiles, reconstructed from conductivity-temperature-depth (CTD) measurements acquired at four representative sites. The new bathymetry map was generated in the form of a shaded digital elevation model, using the processing software PDS2000 and based on a grid cell size of 100 × 100 m (Figure 1). The bathymetric map of the study area provides evidence of mud volcanoes, collapse troughs, and recent slides [29]. The geothermal gradient of the area was estimated by analyzing the seismic data, in particular by comparing the BSR depth extracted from the seismic data with the theoretical BSR depth evaluated for different geothermal gradients [34]. The analysis indicated that the regional geothermal gradient is 38 °C/km, considering the sea bottom temperature equal to 0.4 °C, as indicated by OBS [27] and CTD data [34].

South Chile. The second study area is located along the southern Chilean margin on the continental slope (Figure 1). BSRs have been detected during several geophysical cruises. In particular, the BSR has been recognized along the accretionary prism by several authors [32, 35-40]. Unfortunately, in this area only data from the Global Multi-Resolution Topography compilation [41] were available (http://media.marinegeo.org/category/bathymetry), which contain multibeam data from the NBP0602 Project (Simrad EM120; Figure 1). Consequently, the resolution is about 600 m/node, and it is difficult to recognize evidence of slides at relatively shallow depth. In the literature, no information about slides related to gas hydrate in the southern Chilean margin can be found. On the basis of the bathymetric data, we focused our attention on a slide located in the northern part of the investigated area. The geothermal gradient is very variable along this margin (see, e.g., [32]). For this reason, we considered a constant geothermal gradient equal to 38 °C/km, as in the Antarctic Peninsula. The sea bottom temperature was considered equal to 2.2 °C, as reported in [42].

The Modeling

Our objective is to verify whether climate change (i.e., sea level and bottom temperature changes) can be responsible for slides because of gas hydrate dissociation. As is well known, the most crucial zone is the area of intersection of the base of the gas hydrate stability zone with the seabed. This area is affected by a bottom-water temperature increase more than the deeper parts of the hydrate stability zone [19]. Here, gas hydrates are close to their stability limit and will respond quickly to the anticipated warming of the polar region, because thermal diffusion times through any overlying sediment are short.
Recent models have shown that shallow and cold deposits can be very unstable and release significant quantities of methane under the influence of as little as 1 °C of seafloor temperature increase [43]. For this reason, we model the effect of climate change at the intersection between the base of the gas hydrate stability field and the seafloor. By using bathymetric data, sea bottom temperature and geothermal gradient, and considering that the natural gas is methane, we evaluate the theoretical BSR [34]. It was calculated as the intersection between the geothermal curve (evaluated from the sea bottom temperature and the geothermal gradient) and the hydrate stability curve, using the Sloan formula [5]. Figure 2 shows the geothermal (blue line in the inset) and the gas hydrate stability (red line in the inset) curves; their intersection corresponds to the BSR depth. The bathymetric data are converted to pressure considering an average water density equal to 1046 kg/m^3, as reported in the literature [44]. In order to simulate the effect of climate change on BSR depth, we consider a small temperature variation (equal to 1 °C) in order to verify how slight climate change can influence hydrate stability, as recently suggested by several authors (e.g., [43]). Regarding sea level change, several models have suggested that the sea level dropped by almost 100 m during the Last Glacial Maximum [45]. For this reason, we adopt this sea level variation in our modeling. On the basis of these considerations, we consider the following seven scenarios for both areas:

(S0) present environmental conditions, from measurements;
(S1) sea bottom temperature increase of 1 °C with respect to the present temperature (interglacial period scenario);
(S2) sea bottom temperature decrease of 1 °C with respect to the present temperature (glacial period scenario);
(S3) water depth increase of 100 m with respect to the present bathymetry (interglacial period scenario);
(S4) water depth decrease of 100 m with respect to the present bathymetry (glacial period scenario);
(S5) sea bottom temperature and water depth increases of 1 °C and 100 m, respectively, with respect to the present environmental conditions (interglacial period scenario);
(S6) sea bottom temperature and water depth decreases of 1 °C and 100 m, respectively, with respect to the present environmental conditions (glacial period scenario).

In the Antarctic Peninsula, the seismic BSR depth is affected by an error of about 5%, while the bathymetric data present an error of about 1.5% [34]. Consequently, we consider that the theoretical BSR, evaluated by using the geothermal gradient extracted from the seismic and bathymetric data, is affected by an error of about 6.5%. So, we consider that the BSR crosses the seafloor if the difference between the bathymetry and the theoretical BSR depth is less than the bathymetry multiplied by 6.5%. Because no more detailed information is available for South Chile, we assume the same error as for the first dataset, for consistency. The results of the modeling are shown in Figure 3 for both analyzed areas. The grids representing each scenario are reported in different colors superimposed on the multibeam data. In order to understand the effect of climate change on slope stability, we evaluate the relationship between the considered scenarios and the identified slides, indicated by solid lines in Figure 1. In Figure 4, we show the results of our modeling for both analyzed profiles.
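The scenario machinery described above can be sketched as follows: find the sub-seafloor depth where the geotherm crosses the methane-hydrate stability temperature, then perturb the seabed temperature and water depth. The stability curve below is a simplified placeholder fit, not Sloan's actual formulation; its coefficients and the example depths are assumptions for illustration.

```python
import numpy as np

RHO_W, G = 1046.0, 9.81      # average seawater density (kg/m^3) and gravity

def hydrate_stability_T(p_mpa, a=9.0, b=-8.0):
    """Hypothetical simplified methane-hydrate dissociation curve T(P)."""
    return a * np.log(p_mpa) + b

def theoretical_bsr(water_depth, t_seabed, grad=0.038, dz=1.0):
    """Depth below seafloor (m) where the geotherm meets the stability curve."""
    z = np.arange(0.0, 2000.0, dz)
    t_geo = t_seabed + grad * z                 # geothermal curve, degC
    p = RHO_W * G * (water_depth + z) / 1e6     # hydrostatic pressure, MPa
    unstable = np.nonzero(t_geo >= hydrate_stability_T(p))[0]
    return z[unstable[0]] if unstable.size else np.nan

# Scenario S1 (interglacial): seabed warms by 1 degC at unchanged water depth
bsr_now = theoretical_bsr(600.0, 0.4)
bsr_warm = theoretical_bsr(600.0, 1.4)
```

Running the seven scenarios then amounts to perturbing t_seabed by ±1 °C and water_depth by ±100 m, and testing whether the resulting BSR depth falls within the 6.5 % tolerance of the seafloor described above.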
The solid line indicates the present situation, that is, the intersection between the BSR and the seafloor. The dashed lines represent the scenarios (S2, S3) in which the hydrate is more stable. On the contrary, dissociation of the gas hydrate is represented by dotted lines for scenarios S1, S4, and S6. Scenario S5 affects the depth of the intersection between the base of the hydrate stability zone and the seafloor only very weakly.

Discussion and Conclusions

Our modeling clearly shows the relationship between gas hydrate and slope stability. In fact, the lines in Figure 4, representing the intersection between the gas hydrate stability zone and the seafloor for each scenario, are located in proximity of the main head scarp (see Figure 1). This result supports the hypothesis that hydrate can influence slope stability, causing important slides. It is important to underline that direct measurements of the sedimentary sequence are not available; so, considerations about the age of the slides and the climate change remain speculative. As already mentioned, we extracted profiles crossing two slides to better evaluate the effect of climate change on the intersection between the hydrate stability zone and the seafloor. Note that the results of the two models are in agreement. Note also that the two datasets have a different resolution and, for this reason, we performed a qualitative analysis of the model results.

Let us consider the glacial period, in which we have a decrease of temperature and a decrease of pressure. At the beginning, we suppose just a decrease of the temperature (scenario S2), and the result indicates a positive feedback. On the contrary, when the cooling produces a pressure decrease (scenarios S4 and S6), a negative feedback is observed. In this case, gas is released into the seawater, and consequently into the atmosphere, reducing the cooling. In the interglacial period, the first effect is the temperature increase (scenario S1). As expected, the modeling indicates gas hydrate dissociation (positive feedback). So, the methane released can reach the atmosphere, contributing to global warming. If we consider a case in which the sea-bottom temperature is not affected by global warming and the water depth increases (scenario S3), the feedback is negative, that is, the hydrate is stable. If we consider the joint effect of temperature and pressure increase, the effect on hydrate stability is negligible. In summary, the hydrate influences slope stability and climate change during the interglacial period only at the beginning of the warming, and only if the sea bottom is influenced by the temperature variation. In conclusion, our modeling points out the strong relationship between gas hydrate presence and climate change. It is therefore very important to perform modeling in areas where gas hydrate is present, in order to simulate the effect of climate change on hydrate and slope stability. It is clear that, in these areas, where the sea bottom temperature is influenced by temperature increase, slides could be easily triggered by hydrate dissociation.
2019-04-24T13:07:43.431Z
2011-08-01T00:00:00.000
{ "year": 2011, "sha1": "69a4545ffa1158fe0cd43d18f4f1c1d47100bad0", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/archive/2011/390547.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5a10595e136eaf07af232ce0691bee7ab575844d", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
857979
pes2o/s2orc
v3-fos-license
The cellular composition of lymph nodes in the earliest phase of inflammatory arthritis Objectives Rheumatoid arthritis (RA) is an immune-mediated inflammatory disease of unknown aetiology. Recent work has shown that systemic autoimmunity precedes synovial inflammation, and animal models have suggested that changes in the lymph nodes may precede those in the synovial tissue. Therefore, we investigated the cellular composition of the lymph node in the earliest phases of inflammatory arthritis. Methods Thirteen individuals positive for immunoglobulin M (IgM) rheumatoid factor and/or anticitrullinated protein antibodies without arthritis were included. Additionally, we studied 14 early arthritis patients (arthritis duration ≤6 months, naïve for disease-modifying antirheumatic drugs), and eight healthy controls. All subjects underwent ultrasound-guided inguinal lymph node biopsy. Different T- and B-lymphocyte subsets were analysed by multicolour flow cytometry. Results There was an increase in activated CD69+CD8+ T cells and CD19+ B cells in early arthritis patients compared with healthy controls. We also observed a trend towards increased CD19+ B cells in autoantibody-positive individuals without arthritis compared with healthy controls. Conclusions This exploratory study suggests that there is increased immune cell activation within the lymph nodes of early arthritis patients as well as in autoantibody-positive individuals at risk of developing RA. This method provides a unique tool to investigate immunological changes in the lymph node compartment in the earliest phases of inflammatory arthritis. INTRODUCTION Rheumatoid arthritis (RA) is a prototypic inflammatory autoimmune disease with a poorly understood etiopathogenesis. Given the destructive nature of the disease, early diagnosis and start of treatment are highly important. [1][2][3] Several studies have shown that elevated acute-phase proteins, chemokines, cytokines and RA-specific autoantibodies (rheumatoid factor (RF) and anticitrullinated protein antibodies (ACPA)) can be detected in peripheral blood years before the onset of arthritis. [4][5][6][7][8][9] In prospective cohort studies, these autoantibody-positive individuals can be defined as having systemic autoimmunity associated with RA and being at risk of developing RA. 10 A recent study showed that the cellular composition of the primary target of RA, the synovium, is comparable with that of healthy controls during this phase. 11 Thus, systemic autoimmunity appears to precede the development of synovial inflammation. Since the RA-specific autoantibodies can be present for years without disease symptoms and without increased synovial cellularity, factors outside the synovial compartment should be responsible for the initial changes leading to RA. As a general principle, the recruitment of activated immune cells to the site of inflammation is initiated after informing a nearby lymph node of a danger signal. Thus, the immune reaction in lymph nodes generally precedes the influx of effector cells into the target tissue. Indeed, animal models have shown that the onset of arthritis is preceded by phenotypic changes in the cellular compartment of draining lymph nodes, indicating a primary role for lymph nodes in the initiation of arthritis. [12][13][14] However, very little is known about the initial events that occur in lymph nodes before disease onset in patients with arthritis.
Recently, we developed core-needle biopsy sampling of inguinal lymph nodes for research in RA, and we have shown that the procedure is generally well tolerated. 15 In the current study, we investigated the cellular composition of lymph node biopsies obtained from autoantibody-positive individuals at risk of developing RA, and compared the results with those observed in early arthritis patients and healthy controls. Study subjects and lymph node biopsy sampling Individuals with elevated IgM-RF and/or ACPA levels without arthritis were included in the study. These individuals were otherwise healthy, have systemic autoimmunity associated with RA, and are therefore at risk of developing RA (phase c, ref. 10) (further referred to as 'at risk' individuals). Additionally, early arthritis patients (arthritis duration ≤6 months, determined from the first clinical signs and symptoms of arthritis as assessed by the rheumatologist; disease-modifying antirheumatic drug naïve) and healthy controls without any joint complaints and without RA-specific antibodies were included. Ultrasound-guided inguinal lymph node biopsies were obtained by a radiologist using a 16G core needle as previously described, 15 and immediately processed for flow cytometry analysis. The study was approved by the local ethical committee, and all study subjects gave written informed consent. Flow cytometry analysis Lymph node biopsy samples were put through a 70 μm cell strainer (BD Falcon) to obtain a single-cell suspension. Statistics Data that were not normally distributed were presented as medians (IQR). Differences between study groups were analysed using one-way analysis of variance with Bonferroni's post hoc multiple comparison tests or an unpaired t test, where appropriate. Categorical data were presented as numbers (percentages), and differences between groups were analysed using the χ² test. GraphPad Prism software (V.5, GraphPad Software, La Jolla, California, USA) was used for statistical analysis. RESULTS Lymph node biopsies were obtained from 13 IgM-RF and/or ACPA positive individuals without arthritis. Nine of them were referred to the outpatient rheumatology clinic because of joint pain, and four through the testing of first-degree relatives of RA patients (two of the latter did not have arthralgia; all other individuals did have arthralgia). None of these autoantibody-positive individuals has yet developed arthritis after a median (IQR) follow-up time of 12 (9–14) months. Additional characteristics are given in online supplementary table S1. For comparison, lymph node biopsies were obtained from 14 early arthritis patients (eight RA according to the 2010 ACR/EULAR criteria for RA and six unclassified arthritis (UA), of whom four fulfilled the 2010 ACR/EULAR criteria for RA after follow-up; median (IQR) arthritis duration one (0–2) months) and eight autoantibody-negative healthy controls. Table 1 shows the demographic data of the study participants. To explore the cellular composition of the lymph node compartment, freshly collected lymph node specimens were directly analysed by multicolour flow cytometry for the presence of specific T and B lymphocytes, including their activation or differentiation state (figure 1). The frequencies of CD4+ and CD8+ T cells were within the expected range (∼80% CD4+ and ∼20% CD8+ within CD3+ T cells) and no differences were observed between at risk individuals, early arthritis patients and healthy controls (figure 2A,B).
Subsequently, the percentage of activated T cells was determined by analysing coexpression of CD69 (figure 2C,D). Interestingly, the percentage of activated CD8+ T cells differed between the three study groups (p=0.012), and was specifically increased in early arthritis patients compared with healthy controls (median (IQR) 33.10 (23.60–42.905) vs 22.20 (15.75–26.55), p<0.05). The increase in the percentage of activated T cells was not dependent on the presence of autoantibodies or the presence of arthritis in the lower limb on the ipsilateral side of the biopsied lymph node (data not shown). In addition, there were no differences between UA and RA patients. Next, we analysed the percentage of CD19+ B cells, which showed a significant difference between the three study groups (p=0.049). The percentage of CD19+ B cells was significantly higher in early arthritis patients compared with healthy controls (median (IQR) 38.45 (21.45–48.90) vs 20.05 (13.03–28.25), p<0.05) (figure 2E). We also observed a trend towards increased CD19+ B cells in at risk individuals compared with healthy controls (median (IQR) 29.60 (19.85–39.65) vs 20.05 (13.03–28.25), p>0.05). The increased number of B cells was independent of the presence of autoantibodies (figure 2F). In the group of early arthritis patients, no differences were observed between UA and RA patients, and the increased number of B cells was independent of the presence of arthritis in the lower limb on the ipsilateral side of the biopsied lymph node (data not shown). The percentages of the different subsets of B cells, naïve, memory switched and memory non-switched (based on IgD and CD27), were comparable between the different study groups (data not shown). DISCUSSION This explorative study was undertaken to examine for the first time the cellular composition of lymph nodes in at risk individuals having systemic autoimmunity associated with RA, and in early arthritis patients, compared with healthy controls. First, the results indicate that flow cytometry analysis of lymph node biopsies is a feasible method for studying the cellular composition and activation of lymph node tissue in the earliest phases of arthritis. Second, we observed more CD19+ B cells and activated CD8+ T cells in early arthritis patients compared with healthy controls. Third, there was a trend towards an increase in CD19+ B cells in at risk individuals compared with healthy controls. During an immune response, the egress of T lymphocytes from lymph nodes is shut down transiently by downregulation of S1P1 and upregulation of CD69, leading to lymphocyte retention, maturation and proliferation. 16 The results of this explorative study suggest increased activation of T cells, as shown by coexpression of CD69, within the lymph nodes of arthritis patients during the earliest phase of disease. Of interest, the percentage of activated CD8+ T cells is increased. These results are in line with animal models of arthritis, where a skewed CD4/CD8 ratio is observed in regional lymph nodes before the onset of arthritis. 13 14 These results support the notion that T cells are intimately involved in the initiation of seropositive RA. [17][18][19] Future research should focus on the identification of the T-cell subset(s) and antigen specificity associated with the development of RA. Of interest, an increased percentage of CD19+ B cells was observed in the lymph nodes of early arthritis patients and autoantibody-positive subjects at risk of developing RA.
These results of the lymph node analyses differ from those in a previous study on peripheral blood, in which gene expression profiling of at risk individuals revealed a low B-cell signature especially in those individuals who developed arthritis after follow-up. 20 It is tempting to speculate that the B cells are retained in the lymph nodes, to ensure maturation and differentiation during the immune response which would be in line with findings in animal models of arthritis. 12 13 An obvious limitation of the current study is the small sample size and the short follow-up period of the 'at risk' individuals. Of note, it has been challenging to obtain lymph node biopsy samples from these patients and controls. We now have all the tools available to first identify those individuals at risk of developing arthritis and to prospectively analyse the immune cells in lymph node tissues. This will create a framework for studying the molecular events taking place in lymph nodes before the onset of arthritis that can be potentially related to the processes involved in the pathogenic changes in synovial tissues.
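As a small illustration of the statistical comparison described in the Methods (one-way ANOVA followed by Bonferroni-corrected pairwise tests), the following sketch uses entirely hypothetical CD19+ B-cell percentages, not the study data; the pairwise t tests with a Bonferroni correction stand in for the post hoc procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical CD19+ B-cell percentages per group (not the study data).
healthy = np.array([20.1, 13.0, 28.3, 18.5, 22.0, 15.2, 25.4, 19.8])
at_risk = np.array([29.6, 19.9, 39.7, 25.1, 33.0, 27.4, 31.2])
early_ra = np.array([38.5, 21.5, 48.9, 35.2, 41.0, 30.8, 44.3])

# One-way ANOVA across the three groups.
f_stat, p_overall = stats.f_oneway(healthy, at_risk, early_ra)
print(f"ANOVA: F={f_stat:.2f}, p={p_overall:.3f}")

# Pairwise t tests with a Bonferroni correction (three comparisons).
pairs = {"healthy vs at-risk": (healthy, at_risk),
         "healthy vs early": (healthy, early_ra),
         "at-risk vs early": (at_risk, early_ra)}
for name, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{name}: corrected p={min(1.0, p * len(pairs)):.3f}")
```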
2016-05-17T15:54:05.497Z
2013-05-09T00:00:00.000
{ "year": 2013, "sha1": "52b8d17236630a39fdd3ae1e31da1c6355f0cab6", "oa_license": "CCBYNC", "oa_url": "https://ard.bmj.com/content/72/8/1420.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "52b8d17236630a39fdd3ae1e31da1c6355f0cab6", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
270719896
pes2o/s2orc
v3-fos-license
Intuitionistic Connection Cloud Model Based on Rough Set for Evaluation of the Shrinkage–Swelling of Untreated and Lime-Treated Expansive Clays: The evaluation of the shrinkage–swelling characteristic of expansive clay is of great significance, but it is a complex problem, since the evaluation process involves numerous uncertain factors, such as randomness, non-subordination, and hesitation uncertainties. Here, an intuitionistic connection cloud model is proposed to address this issue. First, an evaluation index system is established. According to the reliability of the interval-valued evaluation indexes, the corresponding cloud numerical characteristic parameters are specified based on the membership interval generated by the intuitionistic fuzzy principle. Moreover, improved conditional information entropy based on rough set theory is utilized to assign the index weights. Subsequently, combined with the weights, the intuitionistic connection degree of the sample to the classification standard is determined to identify the shrinkage–swelling grade. Finally, a case study on the shrinkage–swelling grade of untreated and lime-treated expansive clays at Hefei Xinqiao International Airport was performed to illustrate the validity and reliability of the model. The results show that the proposed model is reasonable and feasible for the evaluation of the shrinkage–swelling grade of untreated and lime-treated expansive clays. Introduction Expansive clay, a special soil with strong hydrophilicity and considerable shrinkage–swelling potential, directly affects the stability and safety of building foundations and may cause serious damage to structures [1]. So, to improve the strength of expansive clay, lime treatment is used to deal with the above problems in engineering practice. Sometimes, poor construction quality control and neglect of shrinkage may also result in engineering hazards and huge economic losses. Consequently, to explore appropriate treatment or modification methods and ensure the safety and stability of engineering structures founded on expansive clay, it is important to appropriately estimate the shrinkage–swelling grade of untreated and lime-treated expansive clays [2]. However, the problem has not yet been solved, because expansive clay in different regions shows specific physical features and engineering behaviors due to its various compositions and environments. Therefore, it is very necessary to develop an evaluation method that enhances the accuracy and reliability of the evaluation of the shrinkage–swelling problem via uncertain and certain information and human behavioral characteristics.
To accurately assess the shrinkage–swelling grade, some scientists have introduced uncertainty analysis theories, for example, extension theory [3], fuzzy mathematics [4], grey theory [5], rough set theory [6], neural networks [7], and support vector machines [8], to depict the uncertainty of the evaluation factors. Although these methods have made substantial progress, they still have some defects in their corresponding application processes: extension theory is an effective method by which to solve incompatibility problems in the real world, but it does not consider the ambiguity of the evaluation object; the fuzzy mathematical membership function is difficult to determine in practical applications; grey theory has low accuracy for the evaluation of samples with dispersed information; rough set theory eliminates some important evaluation factors in the process of attribute simplification; and machine learning methods, including artificial neural networks and support vector machines, cannot describe the random ambiguity of the interaction between the influencing factors. To address these shortcomings, Wang et al. presented classification methods coupling set pair analysis theory with triangular fuzzy numbers or a cloud model to classify the degree of shrinkage and expansion of expansive clay [9,10]. These methods can express the transition between the certainty and uncertainty of the evaluation indicators. Compared with the above methods, the cloud model can effectively deal with the fuzziness and randomness of an index over an infinite interval, but it ignores the finite interval over which the index values are actually distributed and cannot deal with the correlation between multiple evaluation indexes. Recently, Wang et al. [11] proposed a connection cloud model to overcome the above defect of the traditional cloud model to a certain extent. But it still does not consider the hesitant property of the connection degree, which may result in the loss of evaluation information; for instance, the connection degree is a crisp value of 1 when the index value is the expectation of the connection cloud. This is not consistent with the actual state and may not reflect the hesitant characteristics of the connection degree. Accordingly, some researchers have introduced the intuitionistic normal cloud to analyze uncertain problems [12,13]. However, thus far, little attention has been paid to classification with weights that consider both the importance and the information content of the index under multiple uncertainties. As we know, proper index weighting is a fundamental step of evaluation. Objective weighting methods, e.g., the entropy method, the maximum deviation method, and the TOPSIS method, are commonly utilized to assign evaluation index weights. They can make full use of the information but cannot embody the importance of the evaluation index. To overcome this shortcoming, Chen and Huang [14] applied the advantages of the rough set to determine the importance and weight of attributes. This method makes full use of the objective information of the sample data but ignores the cases where the weight of an index is 0.
However, each index has a certain effect on the shrinkage–swelling property, so its influence should be considered in the evaluation procedure. Based on the discussion above, index weight determination also needs further investigation, because previous objective weighting methods mainly focused on index information and few focused on the importance of that information. Thus, there is a need for the development of suitable methods for the shrinkage–swelling classification of untreated and lime-treated expansive clays. In summary, expansive clay in different regions exhibits specific engineering behaviors due to its various compositions and environments, so the evaluation of the shrinkage–swelling characteristic of expansive clay inevitably encounters hesitation uncertainties. However, existing uncertainty analysis methods still lack a unified consideration of fuzziness, hesitation, and randomness uncertainties. This may result in difficulties in obtaining a precise description of uncertainty when the measured values lie at the mean value. Therefore, the objective of our work is to provide a new means of simultaneously depicting the membership, non-membership, and hesitation degrees for the evaluation of the shrinkage–swelling grade, associated with the importance of index information. The proposed method can take into account the hesitant characteristics of the evaluation indicators in order to improve the reliability and accuracy of the actual evaluation of the shrinkage–swelling grade. Given that previous classifications of shrinkage–swelling grades rarely considered the description of the hesitation degree in the various types of uncertain indicators and their weighting, it is necessary to develop an evaluation method that works under multiple uncertain environments and simultaneously depicts the membership, non-membership, and hesitation degrees with respect to the classification standards. Based on these considerations, and to improve the flexibility and practicability of uncertainty processing, this paper couples the concepts of the intuitionistic fuzzy set, the connection cloud model, and the rough set, and proposes a new intuitionistic connection cloud model to dialectically analyze and evaluate the shrinkage–swelling grade based on the description of the membership, non-membership, and hesitation characteristics of interval-valued indicators with respect to the classification standards. At the same time, this study introduces an improved conditional information entropy weighting method based on the rough set to enhance the reliability of the evaluation results. Intuitionistic Fuzzy Set Atanassov [15] first introduced the concept of intuitionistic fuzzy sets (IFS) as a generalization of fuzzy sets. The IFS is an extension of the fuzzy set and gives both the membership degree and the non-membership degree of an element belonging to a set. So, the IFS offers a comprehensive, delicate, and flexible description of an uncertain system and overcomes the defect of the fuzzy set, which can only describe "this is also another" and cannot describe "not one is not the other". Definition 1.
[15,12]: Let X be a conclusive domain. If x corresponds to two mappings u: X → [0, 1] and ν: X → [0, 1], with the condition 0 ≤ u(x) + ν(x) ≤ 1, an IFS A in X is defined as A = {⟨x, u(x), ν(x)⟩ | x ∈ X}, with π(x) = 1 − u(x) − ν(x), where u(x), ν(x), and π(x) denote the membership function, non-membership function, and hesitancy function of element x belonging to the intuitionistic fuzzy set A, respectively; π(x) represents the hesitancy degree for the object x to A. Unlike the traditional fuzzy set, which is characterized only by a membership function, the non-membership function and hesitancy function in the IFS are used to depict uncertain information; that is, intuitionistic fuzzy sets possess the capability of dealing with uncertain information through membership degrees, non-membership degrees, and hesitation degrees. Hence, the IFS is more reasonable and stable in simultaneously depicting the degree of support, opposition, and neutrality of the object x for a particular state through u(x), ν(x), and π(x), respectively. So, IFSs can provide additional options for describing the properties of things and have been widely applied in various fields. This also provides a new concept for solving the problem that the connection cloud cannot describe the hesitant characteristics of fuzzy random concepts. Here, this information is coupled with the IFS to make up for the above defect in the connection cloud model. Intuitionistic Connection Cloud Model The normal cloud model can transform qualitative concepts into quantitative data; thus, it is widely used in the analysis of uncertainty problems. However, it still has some defects. First, the cloud droplets mapped by the normal cloud generator are established on an infinite interval, which makes it difficult to describe the actual distribution of an evaluation index in a finite interval. Second, the actual distribution of the evaluation index sometimes struggles to meet the assumption of normal distribution. Finally, the common interaction among uncertainties in the classification problem is ignored, so the conversion trend of the evaluation results at the grading threshold cannot be effectively reflected. These shortcomings limit the application scope of the normal cloud and may cause deviations between the results and the actual situation. To make up for these defects of the normal cloud, Wang et al. improved the normal cloud model by using connection number theory and proposed a new connection cloud model (Wang et al. 2014a). Connection number theory, developed from the principle of set pair analysis, can analyze uncertain problems from the perspectives of identity, discrepancy, and contrary aspects. Definition 2. Let P be a quantitative domain and Q be a qualitative concept of P. If x ∈ P is a random event of concept Q and satisfies x ~ N(Ex, (En′)²), En′ ~ N(En, He²), then the distribution of x in the universe P is called the connection cloud, and each x is a simulated cloud droplet. Similar to the certainty degree in the normal cloud, the membership degree of the cloud droplet x in the connection cloud is called the connection degree y(x); its mathematical model is given in [11], where En′ denotes random numbers following a normal distribution based on entropy En and hyper entropy He, and k is the order of the connection cloud. When k = 2, the connection cloud degenerates into a normal cloud.
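To make Definition 2 concrete, the following is a minimal sketch (not taken from the paper) of droplet generation for the k = 2 special case, where the connection degree reduces to the familiar normal-cloud certainty y(x) = exp(−(x − Ex)² / (2 En′²)); the parameter values are illustrative only.

```python
import numpy as np

def normal_cloud_drops(Ex, En, He, n=1000, seed=0):
    """Generate n cloud drops (x, y) for the k = 2 (normal cloud) case.

    Ex : expectation of the qualitative concept
    En : entropy (spread of the drops)
    He : hyper entropy (spread of En' around En)
    """
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, n)               # En' ~ N(En, He^2)
    x = rng.normal(Ex, np.abs(En_prime))           # x  ~ N(Ex, En'^2)
    y = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))  # connection degree
    return x, y

# Illustrative parameters, mirroring the (Ex, En, He) = (9.0, 3.0, 0.01)
# example discussed below.
xs, ys = normal_cloud_drops(9.0, 3.0, 0.01)
```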
The connection cloud model considers the actual distribution characteristics of the evaluation indexes and describes the certain and uncertain relationships of the information, the relationships between indexes and grades, and the conversion trend between adjacent grades. These advantages enable the connection cloud to be widely used in various fields. However, like the normal cloud, it still does not deal well with the non-subordination and hesitation of fuzzy concepts. It can be seen from Equation (3) that when the value of the simulated cloud droplet x is equal to the mathematical expectation Ex of the cloud, its connection degree is equal to 1, which does not fully reflect the hesitant characteristics of some fuzzy random concepts [10]; that is, in some cases, the membership of any element of the discussed region cannot be accurately determined, and it is therefore impossible to find elements that entirely belong to the ambiguous and uncertain classification set. The above drawbacks can be addressed by intuitionistic fuzzy sets, which allow the membership to oscillate randomly in an interval rather than being fixed on a crisp number. Therefore, this paper proposes a new intuitionistic connection cloud model using intuitionistic fuzzy sets. Definition 3. Define x (x ∈ P) as a one-time implementation of Q in the region P. To achieve this, the connection degree is no longer equal to a fixed value of 1 corresponding to x = Ex but is scaled by a uniform random number. The membership degree u and non-membership degree v, with the condition 0 ≤ u + v ≤ 1, are introduced to express the hesitancy of the connection degree. Combined with the numerical characteristic parameters of the connection cloud model, the intuitionistic connection degree is given as z(x) = β·y(x), with β ~ U[u, 1 − v]; when u = 1 and v = 0, the intuitionistic connection cloud degenerates into a connection cloud. As denoted above, the cloud drop generation algorithm for the intuitionistic connection cloud is as follows: (1) initialization of the number of cloud drops and of the lower limitations of the possible values of the membership degree and non-membership degree corresponding to Ex; (2) generation of random numbers En′ from the normal distribution with entropy En and hyper entropy He; (3) generation of random numbers xm satisfying the normal distribution with expectation Ex and standard deviation En′; (4) generation of random numbers β following the uniform distribution with lower limitation u and upper limitation 1 − v; (5) specification of the intuitionistic connection degree of the cloud drops through Equation (4), obtaining a corresponding drop (xm, z(xm)); (6) repetition of steps 2 to 5 until all cloud drops are obtained. To illustrate the advantages of the intuitionistic connection cloud, Figure 1 uses an example to reflect the theoretical optimization process from the normal cloud to the connection cloud and then to the intuitionistic connection cloud. The classification standard interval of the value to be evaluated is [6, 12]. The numerical characteristics of the normal cloud and the connection cloud are (Ex, En, He) = (9.0, 3.0, 0.01) and (Ex, En, He, ξ, k) = (9.0, 3.0, 0.01, 9.0, 1.70), and the corresponding cloud map is obtained after generating 1000 cloud droplets. At the classification thresholds x = 6 and x = 12, there is both the possibility of belonging to the adjacent grades and the possibility of not belonging to them, so the membership degree should be 0.5 at these two points. It can be seen from Figure 1 that the connection degree of the two boundary values 6 and 12 in the connection cloud is equal to 0.5, but the degree of certainty in the normal cloud is not equal to 0.5. Therefore, the connection cloud truly reflects the certainty and uncertainty at the thresholds and their conversion trend, which effectively compensates for the defects of the normal cloud. On the other hand, the membership at the Ex of the connection cloud and the normal cloud is a crisp value of 1. This cannot express the hesitation properties. To better deal
with the non-membership and hesitation properties of some fuzzy concepts (Du et al., 2020; Wang and Yang, 2013), the intuitionistic cloud model incorporates the concept of the hesitancy degree. In Figure 1, let β ~ U[0.9, 1]; according to Definition 3, the intuitionistic connection degree in the intuitionistic connection cloud is no longer the crisp value of 1 when x takes the mathematical expectation value of 9; instead, it fluctuates in the range 0.9–1.0. Therefore, compared with the connection cloud and the normal cloud, the intuitionistic connection cloud can more effectively capture uncertainty and realize an objective description of the degrees of non-membership and hesitation. Basic Principle and Evaluation Process The basic principle of the evaluation model based on the intuitionistic connection cloud model is as follows. First, select the evaluation indicators and classification standards to construct the evaluation index system. Then, specify the numerical characteristic parameters of the intuitionistic connection cloud and generate the intuitionistic connection cloud. Next, assign the weights via the improved information entropy method based on the rough set and determine the intuitionistic connection degree of the sample to each grade. Finally, identify the shrinkage–swelling grade of the sample according to the principle of maximum membership degree. The corresponding process is shown in Figure 2, and the detailed steps are listed as follows. Step 1: Set up an evaluation index system of the shrinkage–swelling grade. The shrinkage–swelling mechanism of untreated and lime-treated expansive clays is complex, and there are many influencing factors. According to the references [10,16] and the Chinese National Standard GB/50112 [17], five indexes, namely the liquid limit C1, total shrinkage–swelling rate C2, plasticity index C3, natural water content C4, and free expansion rate C5, are selected as evaluation indicators, and the shrinkage–swelling property is divided into four grades: extreme high (I); high (II); moderate (III); and low (IV). The corresponding classification standard (Wang et al. 2014b) is listed in Table 1. Step 2: Determine the weight of each evaluation index via the improved information entropy method based on the rough set. Since the rough set does not require any prior knowledge outside the dataset of the decision problem itself, it can effectively avoid human subjective errors [18] but ignores the cases where the weight of an index is 0.
As we know, each index has a certain effect on the shrinkage–swelling property of soil, so its influence cannot be ignored in the evaluation process. To make full use of the objective information of the sample data and take into account the impact of the various attributes, this paper introduces an improved conditional information entropy I(D|C) method to distinguish index importance; the corresponding weight w of each evaluation index is then obtained from the importance measure as given in [18]. Here, w(ci) is the weight of index ci, I(D|C) is the conditional information entropy of the decision attribute D with respect to the condition attributes C, and sig(c) represents the importance of evaluation index c, for all c ∈ C; the domain U is partitioned according to the condition attribute C and the decision attribute D, with U|C = {C1, C2, ..., Cm} and U|D = {D1, D2, ..., Dn}. Step 3: Determination of the membership limitation. The connection cloud model can generally deal with cases where the sample data are crisp values, whereas for actual uncertainty problems the data are often nearly continuous interval data. In this case, the connection cloud model often only takes the sample mean for analysis, ignoring the integrity and volatility of the sample, and does not make full use of all the sample information. This limitation inevitably leads to the loss of original sample information, thus affecting the accuracy of the results. Unfortunately, the decision-making interval in the traditional evaluation model based on the intuitionistic normal cloud model is often subjectively determined from expert rating scores. This is not consistent with the engineers' wish to evaluate on the basis of objectively measured data, which limits its application scope in practical engineering to some extent. On the other hand, the accuracy of engineering geological parameters obtained via statistics from multiple measured data should depend on the sample size and the discreteness of the data themselves. In general, the trend of data variation and the relative deviation rate reduce with an increasing number of samples, while the accuracy of the parameters increases. Therefore, based on the principle of data variability in statistics, this paper introduces the concept of reliability to replace the "confidence level interval", which in the traditional intuitionistic cloud model relies completely on human subjective evaluation. The reliability p is calculated from the coefficient of data variation δ, the significance level α (also known as the risk level in engineering) with its confidence coefficient tα, and the sample number K (Equation (8)). The above method can avoid information loss and distortion and effectively ensure the rationality and applicability of the evaluation results. After obtaining the reliability p, let the lower limitation u = p. For the upper membership limitation 1 − v, the traditional intuitionistic cloud rarely has 1 − v = 1 in practical applications, because the sample interval value is determined via artificial scoring and the accuracy of human subjective judgment struggles to reach 100%. However, the interval of the sample values in the proposed model is determined from measured data, which avoids subjective scoring, so the randomness of the sample value and the non-membership v are close to 0. Since the upper and lower limitations of membership need to be specified for each sample, the corresponding membership limitation matrix can be formed, where [u_hm, 1 − v_hm] is the membership interval of index m of sample h. To specify the connection degree of the sample to each grade, the membership limitation matrix is combined with the index weight vector w to obtain the aggregated membership interval vector R, where w′ is the transpose of the matrix w = [w1, ..., wm].
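The exact matrix expression for R is not preserved in this extraction; the sketch below (illustrative, not the authors' code) assumes the natural reading that the per-index membership intervals [u_hm, 1 − v_hm] are aggregated into a sample-level interval by the weighted sums R_h = [Σ_m w_m u_hm, Σ_m w_m (1 − v_hm)].

```python
import numpy as np

def aggregate_membership_intervals(u, v, w):
    """Aggregate per-index membership intervals with index weights.

    u, v : (n_samples, n_indexes) arrays of lower membership u_hm
           and non-membership v_hm for each sample h and index m
    w    : (n_indexes,) weight vector summing to 1
    Returns an (n_samples, 2) array of [lower, upper] interval bounds.
    """
    lower = u @ w              # weighted sum of lower limitations
    upper = (1.0 - v) @ w      # weighted sum of upper limitations 1 - v_hm
    return np.column_stack([lower, upper])

# Hypothetical values: two samples, five indexes, weights as in Step 2.
w = np.array([0.1912, 0.2132, 0.1912, 0.2132, 0.1912])
u = np.array([[0.90, 0.85, 0.92, 0.88, 0.95],
              [0.87, 0.90, 0.86, 0.93, 0.91]])
v = np.zeros_like(u)           # non-membership close to 0 for measured data
print(aggregate_membership_intervals(u, v, w))
```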
Step 4: Calculate the numerical characteristics of the intuitionistic connection cloud, generate N cloud droplets in a finite interval, and then specify the intuitionistic connection degree. Since the computational efficiency of the one-dimensional cloud model may decrease with an increasing number of indicators and samples, a multi-dimensional cloud is used here to deal with the discussed issue. The connection degree of the cloud droplets is computed from the numerical characteristics as in Equation (11). The numerical characteristics, i.e., the expectation, entropy, hyper entropy, and order of grade i (i = 1, 2, ..., n) of evaluation index j (j = 1, 2, ..., m), are computed from the interval upper and lower limitations of grade i of index j (Equations (12)-(15)); ξ represents the width of the left or right half branch of the connection cloud for grade i of evaluation index j, and η is an ambiguity-correction parameter (here, η = 0.01). Step 5: Calculate the multidimensional intuitionistic connection degree of the sample and determine the shrinkage–swelling grade. Let the h-th (h = 1, 2, ..., P) sample to be tested be given by its measured index values. Combined with the weights, the multidimensional intuitionistic connection degree z_i^h of the sample belonging to grade i can be calculated according to Equation (16), and its classification Lh is determined according to the maximum membership principle. Data and Model Implementation To verify the reliability and rationality of the proposed model, a series of sample data of untreated and lime-treated expansive clays was obtained from the foundation treatment area of Hefei Xinqiao International Airport [10]. The measured values of the samples are listed in Table 2. Each parameter in Table 2 was obtained from at least six sets of experiments. The data in Table 2 are taken from the measured data of the engineering implementation design stage; i.e., the significance level α was assigned 0.05 based on engineering practice experience, and tα was obtained as 1.645 according to the t distribution table. The corresponding reliability of the evaluation index of each case was obtained by Equation (8). However, it is difficult to calculate the coefficient of variation δ and the reliability through Equation (8) when the mean value is 0; for such cases, with sampling number K, the theory of hypothesis testing is used to calculate the reliability of the index. The grades for the measured index values were identified, and according to Step 2 above, the index weight vector was assigned as w = [0.1912, 0.2132, 0.1912, 0.2132, 0.1912]. Subsequently, the integrated membership interval was obtained as shown in Table 4.
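A minimal end-to-end sketch of Steps 4–5 for a single index follows, assuming (as Definition 3 suggests) that the intuitionistic connection degree is the connection degree scaled by β ~ U[u, 1 − v], and using the k = 2 (normal-cloud) form of the connection degree; the per-grade characteristics, membership bounds, and the Monte Carlo repetition count of 500 are illustrative stand-ins for the paper's Equations (11)-(16).

```python
import numpy as np

def intuitionistic_degree(x, Ex, En, He, u, v, rng):
    """Intuitionistic connection degree z(x) = beta * y(x) for one value.

    y(x) uses the k = 2 (normal-cloud) form; beta ~ U[u, 1 - v] injects
    the hesitancy of the membership interval.
    """
    En_prime = rng.normal(En, He)
    y = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))
    beta = rng.uniform(u, 1.0 - v)
    return beta * y

def classify(x, grade_params, u, v, n_repeat=500, seed=0):
    """Monte Carlo estimate of the grade by maximum mean connection degree."""
    rng = np.random.default_rng(seed)
    means = {}
    for grade, (Ex, En, He) in grade_params.items():
        zs = [intuitionistic_degree(x, Ex, En, He, u, v, rng)
              for _ in range(n_repeat)]
        means[grade] = np.mean(zs)
    return max(means, key=means.get), means

# Hypothetical per-grade characteristics for a single index.
grade_params = {"I": (50.0, 5.0, 0.01), "II": (40.0, 4.0, 0.01),
                "III": (30.0, 3.0, 0.01), "IV": (20.0, 2.0, 0.01)}
label, scores = classify(x=31.0, grade_params=grade_params, u=0.85, v=0.0)
print(label, scores)
```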
From Equations (12)-(15), the numerical characteristic values of the connection cloud for the j-th indicator at grade i can be determined. For instance, the obtained numerical characteristics of the connection cloud for grade I are listed in Table 5. Based on the numerical characteristics of all indicators at each grade, 2500 cloud droplets are generated according to Equations (11)-(16) to simulate the multi-dimensional connection clouds and intuitionistic connection clouds for each grade. Based on the indicators of the liquid limit and the plasticity index, the two-dimensional connection cloud and intuitionistic connection cloud were generated for comparative analysis, as shown in Figure 3. The one-dimensional clouds, shown in Figure 3c-f, are the projections of the two-dimensional cloud in the directions of the liquid limit or the plasticity index. Taking case A as an example to illustrate the evaluation procedure, the random number generated from the uniform distribution U[u, 1−ν], the weight values, and the corresponding numerical characteristic parameters of each grade and each index value were substituted into Equation (16), and the intuitionistic connection degrees z(I) = 0.0000, z(II) = 0.0204, z(III) = 0.3543, and z(IV) = 0.0936 were obtained. According to the principle of maximum membership degree, the shrinkage–swelling grade was specified as III. Since the generation of uniform random numbers is subject to randomness, following the concept of Monte Carlo simulation [19], step 5 was repeated 500 times to obtain 500 intuitionistic connection degrees, and their mean was taken as the optimal estimate of the overall true value µ to reduce the influence of random errors on the classification results. Results and Comparison The evaluation results and comparisons with other methods are shown in Table 6. It can be seen from Table 6 that the evaluation results of the proposed model are completely consistent with those of the specifications and are almost consistent with those of the other methods. This indicates that the intuitionistic connection cloud model based on the improved rough set, used to assess the shrinkage–swelling grade, is effective and feasible. The results of the comparative analysis also show that the accuracy of the proposed model reached 100%, while with the connection cloud model and the normal cloud model, individual sample grades were misjudged. For example, among the five index values of case A, C1 and C3 were in grade III, C2 was in grade IV, C4 was between grades II and III, and C5 was between grades III and IV; so, it was more reasonable to specify the shrinkage–swelling grade as grade III. Additionally, the intuitionistic connection degree takes into account the reliability of the samples. This is conducive to coping with complex problems involving a large amount of uncertain information. At the same time, the improved information entropy weighting method based on the rough set does not rely on any prior knowledge and can assign weights via objective data while simultaneously considering the importance of the index. In general, the intuitionistic connection cloud model, with universality and interpretability, can deal with interval-valued indexes containing multiple uncertainties and further characterize the hesitant characteristics of the problem relative to the traditional cloud models.
These results also suggest that the application of the specification method may rely too much on the reliability of the parameters and cannot describe the uncertainty of information acquisition or cognitive uncertainty. Moreover, the evaluation of the shrinkage–swelling grade will encounter subjective ambiguities, which is not convenient for field engineers. In addition, various categories of uncertainty, such as fuzziness, randomness, and hesitation, may inevitably be encountered in evaluations, so the analysis of the shrinkage–swelling grade cannot rely only on methods based on a single type of uncertainty. The proposed method, coupling the IFS and the connection cloud model, can therefore overcome the defects of previous evaluation methods based on uncertainty analysis theory to some extent. Existing research shows that the shrinkage–swelling features of untreated and lime-treated expansive clays are influenced by the liquid limit, the total expansion rate, the plasticity index, the natural moisture content, the free expansion rate, and decision-maker hesitation information factors, so the actual evaluation of the shrinkage–swelling grade is a complex problem affected by crossed, blended, and dynamic uncertainty indexes. However, engineers often take the free expansion rate as the classification standard of the shrinkage–swelling grade for expansive clay; that is, the foundation deformation obtained from the expansion rate measured under a pressure of 50 kPa based on the Chinese National Standard GB/50112 is used to specify the shrinkage–swelling grade. Thus, the specification method, with only a single type of information, has certain limitations. The case studies and comparisons with the other methods in Table 6 indicate that the proposed method, based on the intuitionistic connection cloud model, can achieve higher accuracy and reliability of classification relative to the other methods. On the other hand, the application of the proposed model is so far based only on examples. Currently, this may make it difficult to provide concise equations or tables, similar to specifications, for engineers; so, to establish a concise formula, a large number of engineering practices need to be investigated and summarized. In addition, the parameter values in the case study are relatively small compared with the standard range. These parameters, shown in Table 2, are the measured values of untreated and improved expansive clay in the foundation treatment area of Hefei Xinqiao International Airport. Although the parameter index values have some limitations, this does not affect the validation of the model's application. The data for the case study are centrally distributed in a certain interval relative to those of the classification standard shown in Table 1, which represent the possible range of the parameters. This just reflects the characteristics of expansive clay as a special soil; that is, its engineering characteristics are regional. Therefore, in order to verify the reliability of the proposed method, it is necessary to conduct more verification analyses with a larger parameter fluctuation range, especially for untreated expansive clay.
Description of Interval-Valued Index As is well known, to improve the reliability of sample index values and reduce random errors, engineers often determine an index value via repeated sampling, so the measured index value is an interval value. The measured values of the free expansion rate C5 for each sample are taken here to illustrate this point. Figure 4 illustrates the statistical characteristics of the measured values obtained from six sets of measured data. It can be seen that the standard deviation for the untreated expansive clay is the largest, while the coefficient of variation for the 7% lime-treated expansive clay is the largest. Obviously, data processing using the mean or an endpoint value is inappropriate. However, the connection cloud model and the normal cloud model often use the average value of the evaluation indicator to process the data, and the connection expectation model always depicts the indicator using the endpoint values of the actual interval. Evidently, these values do not take into account all the characteristics of the data, and the corresponding result may differ from the actual information of the samples. Superiority of the Proposed Model The membership degree of the cloud model changes continuously in the interval [0, 1]. A sample whose connection degree is closer to 1 has a greater possibility of belonging to the given grade, and a sample whose connection degree is closer to 0 has a greater possibility of not belonging to it. Moreover, a membership of 0.5 is the midpoint of the membership interval [0, 1]; it is the boundary between belonging and not belonging and is the fuzziest point. For example, as seen in Table 1, the threshold value 35 of index C3 between grade I and grade II is subordinate to both grade I and grade II; the degree of belonging to grade I and to grade II is 0.5, the ambiguity at this point is the highest, and the point is equally likely to belong to grade I or grade II. It can be seen from Figure 3e that the memberships y(III) and y(IV) at this point are both 0.5 in the connection cloud model; so, the connection cloud model can well describe the transformation trend between adjacent grades. However, in the connection cloud model, if the indicator value x is the expected value, then y(x) = 1, which is an exact representation and ignores the hesitant nature. In contrast, the intuitionistic connection cloud not only retains the advantages of the connection cloud but also overcomes the above shortcoming, such that the vertex of the intuitionistic connection degree is no longer a crisp value of 1 but fluctuates in an interval slightly less than 1. It is seen from Table 3 that the minimum reliability of the plasticity index is about 0.85, and the maximum intuitionistic connection degree is a random number uniformly generated between 0.85 and 1. Figure 3f is a demonstration of this random uniform implementation. Thus, the intuitionistic connection cloud model can depict the relationships between the interval-valued indexes and the classification standards from the membership, non-membership, and hesitation aspects, and can deal with multiple uncertainties without information loss or distortion relative to the normal cloud model and the connection cloud model.
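The interval representation above is driven by the spread of repeated measurements; the snippet below (with illustrative data, not the values from Table 2) shows the mean, standard deviation, and coefficient of variation computations that Figure 4 summarizes for the free expansion rate.

```python
import numpy as np

# Hypothetical six repeated measurements of the free expansion rate (%)
# for one sample; the real values live in Table 2 of the paper.
c5 = np.array([41.0, 39.5, 43.2, 40.8, 42.1, 38.9])

mean = c5.mean()
std = c5.std(ddof=1)             # sample standard deviation
cv = std / mean                  # coefficient of variation (delta in the text)
interval = (c5.min(), c5.max())  # interval-valued index instead of the mean

print(f"mean={mean:.2f}, std={std:.2f}, cv={cv:.3f}, interval={interval}")
```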
The evaluation of the shrinkage–swelling grade is a complex problem with uncertain indicators. The case study shows that the intuitionistic connection cloud model can overcome the shortcomings of the conventional cloud models and gives a better description of the uncertainties. The model proposed here can significantly enhance reliability because it can consider the hesitation characteristics of the interval-valued index. It has the following benefits over other methods. (1) Compared with the normal cloud model, the intuitionistic connection cloud model offers a useful basis for depicting the certain and uncertain relationships between the interval-valued indexes and the classification standards from three perspectives, namely identity, discrepancy, and contrary aspects. Additionally, it overcomes the disadvantage of requiring a normal distribution. (2) The proposed model, without information loss or distortion, can simultaneously express ambiguity, randomness, and hesitation; so, it has the capability to depict the high uncertainty in the evaluation of the shrinkage–swelling grade of untreated and lime-treated expansive clays, enhancing the description of the evaluation result "not one is not the other". (3) The weights assigned according to the conditional information entropy method based on the rough set can effectively identify the index importance. This weighting method can not only avoid prior knowledge and large data requirements but also reduce the errors caused by human factors. Conclusions The evaluation of the shrinkage–swelling grade of untreated and lime-treated expansive clays involves numerous uncertain evaluation factors and human cognition. However, the traditional evaluation methods mostly rely on single-uncertainty assumptions or on data distributions and cannot simultaneously depict the hesitation and randomness of an index. Therefore, it is necessary to develop a novel model to improve the accuracy and reliability of the evaluation results. The intuitionistic connection cloud model based on improved conditional information entropy weighting was introduced herein to analyze the shrinkage–swelling grade of untreated and lime-treated expansive clays with consideration of the hesitation, fuzziness, and randomness of the indicators in a finite interval. A case study was furthermore conducted to verify the feasibility and validity of the proposed model. The main conclusions are drawn as follows.
(1) The case study indicates that the intuitionistic connection cloud model used to evaluate the shrinkage–swelling grade is effective and feasible. It can not only reflect the conversion situation at the classification thresholds but also capture the hesitation characteristics over the evaluation interval, especially at the central expectation point. The intuitionistic connection cloud model can depict the hesitant characteristics of a fuzzy and random index, and the results obtained from the proposed model, without loss or distortion of valid information, are more accurate and reliable than those of the traditional cloud models. (2) The weighting method based on conditional information entropy and the rough set greatly reduces subjective deviation. It not only overcomes the problem that the weight of a few indexes may take 0 or cannot be assigned via the traditional objective weighting methods but also considers the importance of the indicators. (3) Different from the mean-value treatment of the traditional cloud models, the model proposed here expresses the measured value as an interval and determines the membership interval of the sample according to its reliability, replacing the confidence level interval artificially determined in the intuitionistic normal cloud. So, the proposed model can explore the true characteristics of the original data to the greatest extent and promote an evaluation process and a final result closer to reality. Although the case study verifies the effective application of the proposed model to the shrinkage–swelling evaluation, and the intuitionistic connection cloud model is a useful tool for simultaneously depicting the hesitation and randomness of information, the question of how to better couple the two types of weights still needs further investigation in order to make the evaluation results more reasonable and credible. Figure 1. Comparisons of the normal cloud, connection cloud, and intuitionistic cloud. Figure 2. Flowchart for the shrinkage–swelling evaluation of untreated and lime-treated expansive clays. Figure 3. Two-dimensional cloud maps and their projections for the plasticity index and liquid limit: (a,c,e) connection cloud; (b,d,f) intuitionistic connection cloud. Figure 4. The statistical characteristics of the free expansion rate C5: (a) measured values; (b) coefficient of variation; and (c) standard deviation. Table 2. Measured data of samples. Table 3. Reliability of sample data.
Table 4. The integrated membership interval of samples. Table 5. Numerical characteristics of the connection cloud for grade I. Table 6. Evaluation results and comparisons.
2024-06-26T15:05:58.701Z
2024-06-22T00:00:00.000
{ "year": 2024, "sha1": "7cd6649619ab4052f88eb735d7e9cd7a3e9575cf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/14/13/5430/pdf?version=1719050731", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "bda1096cfcce6d2863018215131ee09e28c7706a", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
258644825
pes2o/s2orc
v3-fos-license
Dermoscopy for the Diagnosis of Creeping Hair Caused by Ingrowing Hair: A Case Report Creeping hair is a rare condition characterized by a creeping eruption with a black line at the advancing end, mimicking cutaneous larva migrans. The condition is also referred to as cutaneous pili migrans, migrating hair, and embedded hair. A total of 52 cases have been reported since 1957, and most cases were published in English. Herein, we report a case in which creeping hair occurred in the iliac region and review the literature from 1957 to February 2021. A 35-year-old Chinese female presented with a black moving linear eruption that had migrated from the lower abdomen to the iliac region without causing any symptoms during a 3-year period. Cutaneous examination showed a 6.5-cm-long black linear lesion beneath the skin that was revealed to be a hair shaft. After removal of the hair, the eruption diminished and no recurrence occurred in 3 months of follow-up. The creeping hair, which had migrated with its lower end forward, was confirmed by observation under dermatoscopy and light microscopy. A review of the literature revealed that creeping hair occurs most frequently in young and middle-aged patients and that the reported cases are mainly from Asia. The most common locations involve the foot. The causative hairs include head hair, beard, pubic hair, body hair, and, in one case, dog hair. A close-up examination and dermoscopic inspection are helpful for the diagnosis of creeping hair. INTRODUCTION Creeping eruption is a linear or serpiginous cutaneous track that is slightly elevated, erythematous, and mobile 1 . The most representative example of creeping disease is cutaneous larva migrans. In rare cases, hair-induced creeping eruption in the skin can mimic cutaneous larva migrans. This condition was first reported by Yaffee in 1957 2 , and is described in the literature as imbedded hair, bristle migrans, pili cuniculati (burrowing hair), pseudolarva migrans, migrating hair, moving hair, intradermal creeping of pubic hair, hair fragments in the skin, cutaneous pili eruption, creeping hair, ingrown hair, and ingrowing hair [2][3][4][5] . The lesion is characterized by a creeping eruption with a black-line-like hair at the advancing end, with or without erythema 3,4 . In most cases, the causative hair shaft fragment migrating in the shallow epidermis is visible from the skin surface. In such a condition, dermatoscopy can be used to observe the lesion and aid in diagnosis 6,7 . However, in certain cases, the causative hair is burrowed in the deep dermis and the lesion appears as only raised erythema; thus, diagnosis may require a skin biopsy 8 . Herein, we report a case of creeping hair diagnosed by dermoscopy and review the relevant literature. CASE REPORT A 35-year-old Chinese female noted an asymptomatic black threadlike line on her left lower abdomen 3 years ago. Three days prior to presentation, she found that the lesion had advanced to her iliac crest and that the eruption was very superficial. On physical examination, there was a fine, very superficial, black line about 6.5 cm in length observed through the skin surface. There were no signs of inflammation surrounding the lesion, although there was a small area of broken epidermis made by the patient herself (Fig. 1A). The black line was clearly visible under dermoscopy (Fig. 2A), and a Z-shaped angle was seen in the middle of the lesion without any signs of inflammation (Fig. 2B).
We made a small incision in the region of the broken epidermis and used a pair of small forceps to remove a 2.0-cm-long black linear object (Fig. 1B). It was difficult to remove the linear object from the epidermal bed, and it broke off from the main body of the lesion. Histopathological examination revealed a hair structure. Dermoscopy revealed that the black linear object was a naked hair shaft (Fig. 2C). The patient easily extracted the advancing end at home using a pair of tweezers while she waited for her pathological result. One week later, she brought us the advancing end, which was about 1.0 cm in length, and dermoscopy revealed that it was a hair root (Fig. 2D). Hence, a shallow slit was made at the position of the Z-shaped angle, and the remnant hair was easily extracted (Fig. 1C). The hair was a twisting hair that resembled pubic hair and was about 3.5 cm in length. After removal of the hair, the eruption diminished, and there has been no recurrence during 3 months of follow-up. DISCUSSION To our knowledge, a total of 52 cases of hair-induced eruption have been reported since 1957 [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] . The characteristics of these cases, including ours, are summarized in Table 1. The lesion locations include the ankle, sole, toe, forefoot, palm, leg, suprapubic region, umbilical region, iliac crest, buttock, abdomen, breast, waist, cheek, neck, jaw, mandibular angle, and scalp. However, the foot and toe are the most common sites of occurrence. The disease tends to occur in young and middle-aged patients, and the male to female ratio is 2.26:1. The period from the onset of the eruption to the first clinical visit ranges from 12 hours to 10 years. Thirty-eight of the 52 reported cases of hair-induced creeping eruption were found and reported in Asia. The large number of cases reported in Asia may be because Asian hair has maximal tensile strength and the largest cross-sectional area compared with other ethnic groups 18 . The causative hairs include head hair, beard, pubic hair, body hair, and, in one case, dog hair. The causative hair length is 0.5 to 8.0 cm. Notably, our patient had a long asymptomatic period between the discovery of the lesion and the time at which she consulted a dermatologist. This is the first report of creeping hair occurring in the iliac region of a Chinese patient. The term creeping hair was first used by Sakai et al. 3 in 2006. In his report, Sakai reviewed Japanese reports and found eight cases of migration of the hair shaft or hair fragment in the shallow skin located on the lower abdomen or pubic region, and six cases of migration toward the iliac region, creating a wavy linear eruption. There is only one previously reported case of creeping hair in the suprapubic region with the same location and migration direction as the present case 3,7 . Interestingly, two of the cases reported by Sakai were demonstrated to have creeping hair migrating with the terminal end at the advancing end. The same phenomenon was found in our case, as the advancing end of the lesion was the terminal end structure of the hair root. We consider that an ingrown hair in the pubic area curled back and grew inward and deeper after being released from the hair follicle, as the lower end of an ingrown pubic hair is naturally oriented toward the iliac region. 3 Our observations using the naked eye and dermoscopy revealed that the causative hair was twisted and unlike the head or body hair. Thus, we diagnosed the cause of the creeping hair as an ingrown pubic hair in our case.
The differential diagnoses for creeping hair include creeping eruption caused by parasitic diseases, interdigital pilonidal sinus, and pseudofolliculitis barbae. The lesion caused by parasitic disease is mobile; the tracks are sinuous, commonly associated with severe pruritus, and lack a black line; furthermore, a parasite is always found at the advancing end of the lesion 5 . Interdigital pilonidal sinus is caused by short sharp hairs that penetrate the interdigital space of the hand; the causative hair fragment cannot be observed from the skin surface, and ultrasonography aids in the diagnosis of this disease 19 . Pseudofolliculitis barbae is a chronic inflammatory disorder of the follicular and perifollicular skin; the curved shape of the hair follicle enables the downward curvature and penetration of the growing hair tip into the skin, leading to pruritus and the development of papules, pustules, and post-inflammatory hyperpigmentation 20 . It is not difficult to correctly diagnose such disorders. In addition to a close-up examination, dermoscopy is a useful tool to distinguish between these similar disorders. The characteristic dermoscopic finding of creeping hair eruption is a straight linear black lesion. The black line is easily movable within a whitish 1-mm-wide linear eruption under the pressure of the lens 10 . Creeping hair can also present as a fine, very superficial, dark line that is easily movable under the pressure of the lens and surrounded by a 2- to 3-mm erythematous area 11 . Dermoscopy enables the visualization of morphologic hair structures that are not visible to the naked eye, including the head of the hair root, and the differentiation of an unbroken hair from a hair fragment. The treatment for creeping hair is simple. A small incision is made at one end of the black line, and the hair is extracted from its epidermal bed with a pair of small forceps. In our experience, the hair is easily pulled out from the tail end of the lesion. Fig. 1. (A) An evident, fine, very superficial black line was clearly visible through the skin surface, without any sign of inflammation on the surrounding skin. (B) A dark hair measuring 2.0 cm in length was extracted from its epidermal bed at the broken epidermis made by the patient herself, located near the advancing end. (C) A dark twisting pubic hair measuring 3.5 cm in length was extracted from its epidermal bed at the Z-shaped angle position. Fig. 2. (A) A black line was clearly visible under dermoscopy, with a superficial erosion at the advancing end of the lesion (original magnification ×50). (B) A Z-shaped angle was shown in the middle of the lesion without any signs of inflammation under dermoscopy (original magnification ×50). (C) A white and transparent membrane adherent to the surface of the hair shaft and the hair fragment end was observed by dermoscopy (original magnification ×100). (D) The hair root end was the advancing end of the lesion and showed a bulbous shape, observed by dermoscopy (original magnification ×100).
2023-05-13T15:13:23.512Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "54a1c19d4253058571df8ef46038bde68e08a6d0", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5021/ad.21.036", "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "ace3c877c420eb2380afafa4ac6f5619fef85575", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }