Urgent health concerns: Clinical issues associated with accidental ingestion of new metal-blade-containing sticks for heated tobacco products
INTRODUCTION: Recently, a concerning pattern has emerged in clinical settings, drawing attention to the potential health risks associated with the accidental ingestion, mostly by children, of a new Heated Tobacco Product (HTP) stick, which contains a sharp metal blade inside.
METHODS: Following a webinar of the Joint Action on Tobacco Control 2 project, where data on adverse health incidents related to novel tobacco and nicotine products from EU Member States were presented, the Milan Poison Control Center (PCC) conducted a case series study on the accidental ingestion of blade-containing HTP sticks in Italy between July 2023 and February 2024. The data in the medical records were analyzed to identify the age distribution, clinical presentation and symptoms, diagnostic procedures performed, and medical management.
RESULTS: Overall, 40 cases of accidental ingestion of HTP sticks were identified and are described. A total of 33 (82.5%) children (infants and toddlers, mean age 12.3 ± 3.3 months) were hospitalized. Of these, 29 underwent abdominal X-rays, two children underwent esophagogastroduodenoscopy, and one child suffered cut injuries to the tonsillar pillar and genian mucosa, requiring anesthesia for fibroscopy. The observed clinical cases associated with new HTP sticks containing a metal blade occurred over just eight months. This issue required the immediate implementation of corrective measures to mitigate health risks. The Ministry of Health issued an alert regarding the dangers related to the accidental ingestion of the stick and imposed more visible warnings on the package.
CONCLUSIONS: It is of the utmost importance to raise awareness among both the general public and medical practitioners to prevent further cases of accidental ingestion of HTP sticks by infants and toddlers, and to ensure a prompt and informed response in emergency situations.
INTRODUCTION
Recently, a concerning pattern has emerged in clinical settings, drawing attention to the potential health risks associated with the accidental ingestion of blade-containing HTP sticks, mostly by infants and toddlers. This new type of stick, designed for use with the new HTP Iqos Iluma, contains a sharp metal blade inside. The metal blade acts as a 'susceptor', facilitating the electromagnetic induction process by the device, which in turn heats the tobacco in the stick to temperatures typically below 350°C 1 .
A few cases of accidental ingestion of HTP sticks by children have previously been described in the scientific literature [2][3][4] . Since 2016, the global tobacco retail value has been driven by the growth of emerging tobacco products, including HTPs, particularly in high-income countries 5 .
Indeed, HTP use has increased worldwide 6 . In Europe, HTPs are especially used by younger people and by current and former smokers 7,8 . HTPs were first introduced to the Italian market in November 2014 9 . In just seven years, they have become the second most popular tobacco product (after conventional cigarettes and ahead of roll-your-own tobacco), with an estimated market share of 18% in 2023. According to the national report on tobacco use in 2023 10 , exclusive HTP users in Italy accounted for 3.7% of the population, while approximately 86% of HTP users were dual users (HTPs and conventional cigarettes). With respect to the younger population, HTPs are used by 33.2% of students aged 14-17 years.
The present study describes a nationwide case series of accidental ingestion by children of the metal-blade-containing HTP sticks, as reported in the medical database of Niguarda Hospital in Milan, Italy.
METHODS
The European project Joint Action on Tobacco Control (JATC) 2 11 conducted a study on the reporting of adverse health incidents after the use of electronic cigarettes and novel tobacco products among EU Member States. The study was part of Work Package 7, 'Electronic cigarettes and novel tobacco products evaluation'.
The findings and needs on this topic were discussed in a webinar held in October 2023 entitled 'Reporting on the health incidence after use of novel tobacco or nicotine products in European countries: Towards a harmonized approach'. The webinar revealed that there is currently no harmonized approach for the registration of adverse health incidents following the use of novel tobacco products and electronic cigarettes in Europe, nor is there a centralized collection of this information. The Italian National Institute of Health (ISS) contributed to the survey by contacting the Milan Poison Control Center (PCC) of Niguarda Hospital (Italy) for data collection. The findings from the survey were presented during the webinar 12 .
Subsequently, between July 2023 and February 2024, a case series study of accidental ingestions by children of the metal-blade-containing HTP sticks was conducted by the Milan PCC, with the objective of contributing to the aforementioned task of Work Package 7.
The Milan PCC is a 24-hour emergency service that provides consultancy and offers specialist advice for cases of acute intoxication throughout the country. The PCC offers advice to the Emergency Department of the Niguarda Hospital and can be contacted by both private individuals and health service personnel in the event of a suspected poisoning incident. All requests handled by the toxicologist are recorded in a computer database, which can be extracted for epidemiological purposes.
The data from the cases of accidental ingestion were analyzed to identify the age distribution, clinical presentation and symptoms, diagnostic procedures performed, and medical management.
It is crucial to highlight that the sticks were only identifiable as blade-containing HTPs from July 2023 onwards. It is therefore not possible to rule out the possibility that previous cases recorded in the medical database as 'electronic cigarettes' or non-blade-containing HTPs may, in fact, include these specific new products, given that the product's introduction to the Italian market occurred as early as December 2022.
In response to the clinical issue raised by the Milan PCC, the Italian Ministry of Health requested that the Italian National Institute of Health (ISS) verify the safety of the product and the actual hazard posed by the sharp metal parts in the sticks. Furthermore, ISS was asked to assess the clarity, visibility and size of the warnings on the packaging containing the sticks.
RESULTS
A total of 40 affected patients were identified from 1 July 2023 to 29 February 2024. The majority of these patients were infants (aged 2-12 months) and toddlers (aged 1-4 years). The gender distribution was 50% female and 50% male, with a mean age of 12.3 ± 3.3 months (Table 1). Of the 40 patients, 33 (82.5%) were hospitalized. Of these, 29 underwent abdominal X-rays, and 24 X-rays were positive for the presence of the metal blade. Sixteen patients (40%) exhibited symptoms, including fifteen with repeated vomiting episodes (ranging from three to six episodes) and one with cut lesions of the tonsillar pillar and genian mucosa. In four cases, the blade was expelled through vomiting.
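As an aside, the headline proportions above follow directly from the raw counts. The short Python sketch below simply recomputes them from the figures reported in this section; it is illustrative only and not part of the original analysis.

```python
# Illustrative recomputation of the descriptive statistics reported above.
# All counts are taken from the Results section; this is not the authors' code.

total_cases = 40
hospitalized = 33
xrays_performed = 29
xrays_positive = 24
symptomatic = 16

def pct(part: int, whole: int) -> float:
    """Return a percentage rounded to one decimal place."""
    return round(100 * part / whole, 1)

print(f"Hospitalized: {pct(hospitalized, total_cases)}%")           # 82.5%
print(f"Positive X-rays: {pct(xrays_positive, xrays_performed)}%")  # 82.8% of X-rays performed
print(f"Symptomatic: {pct(symptomatic, total_cases)}%")             # 40.0%
```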
Two toddlers underwent an endoscopic procedure known as esophagogastroduodenoscopy (EGDS). In certain instances, the surgeon attempted to remove the metal blade endoscopically, a procedure that required sedation and observation.
One infant sustained lacerations to the tonsillar pillar and genian mucosa, necessitating fibroscopy under anesthesia. In seven patients, the blade was discovered in the patient's clothing, diaper, or on the floor of their residence, and hospitalization was not required, as ingestion was ruled out. Furthermore, the results of the ISS analysis on the product's safety demonstrated the potential for accidental ingestion of the blade-containing stick, a risk that was not adequately conveyed by the warning label on the packaging.
DISCUSSION
Although the risks and toxicological management associated with exposure to tobacco and nicotine are well known, the cases reported in this study highlight an unexpected clinical issue related to the accidental ingestion of the sharp metal blade contained in HTP sticks.
The patients treated by the PCC were infants and toddlers, an age group unable to report whether they had ingested the tobacco sticks or whether they had symptoms not detectable on clinical examination. Consequently, regardless of the certainty of ingestion, the patients had to be hospitalized in order to perform the necessary diagnostic procedures to determine the presence of the metal blade and its possible removal. It is widely acknowledged that X-ray exposure in pediatric patients should be undertaken with caution, given the potential risks associated with radiation 13 .
The decision to remove the blade endoscopically was dependent on a number of factors 14 . Primarily, it was crucial to ensure the correct localization of the blade within the esophagus or stomach. This required the availability in the hospital of an endoscopist and an anesthesiologist with expertise in the emergency management of pediatric patients. Additionally, a careful assessment of the potential risks associated with sedation in each individual case was essential. These risks may be influenced by a number of factors, including the presence of food in the stomach following a recent meal or the existence of chronic or acute diseases, as identified through a comprehensive clinical and anamnestic evaluation during the medical examination. Furthermore, the severity of the clinical manifestation upon admission to the emergency department must be carefully considered.
It is crucial to acknowledge that heated tobacco sticks are smaller in size than conventional cigarettes, which renders them more easily accessible and ingestible by infants and toddlers. The packaging of HTPs containing a metal blade indicates the presence of the blade in small, almost illegible letters, which may contribute to a lack of awareness of the risk. Each individual package has a 0.5 cm × 4 cm warning label, which reads: 'Caution. Do not ingest or disassemble. This product contains sharp metal parts that can cause serious injury if ingested. Keep out of reach of children' (Figure 1).
In the event of ingestion, even if there is only a suspicion that both tobacco and the metal blade have been swallowed, the child must be hospitalized and undergo appropriate tests to determine whether nicotine poisoning or cut injuries are present.
Notwithstanding the potential dangers of the blade, lacerations were documented in only one case, involving a child who had vomited the blade. It is possible that in the other cases the blade remained trapped in the casing of the heated tobacco stick, or that the gastric mucosa was protected by the presence of food.
The exact cause of the gastric symptoms remains unclear, though it is possible that they were caused by nicotine or by gastric irritation from the blade. However, it is important to highlight that one of our patients had cut injuries to the oral cavity directly attributable to the sharp edges of the blade.
Two components of the HTP sticks were identified as potentially dangerous: the tobacco filler and the metal blade. The tobacco filler contains nicotine, an alkaloid with a stimulating effect on the central nervous system and the cardiovascular system, and an irritant effect on the stomach and intestines 15 . The blade is flat, thin, flexible, one centimeter long, and has sharp edges. From a toxicological perspective, the metal is not absorbable if ingested acutely and thus poses no risk from the material of which it is composed. However, the composition of the blade is not fully known, and it is unclear whether the induction heating process has resulted in any modifications.
According to the experience of the medical staff of the PCC, most emergency department physicians were unaware of the presence of the blade in HTP sticks. In fact, they contacted the PCC only for information about treating potential tobacco poisoning. This resulted in a delay in the proper treatment of the patient and a risk of overlooking potential internal injuries caused by the blade.
Response policy actions
In response to a clinical issue raised by the Milan Poison Control Center, the Italian Ministry of Health's Prevention Department commissioned an analysis of blade-containing HTP samples on 5 December 2023. The objective was to ascertain the actual hazard posed by the sharp metal parts present in the sticks and to evaluate the adequacy of the warnings in terms of clarity, visibility and size.
On 18 December 2023, the Ministry of Health, in response to the data presented in the PCC report, issued an alert to draw attention to the presence of such metal parts and the risks associated with ingestion by young children or individuals with cognitive disabilities. The alert was disseminated to regional health authorities, the Italian Society of Emergency Medicine, the Italian Society of Pediatrics, the Italian Society of General Medicine and Primary Care, the Italian Federation of General Practitioners, and to all Poison Control Centers 16 . The same alert was disseminated throughout the JATC 2 network to inform EU Member States about this health issue.
Following the analyses conducted by ISS, on 15 March 2024 the Prevention Department of the Ministry of Health imposed new, larger, and more visible warnings on the package (Figure 2), to be implemented within two months. The new warning must cover approximately 19% of the reverse side of the package. In the interim, the manufacturer was obliged to disseminate the revised warning to retailers, who were required to affix it to the front of the shelf displaying the blade-containing HTPs 17 . This was a provisional solution pending the disposal of old packaging. As of June 2024, the new packages with larger warnings are available on the market.
It is recommended that regulators consider banning HTPs containing metal blades, or at least require larger warnings on the packages of these products.
Strengths and limitations
It is important to note that it was challenging to identify all the cases related to blade-containing HTPs among those involving electronic cigarettes and tobacco products. Indeed, over the years, tobacco derivatives have been classified generically by category in the PCC database (tobacco, cigarettes, nicotine, electronic cigarettes, etc.). The coding of this specific product was only recently introduced. Moreover, the coding of the product presents two critical issues: accurate identification by the individual requesting the consultation and adequate coding by the toxicologist carrying out the consultation. It is also important to consider that the emergency room physician, who is already experienced in the management of tobacco poisoning, may not contact a PCC if they are unaware of the presence of the blade.
In light of these considerations, we believe that the number of reported cases in this study is underestimated. However, it is sufficient to highlight an emerging clinical issue and to prompt corrective actions to limit health risks for consumers and young children.
CONCLUSIONS
The Ministry of Health disseminated an alert about the potential dangers of metal-blade-containing HTP stick ingestion to regional health authorities, the Italian Society of Emergency Medicine, the Italian Society of Pediatrics, the Italian Society of General Medicine and Primary Care, the Italian Federation of General Practitioners, and all Poison Control Centers. The observed clinical cases required immediate action to mitigate health risks, particularly given that exposure primarily affected the pediatric population. Modifying the packaging with more visible warnings and raising awareness among both the public and medical practitioners, as the Ministry of Health did, is crucial to prevent future cases and ensure a prompt and informed response in emergency situations.
It is imperative that public health institutions implement information campaigns and corrective actions to reduce the risk associated with the ingestion of these products. It is of the utmost importance to raise awareness among the general public and medical professionals, particularly pediatricians and emergency department physicians, about the potential dangers of metal-blade-containing HTP stick ingestion by the pediatric population and individuals with cognitive impairment.
Figure 2. Modified, more visible warning, after the corrective measure by the Italian Ministry of Health (Decree of March 2024).
Risk factors associated with nocebo effects: A review of reviews
Objective: This meta-review aims to identify and categorize the risk factors that are associated with nocebo effects. The nocebo effect can exert a negative impact on treatment outcomes and have detrimental outcomes on health. Learning more about its potential predictors and risk factors is a crucial step to mitigating it.
Methods: Literature review studies about the risk factors for nocebo effects were searched through five databases (PubMed, Scopus, The Cochrane Library, PsycINFO, and Embase) and through grey literature. Methodological validity and risk of bias were assessed. We conducted a thematic analysis of the results of the forty-three included reviews.
Results: We identified nine categories of risk factors: prior expectations and learning; socio-demographic characteristics; personality and individual differences; neurodegenerative conditions; inflammatory conditions; communication of information and patient-physician relationship; drug characteristics; setting; and self-awareness. We also highlighted the main biochemical and neurophysiological mechanisms underlying nocebo effects.
Conclusions: Nocebo effects arise from expectations of adverse symptoms, particularly when triggered by previous negative experiences. A trusting relationship with the treating physician and clear, tailored treatment instructions can act as protective factors against a nocebo effect. Clinical implications are discussed.
Introduction
The nocebo effect is a phenomenon in which an individual experiences negative side effects from a treatment or procedure, even though it contains no medically active ingredients. This negative outcome is a result of the person's negative expectations and beliefs about the treatment, rather than any actual physical property of the treatment itself (Evers et al., 2018). The term "nocebo effect" was originally coined to indicate the negative counterpart of the placebo effect and to distinguish the adverse from the beneficial effects of placebos (Faasse et al., 2013): treatment-related nonspecific factors can elicit placebo effects when they have positive meaning and nocebo effects when they hold a negative connotation, leading in this latter case to a worsening of symptoms. For example, negative expectations can induce an increased risk of developing various health-related conditions, including respiratory diseases (Vlemincx et al., 2021), pain (Manaï et al., 2019), gastrointestinal symptoms (Ma et al., 2019), influenza-like symptoms (Pagnini, 2019), and postoperative morbidity (Maroli et al., 2022).
Although not always distinct, researchers examine two variants of nocebo effects: primary nocebo effects and nocebo side effects (Faasse et al., 2013). Primary nocebo effects refer to the effects as the primary negative outcome of a treatment/medical procedure intended as harmful. Such outcomes were described by Hahn as nocebo effects (Hahn, 1997), which were distinguished from 'placebo side effects', whereby a treatment primarily intended as beneficial can cause harmful outcomes. This is the case of nocebo side effects, namely, unpleasant symptoms that arise following a treatment that is primarily intended as beneficial, but of which specific side effects are anticipated (Meeuwis et al., 2021). Notably, recent evidence suggests that primary manipulations of nocebo and nocebo side effects do not produce equivalent results. Caplandies et al. (2017) demonstrated how instructions on the nocebo effect can produce different outcomes depending on whether the adverse effect is described as a primary effect or as a treatment side effect (Faasse et al., 2013). Nocebo effects are prompted in research but, unlike placebo effects, are not purposefully elicited in clinical practice, since this would undermine the basic ethical standards of beneficence and non-maleficence.
The phenomenon of nocebo effects, characterized by the adverse outcomes resulting from negative expectations and beliefs, has garnered increasing attention in recent years (Colloca and Benedetti, 2016). The potential impact of nocebo effects on treatment outcomes necessitates the development of protocols to effectively communicate the risks associated with specific treatments, thus minimizing the occurrence of these effects. Considering its implications for clinical practice, patient well-being, and healthcare costs, understanding the underlying mechanisms of the nocebo phenomenon is of paramount importance (Rodríguez-Monguió et al., 2003; Webster et al., 2016). The pervasiveness of the nocebo phenomenon and its potential consequences highlight the need to further elucidate the key factors that predispose, sustain, or exacerbate it. It is crucial to stay abreast of the rapidly evolving literature in this field (Weimer et al., 2022) and continually update our understanding to incorporate the latest findings.
Interest in nocebo effects has greatly increased over time: for example, a PubMed keyword search returned 1 article in 1961, 17 in 2007, and 154 in 2022. Considering this context, the present paper aims to provide an up-to-date review of reviews published within the last 12 years. By summarizing and systematizing the newest findings from these reviews, we seek to identify and consolidate the primary factors that predict or act as catalysts for nocebo effects. These include psychosocial and clinical risk factors, as well as neurobiological moderators and mediators that can represent the underlying mechanisms.
Protocol registration and eligibility criteria
Our search strategy followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (Page et al., 2021). The protocol was registered on the International Prospective Register of Systematic Reviews. Only literature reviews (systematic reviews, meta-analyses, scoping reviews, and mini reviews) investigating nocebo effects and published from 2012 onwards were considered. We excluded unavailable full texts, conference proceedings, abstracts, commentaries, editorials, opinions, book chapters, journal articles, and debates. Literature was limited to studies in the English language involving human adults over 18 years old.
Search methods for identification of reviews
The research included reviews published from 2012 to April 2024 in which nocebo risk factors were considered. We conducted a systematic search on PubMed (National Library of Medicine and National Institutes of Health), Scopus, The Cochrane Library, PsycINFO, and Embase. Grey literature documents, identified via Google Scholar and the OSF Home, arXiv, SocArXiv, PsyArXiv, and medRxiv databases, were also included. Keywords and text words used in the search for each of the considered databases were "nocebo effects*" OR "nocebo mechanism*" combined with "risk factors" OR "mediators" OR "moderators".
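To illustrate how such a search string is assembled, the sketch below builds the boolean query from the two groups of terms listed above. The helper function is hypothetical: the actual searches were run through each database's own interface, not through code.

```python
# Hypothetical sketch: assembling the boolean search string described above.

nocebo_terms = ['"nocebo effects*"', '"nocebo mechanism*"']
factor_terms = ['"risk factors"', '"mediators"', '"moderators"']

def build_query(left_terms, right_terms):
    """Combine two OR-groups of terms with an AND, as in the review's strategy."""
    left = " OR ".join(left_terms)
    right = " OR ".join(right_terms)
    return f"({left}) AND ({right})"

query = build_query(nocebo_terms, factor_terms)
print(query)
# ("nocebo effects*" OR "nocebo mechanism*") AND ("risk factors" OR "mediators" OR "moderators")
```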
Review selection
Two authors (F.G. and C.C.) independently screened the title, abstract, and key terms in the first instance for potential inclusion, followed by a full-text screening and data extraction. Data extraction included the following procedures. Level 1 - Title and abstract screening: titles and abstracts identified by the electronic database searches were screened for potential inclusion; in cases where a decision for exclusion or potential inclusion could not be made from the title/abstract, the full text was retrieved. Level 2 - Full-text screening: full-text articles of the included reviews were then retrieved and further screened based on inclusion and exclusion criteria; disagreements were resolved by discussion at each stage in the process; if an agreement could not be reached, a third reviewer (F.P.) was consulted. In addition, reference list searches of included reviews were manually undertaken and screened as per the same selection process to identify further studies of relevance for inclusion.
Data extraction and management
Data from the selected reviews were inserted into an Excel template by two independent researchers (F.G. and C.C.). The following data were extracted and included in the template: bibliographic information of papers (i.e., authors, country, and year of publication), aim, and the results reported (risk factors and outcome). Any disagreement was discussed with a third author to reach an agreement. Article authors were contacted via e-mail in case of missing information.
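To make the shape of the extraction template concrete, the sketch below models a single extracted record. The field names mirror the description above but are otherwise assumptions; the actual template was an Excel spreadsheet, not code.

```python
# Illustrative model of one row of the data-extraction template described above.
# Field names are assumptions based on the text, not the actual Excel template.

from dataclasses import dataclass, field

@dataclass
class ExtractedReview:
    authors: str
    country: str
    year: int
    aim: str
    risk_factors: list = field(default_factory=list)  # risk factors reported
    outcome: str = ""                                  # outcome reported

# Example record with placeholder content:
record = ExtractedReview(
    authors="Example et al.",
    country="Italy",
    year=2021,
    aim="Review of nocebo risk factors in chronic pain",
    risk_factors=["prior expectations", "anxiety"],
    outcome="pain intensity",
)
print(record.risk_factors)
```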
Assessment of risk of bias in included reviews
All included reviews were quality appraised by two reviewers (F.G. and C.C.) independently. Discrepancies were resolved through dialogue and consultation with a further reviewer (F.P.) when necessary. Critical appraisal involves considering the risk of potential selection bias, information bias, measurement bias, or confounding. Full-text articles selected for data extraction were assessed for methodological validity and risk of bias using the Risk of Bias Assessment Tool for systematic reviews and meta-analyses developed by the National Heart, Lung, and Blood Institute (Mangano, 2004).
Strategy for data synthesis
We used thematic synthesis to summarize the results (Barnett-Page and Thomas, 2009). A reviewer (F.G.) coded the results of the included reviews and recorded the concepts that stood out as risk factors. All included reviews were re-read to ensure that relevant data were captured and integrated appropriately into the preliminary themes and sub-themes. All authors reviewed the preliminary analysis to ensure that key data had been captured from the included reviews and discussed the concepts to identify similarities and differences. The bridging of concepts across reviews was performed by grouping similar concepts and creating new ones if necessary. Firstly, 'descriptive' macro-themes specific to each review were identified; as a second step, a thematic analysis based on an integrative approach was conducted to conceptually combine the most frequent descriptive themes that recurred across the individual reviews. In this latter process, the reviewers' effort was to go beyond the meaning of the descriptive themes to generate new explanations and interpretative hypotheses that converged into broader categories summarizing the nocebo risk factors. Separate files were created (F.G.) for each identified category, together with the citations of the reviews that emphasized the category in question. Emerging themes were then identified, followed by sub-themes, to construct the 'core category'. The coding scheme required a circular approach, re-reading the articles several times and integrating new information that was initially left out, in order to also examine the relationships between the themes (F.G. and C.C.). In this way, it was possible to create links between and within the categories. We adopted a weight-of-evidence approach (Regoli et al., 2019), whereby the strength of evidence for each risk factor was identified based on the number of studies investigating it. Our search strategy resulted in the identification of forty-three reviews, for each of which we summarized and synthesized the risk factors related to nocebo effects. The selection process is reported in Fig. 1.
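The weight-of-evidence step can be pictured as a frequency count across reviews: the more reviews investigate a risk factor, the stronger the evidence attributed to it. The following sketch is a hypothetical illustration of that tally with invented data, not the authors' actual procedure.

```python
# Hypothetical illustration of the weight-of-evidence tally described above:
# the strength of evidence for each risk factor is proxied by how many
# reviews investigated it. The coded data below are invented.

from collections import Counter

# Each inner list stands for the risk-factor categories coded in one review.
coded_reviews = [
    ["prior expectations and learning", "setting"],
    ["prior expectations and learning", "personality and individual differences"],
    ["communication of information", "personality and individual differences"],
]

tally = Counter(factor for review in coded_reviews for factor in review)
for factor, n_reviews in tally.most_common():
    print(f"{factor}: investigated in {n_reviews} review(s)")
```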
Included reviews
We identified 43 eligible reviews. Among the articles included, 23 were systematic reviews, 12 were meta-analyses, 6 were narrative reviews, and 2 were scoping reviews. Each review, along with its respective aims, results, and considered risk factors, is summarized in Table 1.
Risk of bias across reviews
The quality assessment revealed that none of the 43 reviews had received a poor-quality rating. All the reviews and meta-analyses were based on a focused question that was adequately formulated and described. Of these, seven reviews adequately fulfilled all eight criteria of the checklist, including a systematic literature search strategy, an independent screening of full-text articles, and an assessment of publication bias. Nine of the systematic reviews assessed publication bias, whereas the remaining thirteen either did not report it (five) or it was not possible to apply and/or report these criteria (eight). Ten of the included meta-analyses assessed the degree of heterogeneity. However, one of the main limitations of the quality assessment was that not all categories could be applied to the different types of literature reviews, given their respective objectives and methodologies. The risk of bias for each study included in the review is shown in the Supplementary Material.
Synthesis of results
Through the thematic analysis conducted across the 43 reviews, we have identified nine categories of risk factors predicting the nocebo effect. Results are summarized in Table 2.
1. Prior expectations and learning. Negative expectations and learning are among the most relevant risk factors associated with the nocebo effect. Expectations can be engendered by different factors, including direct information, suggestions, and social cues (Wager and Atlas, 2015). Learning in its classical meaning (i.e., conditioning) occurs when a person who has had a previous exposure to a stimulus reacts to it, and then responds to the same stimulus through associative processes in a similar manner. However, expectancy and learning are not mutually exclusive (Colagiuri et al., 2015). Cognitive theories of conditioning postulate that this process can also be mediated by expectations, whereby previous experiences may build up to the point of shaping patients' expectations about the course of their illness (Dodd et al., 2017). In their review, Meeuwis and colleagues (Meeuwis et al., 2021) underlined the role of conscious expectations in nocebo effects on both open-label and closed-label verbal suggestions for itch. They highlighted that open-label verbal suggestions influenced conscious expectations, while closed-label (concealed) suggestions directly impacted itch levels without involving conscious expectations, possibly hinting at a role of learning. Social factors can also drive expectations, as seeing someone report side effects after receiving medical treatment can increase the likelihood of a similar nocebo effect (Petrie and Rief, 2019). Meanwhile, previous positive experiences may enhance placebo analgesic effects, while negative experiences can induce nocebo effects (Colloca and Benedetti, 2016). This category receives additional validation from a recent umbrella review conducted by Frisaldi et al. (2023). Their extensive analysis has demonstrated that, in the context of nocebo effects, the manipulation of expectations, conditioning, or a combination of both has proven effective in eliciting nocebo responses across diverse domains, such as pain perception, skin dryness, nausea, and cognitive performance (Frisaldi et al., 2023).
Table 2. Risk factor(s) and synthesis

Prior expectations and learning:
■ Expectations generated as the product of cognitive engagement involve the subjectively experienced likelihood of a future effect, often induced by verbal suggestions, which promotes the nocebo effect.
■ Learning mechanisms can involve classical conditioning, a process whereby the repeated association of an unconditioned effect with a conditioned stimulus increases the likelihood of the nocebo effect.
■ Negative beliefs about the overuse of medications are associated with increased side-effect expectations, and increased perceived sensitivity to medicines is associated with increased side-effect expectations.

Socio-demographic characteristics:
■ Women are more affected by conditioning than by verbal suggestions, while men respond more strongly to verbal suggestions.
■ Old age, lower levels of education and socioeconomic status, and living in rural regions are all positively related to nocebo reactions.

Personality and individual differences:
■ Anxiety, depression, a tendency toward somatization, and symptom amplification positively correlate with the nocebo effect.
■ Type A personalities, characterized by being aggressive, competitive, hostile, and pessimistic, are more likely to experience adverse symptoms.
■ The personality trait most frequently associated with the nocebo effect is neuroticism.

Neurodegenerative conditions:
■ Clinical problems, such as dementia, psychosis, hallucinations, orthostatic hypotension, and sleep disorders, result in a substantial nocebo effect.
■ Mild cognitive impairment or dementia related to Alzheimer's disease, a loss of prefrontal functional connectivity, and reduced executive control are associated with a higher nocebo effect.
■ Considering the pronounced neuronal degeneration in the dorsolateral prefrontal cortex, orbitofrontal cortex, and anterior cingulate cortex in Alzheimer's disease, it is reasonable to anticipate a disruption of placebo responsiveness in these patients.

Inflammatory conditions:
■ Findings integrate with the existing literature on placebo/nocebo effects, emphasizing the interconnectedness of psychological, neurological, and immunological factors within the framework of biosimilar challenges in inflammatory settings.
■ Patient-related factors, including comorbidities, changes in dopamine pathways, and somatoform disorders, influence the manifestation of the nocebo effect.
■ Instances of the nocebo effect occur in patients transitioning to biosimilars, with unfavorable therapeutic responses and potential benefits upon returning to the innovator drug.

Communication of information and patient-physician relationship:
■ Inadequate non-verbal behaviors, such as lack of eye contact or body gestures, are associated with an increased perception of adverse symptoms.
■ Not explaining the nocebo effects and focusing on losses rather than benefits can increase side effects.
■ A cold communication style, characterized by directing gaze and body posture away from participants, a lack of empathic remarks, and a rigid attitude, as well as a missing treatment alliance, promotes the nocebo effect.

Drug characteristics:
■ Drugs with generic brand labeling are associated with an increased nocebo effect.
■ Oral medications induce a stronger nocebo effect.

Setting:
■ Reduced patient trust in clinicians and perceptions of low professional status can lead to negative effects.
■ An improper healing setting, including the type or quality of lighting, sound, architecture, interior design, technology, or missing facilities, can trigger nocebo effects.

Self-awareness:
■ Nocebo effects can be triggered by non-conscious cues (i.e., operating outside of conscious awareness).
■ The relation between self-awareness and the likelihood of a nocebo effect is still a matter of debate.
■ Gaps in patient awareness and understanding may cause the perception of nocebo-related adverse symptoms.
2. Socio-demographic characteristics. A recurrent result among the considered reviews is that women are more likely to experience nocebo effects across a range of medical conditions (Kravvariti et al., 2018). However, the explanation for any gender difference is likely complex and multifactorial. For example, in a narrative review by Manaï et al. (2019) on how to prevent, minimize, or extinguish nocebo effects in pain, women were more affected by conditioning than by verbal suggestions, whereas men responded strongly to verbal suggestions. However, it is hard to rule out the role of confounders such as anxiety, which is associated with higher nocebo effects and is more prevalent in females than in males (Vambheim and Flaten, 2017). Future research is needed to gain knowledge on whether the interaction between gender and cognition may influence nocebo effects in experimental or clinical settings (Data-Franco and Berk, 2013). Beyond gender, older age, lower socioeconomic status, and living in rural regions are all factors that have been associated with adverse symptoms related to the nocebo effect, such as headache, back and joint pain, intolerance to food, and sexual dysfunction (Data-Franco and Berk, 2013). Mixed findings have instead been reported regarding the level of education. As reported (Bavbek et al., 2015), graduation from college compared to high school or primary school may be associated with increased expectations or information-generated effects, since well-educated subjects are more likely to be open to receiving information from different sources and may thus be more at risk of developing negative expectations.
3. Personality and individual differences. Some studies have addressed the role of personality as a predictive factor of nocebo effects and adverse event reporting. Traits such as neuroticism, pessimism, and type A personality may increase the risks for such phenomena (Data-Franco and Berk, 2013). As reported in a systematic review conducted by Kern (2020), neuroticism is the personality trait most often reported to correlate positively with the nocebo effect. Moreover, pessimism is relatively consistently associated with nocebo, with an equally positive correlation between low optimism and the tendency to perceive negative effects (Bagarić et al., 2021). All these characteristics are more frequently found in the type A personality (aggressive/competitive/hostile), which appears to be three times more likely than behavior pattern B to be associated with a nocebo effect. Fear and anxiety are also positively associated with the nocebo effect and have been found to increase the probability of negative effects of treatment (Planès et al., 2016). A review of contemporary experimental research pointed out that highly anxious people are prone to heightened attention to their body and bodily sensations and could therefore be more susceptible to the nocebo effect (Manaï et al., 2019). Other studies also confirm that patients with conditions such as depression or anxiety have a higher tendency towards somatization, which has been shown to result in increased reports of side effects. Another characteristic that might be relevant to the nocebo effect is anxiety sensitivity, namely, a fear of anxiety itself because of the belief that anxiety can have detrimental physical, mental, and social consequences. Among anxiety-sensitive individuals, the expectation of unpleasant symptoms and a consequent increase in anxiety might further heighten fear and related bodily sensations, thus leading to stronger nocebo effects. While some preliminary findings have suggested that anxiety sensitivity is associated with the nocebo effect, further research examining this topic is required (Bagarić et al., 2021). Other traits that seem to positively correlate with nocebo effects are suggestibility and pain catastrophizing. Suggestibility, especially in terms of trait-like characteristics facilitating body sensations (e.g., physical suggestibility), has been linked to nocebo effects. Catastrophizing, an important psychological factor for pain management therapies, is also found to be relevant for nocebo (Blasini et al., 2017; Meeuwis et al., 2021). The nocebo effect is significantly more pronounced in clinical populations than in healthy ones. Multiple factors, such as higher baseline anxiety, may predispose clinical populations to experience greater nocebo effects. Combining these findings, nocebo effects seem to consistently manifest across various somatic outcomes, such as pain, nausea, and headache (Rooney et al., 2023).
4. Neurodegenerative conditions. Some neurological modifications may also increase susceptibility to nocebo reactions (Benedetti et al., 2016; Skyt et al., 2020). In the field of brain diseases, the highest nocebo dropout rate has been observed in Parkinson's disease (PD) (Stathis et al., 2013). Human experimental evidence suggests that negative expectations could result in motor deterioration in patients with PD (Frisaldi et al., 2024). PET studies showed that high placebo effects were associated with greater dopamine (DA) and opioid activity in the nucleus accumbens, whereas nocebo effects were associated with a deactivation of DA and opioid release (Frisaldi et al., 2015). Both systems modulate several processes, including the regulation of reward and affective states. Thus, increased nocebo should be expected in PD, although DA replacement therapy results in changes in many aspects of neural activity within the entire basal ganglia cortical networks that are not yet fully understood. Setting aside the nocebo dropout rate during short-term interventions for headache and multiple sclerosis (MS), the lowest nocebo dropout rate has been observed in restless legs syndrome (RLS) (Silva et al., 2017) and during disease-modifying therapies (DMTs) in MS (Papadopoulos and Mitsikostas, 2010). A meta-analysis conducted by Leal Rato and colleagues (Leal Rato et al., 2019) that included 236 randomized controlled trials identified that the magnitude of the nocebo effect in Parkinson's disease is substantial. The results demonstrated that most placebo-treated PD patients suffered adverse event symptoms (56%), providing evidence of a strong negative effect of an inert intervention compared to other neurological diseases. Overall, it is improbable that the nocebo effect is restricted to a particular disease or a singular pathophysiological process, but it seems that some patient populations, such as those with PD, may be more susceptible to it. While the literature has shown the relevance of the nocebo effect in Parkinson's disease, a common belief among neurologists is that the nocebo effect may also be relevant in other neurological conditions, such as multiple sclerosis or epilepsy (Spanou et al., 2019).
5. Inflammatory conditions. In the context of inflammatory diseases, neurogenic inflammation assumes a pivotal role in pain hypersensitivity and locally exaggerated immune reactions, although its susceptibility to conditioning remains insufficiently explored (D'Amico et al., 2021; Linsenbardt et al., 2015). Immune-related mechanisms may underpin the intricate interplay between neurogenic inflammation and locally exaggerated immune responses (Meeuwis et al., 2021). Nocebo effects, particularly in the context of inflammatory conditions, involve intricate interactions with the immune system. Several immune-related mechanisms contribute to the manifestation of nocebo responses, including the release of pro-inflammatory cytokines, the bidirectional communication between the nervous and immune systems (neuroimmune interactions), and the field of psychoneuroimmunology, which explores the connections between psychological processes and immune responses (Frisaldi et al., 2015, 2023). The release of inflammatory mediators and the neurobiology associated with nocebo effects also play a role (Zhang et al., 2022). A noteworthy nocebo effect in this field is an unexplained, unfavorable therapeutic response following a switch to biosimilars, often followed by a beneficial effect upon reverting to the innovator drug (Kristensen et al., 2018). Notably, the biosimilar retention rate at the third infusion demonstrated substantial efficacy, reaching 85% among patients with inflammatory bowel disease (IBD), rheumatoid arthritis (RA), or axial spondyloarthritis (AS) (Pouillon et al., 2018). Within the spectrum of neurological conditions, the nocebo effect exhibits significant variation. Specifically, in chronic inflammatory demyelinating polyneuropathy (CIDP), the nocebo effect is notably smaller compared to other neurological diseases, as reported in a comprehensive systematic review and meta-analysis (Zis et al., 2020). One plausible explanation lies in the fact that CIDP primarily affects the peripheral nervous system, distinguishing it from other disorders that predominantly impact the central nervous system. Comorbidity with somatoform disorders and alterations in dopamine pathways within the brain have been suggested as potential contributors to this observed difference. The impact of the route of administration on the nocebo effect remains a subject of debate. In various analyses, the route of administration played a role in nocebo dropout rates. For instance, trials involving botulinum toxin for the prophylactic treatment of primary headaches exhibited a significantly lower nocebo dropout rate compared to oral medication (Zhang et al., 2022). However, in multiple sclerosis, the route of administration did not significantly affect nocebo rates. In a meta-analysis, subcutaneous delivery of immunoglobulin primarily caused adverse events related to the injection site, yet the nocebo dropout rates did not show dependency on the route of drug administration (Ma et al., 2019). In the exploration of nocebo responses within the experimental endotoxemia model, individuals prone to nocebo effects reported significantly more bodily sickness symptoms. This observation suggests a link between the perception of symptoms and the influence on perceived treatment allocation (Benson and Elsenbruch, 2019). Overall, mild, benign ailments commonly reported by healthy individuals may be misattributed as unwanted drug effects in pharmacological trials. In the realm of inflammatory bowel disease trials, nocebo responses
take on a distinct character, manifesting as an increased reporting of adverse events when patients transition from an established, albeit expensive, biologic therapy to a more cost-effective approach with biosimilars (D'Amico et al., 2021; Zis and Mitsikostas, 2018). Furthermore, the specific medications under investigation, particularly those targeting the immune system, may influence nocebo responses (Frisaldi et al., 2023). 6. Communication of information and patient-physician relationship. Preliminary evidence indicates that communication and education techniques might be effective in reducing nocebo effects induced through instruction (Rooney et al., 2023). Non-verbal communication, such as eye contact, posture, grimaces, and movement style during the encounter with a patient is important, as these forms of communication can predispose patients to experience nocebo effects both consciously and subconsciously (Kravvariti et al., 2018). Compared to a cold communication style (i.e., directing gaze and body posture away from participants and no empathic remarks), a warm communication style (i.e., gazing at the patient, welcoming in a friendly manner, an open body posture, and adding empathic remarks) of clinicians resulted in positive expectations (e.g., expectations of shorter pain duration) and a decrease in anxiety and negative mood (Daniali and Flaten, 2019). Overall, a cold communication style resulted in higher anxiety levels and expectations of longer pain duration in patients. In this regard, the review of Hansen and Zech (2019) emphasizes the importance of medical personnel adopting explicit/implicit positive expressions towards their patients; otherwise, this could lead to reduced effectiveness of the treatment through nocebo-like mechanisms. Indeed, verbal and nonverbal communications between physicians and nursing staff contain numerous unintentional negative suggestions that may trigger a nocebo effect: body posture, tone of voice, a shrug of the shoulders, a frown, or a furrowed brow (Planès et al., 2016). Attitude, as another component of non-verbal communication, also plays a key role in predicting the occurrence of a nocebo effect: physicians who are more encouraging, kind, and affectionate and provide a clear diagnosis appear to be more effective, for example, in reducing levels of perceived pain and the time needed to improve than physicians who adopt a more rigid attitude and offer no consolation (Data-Franco and Berk, 2013; Planès et al., 2016). Furthermore, informed consent to therapeutic interventions is part of the encounter in which negative anticipation is frequently introduced. Potential adverse events, although rare, frequently monopolize discussions and tend to be framed negatively; for example, physicians will usually state the small percentages of patients who experience adverse events, rather than the large percentage of patients who tolerate the medication well (Planès et al., 2016). Negatively framed information is also associated with higher expectations of side effects, whereas framing and customization of information help to develop more functional treatment expectations and prevent nocebo effects induced by expectation (Smith et al., 2020). However, the physician's manner during the discussion might be a more pertinent risk factor for nocebo effects than the actual content of the information provided, as might interactions with nonmedical staff and fellow patients (Evers et al., 2018). At the same time, it is important to keep in mind that anxious or pessimistic patients can also
actively seek out negative information by themselves (peers, the Internet, and leaflets on drugs). Many sources of medical information on the Internet and in conventional media overstate the negative effects of treatments, and patients seeking consultation in online forums and blogs might be susceptible to the nocebo effect due to misguided beliefs stemming from this overly negative information. This, in turn, can lead to drug intolerance and non-adherence to medications (Evers et al., 2018; Planès et al., 2016). Social features such as the physician's reputation and references, attire, grooming, beliefs, and manners can also affect the patient's expectations of the treatment outcome. A qualitative systematic review (Daniali and Flaten, 2019) revealed that even experimenters' and/or clinicians' status may determine the nocebo effect: for example, higher professional status and higher confidence of experimenters/clinicians led to lower pain reports, more accurate pain ratings, and better physical and emotional states. The nature of the therapeutic alliance may also be a driver of the nocebo effect, with a hostile-dependent relationship being an exemplar (Dodd et al., 2017). Overall, several factors converge in the construction of an authentic interpersonal relationship that can buffer against nocebo effects: patient-oriented information, an empathic attitude on the part of the therapist that inspires trust, as well as empowerment to support self-efficacy and individual responsibility (Neumann et al., 2022; Smith et al., 2020). 7. Drug characteristics. Not much research has investigated whether the type of medication received by the patient can significantly contribute to the nocebo effect. The available evidence seems to suggest that additional marketing features of a drug, such as price and labeling, are important factors that can influence the therapeutic effects (Planès et al., 2016). Patients and physicians generally consider generic drugs to have lower efficacy and to be associated with more adverse effects than their brand-name counterparts. Hence, the use of generic labeling has been associated with medication non-adherence. Furthermore, brand labeling of a medication can increase placebo effects, whereas generic labeling is associated with higher rates of nocebo effects (Faasse et al., 2013). Nocebo effects are also most prevalent in the initial period of trying a medication that is new to the patient, and the fear of experiencing adverse events associated with generic medication might be rooted in the fact that these medicines are often newer to the market and physicians have less experience in using them than branded medication. Moreover, as pointed out by several reviews (Petrie and Rief, 2019; Spanou et al., 2019), placebo tablets with a generic label were less effective than active ibuprofen, and fewer side effects were attributed to placebo tablets with brand-name labeling than to placebo tablets with a generic label. The nocebo effect can also be created by more subtle branding
cues when patients are switched from a branded to a generic medicine: drug switches from branded to generic can result in increased reports of side effects and complaints that the new drug is less effective (Petrie and Rief, 2019). Also, certain features of medications that are unrelated to their main pharmacological action, such as color, odor, or route of administration, can influence therapeutic efficacy. Finally, injectable therapies induce stronger placebo effects and have lower rates of nocebo effects than oral medications, as shown by studies of therapies for migraine or osteoarthritis pain (Kravvariti et al., 2018). 8. Setting. Besides patients and physicians, other features of the healthcare setting might introduce positive or negative anticipation of treatment effects. Examples include physical properties of the medical setting such as the type or quality of lighting, sound, architecture, interior design, and technology, as well as the ease and affordability of access to care. Kravvariti et al. (2018) highlighted the clinical relevance of contextual factors as triggers of nocebo effects in the healthcare setting, in terms of environment, architecture, and interior design, which should not be overlooked. The use of facilities adopting evidence-based design, such as furnishing, colors, artwork, light, outside views, temperature, soothing sound, and music, positively impacts patients' outcomes thanks to the creation of a proper healing setting, which can reduce nocebo-induced adverse symptoms. These contextual factors act as a continuous outcome-relevant influence throughout the entire process, that is, during anamnesis, diagnosis, implementation advice, and the final evaluation (Neumann et al., 2022). 9. Self-awareness. Some evidence suggests that the level of awareness regarding certain stimuli may influence outcomes. Indeed, there is a large literature suggesting that behavior can be motivated by stimuli that are not consciously perceived because they are presented at low intensities or masked from conscious awareness (Custers and Aarts, 2010), sometimes referred to as subliminal stimuli (Powers, 1973). However, the role of awareness, not only for placebo but also for nocebo effects, is still a matter of debate. In their comprehensive systematic review, Webster and colleagues (Webster et al., 2016) claim that there is little evidence that self-awareness increases the likelihood of a nocebo effect. Both placebo and nocebo effects can be triggered by non-conscious cues (i.e., operating outside of conscious awareness), mixing these results with those from conditioning mechanisms (Jensen et al., 2012). For example, Colloca and Benedetti (2016), in their review, pointed out that patients undergoing pain treatment respond more positively when they are aware of receiving pain medication. Brain imaging studies have recently extended and corroborated these results by demonstrating that being aware of receiving a treatment potentiates the pharmacological analgesic effect of remifentanil in healthy subjects receiving acute thermal painful stimulation (Colloca and Benedetti, 2016). The focus on monitoring side effects in one's own body, and the consequent distraction from concentrating on the expected result of the drug, leads to an increase in symptoms and the absence of a placebo effect (Hansen and Zech, 2019).
Neurobiological mechanisms
There are multiple neurobiological mechanisms implicated in the development of nocebo effects, although these are often studied without a clear connection to the identified risk and protective factors. In general, most knowledge about these mechanisms comes from the field of pain and analgesia, even though much less research has been done on nocebo effects than on placebo effects (Colloca and Benedetti, 2016; Planès et al., 2016). The following endogenous substances have been identified so far in this setting: cholecystokinin (CCK), dopamine, corticoids, and opioids. Multiple hypotheses involving endogenous substances and psychosocial mediators have been put forward to explain the nocebo effect (Manchikanti et al., 2011). Overall, there appears to be an interaction among CCK, pain, and anxiety, which may help explain how certain individual differences, particularly anxiety, facilitate the nocebo effect. Benedetti et al. (2005) found that expectation-induced hyperalgesia can be blocked by administering proglumide, a mixed CCK type A/B receptor antagonist; proglumide blocks nocebo hyperalgesia with no effect on cortisol or adrenocorticotropic hormone, supporting a direct role of CCK in the hyperalgesic nocebo effect. More recent studies (Frisaldi et al., 2015) have shown that nocebo pain effects induced in post-operative patients by negative expectations regarding a saline infusion could likewise be prevented by proglumide, a nonspecific CCK-1 and CCK-2 receptor antagonist.
The role of certain neurodegenerative diseases as risk factors for nocebo effects may be linked to dopamine. High placebo effects are associated with greater dopaminergic and opioid activity in the nucleus accumbens (a significant decrease in μ-receptor binding potential), whereas nocebo effects are associated with a deactivation of dopamine (Benedetti et al., 2006; Skyt et al., 2020).
Patient-physician relational aspects, as well as setting components, may exert their influence through certain biochemical mediators. A study on the role of corticoids (Planès et al., 2016) showed that verbally induced nocebo hyperalgesia was associated with hyperactivity of the hypothalamic-pituitary-adrenal (HPA) axis, as assessed through adrenocorticotropic hormone and cortisol plasma concentrations. The blocking of the nocebo effect is not mediated by endogenous opiates, since infusion of the opioid antagonist naloxone does not prevent the attenuating effect of proglumide (Planès et al., 2016). The opioidergic and CCKergic systems may instead be activated by opposite expectations of either analgesia or hyperalgesia, respectively: verbal suggestions of a positive outcome (pain decrease) activate endogenous μ-opioid neurotransmission, while suggestions of a negative outcome (pain increase) activate CCK-A and/or CCK-B receptors (Planès et al., 2016). Although nocebo effects are often described as the negative counterpart of the placebo effect, with opposite effects on pain, these findings suggest that placebo and nocebo effects do not involve opposite neurotransmission activity, at least not in the endogenous opioid system (Planès et al., 2016; Scott et al., 2008).
Neuroimaging techniques have also made important contributions to our knowledge of nocebo hyperalgesia, especially in relation to expectations. Inducing negative expectations results in both amplified unpleasantness of innocuous thermal stimuli, as assessed by psychophysical pain measures (verbal subject report), and increased fMRI effects in the anterior cingulate cortex and in a region including the parietal operculum and posterior insula. Together with the hippocampus and the prefrontal cortex, these are regions also involved in pain anticipation. Changes in the hypothalamic-pituitary-adrenal axis, including rises in adrenocorticotrophic hormone and cortisol, have been linked to pain perception and expectation. Neuroimaging studies have examined this phenomenon: a positron emission tomography study reported changes in μ-opioid and dopamine D2/D3 neurotransmission with the nocebo effect, and functional magnetic resonance imaging studies have suggested the involvement of specific brain structures, such as the anterior cingulate, insula, and prefrontal cortex (Frisaldi et al., 2015). Due to the limited number of studies, more research is needed to draw firm conclusions.
Discussion
The present review of reviews offers a synthesized overview of the main risk factors associated with the nocebo effect, considering its impact and pervasiveness in the clinical context (Colloca and Miller, 2011). These factors span from individual-independent characteristics, such as contextual cues and physical properties of the treatment, to psychological and personal characteristics such as expectations, affective states, and personality traits. Crucial risk factors for the nocebo effect are prior expectations and learning, which often act synergistically towards its development and maintenance (Meeuwis et al., 2021). The role played by awareness, instead, is still a topic of open debate on which future studies should focus. Patients, as catalysts of the effect, often take an active role in shaping nocebo effects (Colloca and Miller, 2011). Moreover, findings on the impact of negative communication styles and attitudes suggest that a patient-centered approach rooted in demonstrating care and empathy can positively enhance a patient's experience within the clinical environment and activate psycho-sociobiological adaptations that counteract the nocebo phenomenon (Evers, 2017; Blasini et al., 2018). Socio-demographic factors, such as gender and level of education, also require further study to confirm whether, and to what extent, they are risk factors. For example, a lower level of education could more easily lead to misconceptions and thus represent a potential risk factor (Bizzi et al., 2019). The nocebo effect has occasionally been referred to as the 'evil twin' of the placebo effect. If this were true, one would expect the risk factors for a nocebo effect to be the inverse of the predictors for a placebo effect (Glick, 2016). As Webster and colleagues have already pointed out in their systematic review (2016), the mechanisms advocated in our review appear to be similar to those previously identified for placebo effects. The results of our review can therefore be compared with those of Webster's analysis, which clustered the basic risk factors into six categories (demographics; clinical characteristics; expectations; anxiety; personality; miscellaneous). Specifically, of these six categories, three were retained and confirmed in our study (socio-demographics, personality, and expectations). The categories 'anxiety' and 'clinical characteristics' were, in our case, merged into the category 'personality and individual differences', in which other characteristics (such as pessimism, anxiety, and catastrophizing) were included. The main difference between the two works is that in the present review, the category defined by Webster as 'miscellaneous' has been separated into six new categories (neurodegenerative conditions; inflammatory conditions; communication of information and patient-physician relationship; drug characteristics; setting; and self-awareness). Furthermore, it should be emphasized that the nine identified categories are not to be conceived as rigidly separated from one another, but are instead characterized by 'permeable' boundaries: factors across categories may be interdependent and feed off each other to induce a nocebo effect. For instance, patients receiving intravenous therapy as a group in the same infusion suite can share stories, experiences, and opinions, which might influence individual perceptions and trigger nocebo effects. At the same time, personality and individual differences can also determine how patients interpret the information that they receive.
The discussion of psychoneuroimmunology (PNI) is crucial in unraveling the complexities of the nocebo effect. PNI, which delves into the interplay among psychological, neurological, and immunological factors, plays a pivotal role in understanding how behavior and immunity reciprocally influence each other (Reza et al., 2023). This field challenges the conventional view of the immune system as an autonomous entity and offers a comprehensive bio-psycho-social perspective on health and illness. By examining the dynamic interactions among the nervous, endocrine, and immune systems, PNI contributes significantly to our comprehension of the interplay between psychosocial variables, health, and illness (Zachariae, 2009). In this context, the present review emphasizes various factors influencing the nocebo effect, with particular attention to the impact of prior adversity and its role in vulnerability. While our primary focus has centered on psychological and contextual factors, it is imperative to acknowledge the broader ecological perspective presented in the work of Harvey (2023). The evolutionary viewpoint on threat-responsive neuroinflammation provides valuable insights into the physiological dimensions of the nocebo effect. Harvey's investigation of shared neuroimmune pathways in pain, somatization, anxiety, and PTSD contributes to understanding cross-domain sensitization in the nocebo phenomenon (Harvey, 2023). The integration of ecological perspectives enriches our understanding of the multifaceted nature of the nocebo effect, offering a broader context for personalized recovery approaches that encompass physical, mental, and social aspects (Rossettini et al., 2018). This integrative approach aims to shed light on the therapeutic prospects emerging from these phenomena within the conceptual foundations of PNI-based mind-body therapies.
This review is not immune to some limitations. First, there is considerable heterogeneity in both the studies and the contexts in which the nocebo effect is examined. This variation is primarily observed in healthcare settings dealing with neurodegenerative or inflammatory diseases, but it also extends to the general population, where individuals are exposed to inert substances to assess various baseline or experimental factors. Secondly, the mixed findings regarding the category 'self-awareness' make it necessary to carry out further studies before claiming that this factor predisposes to the nocebo effect. Overall, given the increasing number of studies, subsequent meta-analyses are needed to aggregate, in a more analytical and precise way, the data needed to understand how each specific risk factor may have different effects. Another limitation arises from the inclusion of solely English-language studies, introducing the possibility of a publication bias. This bias may also originate from the tendency of published results included in reviews to predominantly report significant effects. However, placebo and nocebo effects can be viewed as significant or non-significant results depending on the specific research question. Moreover, the inclusion of grey literature is meant to mitigate this phenomenon.
Conclusions and future directions
The current review of reviews provides a comprehensive and up-to-date summary of the risk factors that predispose to or contribute to the nocebo effect. It is presented here as the most exhaustive synthesis available in the literature, providing a critical starting point for further investigation into the counterpart of the placebo effect. The findings highlight some of the risk factors underlying the nocebo effect and its potential impact on patient outcomes. Delving into the connections among the psyche, neural and endocrine functions, and immune responses, psychoneuroimmunoendocrinology focuses on applying this knowledge to medical treatment across various conditions (González-Díaz et al., 2017). These include immune disorders, autoimmune diseases, neoplastic conditions, and endocrine disorders. Psychoneuroendocrinology is a field of study that explores the intricate connections between psychological processes, the nervous system, and the endocrine (hormonal) system (Barbiani and Benedetti, 2020). It examines how psychological factors, such as stress and emotions, can influence the endocrine system and, in turn, impact various physiological functions and health outcomes. In the context of nocebo effects, psychoneuroendocrinology investigates how psychological factors, such as expectations, beliefs, and emotions, can activate the endocrine system to induce adverse reactions or symptoms. This field examines the interplay between psychological states and hormonal responses that contribute to the manifestation of nocebo effects, wherein the anticipation of negative outcomes leads to the actual experience of adverse symptoms or side effects. Notably, the clinical implications of psychoneuroimmunoendocrinology are particularly pronounced in the context of nocebo effects (Colloca et al., 2019). Epigenetic factors and significant stressors, operating through diverse pathways and neurotransmitters, play a pivotal role in modulating the psychoneuroimmunoendocrine axis, contributing to the onset of disease (González-Díaz et al., 2017). This reinterpretation emphasizes the relevance of psychoneuroimmunoendocrinology, highlighting its crucial role in understanding and addressing clinical implications, especially concerning the manifestation of nocebo effects across various pathologies.
Addressing the gap between clinical research, focused on minimizing or eradicating placebo mechanisms, and clinical practice, which requires an understanding of an intervention's maximum potential, is crucial (Petrie and Rief, 2019). The exploration of how to identify individuals prone to nocebo effects at the commencement of treatment underlines the need for further empirical evidence through future studies. Patient expectations serve as a valuable starting point for integrating these factors into clinical practice. Educating patients about their expectations has shown promise in improving outcomes, as seen in reduced disability outcomes after cardiac surgery and decreased postoperative pain through preoperative education about coping strategies (Colloca and Barsky, 2020). These implications extend to research design in clinical trials, emphasizing the necessity for no-intervention groups, standardized information presentation, and cautious interpretation of meta-analyses lacking uniformity (Colloca and Finniss, 2012). This comprehensive understanding of placebo and nocebo effects informs a nuanced approach to treatment, incorporating patient expectations and communication strategies for enhanced clinical outcomes (Colloca and Barsky, 2020).
The implication of this approach is that, to be truly patient-centered, medicine must pay attention to the predictive processes underlying the perception of symptoms, and thus assess which courses of action can efficiently lead the brain to predict the health of the organism (Ongaro and Kaptchuk, 2018). Several strategies can reduce or even prevent the nocebo effect, including a reformulation of the information provided to subjects on side effects, the creation of a reassuring and protective environment when prescribing drugs, and the promotion of a trust-oriented relationship with the referring clinicians (Manaï et al., 2019). As suggested by the Bayesian brain hypothesis, what we perceive is not the world as it is, but the brain's best guess about it, continually refined by incoming sensory evidence (Friston, 2010). Applying this hypothesis to the placebo and nocebo effects, which appears to be a promising interpretive framework (Pagnini et al., 2023), one feels symptoms, including pain, when the hypothesis with the lowest prediction error represents an abnormal somatic event (Van den Bergh et al., 2017). Furthermore, in the context of chronic pain, the brain does not merely passively perceive pain, but can also play a part in its intensification. A wide-ranging and debated issue concerns the decision of whether to provide full information to patients about their medication and/or therapy. While this might promote more active patient engagement, informing about side effects might also cause harm (Daniali and Flaten, 2019). To manage this ethical dilemma, it is necessary to consider adopting a shared approach that reduces expectation-induced side effects while respecting patient integrity (Cohen, 2014). Factors related to the environment that might increase the adverse effects of a drug or therapy should also be considered. Overall, this study underscores the urgent need for continued research into the nocebo effect and its clinical implications, as well as the importance of improving communication and the doctor-patient relationship to better understand patients' expectations and beliefs about the adverse effects of an intervention.
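To make the Bayesian brain account above concrete, the short Python sketch below implements the textbook precision-weighted integration of a prior expectation with sensory evidence. It is a toy illustration under our own simplifying assumptions (Gaussian prior and likelihood, hypothetical numbers on an arbitrary 0-10 symptom scale), not a model proposed by the cited authors.

def perceived_intensity(prior_mean: float, prior_precision: float,
                        sensory_input: float, sensory_precision: float) -> float:
    # Posterior mean of a conjugate Gaussian update: the percept is the
    # precision-weighted average of the expectation and the sensory evidence.
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mean + sensory_precision * sensory_input) / total

# A strong negative expectation (e.g., after reading alarming side-effect
# reports) pulls perception above the actual sensory signal: a nocebo-like shift.
actual_signal = 2.0  # mild bodily sensation
neutral = perceived_intensity(2.0, 1.0, actual_signal, 4.0)  # stays at 2.00
nocebo = perceived_intensity(7.0, 3.0, actual_signal, 4.0)   # rises to ~4.14
print(neutral, nocebo)

On this reading, reducing the precision of negative expectations, for example through balanced information framing, lowers the weight the prior carries in the percept, which is one way to formalize the de-biasing strategies discussed above.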
[Table 1. Summary table of the identified nocebo risk factor categories (socio-demographic characteristics; personality and individual differences; neurodegenerative conditions; and the remaining categories discussed above). One recoverable entry describes a review of adverse events resulting in treatment withdrawal by placebo-arm participants in randomized controlled trials (RCTs) of patients with rheumatic and musculoskeletal diseases (RMDs), and an evaluation of the potential contribution of nocebo effects in healthcare to these events.] | 2024-05-26T15:52:33.466Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "9858f746d595bc07bae9a9b9906477261969a9f0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.bbih.2024.100800",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfbbfaf351d8ce92405996d3d4c2e1e26bb43f08",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": []
} |
258509929 | pes2o/s2orc | v3-fos-license | ELFN1-AS1 promotes GDF15-mediated immune escape of colorectal cancer from NK cells by facilitating GCN5 and SND1 association
The ability of colorectal cancer (CRC) cells to escape from natural killer (NK) cell immune surveillance leads to anti-tumor treatment failure. The long non-coding RNA (lncRNA) ELFN1-AS1 is aberrantly expressed in multiple tumors, suggesting a role as an oncogene in cancer development. However, whether ELFN1-AS1 regulates immune surveillance in CRC is unclear. Here, we determined that ELFN1-AS1 enhanced the ability of CRC cells to escape from NK cell surveillance in vitro and in vivo. In addition, we confirmed that ELFN1-AS1 in CRC cells attenuated the activity of NK cells by down-regulating NKG2D and GZMB via the GDF15/JNK pathway. Furthermore, mechanistic investigations demonstrated that ELFN1-AS1 enhanced the interaction between the GCN5 and SND1 proteins, and this influenced H3K9ac enrichment at the GDF15 promoter to stimulate GDF15 production in CRC cells. Taken together, our findings indicate that ELFN1-AS1 in CRC cells suppresses NK cell cytotoxicity and that ELFN1-AS1 is a potential therapeutic target for CRC. Supplementary Information The online version contains supplementary material available at 10.1007/s12672-023-00675-6.
Introduction
Colorectal cancer (CRC) represents a prevalent form of malignant tumor of the digestive system and poses a substantial burden on global public health, particularly in China, where its incidence and mortality rates are notably high [1,2]. Metastasis makes this disease difficult to control and leads to a low five-year survival rate [3]. Before distant metastasis occurs, tumor cells must escape from immune surveillance within the human body [4], but the underlying mechanism of immune escape in CRC remains unclear. Increasing evidence has indicated that long non-coding RNAs (lncRNAs) [5] regulate cancer metastasis-related signaling pathways by interacting with microRNAs, mRNAs and proteins; examples include HOTAIR [6], AC020978 [7] and TMEM220-AS1 [8]. This highlights the dual nature of lncRNAs, with both tumor suppressor and oncogene functions in cancers. For example, the lncRNA LINC01569 functioned as a competing endogenous RNA (ceRNA) that competes with RAP2A for miR-381-3p binding, thereby affecting the metastasis of CRC cells [9]. Conversely, LINC00675 interacted with vimentin and enhanced its phosphorylation at Ser83, resulting in a reduction of gastric cancer cell metastasis [10]. In addition, many lncRNAs are involved in cancer cell avoidance of immune detection, although the underlying mechanisms are still unclear.
The innate immune response represents the initial barrier utilized by the body to eradicate malignant cells. Among innate immune cells, natural killer (NK) cells are considered the most potent cytotoxic effectors [11,12]. However, in cancer tissues, NK cell activity is inhibited, and insufficient infiltration of NK cells is correlated with the survival of patients with cancer [13]. lncRNAs also affect the ability of cancer cells to escape from immune surveillance by NK cells. For instance, the lncRNA GAS5 has been shown to augment the cytotoxic capabilities of NK cells towards liver cancer cells through regulation of the miR-544/RUNX3 axis [14]. In CRC, PTTG3P overexpression facilitated M2 macrophage polarization, and low PTTG3P expression altered the infiltration of NK, CD8+ T and TFH cells [15].
ELFN1-AS1 (ELFN1 antisense RNA 1, also named MYCLo-2) is a newly discovered antisense lncRNA of ELFN1 with a reported role as an oncogene in various solid tumors, including esophageal and ovarian cancers as well as CRC [16-18]. ELFN1-AS1 is regulated by MYC and has a role in tumorigenesis and cancer transformation [19]. In CRC, ELFN1-AS1 has been identified as a promoter of colon cancer progression through modulation of the miR-4644/Trim44 axis [20], and it also facilitates cell invasion, migration and proliferation by sponging miR-1250 to upregulate MTA1 [21]. However, whether ELFN1-AS1 is required for the escape of CRC from immune surveillance remains unknown. In the present study, our findings indicated that ELFN1-AS1 plays a suppressive role in NK-cell surveillance, and we revealed the underlying mechanism by which ELFN1-AS1 contributes to NK cell inactivation.
Materials and methods
Comprehensive details and methodologies pertaining to various conventional molecular biological experiments and bioinformatics analyses can be found in the supplementary information.
Blood sample collection and NK cell isolation
Ten fresh peripheral blood samples were obtained from healthy volunteers with an average age of 25.5 years. The study was approved by the Ethics Committee of the Affiliated Hospital of North Sichuan Medical College, and all procedures were in accordance with The Code of Ethics of the World Medical Association. Normal NK cells in peripheral blood were isolated and expanded using a Human NK Cell Enrichment Set-DM (BD Biosciences, USA) and an NK Cell Robust Expansion kit (Stemery, China) according to the manufacturers' instructions. The purity and amplification of NK cells were assessed using flow cytometry. Procedures were performed as described previously [22].
Plasmids, primers and shRNAs
Lentiviral particles containing (i) the sequence of ELFN1-AS1 (termed Lv-ELFN1-AS1) or empty vector (termed Lv-NC), or (ii) short hairpin RNAs (termed sh-RNAs) or their scrambled control, a nontargeting RNA sequence (termed sh-NC), were all designed, constructed, amplified and purified by Sangon Biotech. Detailed information about plasmid construction and transfection is provided in the supplementary information. Primers and shRNA sequences are listed in Table S1.
[Fig. 1. ELFN1-AS1 promotes the escape of CRC cells from NK cell surveillance in vitro and in vivo. A NK cells isolated from peripheral blood by negative magnetic separation. B qRT-PCR assessment of ELFN1-AS1 expression levels in stably ELFN1-AS1-silenced or -overexpressing CRC cell lines. After co-culture with NK cells: colony formation (C) and apoptosis (D) of ELFN1-AS1-knockdown CRC cell lines; colony formation (E) and apoptosis (F) of ELFN1-AS1-overexpressing CRC cell lines. G Volumes and weights of subcutaneously xenografted CRC tissue in nude mice (n = 6). H Tumor metastases in the lungs of nude mice. All data are from at least three independent experiments; *P < 0.05 indicates a significant difference.]
Cell co-culture
CRC cells were co-cultured with NK cells at a ratio of 1:10 for 12 h; alternatively, the supernatants of conditioned CRC cells were co-cultured with NK cells for 24 h. Procedures were performed as described previously [22].
Detection of cell proliferation, apoptosis and NK cell surface markers
See supplementary information.
Animal experiments
For in vivo tumor growth assays, 5 × 10⁶ HCT116 cells were collected and subcutaneously injected into the left armpit of male nude mice (5 weeks of age, 6 mice/group). Purified NK cells (5 × 10⁷) were injected via the tail vein at day 7 following tumor cell inoculation and once every 5 days thereafter. Tumors were measured with calipers, and tumor volume was calculated with the formula: Volume (mm³) = [width² (mm²) × length (mm)]/2. At day 21, tumors were dissected and weighed.
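As a quick arithmetic check of the volume formula above, the following minimal Python sketch computes tumor volume from caliper measurements; the example readings are hypothetical.

def tumor_volume_mm3(width_mm: float, length_mm: float) -> float:
    # Modified ellipsoid formula used above: V = (width^2 x length) / 2.
    return (width_mm ** 2) * length_mm / 2.0

# Hypothetical caliper readings for one tumor at two time points.
for width, length in [(5.0, 8.0), (7.5, 11.0)]:
    print(f"{width} x {length} mm -> {tumor_volume_mm3(width, length):.1f} mm^3")
# 5.0 x 8.0 mm  -> 100.0 mm^3
# 7.5 x 11.0 mm -> 309.4 mm^3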
For in vivo pulmonary metastasis assays, approximately 5 × 10⁵ HCT116 cells were injected into nude mice via the tail vein. Purified NK cells (5 × 10⁶) were injected via the tail vein at day 5 after tumor cell inoculation, and the NK cell injections were repeated every 5 days. After 4 weeks, the mice were sacrificed. The lungs were fixed in 4% paraformaldehyde and stained with H&E. Pulmonary metastases were counted and quantified in a random selection of high-power fields. The animal experiments were performed as described previously [22]. All animal studies were approved by the Medical Experimental Animal Care Commission of the Affiliated Hospital of North Sichuan Medical College and were in accordance with the National Research Council's Guide for the Care and Use of Laboratory Animals.
Signal pathway array
Total protein from ELFN1-AS1-silenced CRC cells and controls was extracted using RIPA lysis buffer containing 1% PMSF and used for signal pathway array analysis conducted by Shanghai Univ-bio Biotechnology (Art. No. ARY003B).
Blockade of signaling
NK cells were treated with a chemical JNK inhibitor (DB07268, 9 nM, 2 h) or DMSO (Beyotime, China), then cultured with the supernatants derived from HCT116 or HT29 cells to detect NK cell surface markers, or co-cultured with HCT116 or HT29 cells to detect the apoptosis of the CRC cells.
Antibody treatment
To explore the underlying mechanisms, NK cells were co-cultured with supernatants from CRC cells containing an anti-GDF15 antibody or control to detect NK cell surface markers, or co-cultured with anti-GDF15-antibody-pretreated HCT116 or HT29 cells to measure the apoptosis of the CRC cells.

[Fig. 2. CRC cells overexpressing ELFN1-AS1 impair NK cell cytotoxicity by downregulating NKG2D and GZMB. A Western blot analysis of epithelial-mesenchymal transition markers in ELFN1-AS1-knockdown CRC cell lines. B Representative flow cytometry gates for assessing the expression of receptors in NK cells. C Flow cytometry assessment of receptor expression in NK cells co-cultured with CRC cells. D Flow cytometry assessment of NKG2D and GZMB expression in NK cells co-cultured with CRC cells. E Annexin V-FITC/PI double-staining to detect the apoptosis of HCT116 and HT29 cells induced by NK cells stimulated with CRC cell supernatants. All data are from at least three independent experiments; *P < 0.05 indicates a significant difference.]

Co-immunoprecipitation (Co-IP) and RNA binding protein immunoprecipitation (RIP)

1 × 10⁶ CRC cells were seeded in 6-well plates, incubated at 37 °C with 5% CO2 for 24 h, and used to generate an IP lysate containing 10% PMSF after incubation for 30 min at 4 °C. The supernatant was collected and antibodies were added as follows: for Co-IP, anti-GCN5 and anti-SND1; for RIP, anti-SND1. The lysates were incubated with shaking at 4 °C overnight, after which magnetic beads coupled to protein A and G were added and incubated at 4 °C for 2 h. The protein precipitates were subjected to three washes with wash buffer, resuspended in loading buffer and boiled for 10 min. The captured proteins were separated by SDS-PAGE and then subjected to Western blotting. For RT-PCR assays, the RNA immunoprecipitated by the protein A/G beads was extracted with Trizol (Beyotime, China), followed by RT-PCR to detect ELFN1-AS1.
In situ hybridization and immunofluorescence
See supplementary information.
Detecting the interaction between exogenous GCN5 and SND1
5 × 10⁵ HCT116 cells, with or without ELFN1-AS1 silencing, were seeded into 6-well plates and cultured for 24 h at 37 °C and 5% CO2. Plasmids encoding Flag-GCN5 and HA-SND1 were co-transfected into the CRC cells using PEI. After 48 h, IP lysate containing 10% PMSF was added to the culture plates for 30 min at 4 °C, and the supernatant was collected after centrifugation at 12,000g. Then, 10 μl of magnetic beads coupled with anti-Flag or anti-HA antibodies was added to the supernatant and incubated on a shaker at 4 °C for 3 h. The magnetic beads were washed three times with pre-cooled wash buffer, resuspended in loading buffer and boiled for 10 min at 100 °C to verify the immunoprecipitation. The pulled-down proteins were separated by SDS-PAGE and subjected to Western blotting.
Statistical analysis
All data analyses were conducted using SPSS v18.0. Data are expressed as means ± standard errors of the mean (SEM). When data were normally distributed with homogeneous variances, the Student's t-test was used for comparisons between two groups, and one-way ANOVA followed by Dunnett's post-hoc tests was used for comparisons among three or more groups; when the data violated normality or homogeneity of variances, the Mann-Whitney test followed by Tamhane's T2 test was used for comparisons between two groups, and the Kruskal-Wallis test followed by Dunnett's T3 tests was used for comparisons among three or more groups. P < 0.05 was considered statistically significant. Statistical analysis was performed as described previously [23].
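The two-group test-selection logic described above can be summarized in a short sketch. The original analyses were run in SPSS v18.0; the Python/SciPy version below is an illustrative re-implementation for the two-group case only, with hypothetical data, and is not the authors' code.

import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    # Student's t-test when both groups are normal with homogeneous
    # variances (Shapiro-Wilk and Levene checks); otherwise Mann-Whitney.
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal and equal_var:
        return "Student's t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U test", stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(0)
ctrl = rng.normal(1.0, 0.2, size=12)  # hypothetical control values
kd = rng.normal(0.6, 0.2, size=12)    # hypothetical knockdown values
test, p = compare_two_groups(ctrl, kd)
print(test, p)  # P < 0.05 indicates a significant difference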
ELFN1-AS1 expression is frequently increased in CRC tissues and is associated with poor patient survival
We initially identified a significantly upregulated lncRNA, ELFN1-AS1, in CRC tissues from circlncRNAnet (Fig. S1A). ELFN1-AS1 has many alternatively spliced isoforms in different cancers (Fig. S2A). ORF Finder analysis indicated that ELFN1-AS1 is unable to encode protein (Fig. S2B); Fig. S2C, D show the sequence and secondary structure of ELFN1-AS1. Estimations of subcellular location indicated that ELFN1-AS1 is expressed predominantly in the cytoplasm, cytosol, ribosome and exosome (Fig. S2E). Cytoplasmic/nuclear localization analysis indicated that ELFN1-AS1 was localized to the cytoplasm in GM12878, HUVEC and K562 cells, whereas it localized to the nucleus in HeLa-S3 cells (Fig. S2F). Data from AnnoLnc2 indicated that ELFN1-AS1 was rarely expressed in normal samples (Fig. S1B) but highly expressed in colon adenocarcinoma (COAD), leukemia and ovarian serous cystadenocarcinoma (Fig. S1C). In multiple cancer cell lines, ELFN1-AS1 also exhibited high expression in comparison with normal cell lines (Fig. S1D). Analysis in GEPIA2 revealed that ELFN1-AS1 levels were significantly upregulated in COAD and rectum adenocarcinoma (READ) compared with normal tissues (Fig. S1E). Moreover, COAD and READ patients with high ELFN1-AS1 expression exhibited significantly lower overall survival (OS) than patients with low ELFN1-AS1 (Fig. S1F), although receiver operating characteristic (ROC) analysis indicated that ELFN1-AS1 had weak prognostic performance for overall survival (Fig. S1G). Together, these data indicated that ELFN1-AS1 expression was frequently increased in CRC samples compared with normal tissues and was associated with poor patient survival.

[Fig. 3. CRC cells downregulate NKG2D and GZMB expression in NK cells through JNK signaling. A Signal pathway array assessing the phosphorylation of several proteins in NK cells co-cultured with HCT116 cells. B Western blot analysis of JNK signaling activity in NK cells co-cultured with ELFN1-AS1-overexpressing or -knockdown CRC cells. C Flow cytometry assessment of NKG2D and GZMB expression in JNK signaling-blocked NK cells co-cultured with HCT116 or HT29 cells. D Annexin V-FITC/PI double-staining to detect the apoptosis of HCT116 and HT29 cells induced by NK cells treated with a JNK inhibitor. All data are from at least three independent experiments; *P < 0.05 indicates a significant difference.]
ELFN1-AS1 promotes CRC cell escape from NK surveillance in vitro and in vivo
CRC pathogenesis and NK cell activity are closely linked [24]. GEPIA2021 results indicated that activated NK cells were present at significantly higher levels than resting ones in normal tissues, whereas they were significantly suppressed in COAD and READ tumor tissues (Fig. S3A). Furthermore, in COAD and READ, patients with higher numbers of NK cells showed a longer disease-free survival (DFS, P = 0.02) (Fig. S3B) and overall survival (OS, P = 0.06, approaching significance) (Fig. S3C), while patients with a higher number of resting NK cells exhibited a worse OS (P = 0.05) (Fig. S3D) and DFS (P = 0.06, approaching significance) (Fig. S3E). In tumor and normal tissues of COAD and READ, ELFN1-AS1 tended to be negatively associated with CD56 (a surface marker of NK cells) (Fig. S3F), but was not significantly associated with CD16 (Fig. S3G). Based on these results, we sought to explore the relationship between NK cells and ELFN1-AS1 levels in CRC cells.
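The survival comparisons cited above rest on Kaplan-Meier estimation and log-rank testing. A minimal sketch of that style of analysis is shown below; it assumes a hypothetical per-patient table of follow-up time, event status and NK-cell grouping, and is not the pipeline used by GEPIA2021.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort: months of follow-up, death event (1 = observed),
# and a binary grouping by NK cell infiltration.
df = pd.DataFrame({
    "months":  [12, 30, 45, 8, 60, 22, 18, 50, 5, 40],
    "death":   [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
    "nk_high": [0, 1, 1, 0, 1, 0, 0, 1, 0, 1],
})

high, low = df[df.nk_high == 1], df[df.nk_high == 0]
kmf = KaplanMeierFitter()
kmf.fit(high["months"], event_observed=high["death"], label="NK high")

# Two-sided log-rank test for a survival difference between the groups.
result = logrank_test(high["months"], low["months"],
                      event_observed_A=high["death"], event_observed_B=low["death"])
print(result.p_value)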
NK cells from peripheral blood were isolated by negative magnetic separation (Fig. 1A), and we knocked down ELFN1-AS1 expression in HCT116 and HT29 cells using shRNA lentivirus or transduced lentivirus harboring the ELFN1-AS1 sequence into HCT116 and HT29 cells (Fig. 1B). To determine whether ELFN1-AS1 could affect the cytotoxicity of NK cells against CRC cells, ELFN1-AS1-overexpressing or -knockdown cells were co-cultured with the isolated NK cells. Colony formation assays and flow cytometry revealed that ELFN1-AS1 knockdown significantly decreased the colony formation capacity (Fig. 1C) and increased the apoptosis of HCT116 and HT29 cells (Fig. 1D). In contrast, more cell colonies (Fig. 1E) and a lower level of apoptosis (Fig. 1F) were observed when ELFN1-AS1 was overexpressed. Without co-culture with NK cells, the colony formation capacity and apoptosis of CRC cells showed no significant differences after ELFN1-AS1 overexpression (Fig. S3H, I). To further determine whether ELFN1-AS1 could enhance the immune escape of CRC cells from NK cells in vivo, we injected ELFN1-AS1-knockdown HCT116 cells into the flanks of BALB/c nude mice for tumorigenesis assays or into the lateral tail vein for pulmonary metastasis assays. Following CRC cell injection, NK cells were injected into the lateral tail vein to test natural cytotoxicity. As expected, in comparison with control cells, ELFN1-AS1-silenced HCT116 cells exhibited impaired tumorigenesis (Fig. 1G) and metastasis capacity (Fig. 1H). This demonstrated that the expression of ELFN1-AS1 in CRC cells is related to the natural cytotoxicity of NK cells against tumor cells. Taken together, these data indicated that high levels of ELFN1-AS1 promoted the immune escape of CRC cells from NK cells in vitro and in vivo.
ELFN1-AS1 overexpression in CRC cells impairs NK cytotoxic activity by downregulating NKG2D and GZMB
There is evidence that ELFN1-AS1 promotes the epithelial mesenchymal transition (EMT) of multiple tumor cells [25].
In the current study, we found that decreasing ELFN1-AS1 downregulated vimentin and E-cadherin in CRC cells (Fig. 2A).
Tumor cells with EMT alterations are also reported to influence the activity of immune cells [26]. Here, we collected supernatants from ELFN1-AS1-overexpressing HCT116 and HT29 cells, added them to NK cell cultures and analyzed the expression of NK cell receptors using flow cytometry (Fig. 2B). The levels of the activating receptor NKG2D and the effector granzyme B (GZMB) were significantly decreased in NK cells cultured with supernatants derived from ELFN1-AS1-overexpressing CRC cells, whereas other receptors showed no significant alterations (Fig. 2C). Moreover, supernatants derived from ELFN1-AS1-silenced CRC cells did not inhibit the appearance of NKG2D and GZMB on the surface of NK cells (Fig. 2D). NK cells stimulated with supernatants from ELFN1-AS1-silenced CRC cells had higher cytotoxicity than NK cells cultured with supernatant from control CRC cells (Fig. 2E). Taken together, these results suggested that CRC cells with high ELFN1-AS1 may impair the cytotoxicity of NK cells by downregulating NKG2D and GZMB expression.

[Fig. 4. ELFN1-AS1 promotes the expression and secretion of GDF15 in CRC cells to escape from NK cell surveillance. A Correlation between ELFN1-AS1 and GDF15 in CRC tissues (data from circlncRNAnet). B qRT-PCR and Western blot assessment of GDF15 mRNA and protein levels in ELFN1-AS1-overexpressing HCT116 cells. Following co-culture with CRC cells treated with an anti-GDF15 antibody, flow cytometry was used to assess NKG2D and GZMB expression in NK cells (C) and Annexin V-FITC/PI double-staining was used to detect the apoptosis of HCT116 and HT29 cells induced by the NK cells (D). All data are from at least three independent experiments; *P < 0.05 indicates a significant difference.]
ELFN1-AS1 in CRC cells downregulates NKG2D and GZMB expression in NK cells through JNK signaling
A signal pathway array showed that the phosphorylation of several proteins in NK cells was increased after co-culture with ELFN1-AS1-silenced CRC cells (Fig. 3A). Among these, JNK signaling has been reported to regulate the expression of NKG2D [27]. As expected, JNK1 phosphorylation in NK cells could be induced by co-culture with ELFN1-AS1-silenced CRC cells and suppressed by co-culture with CRC cells expressing high ELFN1-AS1 levels (Fig. 3B). When cultured with supernatant from CRC cells, the proportions of NKG2D+ and GZMB+ NK cells were significantly decreased among JNK inhibitor-pretreated NK cells compared with DMSO-treated cells (Fig. 3C). Consistent with this, blocking JNK signaling significantly reduced NK cell cytotoxicity against HCT116 and HT29 cells (Fig. 3D), indicating that NKG2D and GZMB expression in NK cells is primarily regulated by JNK signaling.
ELFN1-AS1 promotes the expression and secretion of GDF15 in CRC cells to escape NK cell surveillance
Tumor cells can impair immune cell activity through cytokine secretion, such as TGF-β [28]. We found that the expression of GDF15 (a secretory ligand protein belonging to the TGF-β superfamily) was positively correlated with ELFN1-AS1 in CRC tissues (Fig. 4A). ELFN1-AS1 was then confirmed to promote GDF15 expression in CRC cells (Fig. 4B). To evaluate whether ELFN1-AS1 in CRC cells regulated the activity of NK cells via GDF15, we added a GDF15-specific antibody to the supernatant of ELFN1-AS1-overexpressing CRC cells, which was then co-cultured with NK cells. NKG2D and GZMB expression in NK cells was suppressed by the supernatants from ELFN1-AS1-overexpressing CRC cells and significantly restored by the addition of the GDF15 antibody (Fig. 4C). Moreover, the NK cell-induced apoptosis of ELFN1-AS1-overexpressing CRC cells was significantly increased by GDF15 antibody pre-treatment (Fig. 4D). Taken together, these results indicated that ELFN1-AS1 in CRC cells promoted the immune escape of the cancer cells from NK cells by facilitating GDF15 synthesis and secretion.
ELFN1-AS1 regulates GDF15 expression in CRC by GCN5-mediated H3K9 acetylation
Histone modifications, such as H3K9ac, H3K14ac and H3K27me3, play key roles in gene regulation; for instance, GDF15 expression has been reported to be regulated by H3K27me3 [29]. In parallel with the GDF15 downregulation induced by ELFN1-AS1 silencing, the levels of H3K9ac and H3K14ac were decreased while H3K27me3 was unaltered in CRC cells (Fig. 5A). We therefore constructed HA-tagged H3, H3K9A and H3K14A mutant plasmids and transfected them into CRC cells. HA-tagged H3K9A significantly decreased GDF15 expression in comparison with wild-type H3, whereas H3K14A had no effect (Fig. 5B). H3K9ac modification is primarily regulated by the histone acetylase GCN5, but GCN5 expression was unchanged in ELFN1-AS1-silenced CRC cells (data not shown). We further observed a reduction of GCN5 on chromatin in CRC cells when ELFN1-AS1 was knocked down (Fig. 5C). Knockdown of GCN5 in CRC cells by RNA interference significantly downregulated GDF15 expression as well as H3K9ac (Fig. 5D). Similarly, GDF15 protein levels in the supernatants of GCN5-silenced CRC cells were significantly decreased (Fig. 5E). A dual-luciferase reporter experiment showed that the relative luciferase activity of a plasmid harboring the GDF15 promoter sequence was upregulated when co-transfected with Flag-GCN5 (Fig. 5F). Moreover, NKG2D and GZMB expression on NK cells was significantly increased after co-culture with the supernatant from GCN5-silenced CRC cells (Fig. 5G). Taken together, these results demonstrated that ELFN1-AS1 alters H3K9ac enrichment by regulating the recruitment of GCN5 to chromatin, and this process promotes GDF15 expression in CRC cells.
ELFN1-AS1 mediates GCN5-SND1 interaction leading to the alteration of H3K9ac in CRC cells
ELFN1-AS1 was present in both the nucleus and the cytoplasm of CRC cells, with higher levels in the cytoplasm (Fig. 6A); this was further confirmed by immunofluorescence tracking of ELFN1-AS1 in HCT116 and HT29 cells (Fig. 6B).

[Fig. 5. ELFN1-AS1 regulates GDF15 expression in CRC via GCN5-mediated H3K9 acetylation. Western blot was used to assess: A levels of GDF15, H3K9ac, H3K14ac and H3K27me3 in ELFN1-AS1-knockdown CRC cells; B levels of GDF15 in CRC cells containing H3K9A or H3K14A mutations; C levels of GCN5 in the cytoplasm or on chromatin in ELFN1-AS1-knockdown CRC cells; D levels of GDF15 and H3K9ac in GCN5-silenced CRC cells. E ELISA detection of GDF15 in the supernatants from GCN5-silenced CRC cells. F Relative luciferase activity after co-transfection of plasmids. G Flow cytometry assessment of NKG2D and GZMB expression in NK cells following co-culture with GCN5-silenced CRC cells. All data are from at least three independent experiments; *P < 0.05 indicates a significant difference.]

Interestingly, there is convincing evidence that GCN5 can be recruited to chromatin by SND1 [30]. We therefore measured interactions between endogenous GCN5 and SND1; in ELFN1-AS1-silenced CRC cells, this interaction was attenuated (Fig. 6C). Furthermore, immunoprecipitation assays revealed an impaired interaction between exogenous GCN5 and SND1 in ELFN1-AS1-silenced CRC cells (Fig. 6D). Moreover, we observed colocalization of GCN5 and SND1 in CRC cells with or without ELFN1-AS1 silencing (Fig. 6E). As with GCN5 silencing, GDF15 expression was significantly decreased by SND1 silencing in HCT116 and HT29 cells (Fig. 6F). Based on evidence that phosphorylated SND1 enters the nucleus from the cytoplasm, we determined whether ELFN1-AS1 could directly bind SND1 and at which domain. RNA binding protein immunoprecipitation assays revealed that the ELFN1-AS1 binding sites on SND1 localized to the SN2 domain (Fig. 6G). We also found increased NK cell-induced apoptosis in SND1-silenced CRC cells. Transfection of wild-type SND1, but not the SN2Δ truncation, successfully rescued the NK cell-induced apoptosis of CRC cells (Fig. 6H). Taken together, these results indicated that ELFN1-AS1 mediated the GCN5-SND1 interaction in CRC cells via binding the SN2 domain of SND1, resulting in increased GDF15 levels in CRC cells.
Discussion
LncRNAs are involved in multiple physiological and pathological processes, but the underlying mechanisms remain largely unknown [31-33]. In this study, our data showed that ELFN1-AS1 was significantly up-regulated in CRC tissues and promoted the immune escape of CRC cells from NK cells. GDF15 is secreted by CRC cells and was one of the key mediators of NK cell activity. We also determined that ELFN1-AS1 regulated the expression of GDF15 through the SND1-GCN5/GDF15 axis. These results indicated that ELFN1-AS1 plays an oncogenic role in CRC progression. Escape from immune surveillance is a pivotal feature of tumors with distant metastasis [34-36], and activation of immune surveillance is an important strategy for tumor-targeted therapy. NK cells play important roles in immune surveillance and can directly kill tumor cells via perforin and granzyme release [37,38]. There is increasing evidence that the numbers of infiltrating NK cells in tumor tissues are positively correlated with tumor patient survival [39-42]. Consistent with previous research, from TCGA data analysis we found that activated NK cell levels are decreased and resting NK cell levels are increased in CRC tissues, suggesting an altered proportion of NK cell subsets. In addition, a high proportion of resting NK cells also significantly correlated with a poor survival rate among CRC patients. These data indicated that the alteration of NK cell immunity induced by the CRC tumor microenvironment may be a major mechanism by which tumors escape immune killing.
Recent studies have reported that lncRNAs can regulate immune surveillance [43,44]. Here, we used gene microarrays and identified a specific lncRNA, ELFN1-AS1, that was upregulated in both CRC tissues and cells. ELFN1-AS1 has been linked to the development of multiple tumors such as esophageal [18] and ovarian [17] cancers. In colorectal cancer, ELFN1-AS1 expression was increased and promoted the proliferation and metastasis of tumor cells [21]. MYC-regulated ELFN1-AS1 may function in cell proliferation and the cell cycle by regulating MYC target genes [45]. Tumor immunity studies have also demonstrated that some lncRNAs induce immune cell dysfunction within the tumor microenvironment [46]. Notably, in this study, we found that ELFN1-AS1 tended to be negatively associated with the NK cell surface marker CD56 in COAD and READ, suggesting that upregulated ELFN1-AS1 may contribute to NK cell suppression. Both in vivo and in vitro, NK cell cytotoxicity was impaired after co-culture with CRC cells expressing high levels of ELFN1-AS1, implying that ELFN1-AS1 promotes the immune escape of CRC cells from NK cells. Moreover, we found that the NKG2D and GZMB receptors on NK cells were significantly downregulated and that JNK signaling in NK cells was inhibited after co-culture with CRC cells expressing high levels of ELFN1-AS1. JNK signaling is involved in the development and differentiation of immune cells [47]. NKG2D in NK cells can be activated by JNK signaling [48], and elevated NKG2D in turn induces activation of JNK kinase [49] and gradually activates JNK signaling pathways [50]. Inhibition of JNK MAP kinase also blocks granzyme B movement to the immune synapse [51], and the JNK pathway controls expression of CCL5, which is co-released with granzymes in NK cells [52]. Collectively, our results and previous reports suggested that ELFN1-AS1 in CRC cells might directly affect JNK signaling in NK cells to suppress the surface expression of NKG2D and GZMB, resulting in a marked deficiency in tumor cytotoxicity.

[Fig. 6. ELFN1-AS1 mediates the interaction of GCN5 and SND1, leading to the alteration of H3K9ac in CRC cells. A qRT-PCR assessment of the cytoplasmic and nuclear distribution of ELFN1-AS1. B Immunofluorescence staining of ELFN1-AS1 in HCT116 and HT29 cells. Co-IP was used to determine: C the interactions between endogenous GCN5 and SND1 in ELFN1-AS1-knockdown HCT116 cells; D the interactions between exogenous GCN5 and SND1 in ELFN1-AS1-silenced HCT116 cells. E Immunofluorescence staining of GCN5 and SND1 in ELFN1-AS1-silenced HCT116 cells. F Western blot assessment of GDF15 levels in SND1-silenced CRC cells. G RIP determination of the interaction between ELFN1-AS1 and different domain truncations of the SND1 protein. H Apoptosis of SND1-knockdown HCT116 and HT29 cells, rescued with wild-type SND1 or the SN2Δ truncation, after co-culture with NK cells. All data are from at least three independent experiments; *P < 0.05 indicates a significant difference.]
Previous studies have demonstrated that tumor cells secrete NK cell inhibitory factors such as TGF-β1 and other cytokines [53,54]. In our study, we verified that the level of GDF15 (a secretory ligand of the TGF-β superfamily) was regulated by ELFN1-AS1 in CRC cells. In cervical cancer cells, GDF15 directly promoted cell proliferation and significantly increased cell cycle progression [55]. Like TGF-β [58], GDF15 is associated with human NK cell dysfunction that leads to the immune escape of cancers [56,57]. In addition, GDF15 is a MYC target, and a GDF15/MYC/GDF15 positive feedback loop has also been verified [59]. Combined with the role of ELFN1-AS1 in MYC-regulated cell phenotypes, we considered GDF15 to be the secretory protein induced by ELFN1-AS1 in CRC cells. Our data also demonstrated that an anti-GDF15 antibody could reverse the inhibition of NK cells induced by CRC cells expressing high ELFN1-AS1 by restoring the activity of NKG2D and GZMB in NK cells. This suggested that GDF15 production is an important mechanism by which ELFN1-AS1 enables CRC tumor cells to avoid NK cell cytotoxicity.
We also explored the biological regulatory network between ELFN1-AS1 and GDF15 in CRC. Additional reports have indicated that EZH2 can impact GDF15 expression via H3K27me3, suggesting that histone modifications are involved in GDF15 regulation [60]. Histone modifications play a pivotal role in gene expression: H3K9ac has been correlated with active enhancers, H3K18ac is generally associated with active gene expression, and H3K27me3 is negatively correlated with transcript levels [61]. Our data indicated that ELFN1-AS1 regulated GDF15 primarily via the H3K9ac modification and not H3K14ac or H3K27me3. Histone acetylation promotes transcription by relaxing chromatin [62]; H3K9ac is regulated by the GCN5-SND1 complex and contributes to cancer development [63]. In this process, GCN5 is recruited to promoter regions to increase chromatin accessibility and acetylates H3 on the chromatin around double-strand breaks (DSBs) [64], while SND1 interacts with GCN5 and acts as a recruiter and coactivator [63,65]. Our data verified that silencing of ELFN1-AS1 attenuated the enrichment of GCN5 on chromatin in CRC cells, leading to a decrease in H3K9ac enrichment. SND1 is primarily located in the cytoplasm and is translocated into the nucleus following phosphorylation to form the GCN5-SND1 complex. Here, we identified an interaction between ELFN1-AS1 and SND1. The human SND1 protein contains four repeated staphylococcal nuclease-like domains (SN1 to SN4) and a downstream TSN domain (Tudor plus SN5 fragments). Consistent with this, our data demonstrated that the SN2 domain mediated the binding between SND1 and ELFN1-AS1, which was linked to GDF15 expression. Moreover, as expected, SND1 silencing in CRC cells directly downregulated GDF15 secretion, and co-culture with SND1-silenced CRC cells restored the cytotoxicity of NK cells against CRC cells. Silencing of GCN5 had similar effects, indicating that ELFN1-AS1 may mediate the production of GDF15 through the GCN5-SND1 complex. GEPIA2 results also demonstrated that the expression of GCN5, SND1 and GDF15 each correlated with ELFN1-AS1 (Fig. S4A-C), corroborating our direct experimental observations.
We also analyzed the expression and regulatory network of ELFN1-AS1 using RNA microarrays. HPA RNA-seq normal tissue analysis from LncBook indicated that ELFN1-AS1 is highly expressed in brain, rectum and stomach tissues (Fig. S4D), whereas in LncExpDB, elevated expression of ELFN1-AS1 was found in normal stomach, rectum and colon tissues (Fig. S4E). ELFN1-AS1 levels in ENCODE primary cell lines showed a maximum in transcripts per million (TPM) in kidney epithelial cells (Fig. S4F), indicating differing expression profiles of ELFN1-AS1 across tissues and cells. Methylation analysis also indicated that the methylation levels of the promoter (Fig. S5A) and body (Fig. S5B) regions of ELFN1-AS1 are both aberrant in CRC and READ compared with normal tissues, and expression of ELFN1-AS1 is also associated with sample type (Fig. S5C-D). These findings suggested that aberrant methylation might be responsible for the high expression of ELFN1-AS1 in CRC tissues. Co-expression and KEGG pathway analyses revealed that ELFN1-AS1 is involved in metabolic pathways and pathways in cancer (Fig. S6A-D), suggesting a major biological function for ELFN1-AS1 in the metabolism of CRC. Together, these observations describe an important role for both the direct action of ELFN1-AS1 on cancer cells and an indirect action on normal cells.
Despite the significant findings in our study, several limitations remain. The present investigation mainly focused on the ELFN1-AS1/GCN5-SND1/H3K9ac/GDF15 axis, but additional mechanisms involving ELFN1-AS1 may also exist in CRC. For instance, alterations in the cellular localization of GCN5-SND1 may exist and may also correlate with ELFN1-AS1 levels, and histone acetylation may be balanced by HATs and HDACs [66] in parallel with GCN5-SND1. Furthermore, the relationships between ELFN1-AS1 and the histological type, stage, and RAS/RAF-MSI status of CRC, as well as whether ELFN1-AS1 promotes long-range chromatin looping as CCAT1-L does in the activation of MYC [67], will be investigated in future studies. For further investigation, we will also consider more diverse metastasis models, such as the CRC liver metastasis model using intra-splenic injection [68]. | 2023-05-06T13:34:22.660Z | 2023-05-06T00:00:00.000 | {
"year": 2023,
"sha1": "3ed35c3f4032e374d089d01beafe92cacf455b63",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "3ed35c3f4032e374d089d01beafe92cacf455b63",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
154072363 | pes2o/s2orc | v3-fos-license | Review of medal predictions for South Africa in the Delhi 2010 Commonwealth Games
Objectives. This paper reviews South Africa's performance in the Delhi 2010 Commonwealth Games relative to predicted medal success. Methods. Forecasts based on the nation's previous success are compared against medals won in Delhi. Results. Actual performance is in line with predicted performance in terms of gold medals, but total medals won are below expectations. Conclusion. The findings are of potential value to relevant sports authorities, and follow-up research is proposed.
Introduction
This paper reviews South Africa's recent performance in the Delhi 2010 Commonwealth Games relative to the medal forecasts undertaken for the nation prior to the event. 1 The initial research was a relatively novel concept given that host nations in the Olympic Games have almost exclusively been the focus of performance predictions. 2-5
Moreover, forecasts often tend to be made on the basis of macroeconomic variables such as population and gross domestic product, 2,3 with little attention given to a nation's traditional sporting prowess.
Methods
The methodology used to make the forecasts for South Africa in Delhi 2010 is documented in the original research paper. 1 In short, the forecasts were based on different scenarios which took into account South Africa's previous performances in the event since rejoining the Commonwealth in 1994. Forecasts were constructed on a sport-by-sport basis and overall. Post Delhi 2010, the actual performance of South African athletes was scrutinised alongside the forecasts. This provided an indication of the accuracy of the predicted performance and the practical implications of the research.
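The scenario-based method itself is documented in the original paper. 1 Purely as an illustration of how a minimum-maximum range can be derived from past results, the Python sketch below takes the min/max of hypothetical historical gold-medal counts per sport; this is one plausible reading of a scenario approach, not the authors' actual model, and all figures are invented.

# Hypothetical gold-medal counts per sport across the 1994-2006 Games.
history = {
    "swimming":   [4, 6, 5, 7],
    "athletics":  [4, 5, 4, 6],
    "lawn bowls": [0, 1, 0, 1],
}

# Naive per-sport forecast range: [min, max] of past performances.
forecasts = {sport: (min(g), max(g)) for sport, g in history.items()}

# Overall range as the sum of the per-sport bounds.
overall = (sum(lo for lo, _ in forecasts.values()),
           sum(hi for _, hi in forecasts.values()))
print(forecasts)
print(overall)  # (8, 14) gold medals overall for these invented figures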
Results
Table I provides a comparative view of South Africa's performance in the Commonwealth Games on a sport-by-sport basis and overall, relative to the number of gold medals and total medals predicted. The data presented in Table I can be categorised into four clusters, as outlined below.
Cluster 1: Performance within the predicted range
South Africa's gold medal performance in Delhi was within the predicted range for 13 out of the 17 sports and overall. Moreover, the forecast was accurate at predicting how many total medals South Africa would win in 12 out of the 17 sports. Swimming, which was the sport in which South Africa won the majority of its medals in Delhi, features in this cluster.
Cluster 2: Performance below but proximate to the minimum forecast
The number of medals won by South Africa was one less than the minimum forecast in two instances in terms of gold medals and for one sport (weightlifting) in terms of total medals.
Cluster 3: Performance below the minimum forecast by at least two medals
In athletics, the forecast was for South Africa to achieve a minimum of four gold medals, whereas the actual number was two. In terms of total medals, performance was at least two medals below the minimum forecast for three sports (athletics, boxing and shooting) and overall. Athletics and shooting emerged as the sports in which South Africa most underperformed relative to the total medal forecast.
Cluster 4: Performance at least two medals above the maximum forecast
The actual performance in lawn bowls exceeded the expected maximum performance by two gold medals. A similar outcome was observed in wrestling in terms of total medals.
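A minimal Python sketch of this four-cluster grouping is given below. The cluster rules follow the definitions above; the forecast ranges are hypothetical placeholders except for the figures quoted in the text, since Table I itself is not reproduced here.

```python
# Sketch of the four-cluster grouping described above. classify() encodes
# the cluster definitions; the forecast ranges below are hypothetical
# placeholders except where figures are quoted in the text (athletics:
# minimum 4 gold, actual 2; overall: 12-15 gold, actual 12).
def classify(pred_min: int, pred_max: int, actual: int) -> str:
    """Assign a performance cluster relative to the forecast range."""
    if pred_min <= actual <= pred_max:
        return "Cluster 1: within predicted range"
    if actual == pred_min - 1:
        return "Cluster 2: one below minimum forecast"
    if actual <= pred_min - 2:
        return "Cluster 3: at least two below minimum"
    if actual >= pred_max + 2:
        return "Cluster 4: at least two above maximum"
    return "one above maximum (not a named cluster in the text)"

examples = {
    "athletics (gold)": (4, 6, 2),    # max of 6 is an assumed placeholder
    "overall (gold)": (12, 15, 12),
    "lawn bowls (gold)": (1, 2, 4),   # assumed range; actual = max + 2
}
for label, (lo, hi, won) in examples.items():
    print(f"{label:18s} -> {classify(lo, hi, won)}")
```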
According to the forecast, South Africa would win a gold medal in six sports and a medal of any colour in eleven sports. The matrix in Fig. 1 identifies the forecasted performance in the individual sports versus actual medal success in those sports. The top left quadrant of the matrix highlights sports in which South African athletes were not expected to win a medal and did not win a medal. The top right quadrant corresponds to sports in which medal success was not predicted but occurred. Looking at the top two quadrants, South Africa did not win any medals in the sports where success was not predicted. In other words, the forecast correctly predicted the sports in which South Africa would not win a gold medal or, indeed, any medal.
Sports that South Africa was forecasted to medal in that did and did not materialise appear in the bottom left and bottom right quadrants respectively. The accuracy in predicting sports in which South Africa would medal varied between the gold and total medal forecasts. For sports where a gold medal was predicted, the forecast accuracy was 50% (3 out of 6 sports). The corresponding statistic for total medals was 64% (7 out of 11 sports).
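The hit-rate arithmetic can be made concrete with a small sketch. The set memberships below are assumptions consistent with the totals quoted in the text (six sports predicted to win gold, three of which did, and no unpredicted gold), not the actual Table I assignments.

```python
# Illustrative tally of the Fig. 1 quadrants for the gold medal forecast.
# The two sets are hypothetical except for their sizes, which are quoted
# in the text: gold was predicted in 6 sports and won in 3 of them, and
# no gold was won in a sport where none was predicted.
predicted_gold = {"swimming", "athletics", "lawn bowls",
                  "cycling", "boxing", "shooting"}   # assumed membership
won_gold = {"swimming", "athletics", "lawn bowls"}   # assumed membership

hits = predicted_gold & won_gold        # bottom left: predicted and won
misses = predicted_gold - won_gold      # bottom right: predicted, not won
surprises = won_gold - predicted_gold   # top right: won, not predicted
print(f"hits={sorted(hits)}, misses={sorted(misses)}, surprises={sorted(surprises)}")
print(f"hit rate = {len(hits)}/{len(predicted_gold)} = {len(hits) / len(predicted_gold):.0%}")
```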
Conclusion
Attempting to forecast the likely performance of a non-host nation competing away from home in a major multi-nation sports event has been an interesting experiment. The analysis of the actual performance of South African athletes in Delhi has revealed some interesting points in relation to the accuracy of the predictions. The key findings are summarised below:
• The gold medal forecast was for South Africa to achieve between 12 and 15 gold medals in Delhi. They managed to win 12 gold medals, which falls within the predicted range.
• The forecast for total medals was 40-43, but South Africa won 33. The lower than anticipated success in athletics and shooting explains why their total medal count was below the predicted range.
• The forecast was more accurate at identifying those sports in which South Africa would not win a gold medal or any medal compared with sports in which it would medal.
The findings from the predictive element of the research and subsequent testing have two practical implications. First, the results of this research may be of value to relevant sports authorities in South Africa to identify how their athletes fared in Delhi 2010 relative to an independent appraisal of anticipated performance. Second, the research has provided an indication of the extent to which it is viable to use a nation's traditional performance in a sporting competition of international significance to predict future performance with reasonable certainty. Further research with a wider sample of nations and/or the same nation over time would help to further validate the findings from this research. Note: the predicted range for each sport is based on the minimum and maximum medal forecast for that sport across the three forecast scenarios. However, the 'overall' predicted range reflects the combined total medal count across all sports within each individual scenario. For this reason, the minimum and maximum values for each sport may not sum to the respective 'overall' figures.
TABLE I. South Africa's predicted and actual performance in Delhi 2010
Fig. 1. Predicted versus actual medal performance for South Africa by sport.
"year": 2010,
"sha1": "14b20a69974a562046c32ba23baa2ce9c5d7e5fa",
"oa_license": "CCBYSA",
"oa_url": "https://journals.assaf.org.za/index.php/sajsm/article/download/309/247",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "14b20a69974a562046c32ba23baa2ce9c5d7e5fa",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
CONTRIBUTION TO THE CROATIAN BOUNDARY SHAPING IN HISTORICAL AND CONTEMPORARY PERSPECTIVE
The authors address certain aspects of border shaping in the past and present on the territory of today's Croatia from a sociological and historical perspective. They seek to contextualise some of the circumstances under which borders between countries are established, maintained and changed. Borders are usually constructed to exclude the Others and/or strangers, and, as a social product whose meaning changes over time, they always determine belonging. On the one hand, they concern restrictions, divisions, conflicts and exclusion between ethnic/national groups, but also processes of expansion, inclusion and redefinition in accordance with political interests at local, regional, national and supranational levels. Using an interdisciplinary approach in a comparative perspective, the paper re-examines the roles and meanings of the marginality of Croatian borders in the early modern period, when, at almost the same places as today, borders represented a civilisational periphery and a confessionally defined barrier towards the Other (Antemurale Christianitatis). The analysis showed that the role of the border of the Republic of Croatia, among other things, contributes to the defence of Fortress Europe (the Schengen Area) against irregular migration, i.e. against migrants as the Others. On selected cases, the paper establishes that political elites, depending on their interests, shape and manage the border, making it (im)permeable to the movement of capital, services and people. At the same time, the everyday life of the local population at the border was often, and still is, at odds with the proclaimed policies of restriction imposed from above, turning the border from an obstacle into a place of exchange and cooperation. On the other hand, it was shown that the fragmentation of the European space and the strengthening of national interests lead to the strengthening of a "Europe of borders" instead of the proclaimed ideal of a "Europe without borders". Keywords: border shaping, border porosity, European periphery, Croatia, the Others, Antemurale Christianitatis, Fortress Europe
INTRODUCTION
The concept of borders seems central to social science and humanities research, acting as a sort of alternative to static cultural and socio-biological concepts, irrespective of the type of identity researched: ethnic, racial, national, gender or other. In a certain manner, borders symbolise the need for order, security and belonging, while also affecting individual and group behaviour. Among other things, they express the need of an individual to distinguish the known from the unknown (us from them), implying two universal features of human society - social inclusiveness and exclusiveness. As long as human beings strive for autonomy and self-direction, they will seek to create, maintain and transcend borders (O'Dowd, 2001, 67). For a better understanding of border shaping processes on the European periphery, it is important to reflect on their historical, socio-political, geographical and economic context. To comprehend and explain these processes, it seems important to use a multidisciplinary methodological and theoretical framework based on the insights of historical sciences and of other disciplines such as sociology. When it comes to border shaping on the European periphery, Croatia's experience provides an interesting example of intertwining historical and contemporary phenomena. In terms of the Braudelian concept of the long term (longue durée), the processes of formation and deconstruction of multinational states and empires as well as the redrawing of boundaries have defined the shape of borders in the countries of Eastern and South-eastern Europe for centuries. Borders always determine, and sometimes define, belonging (Campani, 2004, 59). According to R. Shields (2006, 225), boundaries are a space composed of physical and abstract fences that can be further viewed as abstract concepts or political measures. Apart from being merely physical, they are a combination of physical traces, legal provisions, government practices and cultural symbols. As put by Shields, although they can be removed or their significance may decrease, if not forgotten, they will continue to exist virtually and can potentially be updated in the future.
The construction of borders is connected with the processes of excluding the Others and/or Strangers. Individuals and groups form their own identity by demarcating themselves from the opposite identity in the interrelation of geographic, political, economic and social elements influencing the human awareness of oneself and one's own living environment. Identity is represented by auto-stereotypes, which stand in contrast to hetero-stereotypes, i.e. the images of the Others (Leerssen, 2009a, 179).
The identity of border societies was based on a treasury of cultural memories as a constituent element, but was also subject to the circumstances of a given period. Over the past decades, even some scientific disciplines have reflected the impact of ideologies through certain paradigmatic shifts, dichotomies or discrepancies, as is the case with the study of the development of the Croatian language.
The paper examines the roles and meanings of the marginality of Croatian borders in the early modern period, during which the borders represented the periphery of powerful neighbouring civilisations (the Venetian Republic, the Habsburg Monarchy and the Ottoman Empire) and a religious barrier to the Other, as well as in the contemporary era, where the present Croatian borders play a similar role within the European Union. Although the physical line separating two sides from each other is important for the displacement of borders, they often become blurred and porous.
The interdisciplinary approach has allowed the use of diachronic comparison to re-examine the roles and meanings of the marginality of Croatian borders from a historical and sociological perspective. The aim of the paper is to contextualise and compare broader state frameworks, whether imperial or supranational, which influenced the shaping of borders in the early modern period and in contemporary Croatia. The paper assumes that borders considerably reflect the needs of the ruling elites, both domestic and foreign, and facilitate the achievement of certain interests, both during the Habsburg period and in the contemporary era. Border analysis begins with the presentation of examples relating to the imperial, national, supranational (EU) and local levels. For that purpose, the paper relies on the concept of path dependency, which explains how macro-social phenomena, such as institutional structures, the distribution of power and political alliances, influence micro-social behaviour.
THEORETICAL AND CONCEPTUAL FRAMEWORK
The term granica in the Croatian language and its comparison with the English terms border and/or boundary require a semantic analysis, i.e. an examination of meaning. While the words borders and boundaries may to a certain extent be seen as synonyms in vernacular English, in the field of social sciences, especially in geography and sociology, borders and boundaries differ in meaning. The word borders refers to the line of demarcation between countries, states or some other territorial units, while the word boundaries refers to the boundaries between social and cultural groups. Within other disciplines, the known literature has sought to answer only a part of the complex issues relating to the complicated concept of boundaries. Available sociolinguistic, archaeological, anthropological, ethnographic, imagological, environmental, urban and numerous other studies of border societies, as were the societies in the historical Croatian lands, have only partially examined the complexity of the boundary phenomenon (Ziegler, 1998; Heršak, 2001; Leerssen, 2009b; Pageaux, 2009; Gvozdanović, 2010; Šarić, 2010; Petrić, 2012; Mlinarić, Miletić Drder, 2017). Anthropological and sociological studies have traditionally perceived the boundary as a social product whose meaning is subject to change (Eriksen, 1993).
In ethnicity research, F. Barth (1997) emphasises the boundaries separating cultural groups, and not the cultural traits of these groups. According to Barth, it is the ethnic boundary that is responsible for defining a group, rather than its cultural contents. Barth argues that ethnic groups should not be equated with cultural entities, as that would mean that borders between groups may be easily preserved and that ethnic groups are isolated entities, which is most often not the case. Ethnic boundaries do not divide groups completely because across these boundaries there is some interaction, commodity exchange and flow of information, which shows that such boundaries are not a natural fact (Barth, 1969; Eriksen, 1993).
Therefore, instead of emerging from an objectively defined culture, ethnicity is a result of political and historical processes of selection of those cultural elements which best serve the establishment of boundaries towards others (Sekulić, 2007, 352). As an advocate of social constructivism, in his book Ethnicity without Groups, R. Brubaker (2004) also points out that ethnic phenomena are not based on cultural differences; they are rather social constructs that are closely related to socio-economic and political structures. Actually, these structures are reflected on ethnicity by attaching to it certain social significance. The idea that the circumstances and social context, and not identities by themselves, are a deciding factor in the emergence of boundaries brings Brubaker closer to the situational understanding of ethnicity. L. O'Dowd (2001) deals with the analysis of contemporary European borders, emphasising that they can be perceived as barriers, bridges, resources and symbols of identity. A. Wimmer (2008) argues that ethnic boundaries cannot, however, be constantly redefined and altered arbitrarily, as suggested by radical constructivist interpreters of F. Barth. In his book Ethnic Boundary Making, Wimmer (2013) opposes the radical constructivist approach which perceives ethnicity as a fluid and situationally variable phenomenon with inconsistent dynamics of influencing the choices of identity. Moreover, he is interested in the circumstances under which people can develop a deep emotional connection based on ethnic and racial determinants and why some other forms of connection cause "instrumental" actions and feelings. According to Wimmer (2008), boundaries include both a categorical and a social or behavioural dimension. While the categorical dimension refers to the processes of social classification and collective representation, the social one is related to the everyday network of relationships resulting in the processes and actions of merging or separation. On the individual level, categorical and behavioural aspects appear as two cognitive schemas, referenced as us and them. Wimmer concludes that only when these two schemas overlap, i.e. when one's view of the world is aligned with one's way of acting in the world, can one speak of the existence of a social boundary (Wimmer, 2008, 975).
The concept of path dependency (Wimmer, 2008), which we rely on in this paper, allows us to analyse the influence of the macro level on the micro level and vice versa. The model is procedural and aims at answering many questions, such as: when does ethnicity become relevant to society and in what contexts, which strategies are followed by individual actors when establishing borders, why certain boundaries are politically significant, and to what extent boundaries correlate with cultural differences (Wimmer, 2008, 1010-1011). The stability of ethnic boundaries depends on the modes of transmitting ethnic membership. The most stable boundaries are found among peoples who identify individuals through multi-generational, unilineal (single-chain) descent lines, while the most unstable ones are those defined by behavioural rather than genealogical membership criteria (Wimmer, 2008, 984).
War conflicts in the Croatian territory during the 1990s and the subsequent processes of globalisation and European integration prompted researchers (Banovac, 2002; Katunarić, Banovac, 2004; Banovac, Mrakovčić, 2007; Gregurović, 2007; Katunarić, 2007; Sekulić, 2007; Valenta, Gregurović, 2015) to focus on the issues relating to ethnicity, ethnic identities and boundary shaping in multi-ethnic environments as well as on those associated with globalisation processes. Starting from a constructivist approach to collective (ethnic) identities, the authors mentioned are trying to illuminate and explain from various sides the processes that are responsible for the emergence and marking of today's borders and the formation of identities, in which historical and political processes seem to play a decisive role (Sekulić, 2007). 1
1 "…the present situation should be analysed as the outcome of contestation, classification and competition, as a result of historical and political processes" (Sekulić, 2007, 368).
NEW BOUNDARIES, OLD FOUNDATIONS
From today's perspective, early modern period boundaries seem like a multilayer network of cultural, political, legal, geographic, religious and numerous other demarcations that were present in the subsequent periods as well, albeit with modified terminology. The deconstruction and displacement of the boundaries enables us to perceive the patterns of their creation based on the centre-periphery opposition. 2 Both in the social and in the spatial sense, the boundaries of the Croatian lands reflected the whole complexity and significance of their geopolitical and geostrategic position. There should be a critical approach to the articulation of messages from old historical sources containing records of those boundaries, irrespective of the type of media that recorded them. Their multiple interpretative potential could be manipulated by the authors of the sources (maps, travel books, legal documents, etc.) who approached the historical facts "from the outside" and from the top down, especially if they were foreigners (Mlinarić, Miletić Drder, 2017, 48-52). The rules of the field as well as the subjectivity of the source are particularly obvious in the cartographic code of historical border records. Apart from various determinants of territorial demarcations throughout history, such as the distribution of tribes, the extension of the sovereignty of a certain master or dynasty, a single vernacular language or equivalent economic and social development, geographic space was also of great significance in defining the boundaries. An example is the medieval political space of the Croatian lands, which was separated by the right of sovereignty according to the principle of Cuius regio eius religio, the system of subjection of beneficiaries of another's land.
2 On the examples of moving the borders between the Ottomans and the Habsburgs on the Sava River (the 1699 Treaty of Karlowitz), suppressing the border south towards the Ottoman territory in parallel to the Sava (the 1718 Treaty of Passarowitz) and returning the border to the Sava River (the 1739 Treaty of Belgrade), both states developed a concept of defining their position and determining the physical border using markers, which were placed in the field by the Commission for Boundary Demarcation, in accordance with written agreements. Similar practices, recorded on the Ottoman-Venetian border in the hinterland of Dalmatia, following the demarcation from Linea Nani, via Linea Grimani to Linea Mocenigo, point to the strategic importance of the border zone as a shield of economic, demographic and, above all, political interests of the centres in Vienna, Venice and Istanbul (Mlinarić, Miletić Drder, 2017, 48-49; Slukan Altić, 2003, 215). The last-mentioned borders constitute the basis for the demarcation between Croatia and Bosnia and Herzegovina, functioning as present state borders.
Borders implied delimiting certain geographic elements of the landscape, following natural boundaries such as mountain ridges, river flows and the like, all in the Enlightenment spirit of attributing significance to natural entities. Natural boundaries should be neither overemphasised nor ignored. The elements of physical separation of some islands, such as Malta, Susak and others, have influenced the definition and shaping of their residents' identity. Unlike, for example, Central Europe, which is a classical example of an open landscape, unbounded by the sea or impassable mountains, the Croatian lands were, in fact, parts of larger geographical units, where this was possible (Šarić, 2010). Throughout history, borders have played a significant role in observation and classification, especially in the early modern period, when the knowledge of the world expanded together with the visual presentation of borders, with a gradual recognition of the disparity between natural and political and legal boundaries (Schmale, 1998, 50-75). Prior to this, borders indicated the restriction of powers of a local authority, while horonyms were used to mark traditional historical provinces, whose spatial coverage was determined by applying the principle of superimposition, or the position and scope of the space occupied by the name of a particular state or administrative unit on the map. Even the shape and the size of the letters could suggest the significance of an individual administrative unit (Slukan Altić, 2003, 47). The mapping of specific boundary lines coincided with the Ottoman conquests, although in cases where a certain neighbouring area was considered "undesirable", the boundary would not be marked by a boundary line, but was rather emphasised by some other notes. 3
A growing interest in specific boundary lines followed the Ottoman withdrawal from South-eastern Europe and the formation of demarcation commissions, which from the end of the 17th century marked the points agreed in the peace treaties in the field and drew them on precise maps. These were accompanied by detailed reports, such as those of Marsigli's commission, which worked on the basis of the decisions of the 1699 Peace of Karlowitz, about which there is precise and rich written documentation (Beigl, 1901). The border, especially in cartography, began to be promoted as an instrument of order and organisation in a political world defined by regulations (on the principles of the equality of peoples and sovereignty) only after the 1648 Peace of Westphalia, which established the international system of sovereign (national) states (Schmale, 1998, 50-75). At the same time, in cartography as a communication medium, borders are among the most challenging cartographic contents, testing the possibilities of the medium and setting realistic limits to its articulation of geographical realities (Mlinarić, Miletić Drder, 2017, 48-63). In contrast to the traditional binary oppositions in the hitherto orientalist understanding of the relationship between East and West, viewed through military, diplomatic or economic categories, the historiography of the early modern Mediterranean gives priority to the closeness and interdependence of socio-cultural processes across the entire area. Difference and Sameness in cultural categories, as explained by contemporary authors (Rothman, 2006, 119-120), were marked by the prevailing political interests in Central Europe (Habsburg interests) and in the Mediterranean (dominant Venetian patterns), while today the political interests of mainly Western European countries dominate.
3 An example is avoiding the charting of the boundary line with the Ottomans on Western maps, including the map of Croatia by Glavač (Stjepan Glavač: Zemljovid Hrvatske, 1673). Given that a map merely mimics historical reality, albeit in a very convincing manner, it is a suitable medium for the creation of hybrid "images of the past", including past boundaries that combine the elements of geographical and political reality in a subjective way, but also represent the author's imaginary and symbolic cognitive perception of the space (Mlinarić, 2014, 91-92). Another way of bypassing the truth (the Ottoman border) was the invention of maps which simultaneously combined the historical right of the Croats to govern the areas of former Catholic kingdoms.
Although beyond the narrow focus of this paper, the interim period of establishing national states in South-eastern Europe (in the 19th and early 20th century) was marked by the formation of modern national identities founded on ethnic principles of distinguishing between "us" and "the Others". The inclusion of ethnic Others in the national body implied their partial assimilation. The relevant examples are colonisation processes in Central Europe, in which the states sought not only to satisfy the socio-economic needs of the population but also to reinforce the ethnic and national positions and borders.
Although the European Union is constantly promoting a Europe without borders, borders are still crucial for reflection on the contemporary processes of European integration. They constantly emerge as a significant factor of economic, cultural and political development, representing, among other things, a semiotic system filled with mythical-magical images (Sidaway, 2005, 193). On the one hand, the notion of exclusivity relating to the new processes on the European borders, such as the concept of closing the borders (the phenomenon of the Schengen border), represents a new phenomenon, although certain traits and elements of these "novelties" have been obvious since the Middle Ages. This primarily refers to the superiority and tutelage of foreign sovereigns or political, economic, military and other patrons, ranging from the Holy League or the Military Frontier in the early modern period to the present NATO alliance or the EU. The concept of boundaries does not imply complete closedness and clarity; variations emerge in different societies and institutional contexts.
Along with the processes of European integration, the European Commission highlights the importance of strengthening European regions which, apart from carrying symbolic value, affect the development of physical infrastructure based on a regional approach to development. According to B. Banovac and M. Mrakovčić (2007, 339), unlike the European Union, the European regions have proved to be stronger integration mechanisms with a far more specific effect, given that the European Union lacks the integration potential that marked the national state in the 19th century. The authors argue that the integrating force of Euro-regionalism is particularly emphasised in border regions, where there are frequent cases of an ethnic or cultural group residing in several national territories. Thus, Euro-regionalism becomes an increasingly important mechanism of social integration in contemporary European integration processes (Banovac, Mrakovčić, 2007, 340). The strengthening of particular European regions is also encouraged from the macro level, i.e. by the European Commission. On the one hand, regions have a symbolic value, and on the other, they affect the construction of a physical infrastructure that is based on a regional approach to development. Apart from serving as spatial metaphors suggesting bridge building among the borderland residents, the Atlantic Arc, the Mediterranean region and the Baltic Sea region envisage and encourage the development of euro-routes, economic corridors and bridges that integrate the space of the European Union regardless of state borders. New frameworks for thinking about cross-border cooperation are being promoted at the EU level, which Scott terms a "visionary cartography" (Scott, 2002), indications of which may be found in preceding periods. 6
6 A relevant example is the Venetian appeal to the homogeneity of the residents on both sides of the Adriatic in providing resistance to the Ottoman Empire (Mlinarić, Gregurović, 2011, 360).
MULTIPLE DEMARCATION ON THE EUROPEAN PERIPHERIES
Everyday life of the people in border areas is structured around identity constructions depending on the constant presence of the Others, those "on the other side" of the border. The boundary line, which is variable throughout history, provides a steady reference framework for everyone residing in its vicinity. The division or sharing of borderland communities resulting from historical and political shifts such as wars, resettlements, migration, etc. is mostly manifested as socio-economic inequalities, even when it comes to the European countries. The dividing axes that previously generated historical tensions are still formed as points of social or economic inequalities, confirming the perpetuation of negative stereotyping or indifference (Meinhof, 2003, 789).
The definition of "the Other" depended on the multiple liminality of the Croatian lands i.e. on their marginal position/placement, but also on the overlapping imperial interests. Its prominence, distinction or even separation from the community and total marginalisation depended on whether it was a matter of distant or "external" Others (on the other side of the border) or those "internal", i.e. "ours" (Mlinarić, Gregurović, 2011, 353, 361). On the one hand, the conditions for establishing, maintaining and altering borders are related to conflicts, exclusion, and division between ethnic / national groups, and on the other, they concern the processes of expansion, inclusion and redefinition in accordance with political relations and interests. A determinant of distinction that used to be more important than ethnicity was linguistic or religious affiliation, political affiliation, different systems of subjection, as well as physical or geographic proximity (being a neighbour) to the Other. In terms of early modern Europe, "the Other" refers to the Ottomans in the neighbouring Bosnia and on the Croatian borders (including the Ottoman Serhat), but also to various social groups that were socially marginalised from the perspective of the elites, such as the Uskoks, refugees, serfs or the infected. Within the western system of "Ancien Régime", where the nobility functioned as a world of their own, the borders acted not as lines of separation but merely as variable exterior lines of feudal areas. Certain exogenous factors, such as wars, epidemics or systems of inheritance could interfere with the structure of social hierarchy. In addition, this hierarchy was distorted by the permanent endeavour of individuals to assume a higher social position within their social class or even beyond. The Others represented different categories of those who were perceived as closer or distant according to any of the criteria of membership or provenance, and a particular group's residence adjacent to the Others (most often the Ottomans) could also be perceived as a criterion for differentiation and classification as the Other.
Regardless of the ethnic conflicts and the war of the 1990s, the processes of demarcation and border creation in Croatia over the past thirty years have not been based exclusively on ethnicity (Gregurović, 2007); rather, they have mostly been conditioned by situational logic at the local level (Valenta, Gregurović, 2015) and by socio-cultural characteristics. The fundamental difference in the processes of establishing borders on the "European periphery" between the early modern period and today is that today, with rare exceptions (the war caused by the break-up of Yugoslavia, including the Homeland War of the 1990s in the Republic of Croatia), wars are no longer fought on the borders, which avoids direct demographic losses. Within the European Union, internal borders came to be perceived as barriers to the free flow of capital, goods, services and people which is essential to maintain the competitiveness of the EU economy on a global scale. The introduction of a single EU market reveals that borders act not only as economic but also as administrative, legal, political, cultural and even psychological barriers. However, the erasure of barriers to enable free movement of the labour force did not entail the end of "borders", but rather the creation of a new form of regulation at the EU and global level. One such regulation involves the restriction of access to labour markets which was introduced by certain "old" Western European member states in order to prevent a massive influx of workers from "new" member states. Workers from the new member states could work in other EU countries only if granted a work permit, with the maximum duration of the employment restriction amounting to seven years (2 + 3 + 2). The eastern and the southern border of the EU reflect a massive structural asymmetry juxtaposing different economic systems with different histories of economic development (O'Dowd, 2001, 70-73). The symbolism of Fortress Europe is pronounced when the border functions as a barrier to irregular migrants, refugees and asylum seekers. Restrictive policies and the rhetoric of anti-immigrant parties are in conflict with reality, given that immigration flows have not stopped even after some of the countries raised barbed wire fences and walls on their borders. In addition, in order to be competitive and maintain economic development, the EU requires the immigrant labour force. By introducing restrictive policies, the EU transforms borders into barriers to irregular migration, i.e. to migrants who flee their countries due to wars, hunger and poverty, while opening them to the categories of migrants which are needed in the EU labour markets. Although Croatia is not yet part of the Schengen Area, as a border country of the EU it is tasked with defending Fortress Europe, i.e. its external border, from the irregular entry of migrants who wish to migrate to the EU countries. Those migrants are mainly seeking asylum or some other form of international protection. In European societies, they are mostly perceived as the Others or Foreigners - those who do not belong. The regulation of borders emerges as a major issue in the internal politics of the EU and its member states (O'Dowd, 2001).
COEXISTENCE AND CROSS-BORDER COOPERATION ON THE BLURRED BORDERS
When cultural differences and ethnic boundaries overlap, there is a chance that one could amplify the other. However, in certain cases, boundaries can be blurred and unclear, eventually even disappearing, even where ethnic boundaries coincide with cultural differences (Wimmer, 2008, 983). There are several strategies that an individual may utilise in order to change their own position within the existing boundary system. Aside from assimilation and crossing over "to the other side", A. Wimmer (2008) also notes the strategy of blurring boundaries, which an individual uses to try to surmount ethnicity as the main principle of categorisation and social organisation. Another method of blurring ethnic boundaries is emphasising the civilisational similarities between the two entities on different sides of the boundary.
Throughout history, the population of the border areas has established roundabout systems of survival, especially in times of severe economic, nutritional and general existential threat to their lives. With the introduction of semi- or completely illegal commercial systems (such as smuggling - contraband), a form of a regulated cross-border economy was preserved, e.g. the transhumance of the Lika-Dalmatia (Habsburg-Venetian) borderlands. By circumventing the legal restrictions imposed from above, the local population of Dalmatia practised cross-border trade with the Ottomans or even engaged in cross-border smuggling and thus compensated for the shortage caused by Venetian customs on local resources (e.g. salt). While local groups of robbers and smugglers crossed the borders illegally, the transhumant cattle breeders even managed to legally bypass the dividing up of the territory between the two sovereigns by concluding cross-country agreements on grazing fees with owners of lands across the border (Zaduženja za travarinu, 1799). This raised the existing practice of individual border crossings and exploitation of goods in another country from the micro level to the macro level, and legalised it with formal agreements regarding fees for grazing. The porosity of state borders enabled additional activities conducted by priests in the Dalmatian hinterland, as they were allowed to visit the Catholic congregation in Ottoman Bosnia. They sometimes also served as informers to cartographers for drafting more precise maps of Bosnia, in a way similar to that in which other local monks, by knowing the language and local geography, succeeded in overcoming the barriers of poor knowledge of Croatian space in early modern cartography (Mlinarić, Miletić Drder, 2017, 20-23, 46, 59; Slukan Altić, 2003, 102). Borders were also subject to the porosity test in an economic and public-health sense during the establishment of sanitary cordons, when the military security of a hard border was opposed by its economic unprofitability and the purposes of preventive healthcare. An example of the complexity and ambivalence of borders were the early modern era sanitary cordons that separated the Austrian and Venetian lands from the Ottoman lands in Bosnia, as is visible on a map showing part of the Venetian-Ottoman cordon of 1795 (Karta sanitarnog kordona, 1795). They simultaneously prevented the commercial flow, the movement of people, goods and ideas but
also prevented the spreading of diseases and thereby "guarded" the West in a different manner. Depending on their interests, political elites, both then and now, balance their positions within the framework of political restrictions, conflicts and divisions, and pragmatically shape and manage, even manipulate, the border at different levels, making it more or less porous for the movement of capital, services, people and ideas.
Similar phenomena of "soft underbellies" and the porosity of the "green border" have been observed in more recent times as well, resulting in a search for alternative economic-commercial routes and corridors in the latest migrant movements on the Western Balkan route, but also in contemporary channels for the transport of illegal goods, from opiates to illegal migrants (URL 1). The rise of tolerance in everyday life on the early modern era border (micro level) was not directly connected to preserving cultural identity in today's sense as much as to pragmatism and the necessity of cooperation during scarcity and insecurity in times of crisis. In practice, coexistence of the early modern period population of the Croatian lands, which differed ethnically, religiously and culturally, usually took priority over imperial and border conflicts, violence and division. Thus, the border became a place of contact at least as much as one of division, with marked ethnic and religious pragmatism in everyday life. An example of this was religious identification or affiliation. Although faith was a significant factor in the life of the common man, the exact denomination as seen from "below" was a matter of choice and of the need for survival, while crossings of both physical (migration) and religious (conversion) boundaries were a common occurrence. Although strictly forbidden in Islam, the Christianisation of Muslims was also one of the manifestations of fright and insecurity in the borderlands, but also a form of tolerance and acceptance of the Other. A similar practice was employed on the other side of the border with conversion to Islam. The Christianisation of Muslims was primarily caused by the existential necessity of "fitting in" at the borderlands, and to a lesser extent by the subjective need of realising one's own religious identity, for example in Catholic Venetian Dalmatia. It was not sanctioned in the same way in the Dalmatian borderlands as in the "older" Ottoman lands. The conversion of former Christians to Islam in the Ottoman lands in the hinterland, apart from its economic and fiscal consequences, primarily entailed the acceptance of new rites, although a deeper spiritual transformation could still follow later (Rothman, 2006, 123-124). Continuous cross-border interaction could contribute to the creation of a we-feeling or a sense of a common "border" identity. This was, and still is, particularly pronounced in areas where state borders used to cut across a particular ethnic group or groups, so that cross-border ties are facilitated by the existence of trust. A shared past, in turn, facilitates future economic and political cooperation (O'Dowd, 2001, 75).
The marking of certain types of boundaries (ethnic, class, regional, gender, tribal, etc.) could also be influenced by the institutional environment (Wimmer, 2001). The institutional context is particularly important when it comes to nation-states because specific types of political institutions influence the processes of forming ethnic boundaries. Thus, Wimmer further notes that the contemporary mapping, maintenance and alteration of borders is based on ethnicity, race or nationality. He identifies the reason for this in the systematic homogenisation of the subjects in a cultural and ethnic sense by state elites. In the modern nation-states, only the territories inhabited by the "nation" should be integrated into the political community, i.e. the state. Defining ethnic boundaries of a nation is therefore a central political issue and the state elite are encouraged to follow the strategies of nation-building and the creation of minorities. This is why the political elites in Croatia used ethnicity as the basis for maintaining the so-called AVNOJ borders in the 1990s. Precisely because they wanted to systematically homogenise the population in a cultural and ethnic sense and thus make it easier to mobilise the masses to support the idea of creating an independent and sovereign state of Croatia. On the other hand, the ethnic and national homogenisation facilitated the military recruitment needed to defend the country from the aggressor.
Early modern period practices found solutions that bypassed the rules imposed by those in power on the two different sides of the border, mainly in the interest of the local population and with plenty of empathy towards the vulnerable population, which was a modus vivendi at the local level. At the macro-social level, this can be compared to the willingness of today's EU institutions to accept migrants, but also to enact certain
legal measures related to the distribution of the burden in accepting migrants through their relocation and the instating of a quota system. The EU members decide on the national level whether to adopt or reject decisions on the admission of migrants. While at the EU level we do not notice the same solidarity between member states and distribution of the burden when it comes to refugees and migrants, with individual countries particularly exposed to immigration pressure due to their geographic position, the situation is reversed at the micro-social level. Solidarity was particularly evident on the Balkan route, where representatives of the civil sector and self-organised citizens showed a great deal of empathy and solidarity with refugees, assisting them while travelling through and staying in the Croatian territory (URL 1, URL 2, URL 3).
CONCLUSION
What both modern state borders as well as those from previous periods have in common is that they are not only administrative and political in nature, but also represent economic, legal, cultural and even psychological barriers. On the other hand, they are places for cross-border interaction that often grows into cooperation and establishes a common borderland identity, thus facilitating economic and political cross-border cooperation.
The paper relies on the concept of path dependency, which explains the influence of macro-social phenomena such as institutional structures, distribution of power and political alliances on micro-social behaviour. When it comes to the micro-social level, both in the past and today, the border does not completely separate the groups living in its area, since social interaction, commodity exchange and information flow continue. Despite the patterns imposed from the top, everyday life has usually created its own rules, bypassed norms and made borders more porous than the rules of the times prescribed, and determinants of differentiation such as language, culture, religion, ethnic affiliation, etc., faded next to the importance of political and legal subordination in past times, and of political affiliation today. It can also be noted that the adoption of different strategies at the micro level sometimes fed back into the macro structures.
Bearing in mind its location on the boundary between the Mediterranean, Central and South-eastern Europe, the geostrategic position of Croatia has historically determined its development as well as the development of its borders. It was shown that the borders of the Croatian lands in the early modern period represented the periphery of powerful neighbouring civilisations (the Venetian Republic, the Habsburg Monarchy and the Ottoman Empire) and that the present Croatian border has a similar role within the European Union. In the early modern period, the border of the Croatian lands represented at the symbolic and formal level the periphery of western civilisation and also served as a shield from non-Christian influences. The barrier to the Other for the purpose of strengthening the bulwark (Antemurale Christianitatis) was its main feature. Today, in a similar way, the Croatian border should defend Fortress Europe from irregular migration, i.e. from the unwanted Others. Due to more intense controls at the EU's external borders (erection of wire fences, closing of border crossings, etc.) some established forms of cross-border cooperation are being abolished and border porosity is limited to the detriment of the local population.
Regardless of globalisation and the processes of European integration, the number of European states has not decreased, nor has the number of borders. The creation of the European Union and its further expansion is, among other things, a response to the increasing fragmentation of the European space and to the changes that have occurred in Europe with the multiplication of the number of nation-states. Despite the slogan Europe without borders, which was aimed at emphasising the importance of a single European market, what emerged is a Europe of borders. The European Union countries face many challenges regarding borders. The biggest challenge in recent times is the regulation of migration movements, particularly irregular migration and the migration of refugees. How Croatia, as a border country of the European Union that is still not part of the Schengen Area, will manage the border which is to become the external border of the European Union will partly depend on the historical experience of living on the border.
"year": 2019,
"sha1": "efb1fb57d71bd0911e8322154b5e5a26f622cc49",
"oa_license": "CCBY",
"oa_url": "https://morepress.unizd.hr/journals/geoadria/article/download/1506/3494",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a01d4429b930dc7ef63122fd5d6b09a2adabadd0",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
Myoglobin adsorption and saturation kinetics of the cytokine adsorber Cytosorb® in patients with severe rhabdomyolysis: a prospective trial
Background Rhabdomyolysis is a serious condition that can lead to acute kidney injury with the need of renal replacement therapy (RRT). The cytokine adsorber Cytosorb® (CS) can be used for extracorporeal myoglobin elimination in patients with rhabdomyolysis. However, data on adsorption capacity and saturation kinetics are still missing. Methods The prospective Cyto-SOLVE study (NCT04913298) included 20 intensive care unit patients with severe rhabdomyolysis (plasma myoglobin > 5000 ng/ml), RRT due to acute kidney injury and the use of CS for myoglobin elimination. Myoglobin and creatine kinase (CK) were measured in the patient's blood and pre- and post-CS at defined time points (ten minutes, one, three, six, and twelve hours after initiation). We calculated the relative change (RC, %) as $$RC = \frac{c_{post} - c_{pre}}{c_{pre}} \times 100$$ where $c_{pre}$ and $c_{post}$ are the concentrations measured before and after the adsorber (negative values indicate elimination). Myoglobin plasma clearance (ml/min) was calculated as $$CL = \text{blood flow} \times (1 - \text{hematocrit}) \times \frac{c_{pre} - c_{post}}{c_{pre}}$$ Results There was a significant decrease of the myoglobin plasma concentration six hours after installation of CS (median (IQR) 56,894 ng/ml (11,544; 102,737 ng/ml) vs. 40,125 ng/ml (7,879; 75,638 ng/ml); p < 0.001). No significant change was observed after twelve hours. Significant extracorporeal adsorption of myoglobin was seen at all time points (ten minutes, one, three, six, and twelve hours after initiation; p < 0.05). The median (IQR) RC of myoglobin at the above-mentioned time points was -79.2% (-85.1; -47.1%), -34.7% (-42.7; -18.4%), -16.1% (-22.1; -9.4%), -8.3% (-7.5; -1.3%), and -3.9% (-3.9; -1.3%), respectively. The median myoglobin plasma clearance ten minutes after starting CS treatment was 64.0 ml/min (58.6; 73.5 ml/min), decreasing rapidly to 29.1 ml/min (26.5; 36.1 ml/min), 16.1 ml/min (11.9; 22.5 ml/min), 7.9 ml/min (5.5; 12.5 ml/min), and 3.7 ml/min (2.4; 6.4 ml/min) after one, three, six, and twelve hours, respectively. Conclusion The Cytosorb® adsorber effectively eliminates myoglobin. However, the adsorption capacity decreased rapidly after about three hours, resulting in reduced effectiveness. Early change of the adsorber in patients with severe rhabdomyolysis might increase the efficacy. The clinical benefit should be investigated in further clinical trials. Trial registration ClinicalTrials.gov NCT04913298. Registered 07 May 2021, https://clinicaltrials.gov/study/NCT04913298. Supplementary Information The online version contains supplementary material available at 10.1186/s13613-024-01334-x.
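To make the two formulas concrete, the following minimal Python sketch evaluates them on illustrative inputs chosen to roughly reproduce the reported ten-minute medians; the concentrations, blood flow and hematocrit are assumptions, not patient data.

```python
# Minimal sketch of the two formulas from the abstract. All input values
# are illustrative and chosen only to land near the reported ten-minute
# medians (RC about -79 %, clearance about 64 ml/min); they are not
# patient data from the study.

def relative_change(c_pre: float, c_post: float) -> float:
    """Relative change in %; negative values indicate adsorption."""
    return (c_post - c_pre) / c_pre * 100

def plasma_clearance(blood_flow: float, hct: float,
                     c_pre: float, c_post: float) -> float:
    """Plasma clearance across the adsorber in ml/min."""
    return blood_flow * (1 - hct) * (c_pre - c_post) / c_pre

c_pre, c_post = 50_000.0, 10_400.0   # myoglobin in ng/ml, pre/post adsorber
blood_flow, hct = 120.0, 0.32        # ml/min and hematocrit (assumed)

print(f"RC        = {relative_change(c_pre, c_post):6.1f} %")   # about -79.2 %
print(f"clearance = {plasma_clearance(blood_flow, hct, c_pre, c_post):6.1f} ml/min")
```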
Introduction
Rhabdomyolysis is a serious condition characterized by muscle damage and lysis of skeletal muscle cells, which can lead to acute and chronic kidney injury, electrolyte disorders, hypovolemia, and acidosis [1-3].
A triad of creatine kinase (CK) elevation, myalgia/muscle swelling, and brown (tea-colored) urine can be observed in affected patients. Since there is no established definition for rhabdomyolysis, it is typically diagnosed by an increase of CK to five times above normal [4]. The triggers for rhabdomyolysis are diverse. It is most frequently caused by muscle damage from trauma, vascular occlusion, or physical overexertion. Furthermore, infections and sepsis, drugs, hypokalemia, or hereditary muscle disorders can lead to rhabdomyolysis [1,5]. Due to muscle damage, high levels of CK, myoglobin, urea, potassium, and other organic acids are released into the bloodstream, causing acidosis and electrolyte derangements.
The pathomechanism of AKI caused by rhabdomyolysis is complex and not fully understood. First, myoglobin can precipitate with the Tamm-Horsfall protein, particularly if the urine is acidic, thereby occluding the renal tubule system [11,12]. Second, direct renal toxicity of myoglobin mediated by hydroxyl radical formation has been described [2,5]. Third, myoglobin has a vasoconstrictive effect on renal arterioles, and intravascular volume depletion further aggravates AKI [1,2,13].
To date, there is no specific therapy for rhabdomyolysis. In addition to treating the cause of rhabdomyolysis, therapy includes adequate volume therapy, application of diuretics, urine alkalization and, if necessary, RRT. Since myoglobin seems to be the main cause of kidney damage in rhabdomyolysis, one therapeutic approach is its extracorporeal removal from the bloodstream. A few studies and case reports have already shown successful elimination of myoglobin through the use of a high-flux dialyzer or a medium-/high-cut-off dialyzer [14-16]. Another option is the cytokine adsorber Cytosorb® (CS). CS eliminates hydrophobic molecules up to a size of 55 kDa and is approved for the removal of cytokines, bilirubin, myoglobin, ticagrelor, and rivaroxaban. It is largely used in patients with hyperinflammatory conditions such as sepsis [17]. There are still few data describing the use of CS in patients with rhabdomyolysis. Dilken et al. presented a significant reduction of myoglobin (17 kDa) and CK, even though CK, with a size of approximately 80 kDa, should be beyond the adsorption spectrum [18]. Scharf et al. also showed a significant decrease of myoglobin by CS in a retrospective study of 43 patients with severe rhabdomyolysis [19].
To date, we do not have any information about the actual adsorption capacity and saturation kinetics of Cytosorb® for myoglobin. However, this information is of considerable relevance to ensure targeted therapy for these patients. Therefore, we conducted a prospective study to evaluate the adsorption performance and saturation kinetics of CS for myoglobin and CK. In addition, differences in the CS clearance with regard to the initial myoglobin concentration were another point of interest. The aim is to be able to assess the appropriate duration of use and to suggest changing intervals for effective therapy.
Study setting
This is a single-center, prospective observational exploratory study investigating the adsorption rate and saturation kinetics of CS for myoglobin and CK in patients with severe rhabdomyolysis. Patients were included between May 2021 and August 2022 during their stay in two intensive care units (ICUs) at the Ludwig-Maximilians-University hospital in Munich. The local institutional review board approved the study (registration number 21-0236). The study was registered with ClinicalTrials.gov (NCT04913298). Prior to inclusion in the study, written informed consent was obtained from patients or their legal representatives as approved by the review board.
Study population
Adult patients (≥ 18 years) with the need for continuous RRT due to anuric/oliguric acute kidney injury, diagnosed according to the KDIGO consensus criteria, and CS application were included [20]. In addition, patients had to be diagnosed with rhabdomyolysis and have plasma myoglobin levels > 5000 ng/ml. The only exclusion criterion was lack of consent from the patient or their legal representatives to participate in the study. The indication for CS application was at the discretion of the attending physician and independent of the study. As no prior data were available at the time of the study design, no formal sample size estimation was performed; the number of cases to be included in this exploratory study was set at 20 patients, which was expected to capture potential variability in adsorption capacity.
Blood sampling and characteristics of RRT
CS was installed post-dialyzer. The patients received treatment with continuous RRT with a multiFiltrate® device. Either continuous veno-venous hemodialysis (CVVHD) with citrate anticoagulation (CiCa) and an Ultraflux® AV 1000 S filter or postdilution continuous veno-venous hemodiafiltration (CVVHDF) with an Ultraflux® AV 600 S filter and heparin anticoagulation was used. Both filters have a cut-off of approximately 30 kDa [21]. Blood samples (EDTA tubes) were taken at the extracorporeal circuit directly before the cartridge (= pre-CS) and directly after the cartridge (= post-CS) at defined time points: ten minutes after the start of CS treatment, and one, three, six, and twelve hours after initiation. Furthermore, myoglobin and CK plasma levels were measured shortly before the initiation of CS and after six and twelve hours. EDTA-anticoagulated plasma was obtained by centrifugation of whole blood in the intensive care unit immediately after sampling. The separated plasma samples were immediately frozen and stored at -80 °C until measurement.
Laboratory measurements
Clinical chemistry parameters were tested in plasma using the standard clinical chemistry modular analyzer Cobas® 8000 (Roche Diagnostics, Mannheim, Germany) at the institute of laboratory medicine. Myoglobin was measured using a specific electrochemiluminescence immunoassay, and CK was quantified using a kinetic assay covering all isoforms.
Data collection
For data evaluation, demographic, clinical, and laboratory variables were collected from the laboratory and patient information systems. Different laboratory parameters were measured shortly before CS initiation as part of the clinical routine.
Statistical analysis
The statistical analysis was performed using IBM SPSS Statistics (Version 29.0, IBM Corp., Armonk, NY, USA). A paired t-test was used to compare the concentrations pre- and post-CS after testing for a normal distribution of the studied parameters (Shapiro-Wilk test). For variables without a normal distribution, the Wilcoxon test was performed. To compare differences in myoglobin elimination in patients with or without a very high baseline myoglobin (> 50,000 ng/ml), the U-test was used. The relative change (RC) of the parameters by CS at the different time points was calculated as $RC = ((c_{post} - c_{pre})/c_{pre}) \times 100$. In addition, the myoglobin plasma clearance of CS was calculated as $Clearance = Q_b \times (1 - Hct) \times (c_{pre} - c_{post})/c_{pre}$, where $c_{pre}$ and $c_{post}$ are the concentrations directly before and after the cartridge, $Q_b$ is the blood flow, and $Hct$ the hematocrit.
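To make the two formulas concrete, the sketch below shows how they translate into code; it is an illustration under assumed variable names and units (ng/ml, ml/min), not the study's actual analysis script.

```python
from scipy import stats

def relative_change(pre: float, post: float) -> float:
    # RC (%) across the cartridge; negative values indicate adsorption.
    return (post - pre) / pre * 100.0

def plasma_clearance(blood_flow_ml_min: float, hematocrit: float,
                     pre: float, post: float) -> float:
    # Plasma clearance (ml/min) of the adsorber cartridge.
    plasma_flow = blood_flow_ml_min * (1.0 - hematocrit)
    return plasma_flow * (pre - post) / pre

def compare_pre_post(pre_values, post_values, alpha: float = 0.05):
    # Paired t-test if the paired differences look normal (Shapiro-Wilk),
    # otherwise the Wilcoxon signed-rank test, mirroring the text above.
    diffs = [a - b for a, b in zip(pre_values, post_values)]
    if stats.shapiro(diffs).pvalue > alpha:
        return stats.ttest_rel(pre_values, post_values)
    return stats.wilcoxon(pre_values, post_values)

# Illustrative call: a pre/post pair chosen to reproduce an RC of about -79.2%.
print(relative_change(pre=56894, post=11834))      # approx. -79.2 %
print(plasma_clearance(100, 0.30, 56894, 11834))   # approx. 55.4 ml/min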
Demographic and clinical data
A total of 20 patients were included in the exploratory study. In 40% of the patients, the main diagnoses at admission to the ICU were major surgical procedures such as liver or lung transplantation or vascular surgery. A quarter (25%) of the patients were admitted due to acute respiratory distress syndrome (ARDS). The main causes of rhabdomyolysis were sepsis (30%) and compartment syndrome (20%). In 25% of the patients, the specific reason for rhabdomyolysis remained unknown. There was no myocardial infarction in any of the patients. The median age was 52 years and 75% of patients were male. The Simplified Acute Physiology Score II (SAPS II) on the day of CS treatment was 78 points and the 28-day mortality was 50%. All patients were either anuric or oliguric; the median urine output during CS treatment was 0 ml. In five patients, CS therapy was discontinued prematurely between six and twelve hours due to filter clotting (n = 3), death (n = 1), or a change of dialysis modality (n = 1). Detailed patient characteristics can be found in Table 1.
Myoglobin and CK plasma concentration before and during CS-application
The median (IQR) myoglobin plasma concentration was 56,894 ng/ml (11,544; 102,737 ng/ml) before initiation and 40,125 ng/ml (7,879; 75,638 ng/ml) after six hours, with no significant further change after twelve hours (Fig. 1).
Discussion
Severe rhabdomyolysis accompanied by high levels of myoglobin is a critical condition leading to acute kidney injury and, in consequence, to the potential need for renal replacement therapy [5,7]. Since myoglobin in particular appears to be the main cause of kidney damage in rhabdomyolysis, one therapeutic approach is the extracorporeal removal of myoglobin [1]. In addition to causal and supportive therapy, various modalities of RRT as well as different dialyzers (high-/medium-cut-off) or the Cytosorb® adsorber cartridge have been evaluated for myoglobin elimination in the past [14-16,22]. Considering that there are few analyses elaborating on the use of CS in rhabdomyolysis, especially in terms of adsorption capacity and saturation kinetics, this study was performed.
CS is approved for use in rhabdomyolysis and its ability to eliminate myoglobin in patients with rhabdomyolysis has been previously demonstrated [19,23]. This is consistent with our results, showing a significant extracorporeal reduction of myoglobin at all time points. Although effective myoglobin clearance occurs in the first three hours after CS initiation, our analysis showed a rapid decline in myoglobin clearance after three to six hours. This is reflected in the patients' plasma myoglobin concentration, which drops after six hours but rises to concentrations equal to or greater than baseline after twelve hours. These results imply rapid saturation of CS, leading to inefficient adsorption for the remaining time of usage. Furthermore, the rapid saturation of CS is even more recognizable when considering patients with very high myoglobin concentrations. When we divided our cohort into two groups (initial myoglobin below vs. above the median baseline concentration of 56,894 ng/ml), significantly lower clearances at one, three, and six hours were observed in the group with higher baseline myoglobin concentrations.
Recently, Albrecht et al. compared the myoglobin clearance of a high-cut-off dialyzer alone (n = 4) and in combination with CS (n = 4). They also describe early CS saturation, with a median relative reduction of only 18% after two hours, which is quite similar to our results [23]. On the contrary, Albrecht et al. describe no difference in the velocity of CS saturation in patients with a high baseline myoglobin concentration; however, there was only one patient with myoglobin levels > 30,000 ng/ml and only four patients in the entire cohort [23]. Nevertheless, there is a rapid saturation of CS not only for myoglobin, but also for other substances such as bilirubin and bile acids [24]. Dilken et al. therefore changed the CS after twelve hours, as they noticed saturation with ongoing rhabdomyolysis in their patient, which led to a further decrease in myoglobin [18].
High-cut-off dialyzers such as EMiC®2 are also suitable and approved for myoglobin elimination. Weidhase et al. reported significantly higher myoglobin plasma clearance with high-cut-off CVVHD (EMiC®2) compared to high-flux CVVHDF (Ultraflux® AV 1000 S). The advantage was a constant myoglobin plasma clearance of approximately 8 ml/min over 24 h [15]. In contrast, we observed a far higher myoglobin plasma clearance by CS in the first hour of application, which rapidly decreased to < 8 ml/min after only six hours. Consequently, a shorter change interval should be discussed, for example after three to six hours instead of after twelve to twenty-four hours, as the manufacturer advises [25]. However, the side effects of CS application, such as possible adsorption of anti-infective agents, a reduction in platelet count, and a decrease in albumin concentration, as well as higher costs due to more frequent changes, should also be taken into account [26-32].
Apart from the elimination properties of the different devices, the question of clinical benefit should be addressed. Gräfe et al. raised the question of whether myoglobin elimination with CS integrated into RRT might lead to faster kidney recovery compared to RRT alone in a propensity-score-matched cohort. They observed a significantly higher probability of kidney recovery (3.0 [0.6; 5.9]) and significantly lower myoglobin levels in patients receiving CS therapy [33]. Most recently, de Fallois et al. compared conservative management of rhabdomyolysis (without RRT) with extracorporeal therapies using different modalities, dialyzers, and an adsorber [34]. There were no significant differences in myoglobin reduction between the RRTs or between RRT and conservative treatment, but no information was given on the changing interval of CS [34]. In fact, patients without the need for RRT had the highest rate of myoglobin reduction, so preserving patients' own renal function should be the primary goal in patients with rhabdomyolysis [34]. Therefore, CS therapy as a stand-alone device should be discussed in the future to perhaps prevent the kidney-damaging effects of myoglobin. However, currently no data exist on the use of CS as a stand-alone device in the context of rhabdomyolysis, and future studies would be desirable. Of course, the risks of extracorporeal procedures such as catheter infection, bleeding, and thrombosis must be considered, as well as device-associated side effects and complications [35,36].
Fig. 4 Myoglobin plasma clearance of group 1 (red), group 2 (brown), and the whole cohort (yellow)
Notwithstanding that CK, with a molecular weight of approximately 80 kDa, should lie beyond the adsorption spectrum of CS (up to 60 kDa), extracorporeal elimination of CK was measured. There was a significant extracorporeal decrease of CK after ten minutes and one hour. However, the RC of CK was already considerably lower than that of myoglobin immediately after the installation of CS, and dropped to almost zero after three hours. Dilken et al. and Moresco et al. both describe a successful reduction of the plasma concentrations of myoglobin and CK in case reports [18,37]. Albrecht et al., who also analyzed extracorporeal samples, likewise showed a short-lived but measurable relative reduction of CK [23]. Therefore, a broader adsorption spectrum of CS than previously assumed should be considered and verified, especially with regard to further side effects.
To the best of the authors' knowledge, this is one of the first prospective studies to quantify extracorporeal myoglobin and CK adsorption by the CS cartridge itself. In summary, there is significant extracorporeal elimination of myoglobin, but rapid saturation of CS leads to ineffective adsorption after three to six hours. An even faster decline of the myoglobin clearance was detected in patients with very high myoglobin levels. These findings are important in order to improve the efficacy of CS in patients with rhabdomyolysis in clinical practice. Early change of the adsorber seems to be crucial to avoid ineffective adsorption due to saturation, especially in patients with very high myoglobin levels. Therefore, serum myoglobin concentrations could be monitored at shorter intervals during CS therapy in order to respond to rising myoglobin levels. However, with more frequent adsorber changes, clinicians should be aware of an increased risk of side effects such as adsorption of anti-infective agents, a reduction in platelet count, and a decrease in albumin concentration [26-32]. Also, no significant reduction in CK can be expected from CS therapy.
This study has several limitations. First, the cohort of 20 patients is small and inhomogeneous, since the reasons for rhabdomyolysis were quite diverse. However, despite the various causes, all patients had comparatively very high myoglobin levels, and this is the largest prospective study in this field to date. In addition, the study objective was achieved with the patients included in the exploratory study. Second, both CVVHD and CVVHDF were used as dialysis modalities, yet this should have no impact on the myoglobin elimination of CS itself, as the samples were collected extracorporeally directly before and after CS and are therefore unaffected by possible myoglobin elimination by the dialyzer. Furthermore, some patients showed minor urine production during CS application, but the effect on plasma myoglobin should be negligible for output of this magnitude. Since this study focused on elimination and saturation kinetics, the influence of CS on patient outcomes remains uncertain. Therefore, future randomized controlled clinical trials are needed to demonstrate the benefit of CS or other devices for myoglobin removal (e.g., EMiC®2) on the outcome of patients with severe rhabdomyolysis.
Conclusion
The Cytosorb® adsorber effectively eliminates myoglobin. However, the adsorption capacity decreases rapidly after about three hours, resulting in reduced elimination. Early change of the adsorber in patients with severe rhabdomyolysis, especially in patients with very high myoglobin levels, might increase the efficacy. Therefore, and in order to investigate a clinical benefit of the therapy, further randomized controlled studies are necessary.
Fig. 1 Plasma concentrations of myoglobin and CK at the defined time points. Note: Plasma concentrations of myoglobin and CK before initiation, and six and twelve hours after CS. The boxes of the boxplots represent the interquartile range (IQR) and the line the median. Whiskers were limited to 1.5 times the IQR. The cross represents the mean.
Table 2
Myoglobin plasma clearance of the two groups | 2024-06-22T13:03:06.017Z | 2024-06-22T00:00:00.000 | {
"year": 2024,
"sha1": "4004bd9a67f25ffd7be30072161f0ad3999d3cf1",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1dd1691109bc0ef028b93444516c3879d4084191",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268417223 | pes2o/s2orc | v3-fos-license | Discovering and Articulating Frames of Communication from Social Media Using Chain-of-Thought Reasoning
Frames of Communication (FoCs) are ubiquitous in social media discourse. They define what counts as a problem, diagnose what is causing the problem, elicit moral judgments and imply remedies for resolving the problem. Most research on automatic frame detection involved the recognition of the problems addressed by frames, but did not consider the articulation of frames. Articulating an FoC involves reasoning with salient problems, their cause and eventual solution. In this paper we present a method for Discovering and Articulating FoCs (DA-FoC) that relies on a combination of Chain-of-Thought prompting of large language models (LLMs) with In-Context Active Curriculum Learning. Very promising evaluation results indicate that 86.72% of the FoCs encoded by communication experts on the same reference dataset were also uncovered by DA-FoC. Moreover, DA-FoC uncovered many new FoCs, which escaped the experts. Interestingly, 55.1% of the known FoCs were judged as being better articulated than the human-written ones, while 93.8% of the new FoCs were judged as having sound rationale and being clearly articulated.
Introduction
The way in which we interpret information depends on how the information is framed (Entman, 2003; Reese et al., 2001; Scheufele, 2004; Chong and Druckman, 2012; Bolsen et al., 2014). For instance, if information about vaccines is framed to build our confidence in them, we can become vaccine enthusiasts. The notion of Frame of Communication (FoC) has emerged from the Theory of Communication, studied in social sciences. Discovering FoCs is challenging because the FoCs are not directly expressed in texts; rather, texts evoke them, as shown in Figure 1. Framing entails emphasizing specific aspects of a topic within a text, guiding the audience towards a particular understanding.
For the text illustrated in Figure 1, which is part of the discourse about COVID-19 vaccines on social media, the selected aspects are (1) the calculation people make about the personal costs and benefits of getting vaccinated; and (2) the complacency about getting vaccinated due to a low perceived risk of infection. These aspects can be interpreted as problems related to vaccination. The two problems become salient to the FoC evoked by the text illustrated in Figure 1. In a widely cited definition, Entman (1993) notes that "to frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described." This means that, as a minimum, in addition to discovering the salient aspects of an FoC, we need to promote a causal interpretation of these aspects by articulating the FoC. In the FoC evoked by the text illustrated in Figure 1, the problem of calculation is caused by the preference for getting COVID-19 and fighting it off. The problem of complacency is caused by the assumption that getting COVID-19 is preferable to getting vaccinated. The final articulation of the FoC coherently combines both of these causal interpretations of the problems. We note that the articulation of an FoC expresses the reasons (or causes) of salient problems, but it does not explicitly mention the problems; instead, it implies them. Therefore the articulation of an FoC is a much harder NLP task than the discovery of FoCs and their salient problems.
Previous research addressing the problem of FoC discovery (Card et al., 2016; Naderi and Hirst, 2017; Field et al., 2018; Khanehzar et al., 2019; Kwak et al., 2020a; Mendelsohn et al., 2021) focused only on the discovery of the salient problems implied by FoCs. This was due to the release of the Media Frames Corpus (MFC) (Card et al., 2015), which annotates fifteen dimensions of policy frames, addressing such problems as Constitutionality and Jurisprudence or Security and Defense. It is important (1) to discover when an FoC is evoked by a text; and (2) to be aware of which salient problems¹ are highlighted. However, without articulating the FoC, we cannot infer how the text should be interpreted. Moreover, without articulating FoCs, we ignore the many ways in which the same problem is framed in all texts that address it. But, as reported in (Van Gorp, 2010; Walter and Ophir, 2019; Vreese, 2005), the communication literature mostly addresses inductive vs. deductive frame analysis, from which human inference of the articulation of the FoCs emerges. We believe that the reasoning capabilities of Large Language Models (LLMs) enable the automatic articulation of FoCs. This motivated us to design a method for Discovering and Articulating FoCs (DA-FoC).
Evidently, articulating FoCs involves reasoning with the problem(s) addressed in texts. Moreover, each articulated FoC must be relevant, i.e. multiple texts should evoke it (Gamson, 1989). Therefore, discovering and articulating FoCs must consider that (1) FoCs may address one or more salient problems; (2) the FoC articulation needs to provide a rationale for each salient problem; and (3) the articulated FoC should be relevant. These requirements are very burdensome even for communication experts, who typically rely on codebooks emerging from their reasoning and painful inspection of large quantities of texts (Kwak et al., 2020b; Russell Neuman et al., 2014; Reese, 2007; Matthes and Kohring, 2008).
¹The dimensions of the Media Frames Corpus correspond to the problems highlighted by an FoC. The notions of Frame of Communication and Media Frame are used interchangeably in Communication Theory (Chong and Druckman, 2007).
The recent ability of LLMs to perform complex reasoning provides an unprecedented opportunity for using them to simultaneously discover and articulate FoCs. In this paper we explore how Chain-of-Thought (CoT) prompting (Wei et al., 2022b) of LLMs can be used to reveal not only the problems addressed in texts but also the articulation of the FoCs. In addition, the CoT framework we used for DA-FoC benefits from in-context active curriculum learning, allowing the LLM to learn from its own mistakes. Because many FoCs discovered and articulated in this way may paraphrase each other, or may be specializations of other FoCs, we also used CoT prompting to discover relations between FoCs. The relations between FoCs enabled us to select only the FoCs that are relevant.
In designing our DA-FoC method, we focused on social media platforms where millions of users express their opinions and participate in conversations about issues of their interest. In their Social Media Postings (SMPs), users often select particular aspects, or problems, of an issue, revealing the reasons for their interest in the problem. In doing so, they evoke FoCs, as shown in Figure 1. In addition to using only SMPs, which present the advantage of text brevity, we considered only the discovery and articulation of FoCs regarding COVID-19 vaccines. This allowed us to rely on knowledge about salient problems characterizing vaccine hesitancy, reported in Geiger et al. (2021). It also allowed us to make use of the only reference dataset having expert-annotated FoCs which are articulated. In Weinzierl and Harabagiu (2022), 14,180 SMPs were expert-annotated with 113 FoCs. We have enriched this dataset by asking communication experts to also judge which of the problems reported in Geiger et al. (2021) were implied in each FoC. Using this enriched dataset allowed us to train and test DA-FoC and to make the following contributions: (1) We introduce the first method that not only discovers FoCs from texts available in SMPs, but also articulates the FoCs by using CoT prompting of Large Language Models (LLMs) with In-Context Active Curriculum Learning (ICACL), a promising new method for prompting LLMs. (2) We describe the first method of discovering relations between FoCs, identifying paraphrases, specializations, and contradictions between them. We make available all prompts, annotations, articulated frames, and relations discovered between frames on GitHub². (3) A by-product of our method is the identification of all social media postings evoking the same FoC, which informs its relevance. (4) We present the first DA-FoC method which uncovers not only many of the frames identified by experts on the same dataset, but is also capable of uncovering many new frames, which are both clearly articulated and sound.
Because FoCs are known to be influential in shaping public opinions, the discovery of frames and their articulation can inform the messaging used in various communication interventions. For example, knowing which FoCs contain misinformation about vaccines is crucial to interventions meant to inoculate the public against misinformation. The discovery of FoCs will also impact argumentation mining, an NLP area that has recently received plenty of interest (Palomino et al., 2022; Sun et al., 2022; Ziegenbein et al., 2023).
Reference Dataset
To our knowledge, the only existing dataset of SMPs annotated with FoCs is COVAXFRAMES, reported in Weinzierl and Harabagiu (2022). This dataset includes FoCs related to COVID-19 vaccination hesitancy. Vaccine hesitancy, as reported in Geiger et al. (2021), is characterized by seven factors, or problems, that increase or decrease an individual's likelihood of getting vaccinated. For each of the FoCs annotated in COVAXFRAMES, four researchers annotated the problems that they address. The problems are listed in Table 1 along with their definitions and the number of FoCs addressing each problem. The researchers obtained a very high inter-annotator agreement of 81%, with the remaining disagreements adjudicated through discussions. The newly annotated dataset became the reference dataset used by the method described in Section 3 and Section 4. The same training and testing splits were utilized as in Weinzierl and Harabagiu (2022).
The DA-FoC Method
The DA-FoC method has three distinct phases. In Phase A, FoCs are discovered and articulated using the CoT prompting with In-Context Active Curriculum Learning (CoT-ICACL) framework illustrated in Figure 2. Since we noticed that some of the FoCs articulated in Phase A are paraphrases, some FoCs are generalizations/specializations of other FoCs, and some FoCs contradict each other, we used the same CoT-ICACL framework in Phase B to discover possible relations between FoCs. Because in Phases A and B we do not account for FoC relevance, in Phase C we tackle this necessary property, selecting the final set of FoCs.
Chain-of-Thought Prompting with In-Context Active Curriculum Learning
We considered the option of using CoT prompting of an LLM in three scenarios: 1. In a zero-shot learning scenario, the LLM prompt describes the task: in Phase A of the DA-FoC method, as detailed in Section 3.3, this involves the description of the task of FoC discovery and articulation, while in Phase B, as detailed in Section 3.4, this involves the definition of possible relations between the FoCs discovered in Phase A as well as the task of discovering them. This scenario is represented by Step 1 illustrated in Figure 2. However, the task of discovering and articulating FoCs is difficult because it requires not only knowledge, but also expert reasoning, as evidenced in the frame coding literature (Kwak et al., 2020b; Russell Neuman et al., 2014; Reese, 2007; Matthes and Kohring, 2008). Capturing the causal reasoning required by the articulation of FoCs or by the recognition of relations spanning FoCs is not possible in this scenario.
2. In a few-shot learning scenario, which corresponds to Steps 1-3 from Figure 2, following the task-specific prompting, we provide initial demonstrations of how the task is performed. Clearly, these demonstrations present how phase-specific tasks are resolved and involve examples from the training data, as detailed in Section 3.3 and Section 3.4, respectively.
Step 3 ends the few-shot learning, prompting the LLM to discover and articulate FoCs or to identify relations between FoCs, providing also their rationales. But LLMs typically have a very restricted context length, which means only a few demonstrations may be provided to an LLM for in-context learning. Additionally, we need to decide the order in which the demonstrations are presented to the LLM, since this order can have a significant impact on performance (Dong et al., 2023; Zhao et al., 2021; Brown et al., 2020). As shown in Elman (1993) and Bengio et al. (2009), this entails learning from a list of examples ordered by values of difficulty. For this purpose, we relied on two hypotheses: Hypothesis 1: In Phase A of DA-FoC, when modeling the difficulty of discovering FoCs evoked by SMPs, our hypothesis was that the more similar the language of an FoC is to the language of the SMP that evokes it, the easier it is to discover, articulate, and explain the rationale for the FoC. We experimented with measuring the similarity between an SMP_i and an FoC_j by considering (a) Sentence-BERT (SBERT) (Reimers and Gurevych, 2019); (b) BERTScore (Zhang* et al., 2020); (c) the Cross-Encoder introduced by Nogueira and Cho (2020); and (d) Misinfo-GLP (Weinzierl and Harabagiu, 2021). Appendix A details our experiments, which led us to conclude that the best distance should use SBERT. The function quantifying the difficulty of discovering and articulating an FoC_j from an SMP_i was defined as $f_{FD}(SMP_i, FoC_j) = \lVert SBERT(SMP_i) - SBERT(FoC_j) \rVert_2$. The Euclidean distance is used because the same distance was employed in the objective function of SBERT (Reimers and Gurevych, 2019).
Hypothesis 2: In Phase B of DA-FoC, modeling the difficulty of discovering possible relations among the FoCs resulting from Phase A used the hypothesis that FoCs articulated with similar language are more likely to be related. Therefore, the function quantifying the difficulty of predicting a relation between a pair of FoCs is defined as $f_{RD}(FoC_A, FoC_B) = \lVert SBERT(FoC_A) - SBERT(FoC_B) \rVert_2$.
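A minimal sketch of both difficulty functions follows, assuming each is the Euclidean distance between SBERT embeddings as defined above; the checkpoint name is an assumption, since the paper does not specify which SBERT model was used.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# The checkpoint is illustrative; any SBERT model exposes the same interface.
model = SentenceTransformer("all-MiniLM-L6-v2")

def f_fd(smp_text: str, foc_text: str) -> float:
    # Difficulty of discovering/articulating FoC_j from SMP_i: a larger
    # embedding distance means less similar language, i.e. a harder example.
    e_smp, e_foc = model.encode([smp_text, foc_text])
    return float(np.linalg.norm(e_smp - e_foc))

def f_rd(foc_a: str, foc_b: str) -> float:
    # Difficulty of predicting a relation between two articulated FoCs.
    e_a, e_b = model.encode([foc_a, foc_b])
    return float(np.linalg.norm(e_a - e_b))

def order_curriculum(smp_foc_pairs):
    # Curriculum order: easiest examples (smallest distance) first.
    return sorted(smp_foc_pairs, key=lambda p: f_fd(p[0], p[1]))
```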
Phase A of DA-FoC: Discovering and Articulating Frames of Communication
For Phase A of the DA-FoC approach, Steps 1, 2, 3 and 4 need to be tailored for the task of discovering and articulating FoCs.
Step 1 represents the task-specific prompting, which (a) instructs the LLM to use the definition of FoCs from Entman (1993) and (b) details the task. The prompt is illustrated in Appendix B. The LLM is instructed to first produce a rationale for each FoC it may discover in each exemplified SMP, and then it is asked to articulate the FoC. Moreover, since more than one FoC may be evoked by the same SMP, the LLM is instructed to discover all FoCs evoked in an SMP.
Step 2 provides the demonstrations to the LLM. Demonstration Examples: A demonstration contains (a) an example SMP; (b) the rationale explaining why it evokes a FoC, highlighting the salient problems; and (c) the articulation of the FoC. A demonstration example is:
Social Media Posting Example:
One shot of COVID-19 vaccine is sufficient to make #pregnancy more risky and unsafe for unborn babies.
Rationale:
This social media posting contains a framing, as the problem of confidence in vaccine is challenged due to the perceived risk for pregnancies, affecting the unborn babies.
Frame of Communication:
The COVID vaccine renders pregnancies risky, and it is unsafe for unborn babies.
The few demonstrations provided to the LLM are selected to satisfy the following requirements: (C1) all the problems addressed by the SMPs from the training data should be represented across the demonstration examples; (C2) some SMP examples should not evoke any FoC; (C3) some SMP examples should evoke more than one FoC; and (C4) overall, a small number of demonstration examples should be used, such that they can fit in the context allowed by the LLM.
Step 3 continues to use examples from the curriculum to generate prompts for the LLM. In each prompt only the SMP example is presented, with the LLM automatically generating the rationale and articulating the evoked FoC.
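The sketch below illustrates how Steps 2-3 can assemble a prompt from the ordered demonstrations, following the demonstration format shown above; TASK_PROMPT and call_llm are hypothetical placeholders, not artifacts released by the authors.

```python
# TASK_PROMPT stands for the Phase A instructions (Figure 5); call_llm is a
# hypothetical stand-in for whichever LLM API (e.g., GPT-4) is queried.
TASK_PROMPT = "..."  # Phase A task-specific instructions from Step 1

def build_prompt(demonstrations, new_smp):
    parts = [TASK_PROMPT]
    for demo in demonstrations:  # ordered easiest-first by the curriculum
        parts.append("Social Media Posting:\n" + demo["smp"])
        parts.append("Rationale:\n" + demo["rationale"])
        parts.append("Frame of Communication:\n" + demo["foc"])
    parts.append("Social Media Posting:\n" + new_smp)
    parts.append("Rationale:")  # the LLM continues with rationale + FoC(s)
    return "\n\n".join(parts)

# response = call_llm(build_prompt(demonstrations, smp))
```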
Step 4 follows the Verify-and-Edit paradigm (Zhao et al., 2023): whenever necessary, the human expert edits the rationales and the FoC articulations.
Figure 3 illustrates the relations between FoCs with the following examples. Paraphrase (P-Rel) - Frame of Communication A: "The side effects of the COVID-19 vaccine could be worse than the disease itself." / Frame of Communication B: "The side effects of the COVID-19 vaccine are worse than the symptoms of the disease." Specialize (S-Rel) - Frame of Communication C: "The COVID-19 vaccine does not fully protect against the virus." / Frame of Communication D: "The COVID-19 vaccine does not prevent getting or spreading the virus."
Phase B: Discovering Relations between Frames of Communication
Three possible relations between the FoCs articulated by the LLM were observed, as exemplified in Figure 3. Whenever a pair (FoC_A, FoC_B) uses different words to address the same problems that have the same causes, we argue that they share a Paraphrase Relation (P-Rel). When a pair (FoC_D, FoC_E) addresses the same problem, but the cause articulated in FoC_D provides additional information beyond the cause articulated in FoC_E, we argue that they share a Specialize Relation (S-Rel). Unlike P-Rel relations, which are symmetrical, S-Rel relations are asymmetrical. Also, when a pair (FoC_E, FoC_F) addresses the same problems, but the causes are contradictory, we argue that they share a symmetrical Contradiction Relation (C-Rel).
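One possible data model for these relation types, reflecting the symmetry properties just described, is sketched below; the class and field names are illustrative and not taken from the paper's released code.

```python
from dataclasses import dataclass
from enum import Enum

class RelType(Enum):
    P_REL = "paraphrase"     # symmetric
    S_REL = "specialize"     # asymmetric: source specializes target
    C_REL = "contradiction"  # symmetric

@dataclass(frozen=True)
class FoCRelation:
    source: str     # articulated FoC text (or an identifier)
    target: str
    rel: RelType
    rationale: str  # the LLM-generated justification for the relation

    @property
    def symmetric(self) -> bool:
        return self.rel in (RelType.P_REL, RelType.C_REL)
```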
In Phase B of the DA-FoC approach, we tailor Steps 1-3 from CoT-ICACL, illustrated in Figure 2, for the task of identifying relations between the FoCs discovered in Phase A.
Step 1: We instruct the LLM about the task of discovering relations between FoCs, showcasing each type of relation. The prompt is illustrated in Appendix B.
Step 2 provides a small number of demonstrations involving pairs of FoCs uncovered in Phase A and the relations between them. For each example, a rationale is provided along with the decision on the type of relation. Demonstration examples: The demonstration examples of relations between FoCs also had to satisfy the following requirements: (T1) the arguments of the example relations had to address all the distinct problems addressed in the training set; (T2) some demonstration examples should use pairs of FoCs that do not participate in any relation; and (T3) to account for the context size of the LLM, only a small number of demonstrations should be provided. Building the rationale: For each demonstration example, a rationale of the relation is provided, explaining why a relation between the pair of FoCs exists as well as the type of relation.
Step 3 uses examples of pairs of FoCs from the curriculum to prompt the LLM to generate a rationale for a relation, if one exists, and to decide the type of relation.
Step 4 also follows the Verify-and-Edit paradigm: whenever necessary, the human expert edits the rationales and the assigned FoC relations.
Phase C: Relevance of Frames of Communication
In addition to addressing salient problems, FoCs need to be relevant. In social media discourse, we measure the relevance of an FoC by the number of SMPs evoking it, similarly to how relevance is measured for FoCs in news (Gamson, 1989). This number is first available to us from Phase A of the DA-FoC method, which allows us to collect all the examples of SMPs evoking each discovered FoC*. However, due to the discovery of relations between FoCs made possible by Phase B, these relevance numbers need to be updated. First, we select only one FoC from each set of paraphrased FoCs PF_i, namely the M-FoC, which is the most connected (through P-Rels) FoC in PF_i.
The relevance of the M-FoC is updated from the original number of SMPs evoking it to the sum of all SMPs evoking any FoC in PF_i. In this way, the discovery of P-Rels enables us to filter out FoCs that articulate the same causes of the same salient problems.
The S-Rels discovered in Phase B of the DA-FoC method enable us to organize FoCs into taxonomies, implementing the notion of inherited relevance. This entails that the relevance of an FoC_A having an S-Rel with FoC_B can be updated by summing its original relevance value with the relevance of FoC_B. Selecting a relevance threshold T_r results in the final set of FoCs, spanned by the final set of S-Rel and C-Rel relations. We note that because C-Rels reveal contrasting viewpoints of the problem causes, we retain all FoCs participating in such relations, to allow opposing interpretations due to these FoCs.
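A minimal sketch of Phase C follows, assuming FoCs and relations are represented as strings and pairs; the single-pass treatment of S-Rels and the direction of relevance inheritance are simplifying assumptions, and the retention of C-Rel participants is omitted for brevity.

```python
import networkx as nx

def select_relevant_focs(focs, evocations, p_rels, s_rels, t_r=2):
    # 1. Paraphrase sets PF_i: connected components of the P-Rel graph.
    pg = nx.Graph()
    pg.add_nodes_from(focs)
    pg.add_edges_from(p_rels)
    keep, relevance = set(), {}
    for comp in nx.connected_components(pg):
        m_foc = max(comp, key=pg.degree)  # most connected FoC = M-FoC
        keep.add(m_foc)
        relevance[m_foc] = sum(evocations.get(f, 0) for f in comp)
    # 2. Inherited relevance over S-Rels (single pass; a full taxonomy
    #    traversal would be needed for deep specialization chains).
    for specific, general in s_rels:
        if specific in keep and general in keep:
            relevance[specific] += relevance[general]
    # 3. Keep only FoCs reaching the relevance threshold T_r.
    return {f for f in keep if relevance[f] >= t_r}
```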
Evaluation Results
Quantitative Results: To compare the results of our method with a simple baseline, we considered a methodology that clustered all SMPs from the test data. Clustering was facilitated by creating SMP embeddings p*_i = SBERT(SMP*_i) from the test set. Hierarchical Agglomerative Clustering (HAC) with Ward linkage (Ward, 1963) was employed, with a variance-gain threshold of 1.1 selected from initial experiments on the training data. For each cluster CL_j, the first sentence of the SMP_i closest to the centroid of CL_j was selected and placed in the set of final FoCs. Obviously, this baseline does not discover any relations between FoCs. Table 2 lists the number of FoCs uncovered by the HAC baseline method.
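A sketch of this baseline is shown below, under the assumption that sklearn's distance_threshold plays the role of the variance-gain threshold; the SBERT checkpoint and the naive first-sentence split are illustrative simplifications.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

def hac_baseline(smps, threshold=1.1):
    embs = SentenceTransformer("all-MiniLM-L6-v2").encode(smps)
    labels = AgglomerativeClustering(
        n_clusters=None, linkage="ward", distance_threshold=threshold
    ).fit_predict(embs)
    focs = []
    for c in set(labels):
        idx = np.where(labels == c)[0]
        centroid = embs[idx].mean(axis=0)
        # SMP closest to the cluster centroid represents the cluster.
        nearest = idx[np.argmin(np.linalg.norm(embs[idx] - centroid, axis=1))]
        focs.append(smps[nearest].split(".")[0] + ".")  # crude first sentence
    return focs
```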
Four LLMs were considered in our evaluations of the DA-FoC framework: Vicuna-13B (Chiang et al., 2023; Zheng et al., 2023), LLaMa-2-70B (Touvron et al., 2023), GPT-3.5 (Ouyang et al., 2022), and GPT-4 (OpenAI, 2023). In Phase C we chose T_r = 2, corresponding to each FoC needing to be evoked by at least two SMPs. Further discussion surrounding this decision, along with ablation results, is provided in Appendix D. Furthermore, active learning loops with a minimum of 50 curriculum examples produced the best results in initial LLM experiments. Table 2 lists the number of discovered FoCs resulting from Phase A when using each LLM; the number of P-Rels, S-Rels, and C-Rels discovered in Phase B; and the number of final FoCs selected in Phase C. As Table 2 illustrates, zero-shot learning with GPT-3.5 and few-shot learning with Vicuna-13B failed to produce any meaningful FoCs, and therefore these configurations were not included in the qualitative results. A further discussion of the context limitations of the considered LLMs is provided in Appendix C.
Qualitative results: The quality of the final set of FoCs was evaluated in terms of three properties: (a) the soundness of the rationale provided by the LLM when articulating a FoC; (b) the clarity of the FoC articulation generated by the LLM; and (c) the novelty of the final set of FoCs when compared to the known FoCs in the reference dataset. If N_S is the number of FoCs with a sound rationale, N_C the number of clearly articulated FoCs, and N_T the total number of FoCs proposed by each method, then the quality of reasoning (Z) involved in uncovering FoCs is Z = N_S/N_T, while the quality of the articulation (A) of FoCs is A = N_C/N_T.
While the metrics Z and A capture the soundness and clarity of the final set of FoCs, we also considered four additional evaluation metrics that account for the novelty of the FoCs. For each F, which is a clearly articulated FoC, an expert linguist was asked to find whether F conveys the same information as any F_R, representing the FoCs available from the reference dataset. When F and some F_R state the same thing, we consider F to be known, and thus not novel. Let N_K represent the number of known FoCs judged in this way, and N_F the total number of reference FoCs. This allows us to define two additional evaluation metrics: (1) the R metric, defined as R = N_C/(N_C + N_F - N_K), which models the recall of clearly articulated FoCs; and (2) R_K = N_K/N_F, which accounts for the recall of known FoCs from all those available in the reference dataset. Finally, as we desire the FoCs to be both clearly articulated and fully recalled, we combine the A measure with the R measure into F_1 = 2AR/(A + R). We are also interested in measuring the clarity of the novel FoCs, for which we use the evaluation metric P_A. Table 3 lists the results of all these evaluation metrics across all methods for discovering FoCs. However, because the clustering baseline does not involve any reasoning, it has no results for Z. Agreement between linguists was measured on a sample of 1000 judgments, with a Cohen's Kappa of 0.62, indicating moderate agreement (McHugh, 2012).
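These metrics reduce to simple ratios over the judgment counts; the sketch below computes them (P_A is omitted, since its exact formula is not given in the recoverable text).

```python
def foc_metrics(n_t, n_s, n_c, n_k, n_f):
    z = n_s / n_t                 # soundness of the generated rationales
    a = n_c / n_t                 # clarity of the FoC articulations
    r = n_c / (n_c + n_f - n_k)   # recall of clearly articulated FoCs
    r_k = n_k / n_f               # recall of the known reference FoCs
    f1 = 2 * a * r / (a + r)      # combines articulation quality and recall
    return {"Z": z, "A": a, "R": r, "R_K": r_k, "F1": f1}
```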
We also performed an evaluation of the relations between FoCs discovered by GPT-4 employing CoT-ICACL, given that this method produced the best results for discovering FoCs. Expert inspection revealed that 96.56% of these relations were correct. More specifically, 99.15% of P-Rels were correct, 96.54% of S-Rels were correct, and 86.30% of C-Rels were correct. Mistakes are further analyzed in Appendix F.
Discussion
The results obtained when using CoT-ICACL with GPT-4 as the LLM are not only the best, but they are also impressive across all evaluation metrics. Even when using CoT-ICACL with GPT-3.5 as the LLM, our method obtained a substantial improvement over the baseline for all evaluation metrics. But unlike GPT-4, GPT-3.5 does not produce many sound rationales, as revealed by the results of the Z metric, showing that its reasoning capabilities are limited when compared to GPT-4 (Espejel et al., 2023). GPT-4 enabled the uncovering of many more clearly articulated FoCs, as captured by the A metric. Interestingly, many of the prompting methods were able to achieve good recall of the known FoCs created by experts. But in terms of both clearly articulating FoCs and revealing all FoCs, only methods powered by GPT-4 were competitive, given the interpretation of the values of the F_1 metric. Furthermore, the values of the P_A evaluation results indicate that novel FoCs, which were not discovered by experts, were well articulated only when the LLM used was GPT-4. This makes us conclude that uncovering FoCs from SMPs can be performed with high values of soundness, clarity, and novelty when using GPT-4, and can be further improved with CoT-ICACL. Articulation Quality: A different way of assessing the clarity of the FoC articulation is made possible by focusing only on the final FoCs (resulting from Phase C) which had the same content as some of the reference FoCs annotated in the reference dataset. For each pair of FoCs (F_K, F_R), where the uncovered F_K was judged by a computational linguist to convey the same information as a reference FoC F_R, the linguist was asked whether the articulation of F_K or F_R was clearer; 55.1% of the known FoCs were judged to be better articulated than the human-written ones. Among the final FoCs, 5.1% address Conspiracy and 14 FoCs (4.8%) address Calculation. Surprisingly, one FoC (0.3%) addressed a new problem, namely Morality. When using the CoT-ICACL prompting with GPT-4, we found that the 586 P-Rels discovered between FoCs allowed us to filter out 1,216 of the uncovered FoCs, as they were paraphrasing other FoCs. In addition, the S-Rels allowed us to generate 130 FoC taxonomies, spanned by S-Rels. These taxonomies contained on average 6 FoCs. The largest taxonomy contained 49 FoCs, with a depth of 7. Sometimes, in an FoC taxonomy, there were FoCs specialized as many as 13 times. The taxonomies will enable further research on the ideal specialization of an FoC articulation. We also found that the final set of FoCs contained 43 pairs of contradicting FoCs, demonstrating that opposing viewpoints were common.
An interactive website enabling exploration of the discovered FoCs, FoC relations, and FoC taxonomies has been made available³. Figure 4 illustrates how this interactive website operates. Each node represents one of the final FoCs discovered when using the CoT-ICACL prompting of GPT-4, with the colors corresponding to the problems identified by CoT reasoning. Edges in the graph represent specializing and contradicting relations, since all paraphrases have been eliminated. Zooming in on the full graph enables an exploration of the various automatically constructed FoC taxonomies, and hovering over each node provides the articulated FoC along with the identified problems and the number of SMPs evoking the FoC. Hovering over the edges also provides the rationale justifying the relation spanning the pair of FoCs.
Related Work
Initial large-scale research on frame identification from social media has generally relied on unsupervised approaches (Neuman et al., 2014; Meraz and Papacharissi, 2013; de Saint Laurent et al., 2020), which revealed interesting framing patterns highlighted by lexical terms, but neither articulated any FoC nor discovered the problems that FoCs address. Classifiers aiming to identify frame-invoking language were reported in Baumer et al. (2015), but these classifiers did not identify the problems addressed by FoCs. The assumption that frames can be associated with certain stock phrases was challenged in Tsur et al. (2015), showing that frames can also be associated with certain topics.
A growing body of research using supervised NLP methods relies on the Media Frames Corpus (MFC) (Card et al., 2015). These methods detect the salient problems of frames with techniques including logistic regression (Card et al., 2016), recurrent neural networks (Naderi and Hirst, 2017), lexicon induction (Field et al., 2018), and fine-tuning pretrained language models (Khanehzar et al., 2019; Kwak et al., 2020a). Furthermore, subcategories of the policy frame dimensions annotated in MFC were extracted with a weakly-supervised approach (Roy and Goldwasser, 2020).
The only prior work that considered the analysis of frames in social media was reported in Mendelsohn et al. (2021), where immigration policy problems were identified in SMPs with multi-label classification methods relying on RoBERTa (Liu et al., 2019). All these prior methods only discover FoCs; they do not articulate them. We believe that the release of the reference dataset used in our work, which annotates both FoCs and the problems they address, will facilitate new research on the difficult problem of discovering and articulating FoCs. Finally, none of the previous methods have considered the need to learn to automatically provide a rationale for the discovered FoCs or for their salient problem(s), which our DA-FoC method enables by using Chain-of-Thought prompting of LLMs with In-Context Active Curriculum Learning.
Conclusion
This paper presents a new method capable of discovering and articulating Frames of Communication from social media. By combining Chain-of-Thought prompting of LLMs with In-Context Active Curriculum Learning, both previously known and, especially, new frames were revealed. Extensive evaluations show that when using GPT-4 with CoT-ICACL, 86.73% of the frames identified by experts were re-discovered on the same dataset, while many new frames were also uncovered, which are both clearly articulated and sound. The rationales generated by GPT-4 with CoT-ICACL help us make sense of these uncovered FoCs, providing additional insights for understanding why certain problems are discussed on social media. The relations between frames help us discover when some frames specialize others and when some frames contradict others.
Ethical Statement
We respected the privacy and honored the confidentiality of the users that produced the SMPs pertaining to the dataset from Weinzierl and Harabagiu (2022). We received approval from the Institutional Review Board at the University of Texas at Dallas for working with this Twitter social media dataset. IRB-21-515 stipulated that our research met the criteria for exemption #8(iii) of Chapter 45 of Federal Regulations Part 46.101.(b). Experiments were performed with high professional standards, avoiding evaluation on the test collection until a final method was selected based on training performance. All experimental settings, configurations, and procedures were clearly laid out in this work, the supplemental material, and the linked GitHub repository. We do not perceive any major risks related to our research, as our work is in service of improving understanding of how COVID-19 vaccine hesitancy is framed on social media. The public good was the central concern during all enclosed research, with a primary goal of benefiting both natural language processing and public health research.
Limitations
The method capable of discovering and articulating Frames of Communication introduced in this work focuses on social media posts from Twitter/X. Therefore, our methodology may not work as well on posts originating from other social media platforms, particularly platforms such as Reddit, where longer textual content is typical. Furthermore, our method relies only on the textual content of posts. Many social media posts also use images, videos, and other multimedia content. In future work, we plan to extend our methods by enabling them to discover and articulate Frames of Communication by considering the entire multimodal content of social media posts. In addition, we plan to extend the social media platforms on which our methods can operate.
An important limitation of our approach stems from the need to have available a reference dataset of social media posts annotated with the frames of communication discovered to be evoked in them. These frames of communication need to be discovered with inductive frame analysis (Van Gorp, 2010) on the set of social media posts. The postings evoking each frame from this repertoire of frames of communication also need to be known. This requires significant effort from communication experts. In addition, the problems revealed by each frame need to be annotated such that our chain-of-thought prompting methodology may have demonstrations. Semi-automatic methods that propose the frames of communication evoked in social media posts and predict the problems addressed by the frames are considered in our future work, to alleviate these limitations.
Finally, our method only considered frames of communication for "COVID-19 Vaccines", due to the only existing dataset where frames of communication are annotated. Therefore, we could consider additional datasets that may cover a variety of topics, such as the policy problems addressing immigration, tobacco, or same-sex marriage, which are covered in the Media Frames Corpus (MFC) (Card et al., 2015). In future work, we shall contemplate the discovery of frames of communication for a variety of topics and domains.
A Difficulty Modeling Experiments
Initial experiments were conducted on the COVAXFRAMES dataset to determine which models of difficulty could serve to guide curriculum learning. Five FoCs were manually selected from COVAXFRAMES to serve as a reference for difficulty models. For each of the selected FoCs, 20 pairs of SMPs were sampled, for a total of 100 pairs of SMPs. An expert linguist judged which of the two SMPs in each pair was more difficult to recognize as evoking the respective FoC, which enabled measuring how accurately different difficulty models aligned with these human preferences, similar to Reinforcement Learning with Human Feedback (Christiano et al., 2017). Table 5 illustrates the accuracy of the various difficulty models considered in Section 3.
The Cross-Encoder approach, introduced by Nogueira and Cho (2020), employs a BERT-based model to measure relevance and was trained on MS-MARCO (Nguyen et al., 2016). The Misinfo-GLP method (Weinzierl and Harabagiu, 2021) employs graph-link prediction to identify whether an SMP evokes a misinformation FoC about COVID-19 vaccines. BERTScore (Zhang* et al., 2020) employs BERT to measure the F_1 score between the contextualized embeddings of a reference sequence and a candidate sequence. Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) produces sentence-level embeddings trained contrastively to be close together in Euclidean distance if the semantics of the sentences are similar. SBERT clearly resulted in the most closely aligned measure of difficulty, with an accuracy of 71% in modeling human judgments of difficulty for recognizing frame evocation. Therefore, we utilized SBERT for all difficulty modeling in In-Context Active Curriculum Learning.
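A sketch of how this accuracy can be computed from the linguist's pairwise judgments follows; the data layout of the judged pairs is an assumption for illustration.

```python
def preference_accuracy(judged_pairs, difficulty_fn):
    # judged_pairs: (smp_easy, smp_hard, foc) tuples from the linguist's
    # pairwise judgments; difficulty_fn: e.g., the SBERT distance f_fd.
    # A model is "correct" when it scores the harder SMP as more difficult.
    correct = sum(
        difficulty_fn(hard, foc) > difficulty_fn(easy, foc)
        for easy, hard, foc in judged_pairs
    )
    return correct / len(judged_pairs)
```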
B Chain-of-Thought Prompting Details
The task-specific prompt provided for Phase A of DA-FoC (a) instructs the LLM to use the definition of FoCs from Entman (1993) and (b) details the task. The prompt is illustrated in Figure 5.
Frames of communication select particular aspects of an issue and make them salient in communicating a message.Social science stipulates that discourse almost inescapably involves framing -a strategy of highlighting certain issues to promote a certain interpretation or attitude.It has been argued that "to frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote problem definition, causal interpretation, moral evaluation, and/or treatment recommendation." The Task: You will be tasked with identifying and articulating vaccine hesitancy framings on the social media postings.You should discuss your reasoning first, and then provide a final decision.
Each social media posting provided may or may not contain one or more frames of communication, so your first step is: (a) Reason about whether the posting contains a frame (or more frames), or just states something factual or an experience.
If the posting contains a frame, the next step is: (b) Articulate that frame succinctly. You will perform these steps until the answer to (a) is false, either because there are no frames in the posting, or because you have already articulated all the frames.
The LLM is asked to first produce a rationale for each FoC it may uncover in each exemplified SMP, and then it is asked to articulate the FoC. Moreover, since more than one FoC may be evoked by the same SMP, the LLM is instructed to uncover all FoCs evoked in an SMP. Similarly, the task-specific prompt provided for Phase B of DA-FoC is illustrated in Figure 6.
C Context Length Limitations
All LLMs considered in Section 4 have a limited context length, defined by the number of tokens the LLM can consider in a single prompt. The context length of Vicuna-13B was too short to include sufficient demonstrations for few-shot learning, and this limitation is likely why Vicuna-13B performed so poorly in our evaluations, discussed in Section 4. However, LLaMa-2-70B, GPT-3.5, and GPT-4 had no problem including demonstrations for few-shot learning and In-Context Active Curriculum Learning.
D Ablation Experiments over Relevance Threshold
The relevance threshold T_r = 2 corresponds to requiring two or more SMPs to evoke each FoC for that FoC to be considered relevant. Higher relevance thresholds can be considered, which produce a different final number of FoCs when employing CoT-ICACL with GPT-4, illustrated in Table 6. Further manual judgments were performed for T_r > 2, also provided in Table 6. As the threshold for relevance increased, fewer and fewer final FoCs were produced, leading to a major decrease in recall metrics. Interestingly, we also see a noticeable decline in the quality of new FoCs, measured by P_A, which could indicate that the new high-quality FoCs discovered with T_r = 2 correspond more often to FoCs with lesser relevance. Human annotators likely missed these FoCs when constructing COVAXFRAMES because far fewer SMPs evoke them. Furthermore, as the test collection is only a representative sample of 2,113 SMPs, it was difficult to justify T_r > 2, since T_r = 2 already corresponds to 0.1% of the population of SMPs.
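A minimal sketch of this Phase C relevance filter follows; the data structures are assumptions made for illustration.

```python
# Keep an FoC only if at least T_r distinct SMPs evoke it.
from collections import Counter
from typing import Dict, List

def filter_relevant(evocations: Dict[str, List[str]], t_r: int = 2) -> List[str]:
    """evocations maps each articulated FoC to the SMP ids that evoke it."""
    counts = Counter({foc: len(set(smps)) for foc, smps in evocations.items()})
    return [foc for foc, n in counts.items() if n >= t_r]

evocations = {
    "Natural immunity is better than vaccine immunity": ["smp1", "smp7", "smp9"],
    "Vaccines contain microchips": ["smp4"],  # evoked once -> dropped at T_r = 2
}
print(filter_relevant(evocations, t_r=2))
```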
If we assume this sample is representative, then T_r = 2 would correspond to a minimum evocation of approximately 470 SMPs per month for each FoC, using the collection criteria from Weinzierl and Harabagiu (2022).

The task-specific prompt for Phase B, illustrated in Figure 6, reads:

Frames of communication select particular aspects of an issue and make them salient in communicating a message. Social science stipulates that discourse almost inescapably involves framing - a strategy of highlighting certain issues to promote a certain interpretation or attitude. It has been argued that "to frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote problem definition, causal interpretation, moral evaluation, and/or treatment recommendation."

The Task: You will be tasked with identifying relationships between vaccine hesitancy framings. You should discuss your reasoning first, and then provide a final decision. Each framing provided may or may not be involved in a single relationship with one framing from a provided set of similar framings. We will consider three possible relationships:
1. Paraphrases(X, Y): X and Y say essentially the same exact thing, with different words or phrasing. If one person agreed with X, they would agree with Y, and vice versa. Frames should share the same cause and the same problem to be considered paraphrases.
2. Specializes(X, Y): X is a more specific or detailed framing of Y. Notice that the order of X and Y is important for this relationship, as X is more specific and Y is more general. Frames should share the same problem, but have more specific or general causes to be considered specializes.
3. Contradicts(X, Y): X and Y contradict each other, such that they frame the same exact issue from opposing perspectives. If one person agreed with X, they would disagree with Y, and vice versa. Be extremely careful with the contradicts relationship, as we do not want two frames to contradict simply because they say the vaccine is safe vs unsafe; the frames need to have the same cause to contradict, such as safe due to being tested vs unsafe due to being rushed. The two frames X and Y should essentially paraphrase each other, sharing the same problem and cause but from opposing perspectives.
4. No relationship: There are no relationships between the new framing and any of the provided framings.
You should (a) Reason about whether the framing holds one of the above relationships with any of the provided framings. Multiple relationships could be true, but prioritize in the order provided: if a paraphrase relationship holds, it must be provided. If there is no paraphrase, then look for specializes. If there is a specializes relationship, provide it; otherwise look for contradicts. Finally, if there is no contradicts relationship, answer no relationship. If a relationship is identified, then (b) State that relationship, using the IDs for each framing.
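A minimal sketch of the prioritized decision procedure encoded by this prompt follows; the three predicate functions are hypothetical placeholders for the judgments the LLM makes, added only to make the paraphrase-then-specializes-then-contradicts ordering concrete.

```python
# Check paraphrase first, then specializes (in both directions), then
# contradicts; otherwise report no relationship.
from typing import Callable, Optional, Tuple

Pred = Callable[[str, str], bool]

def decide_relation(x: str, y: str, is_paraphrase: Pred,
                    specializes: Pred, contradicts: Pred
                    ) -> Optional[Tuple[str, str, str]]:
    if is_paraphrase(x, y):
        return ("Paraphrases", x, y)
    if specializes(x, y):          # order matters: x specific, y general
        return ("Specializes", x, y)
    if specializes(y, x):
        return ("Specializes", y, x)
    if contradicts(x, y):
        return ("Contradicts", x, y)
    return None                    # no relationship

# Toy predicates so the sketch runs:
para = lambda a, b: False
spec = lambda a, b: "without being chastised" in a and "informed decisions" in b
contra = lambda a, b: False
print(decide_relation(
    "People should make their own decisions about COVID-19 vaccination "
    "without being chastised",
    "People should make informed decisions about COVID-19 vaccination.",
    para, spec, contra))
```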
E Successful and Erroneous FoC Examples and Relations Spanning Them
An example in which the articulation uncovered by GPT-4 with CoT-ICACL was judged clearer than the expert articulation in COVAXFRAMES is given by the known FoC_2: "Preference for getting COVID-19 and fighting it off rather than getting vaccinated" and FoC_3: "Natural immunity is better than vaccine immunity", the FoC discovered by GPT-4 with CoT-ICACL. An example of an uncovered FoC that was not known and is clear as well as sound is FoC_4: "Avoiding people is a better strategy than getting the COVID-19 vaccine". The rationale generated by CoT for FoC_4 is: "The problem of calculation is due to the cause that a trade-off is being made, where taking the vaccine is not worth the calculated risk when compared to avoiding people." Also, an example of a newly discovered FoC_5 which specializes some FoC_6 can be provided for FoC_5: "People should make their own decisions about COVID-19 vaccination without being chastised" and FoC_6: "People should make informed decisions about COVID-19 vaccination." An example of contradictory FoCs is established between FoC_7: "Getting the COVID-19 vaccine will protect those who cannot get the vaccine" and FoC_8: "The COVID-19 vaccine only benefits the recipient." These examples show that, in addition to uncovering and articulating FoCs from social media, the method that we have presented discovers interesting and informative relations between FoCs. Moreover, the rationales generated to make sense of these FoCs provide additional insights for understanding why certain problems are discussed on social media.
F Errors in Articulated FoCs and FoC Relations
A closer inspection of the edited demonstrations from Phase A of the curriculum built for GPT-4 reveals the kinds of early mistakes that were corrected through editing with CoT-ICACL. For five out of the six edited demonstrations, GPT-4 mistakenly articulated only a single FoC when the prompted SMP evoked multiple FoCs. The sixth demonstration had a sound rationale, but an overly verbose articulation of the FoC. In Phase B, GPT-4 required 20 examples to be edited: 7 edited examples involved incorrect P-Rels on FoCs which shared problems; 6 edited examples included missed P-Rels; 4 examples were edited where GPT-4 incorrectly directed the S-Rel; and 3 edited examples were added for C-Rels which were incorrectly identified once as a P-Rel and twice as no relation.
Figure 4: Interactive website enabling an exploration of the discovered FoCs, FoC relations, and FoC taxonomies discovered by GPT-4 employing CoT-ICACL for DA-FoC.
For each known FoC that was uncovered, its articulation F_K was judged to be (a) better, (b) worse, or (c) of the same clarity as the reference articulation F_R. The results of these judgments are listed in Table 4. As expected, the baseline method uncovers FoCs with vastly worse articulation clarity (79.22%) than the reference FoCs. The CoT-ICACL prompting of GPT-3.5 significantly improves the clarity of FoC articulation, uncovering 29.21% of known FoCs with the same clarity as the reference FoCs and even improving the clarity for 26.97% of the uncovered known FoCs. The percentage of known FoCs articulated more clearly is an impressive 55.10% when CoT-ICACL used GPT-4, and only 9.18% of the known FoCs are articulated with poorer clarity. This indicates that CoT-ICACL with GPT-4 is capable of articulating FoCs uncovered from social media better than experts 55.10% of the time, while 37.71% of the time the FoCs are articulated with equivalent clarity. That only 9.18% are articulated with reduced clarity indicates that the need for expert intervention is greatly reduced. Examples of discovered FoCs and their quality of articulation are provided in Appendix E.

Organizing the FoCs: The rationales generated by CoT prompting with GPT-4 indicate the problems addressed by the uncovered FoCs. This allowed us to inspect the distribution of problems in the final set of FoCs obtained when using CoT-ICACL prompting with GPT-4. Our inspection indicates that a total of 174 FoCs (59.6%) address Confidence in vaccines; 39 FoCs (13.4%) address Collective Responsibility; 28 FoCs (9.6%) address Complacency; 23 FoCs (7.9%) address Compliance; 19 FoCs (6.5%) address Constraints; and 15 FoCs address Calculation.
Figure 5: Task definition prompt for Phase A, the articulation of FoCs from SMPs for DA-FoC.
Figure 6: Task definition prompt for Phase B, the discovery of FoC relations for DA-FoC.
Table 1: Problems associated with vaccine hesitancy.
Following Rubin et al. (2022), for all the examples from the training data we would need to have expert-quality rationales. This would generate a significant burden on communication experts, which we believe is not necessary. We could use instead Active Learning, which requires a smaller, manageable number of rationale examples to solve these issues. 3. A scenario that (a) takes advantage of human intervention in the CoT prompting, by creating the active learning loop illustrated in Figure 2, as well as (b) curriculum learning, such that the examples presented in Step 3 have a growing level of difficulty. Because we still use (repeatedly) CoT prompting of the LLM, but also rely on In-Context Curriculum Learning and Active Learning, we call this scenario Chain-of-Thought Prompting with In-Context Active Curriculum Learning (CoT-ICACL). We note that in this scenario, we present initially a small number of demonstrations in Step 2, while this number grows in the following usages of the active learning loop, because if, in Step 4, edits are performed on the results of Step 3, all those edits become new demonstrations available to the LLM when Steps 2-4 are performed again. Finally, when reaching Step 5, the LLM is prompted in the same way as in Step 3; however, this time, all examples from the test data are used.

3.2 Curriculum Learning in DA-FoC

We were inspired by recent reports (Maharana and Bansal, 2022) on the impact of curriculum learning on common sense reasoning. Thus, when learning a curriculum of examples used in Step 3 of CoT-ICACL, we have considered the two functions a curriculum should have: (1) ranking of examples in terms of difficulty; and (2) transitioning from easy to difficult examples during training, as sketched below. As in Elman (1993), learning benefits from starting small.
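A minimal sketch of these two curriculum functions follows; the staging scheme (equal-sized stages that cumulatively release easier examples) is an assumption made for illustration.

```python
# Rank the demonstration pool by a difficulty score, then release examples
# from easy to hard in stages.
from typing import Callable, List

def build_curriculum(examples: List[str],
                     difficulty: Callable[[str], float],
                     n_stages: int = 3) -> List[List[str]]:
    ranked = sorted(examples, key=difficulty)      # (1) rank by difficulty
    stage_size = max(1, len(ranked) // n_stages)
    stages, released = [], []
    for i in range(0, len(ranked), stage_size):    # (2) easy -> difficult
        released = released + ranked[i:i + stage_size]
        stages.append(list(released))              # each stage keeps easier ones
    return stages

examples = ["short post", "a somewhat longer social media post",
            "a long, hedged, sarcastic post that only implies its framing"]
for stage in build_curriculum(examples, difficulty=len):  # length as toy score
    print(stage)
```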
Table 2: Number of FoCs discovered in Phase A; number and type of relations between FoCs discovered in Phase B; and final number of FoCs selected in Phase C. Two linguists were tasked to judge the soundness, clarity, and novelty of the final FoCs, with N_S FoCs deemed sound and N_C FoCs deemed clear out of the N_T final FoCs.
Table 3: Evaluation results of the final set of FoCs.
Table 4: Comparing the articulation clarity of uncovered FoCs against reference FoCs.
Table 5: Difficulty function results from initial experiments with different difficulty models.
Table 6: Ablation evaluation results over the relevance threshold from Phase C, producing the final set of FoCs for CoT-ICACL with GPT-4.
Deficiency of SIAH1 promotes the formation of filopodia by increasing the accumulation of FASN in liver cancer
It has been shown that the formation of filopodia is a key step in tumor cell metastasis, but there is limited research regarding its mechanism. In this study, we demonstrated that fatty acid synthase (FASN) promoted filopodia formation in liver cancer cells by regulating fascin actin-bundling protein 1 (FSCN1), a marker protein for filopodia. Mechanistically, on the one hand, the accumulation of FASN is caused by enhanced deubiquitination of FASN mediated by UCHL5 (ubiquitin C-terminal hydrolase L5). In this pathway, low expression of SIAH1 (Seven in absentia homolog 1) decreases the ubiquitination and degradation of ADRM1 (adhesion regulating molecule 1), thereby increasing its protein level; ADRM1 in turn recruits and activates the deubiquitinating enzyme UCHL5, allowing FASN to undergo deubiquitination and escape proteasomal degradation. On the other hand, the accumulation of FASN is related to its weakened ubiquitination: SIAH1 directly acts as a ubiquitin ligase toward FASN, and low expression of SIAH1 reduces the ubiquitination and degradation of FASN. Both pathways are involved in the regulation of FASN in liver cancer. Our results reveal a novel mechanism for FASN accumulation due to the low expression of SIAH1 in human liver cancer and suggest an important role of FASN in filopodia formation in liver cancer cells.
INTRODUCTION
Liver cancer is a common type of malignant tumor of the digestive system. It has been reported that the 5-year overall survival rate of patients in China is only 12.5% [1,2], indicating a poor prognosis largely because of recurrence and metastasis. Therefore, it is necessary to explore the molecular mechanism of liver cancer metastasis and identify more effective molecular targets.
Cell migration, which is the self-directed movement of cells using their pseudopodia, is necessary for tumor invasion and metastatic growth [3]. Studies have shown that the formation of filopodia plays an important role in the migration of liver cancer cells [4,5]. Fatty acid synthase (FASN), a key enzyme required for the synthesis of fatty acids and some biologically important lipid precursors, can regulate metabolism, cell survival and proliferation, DNA replication, transcription, and protein degradation [6-8]. It has been shown that FASN can promote the proliferation, metastasis, and apoptosis of liver cancer cells, therefore playing a key role in liver cancer progression [9-11].
Recent studies revealed that FASN can directly regulate FSCN1 (fascin actin-bundling protein 1), a cytoskeletal protein involved in the formation of filopodia, lamellipodia, and microspikes, thereby promoting the migration and invasion of liver cancer cells [12,13]. However, the effect of FASN on filopodia formation and the regulatory mechanism of FASN require further exploration.
Ubiquitination refers to the process by which ubiquitin is attached to specifically selected target proteins through the sequential action of a series of special enzymes, including ubiquitin-activating enzymes (E1), ubiquitin-conjugating enzymes (E2), and ubiquitin ligases (E3), and is reversed by deubiquitinating enzymes (DUBs). Substrates modified by E3s are either degraded by the ubiquitin-proteasome system (UPS), as in most cases, or are altered in their interactions, localization, or enzyme activity. It has been shown that 80-90% of intracellular proteins are degraded by the UPS pathway - a post-translational modification in eukaryotes that regulates important biological processes such as cell cycle, metabolism, proliferation, apoptosis, signal transduction, and DNA damage repair [14,15]. Although the regulation of FASN by the UPS has been reported, research is still in its nascent stage, and further studies are required.
In this study, we highlight the effect and mechanism of FASN on filopodia formation in liver cancer cells. We reveal the molecular mechanism by which loss of SIAH1 in liver cancer regulates the protein stability of FASN through two pathways: promoting deubiquitination and reducing ubiquitination. Collectively, our findings suggest that the SIAH1-FASN-FSCN1 axis is crucial for filopodia formation in liver cancer cells, and this discovery provides a promising strategy for the treatment of liver cancer.
FASN promotes filopodia formation in liver cancer cells by regulating FSCN1
Multiple studies have reported that FASN is upregulated in liver cancer and associated with malignant progression and poor prognosis [9-11]. To detect the potential role of FASN in liver cancer, we analyzed the expression level of FASN in human liver cancer tissues and non-tumor tissues using public databases. The mRNA level of FASN was upregulated in liver cancer tissues (Fig. 1A, B). Furthermore, we analyzed the correlation between FASN mRNA level and patient survival and found that a high FASN mRNA level was associated with poor prognosis in liver cancer patients (Fig. 1C, D). Subsequently, clinical tissue samples were examined to determine whether the protein level of FASN was consistent with its mRNA expression; the protein level of FASN was correspondingly upregulated in liver cancer samples (Fig. 1E). In addition, we constructed a survival model using nude mice and found that low levels of FASN were associated with longer survival time (Fig. 1F, G). Further analysis of clinical data revealed a correlation between the expression of FASN and the clinicopathological characteristics of liver cancer patients. We found that high levels of FASN were associated with tumor size (P < 0.05), venous invasion (P < 0.05), and direct liver invasion (P < 0.05). There was no significant correlation between FASN expression and the remaining pathological features (Table 1).
The high degree of malignancy in liver cancer is mainly related to recurrence and metastasis, particularly intrahepatic metastasis [16,17]. It has been shown that filopodia formation, an early step in cell migration, plays an important role in the metastasis of liver cancer cells [4,5]. To determine the role of FASN in filopodia formation, we performed a filopodia localization assay in HepG2 and Huh7 liver cancer cells, which differ in metastatic potential. The results revealed that silencing of FASN inhibited filopodia formation in liver cancer cells, while overexpression of FASN had the opposite effect (Fig. 2A, B). FASN has been reported to directly regulate fascin actin-bundling protein 1 (FSCN1), an important biomarker of filopodia formation [13]. Further experiments indicated that FASN positively regulated the FSCN1 protein level (Fig. 2C, D). In addition, we examined the regulation by FASN of small GTPases, which also play essential roles in filopodia formation. The results showed that silencing of FASN reduced the protein levels of RAC1, CDC42, and RHOA, while overexpression of FASN had the opposite effect (Fig. 2E). Since filopodia formation is necessary for cell movement, we analyzed the effects of FASN on the invasion and migration of liver cancer cells and found that FASN also plays a positive role in these processes (Fig. 2F and Supplementary Fig. 1A-D). To confirm whether FSCN1 is involved in the filopodia formation and cell movement regulated by FASN, we transfected Myc-tagged FSCN1 into FASN-silenced liver cancer cells. Overexpression of FSCN1 significantly rescued the inhibition of filopodia formation and of cell invasion and migration induced by silencing of FASN (Fig. 2G, H and Supplementary Fig. 1E). These results suggest that FASN promotes filopodia formation in human liver cancer cells by regulating FSCN1. To further verify the role of filopodia formation in cell movement, BDP-13176, an FSCN1 inhibitor, was used in Huh7 cells overexpressing FASN. BDP-13176 markedly restrained filopodia formation in Huh7 cells but did not completely inhibit cell invasion and migration (Fig. 3A, B). In this context, we also investigated the regulatory effects of FASN on epithelial-to-mesenchymal transition (EMT) and matrix metalloproteinases (MMPs). Notably, FASN positively regulated the protein level of MMP9 but had no effect on E-cadherin, N-cadherin, or MMP2 (Fig. 3C, D).
Furthermore, we examined the effect of FASN in a mouse xenograft model. FASN-silenced Huh7 cells or shControl Huh7 cells were subcutaneously inoculated into Balb/c nude mice. Consistent with the in vitro observations, FASN knockdown markedly inhibited tumor growth in mice, which was associated with FSCN1 (Fig. 3E-G). In addition, Hep1-6 cells with FASN knocked out were implanted beneath the liver capsule to establish an orthotopic model of liver cancer in C57BL/6J mice (Fig. 3H). Knockout of FASN markedly inhibited tumor growth and intrahepatic metastasis, and the expression of FASN and FSCN1 was analyzed using immunohistochemistry (Fig. 3I). These findings suggest that FASN also acts as a tumor promoter in mice by regulating FSCN1 expression.
The deubiquitinating enzyme complex ADRM1-UCHL5 promotes filopodia formation in liver cancer cells by stabilizing FASN

Given the above findings, the reasons underlying the abnormal expression of FASN are worth studying. It has been reported that FASN can be degraded by the ubiquitin-proteasome system (UPS), which might be a main mechanism regulating its protein level [18-20]. Thus, we investigated the mechanisms of FASN protein degradation and stabilization in liver cancer. The protein level of FASN was significantly increased after the proteasomal pathway, but not the lysosomal pathway, was inhibited by MG132 in HepG2 and Huh7 cells (Fig. 4A). Furthermore, much more ubiquitinated FASN was observed when proteasome function was inhibited (Fig. 4B). These results suggest that FASN is degraded via the ubiquitin-proteasome pathway in liver cancer cells.
USP14 (ubiquitin-specific peptidase 14) has been reported to directly interact with and stabilize FASN [18]. Like USP14, the deubiquitinating enzyme UCHL5 (ubiquitin C-terminal hydrolase L5) is also associated with proteasome regulatory particles and plays important roles in proteasome modification [21-23]. Therefore, we speculated that UCHL5 could also regulate FASN, consistent with the ideas raised in the discussion by Tan et al. [18]. To explore whether UCHL5 acts as an upstream deubiquitinating enzyme for FASN in human liver cancer cells, we first analyzed the roles of UCHL5 in the regulation of FASN protein level in HepG2 and Huh7 cells. Knockdown of UCHL5 markedly inhibited the expression of FASN, while overexpression of UCHL5 had the opposite effect (Fig. 4C, D). Molecularly, Co-IP assays showed that endogenous UCHL5 interacted with FASN in liver cancer cells (Fig. 4E). In addition, silencing of UCHL5 increased the ubiquitination level of FASN, while overexpression of UCHL5 reduced it (Fig. 4F, G). It has been reported that UCHL5 specifically cleaves K48-linked polyubiquitin chains [24,25]. Next, to clarify whether UCHL5 could inhibit K48-linked ubiquitination of FASN, the ubiquitin mutant vectors K48 and K63, which contain arginine substitutions of all lysine residues except the one at position 48 or 63, respectively, were used in transfection assays. UCHL5 inhibited both K48- and K63-linked ubiquitination of FASN (Fig. 4H). Correspondingly, the protein stability of FASN was decreased upon silencing of UCHL5 (Fig. 4I). To investigate the clinical significance of UCHL5 and FASN, we first examined the mRNA level of UCHL5 in liver cancer via public databases and found that UCHL5 mRNA was highly expressed in liver cancer (Fig. 4J). The mRNA levels of UCHL5 and FASN showed a direct correlation in normal and liver cancer specimens (Fig. 4K). Furthermore, we collected clinical samples to determine the protein level of UCHL5 and found that UCHL5 protein was higher in liver cancer tissues than in normal liver tissues (Fig. 4L). Correspondingly, the protein levels of UCHL5 and FASN also showed a direct correlation in normal and liver cancer specimens (Fig. 4M). Prognostically, a high UCHL5 mRNA level was associated with poor prognosis in liver cancer patients (Fig. 4N). Functionally, we determined whether UCHL5 could regulate filopodia formation in liver cancer cells. UCHL5 facilitated filopodia formation in liver cancer cells by regulating FSCN1 (Fig. 5A-C), which was associated with FASN (Fig. 5D, E). Correspondingly, UCHL5 promoted cell invasion and migration through regulating FASN (Fig. 5F, G). These results suggest that UCHL5 promotes filopodia formation in liver cancer cells by stabilizing FASN.
Since UCHL5 exerts its role as a deubiquitinating enzyme through recruitment and activation by ADRM1 (adhesion regulating molecule 1), also known as Rpn13 [26,27], we further explored the regulation of FASN by ADRM1 in liver cancer and its mechanism of action. Briefly, overexpression of ADRM1 promoted the expression of UCHL5, as well as FASN and FSCN1, while inhibition of ADRM1 had the opposite effect (Fig. 6A, B). Co-IP assays revealed that endogenous ADRM1 interacted with UCHL5 and FASN in HepG2 and Huh7 cells (Fig. 6C). Furthermore, inhibition of ADRM1 increased the ubiquitination level of FASN, while overexpression of ADRM1 reduced it (Fig. 6D, E). Clinically, ADRM1 mRNA was highly expressed in liver cancer (Fig. 6F), and the mRNA level of ADRM1 correlated directly with those of both UCHL5 and FASN in normal and liver cancer specimens (Fig. 6G, H). At the protein level, ADRM1 was higher in liver cancer tissues than in normal liver tissues (Fig. 6I), and the ADRM1-UCHL5 and ADRM1-FASN protein levels showed direct correlations in normal and liver cancer specimens (Fig. 6J, K). Prognostically, a high ADRM1 mRNA level was associated with poor prognosis in liver cancer patients (Fig. 6L). These results suggest that ADRM1-UCHL5 promotes the deubiquitination of FASN and that their clinical expression levels are directly correlated.
The E3 ubiquitin ligase SIAH1 destabilizes ADRM1 by promoting its K6-linked polyubiquitination and proteasomal degradation in liver cancer cells

SIAH1 (Seven in absentia homolog 1), a ubiquitin ligase, has been reported to be involved in the cell cycle, apoptosis, DNA damage repair, and hypoxia stress [28-30]. Oncologically, studies have shown that SIAH1 is associated with liver cancer [31-33]. In a preliminary study, we performed TMT proteomics analysis in HepG2 cells to compare protein expression between the empty-vector and SIAH1-overexpressing conditions (with or without MG132) and found that ADRM1 was one of the differentially expressed proteins (Fig. 7A). Subsequent experiments showed that SIAH1 degrades ADRM1 through the UPS pathway but not the lysosomal pathway in HepG2 and Huh7 cells (Supplementary Fig. 2).
Furthermore, we analyzed the regulatory effect of SIAH1 on the ADRM1-UCHL5 pathway. Overexpression of SIAH1 decreased the expression of ADRM1 and UCHL5, while silencing of SIAH1 had the opposite effect (Fig. 7B, C). To explore the mechanism by which SIAH1 degrades ADRM1, we first determined the interaction between them in liver cancer cells. Co-IP assays showed that SIAH1 interacted with ADRM1, as well as with UCHL5, in HepG2 and Huh7 cells (Fig. 7D). It has been reported that the RING domain of SIAH1 is required for its ubiquitin ligase activity [34]. To investigate whether SIAH1 E3 ligase activity is required for SIAH1-induced ADRM1 protein degradation, we generated a SIAH1 RING mutant in which Cys44 in the RING domain was converted to serine (C44S-SIAH1); previous studies revealed that this mutant has no E3 ligase activity [35]. Compared with wild-type SIAH1 (WT-SIAH1), the SIAH1 RING mutant largely abolished the ability of SIAH1 to degrade ADRM1 (Fig. 7E) and largely abolished SIAH1-mediated ADRM1 ubiquitination (Fig. 7F). Furthermore, we confirmed the direct ubiquitination of ADRM1 by SIAH1 using recombinant proteins in an in vitro reaction (Fig. 7G). In addition, we observed that SIAH1-mediated polyubiquitination of ADRM1 was initiated by K6-linked chains in HepG2 and Huh7 cells, because ubiquitination of ADRM1 was almost undetectable when the K6R ubiquitin mutant was expressed (Fig. 7H). We also demonstrated that the protein stability of ADRM1 was decreased upon upregulation of SIAH1 (Fig. 7I).
Clinically, SIAH1 was found to be downregulated in liver cancer samples (Fig. 7J, K), and there was a direct negative correlation between SIAH1 and ADRM1 protein levels in normal and liver cancer specimens (Fig. 7L). These results suggest that K6-linked chains serve as the principal ubiquitin chain type for SIAH1-induced ADRM1 proteasomal degradation and that SIAH1 and ADRM1 show a direct negative correlation in clinical expression.
SIAH1 directly promotes K33-linked polyubiquitination and degradation of FASN, thereby inhibiting filopodia formation in liver cancer cells
The above results show that SIAH1 acts as a ubiquitin ligase toward ADRM1, which can affect the recruitment of UCHL5 and thereby regulate the protein stability of FASN. We therefore asked whether SIAH1 also directly regulates FASN through the ubiquitination pathway. To address this question, we analyzed the protein level of FASN in liver cancer cells when SIAH1 was overexpressed or silenced and found that overexpression of SIAH1 decreased the expression of FASN, while knockdown of SIAH1 increased it (Fig. 8A, B). Mechanistically, SIAH1 was demonstrated to interact with FASN in HepG2 and Huh7 cells (Fig. 8C). In addition, MG132, but not CHL, blocked the degradation of FASN in SIAH1-upregulated cells (Fig. 8D, E). The protein level of FASN was restored when the ubiquitin ligase activity of SIAH1 was defective (Fig. 8F). Moreover, compared with wild-type SIAH1, the ubiquitination level of FASN in HepG2 and Huh7 cells was significantly decreased when SIAH1 ubiquitin ligase activity was inactivated (Fig. 8G). More importantly, we found that SIAH1 directly promoted FASN ubiquitination in vitro (Fig. 8H). We further observed that SIAH1-mediated FASN polyubiquitination was initiated by K33-linked chains in HepG2 and Huh7 cells (Fig. 8I). Similarly, the protein stability of FASN was decreased in SIAH1-upregulated cells (Fig. 8J). In terms of clinical protein expression, the levels of SIAH1 and FASN exhibited a direct negative correlation in normal and liver cancer specimens (Fig. 8K).
To confirm whether SIAH1 affects filopodia formation in liver cancer cells by regulating FASN, we examined the regulation of FSCN1 by SIAH1. Overexpression of SIAH1 significantly decreased the expression of FSCN1 in HepG2 and Huh7 cells, while silencing of SIAH1 increased it (Fig. 9A, B). Morphologically, overexpression of SIAH1 inhibited filopodia formation in Huh7 cells, as well as cell invasion and migration (Fig. 9C-E and Supplementary Fig. 3A). We further performed rescue experiments to confirm the above findings by overexpressing 3×Flag-FASN in SIAH1-upregulated liver cancer cells. Overexpression of FASN effectively rescued the FSCN1 expression reduced by overexpression of SIAH1 (Fig. 9F) and restored filopodia formation and movement in Huh7 cells (Fig. 9G, H and Supplementary Fig. 3B). These results suggest that SIAH1 regulates filopodia formation in human liver cancer cells by mediating K33-linked polyubiquitination and proteasomal degradation of FASN.
To explore the association between SIAH1 and the growth and metastasis of liver cancer in vivo, Huh7 cells overexpressing SIAH1 were subcutaneously injected into Balb/c nude mice (Fig. 9I). As expected, tumor growth in mice was significantly inhibited when SIAH1 was overexpressed (Fig. 9J, K). The expression of the relevant proteins was detected by western blotting (Fig. 9L). Furthermore, an orthotopic model of liver cancer was established using Hep1-6 cells overexpressing SIAH1. Images of the liver and H&E staining showed that overexpression of SIAH1 inhibited tumor growth and intrahepatic metastasis, and immunohistochemistry was used to analyze the expression of SIAH1, ADRM1, UCHL5, FASN, and FSCN1 (Fig. 9M). Collectively, these results reveal that SIAH1 acts as a tumor suppressor in mice, and loss of SIAH1 may be an important event during the development and progression of liver cancer.
DISCUSSION
Multiple studies have confirmed the important roles of ubiquitination and deubiquitination in the occurrence and development of tumors, through processes such as proteasomal degradation, selective autophagy, cell signaling regulation, endocytosis and receptor trafficking, the DNA damage response, cell cycle control, and programmed cell death [36-39]. In this study, we found that SIAH1 is expressed at low levels in liver cancer. On the one hand, low expression of SIAH1 allows FASN to undergo deubiquitination and escape proteasomal degradation through the ADRM1-UCHL5 complex. On the other hand, loss of SIAH1 directly weakens the ubiquitination of FASN, leading to FASN protein accumulation. These findings not only reveal a novel mechanism of SIAH1-mediated liver cancer occurrence and progression but also provide a theoretical basis for the treatment of liver cancer.
Tumor cell migration begins with the formation of protrusions, in which filopodia play a leading role [40]. Although studies have shown that the formation of filopodia plays an important role in the migration of liver cancer cells [4,5], the effect and mechanism of filopodia in liver cancer still require further exploration. Recent studies revealed that FASN directly regulates FSCN1, a cytoskeletal protein that participates in the formation of filopodia, lamellipodia, and microspikes, thus promoting the migration and invasion of liver cancer cells [12,13]. FASN is a key enzyme required for the synthesis of fatty acids and some biologically important lipid precursors, thereby regulating metabolism, cell survival and proliferation, DNA replication and transcription, and protein degradation by catalyzing the generation of endogenous fatty acids and interacting with various cancer control networks [6-8]. It has been shown that FASN can promote the metastasis of liver cancer cells, and its overexpression is closely related to clinical invasiveness and poor prognosis [10,11], which we also observed in this study in both humans and mice. Although studies on FASN mostly focus on its regulation of lipid metabolism, its promotion of liver cancer metastasis is not entirely dependent on it [9,13], and targeting FASN as a monotherapy has shown limited efficacy [41]. Therefore, directly exploring the mechanisms by which FASN regulates cell migration, such as filopodia formation, may provide new insights. In our study, we confirmed that FASN positively regulates FSCN1 and determined that FASN promotes filopodia formation in human liver cancer cells by regulating FSCN1. In addition, we found that small GTPases such as CDC42, RAC1, and RHOA were also regulated by FASN. These results suggest that FASN promotes filopodia formation in liver cancer cells by regulating multiple downstream targets. Interestingly, when investigating the relationship between FASN-regulated filopodia formation and the invasion and migration of liver cancer cells, we observed that inhibiting filopodia formation did not completely impede cell motility. Additionally, we demonstrated that FASN affects MMP9 expression but does not exert a significant influence on epithelial-mesenchymal transition (EMT). This evidence suggests that FASN can impact the metastasis of liver cancer through various pathways. Based on these findings, it is reasonable to further explore the regulatory mechanism underlying the high expression of FASN in liver cancer.
Multiple reports have shown that FASN can be regulated by the UPS [18-20], which we also observed in our results. Therefore, this study focused on identifying UPS-related enzymes that regulate FASN. Studies have shown that USP14 is a specific upstream DUB of FASN and speculated that UCHL5 may have the same effect [18]. UCHL5 is a member of the UCH family and can function as a DUB in combination with the proteasome. The C-terminal deubiquitinating-enzyme adapter of ADRM1 binds to and activates UCHL5, which then reverses the ubiquitination of some key substrates and maintains their protein stability [26,27]. In this study, we determined that ADRM1-activated UCHL5 is upregulated in liver cancer and stabilizes FASN by deubiquitinating it, thereby promoting the expression of FSCN1, filopodia formation, and cell movement in human liver cancer cells. Correspondingly, as the specific upstream activator of UCHL5, ADRM1 plays a similar role.
SIAH1 is a highly conserved E3 ubiquitin ligase that can regulate transcription factors, neurotransmitters, hypoxia-inducible factors, and other substrates, thereby affecting cellular activities such as the cell cycle, apoptosis, DNA damage repair, and hypoxia stress [28-30]. Studies have shown that SIAH1 plays an important role in the occurrence and development of liver cancer [31-33]. In this study, we found that SIAH1 inhibits the expression of UCHL5 by ubiquitinating and degrading ADRM1. We therefore speculated that SIAH1 may also be involved in the regulation of the FASN-FSCN1 pathway and filopodia formation in human liver cancer cells. To test this idea, we examined the regulation of the FASN-FSCN1 pathway and filopodia formation in liver cancer cells by manipulating SIAH1. The results showed that SIAH1 decreases the expression of FSCN1 by directly ubiquitinating FASN, thereby inhibiting filopodia formation as well as the invasion and migration of human liver cancer cells.
Polyubiquitination is mediated through seven lysines: K6, K11, K27, K29, K33, K48, and K63. While K48- and K63-linked chains are broadly covered in the literature, the other chain types assembled through K6, K11, K27, K29, and K33 residues deserve equal attention considering the latest discoveries [42]. In this study, we redefined the role of some of these lysine residues in substrate degradation in human liver cancer cells. We found that SIAH1-mediated ADRM1 polyubiquitination was initiated by K6-linked chains, while that of FASN was initiated by K33-linked chains. This could provide a basis for the study of non-canonical protein ubiquitination.
In conclusion, the current study identified that FASN is upregulated in liver cancer and promotes filopodia formation and metastasis of liver cancer cells by regulating FSCN1 and other pathways. Molecularly, we identified that the upregulation of FASN is caused in part by increased activity of the deubiquitinating enzyme UCHL5. In this pathway, low expression of SIAH1 decreases the ubiquitination and degradation of ADRM1, thus increasing its protein level; ADRM1 further recruits and activates the deubiquitinating enzyme UCHL5, ultimately allowing FASN to undergo deubiquitination and escape proteasomal degradation. Additionally, we observed that the accumulation of FASN is also related to its reduced ubiquitination, where SIAH1 acts as a ubiquitin ligase toward FASN, and low expression of SIAH1 reduces the ubiquitination and degradation of FASN. Both pathways participate in the regulation of FASN in liver cancer (Fig. 10). Importantly, the correlations within the SIAH1-FASN-FSCN1 axis were also verified in human clinical tissues and mouse models. Our study has demonstrated the oncogenic effect and regulatory mechanism of FASN in liver cancer, which will provide new insights for further study of the molecular mechanisms of liver cancer metastasis and for molecular targeted therapy of liver cancer.
METHODS

Tissues
Patients who were pathologically diagnosed with liver cancer after surgery at the Affiliated Hospital of Xuzhou Medical University were included. Normal liver specimens were obtained from patients undergoing partial hepatectomy to treat liver rupture due to trauma. All tissue samples were immediately frozen in liquid nitrogen and stored at -80 °C.
Animal survival model
All animal experimental protocols were approved by the Animal Care and Use Committee of Xuzhou Medical University. Animal maintenance was in accordance with the standard guidelines of the Animal Experiment Center of Xuzhou Medical University, and the protocols were performed in accordance with the Guide for the Care and Use of Laboratory Animals published by the National Institutes of Health. Huh7 cells (1 × 10^6 cells) were resuspended in 100 μL PBS and injected subcutaneously into 4-week-old Balb/c nude mice to establish the mouse xenograft model (6 male mice/group). Hep1-6 cells (10^6 cells in 20 μL) were injected beneath the liver capsule of 8-week-old C57BL/6J mice to establish the orthotopic model of liver cancer (3 male mice/group). Animals were randomly assigned to each group.
Western blotting (WB) analysis
WB was performed as previously described in our published articles [43,44]. Cells were lysed in RIPA buffer supplemented with a protease inhibitor cocktail and centrifuged at 12,000×g at 4 °C for 10 min; equal amounts of protein were subjected to 8% SDS-PAGE and then transferred onto a 0.45-μm pore size PVDF membrane. After blocking with 3% bovine serum albumin, the membrane was incubated overnight at 4 °C with the primary antibodies (against FASN, FSCN1, UCHL5, ADRM1, SIAH1, HA, Flag, Myc, His, Ub, GAPDH, etc.) and then with secondary antibodies for 1 h at room temperature. After washing with 1×TBST, ECL Plus western blotting substrate was used to detect the protein bands, and a chemiluminescence detection system was used for visualization. ImageJ 1.8.0 was used to quantify band density. Relative protein levels were determined by normalizing the optical density values of the target protein to those of the loading control. The antibodies used are listed in Supplementary Table 1.
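Since the relative protein levels reported here are ratios of band densities, the normalization step can be summarized in a few lines. A minimal sketch follows; all density values are invented for illustration and are not measurements from this study.

```python
# Divide each target-band density by its matched loading-control density,
# then scale so that the control condition equals 1.
def relative_levels(target: list[float], loading: list[float],
                    control_index: int = 0) -> list[float]:
    ratios = [t / l for t, l in zip(target, loading)]
    return [r / ratios[control_index] for r in ratios]

# Hypothetical ImageJ densities for, e.g., FASN and GAPDH across three lanes
fasn = [1500.0, 900.0, 2100.0]    # shControl, shFASN, FASN-OE (invented)
gapdh = [1400.0, 1450.0, 1380.0]
print([round(x, 2) for x in relative_levels(fasn, gapdh)])
```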
Cell culture
The 293T cell line and the liver cancer cell lines HepG2 and Huh7 were provided by the Stem Cell Bank, Chinese Academy of Sciences (Shanghai, China), and were cultured at 37 °C and 5% CO2 in Minimum Essential Medium (MEM) or Dulbecco's Modified Eagle Medium (DMEM) (Yuanpei, Shanghai, China) supplemented with 10% fetal bovine serum (FBS; Gibco, Shanghai, China). All cell lines were authenticated by STR profiling.
Lentivirus construction and transfection
To produce the lentiviruses, 293T cells were cotransfected with the corresponding plasmids (shControl, shFASN, shUCHL5, or shSIAH1) and helper plasmids (psPAX2 and pMD2.G) using Hieff Trans™ Liposomal Transfection Reagent (Yeasen, Shanghai, China). After 72 h, the lentiviruses were collected and used to infect HepG2 and Huh7 cells. Forty-eight hours after infection, the cells were cultured continuously in medium containing 2.5 μg/mL puromycin (Beyotime). The surviving cells were expanded into cell lines stably expressing shControl, shFASN, shUCHL5, or shSIAH1. The primer pairs used are shown in Supplementary Table 2.
Filopodia localization assay
Cell climbing slides in a 24-well plate were coated with FITC-gelatin (Biovision, San Francisco, USA) for 30 min and fixed with 100 μL precooled glutaraldehyde (0.5%) for 30 min. After washing with PBS, the slides were incubated with 1 mL precooled sodium borohydride (5 mg/mL) and then washed with PBS again. After disinfection with 70% alcohol for 30 min, aldehyde quenching with serum-free medium was performed at 37 °C for 1 h. Subsequently, the cells were seeded on the 24-well plate (5 × 10^4 cells/well) and cultured at 37 °C. Five hours later, the cells were fixed with 4% paraformaldehyde, permeabilized with 0.5% Triton X-100, and blocked with 5% non-fat milk. The cells were incubated with primary antibodies at 4 °C overnight and with fluorescent secondary antibodies at 25 °C for 1 h in the dark. They were then incubated with phalloidin (Yeasen) at 25 °C for 30 min in the dark. Finally, an anti-quenching mounting agent was used to seal the slides, and imaging was performed using a fluorescence microscope (IX71; Olympus, Tokyo, Japan).
Co-immunoprecipitation (Co-IP) assay
HepG2 and Huh7 cells were lysed with ice-cold IP buffer (1% Triton X-100, 150 mM NaCl, 20 mM HEPES, 2 mM EDTA, 5 mM MgCl2, pH 7.4). The cell lysates containing the proteins were conjugated to the beads after overnight incubation with the indicated antibodies. Subsequently, the beads were eluted and subjected to WB assays using the indicated primary and corresponding secondary antibodies.
Ubiquitination assay in vivo
HepG2 and Huh7 cells were transfected with the desired plasmids and then lysed with ice-cold IP buffer. Similarly, the cell lysates containing the proteins were conjugated to the beads after overnight incubation with the indicated antibodies. The beads were eluted and subjected to WB assays using anti-HA/Ub antibodies and the corresponding secondary antibodies.
Ubiquitination assay in vitro
The in vitro ubiquitination assay was performed using a kit (Bio-Techne, Minnesota, USA) according to the manufacturer's instructions. A 20 μL reaction mixture was prepared in a 1.5-mL polypropylene tube using the following volumes: 2 μL 10× Reaction Buffer, 2 μL 10× Ubiquitin, 1 μL 20× E1 Enzyme, 2 μL 10× E2 conjugating enzyme (Bio-Techne), 4 μL (containing 2 μg) E3 ligase enzyme (LSBio, Shanghai, China), 4 μL (containing 2 μg) substrate protein (LSBio), and 2 μL 10× Mg2+-ATP solution; 3 μL ddH2O was added to bring the total reaction volume to 20 μL. A reaction without E3 was used as a negative control. After gentle mixing, the tubes were incubated for 1.5 h in a 37 °C water bath. Reactions were terminated with DTT (10 mM, Bio-Techne) and then analyzed via SDS-PAGE. WB with an anti-substrate antibody was used to determine conjugate formation, which may appear as either a high-molecular-weight smear or a discrete banding pattern.
Quantitative TMT-based proteomic analysis
Quantitative TMT-based proteomic analysis was performed by Frasergen Bioinformatics Co. Ltd (Wuhan, Hubei, China). Total protein was extracted from HepG2 cells (Vector, SIAH1, SIAH1 + DMSO, and SIAH1 + MG132). Each protein sample (100 μg) was denatured in 8 mol/L urea in 50 mmol/L NH4HCO3 (pH 7.4) and alkylated with 10 mmol/L iodoacetamide for 1 h at 37 °C. Each sample was then diluted tenfold with 25 mmol/L NH4HCO3 and digested with trypsin at a ratio of 1:100 (trypsin/substrate) for 6 h at 37 °C. A 25 μg aliquot of digested peptides from each sample was subjected to eight-plex TMT labeling according to the manufacturer's instructions. Peptides from each TMT experiment were subjected to capillary liquid chromatography-tandem mass spectrometry (LC-MS/MS) using a Q Exactive Hybrid Quadrupole-Orbitrap Mass Spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). Quantitative analysis was conducted by calculating the ratios between the experimental and control groups. The TMT experiment was repeated three times. Changes were considered significant if the fold change increased or decreased by more than 1.5-fold and the P value was <0.05. The original mass spectrometry data were searched against the database using Mascot 2.2 and Proteome Discoverer 1.4 (Thermo Fisher Scientific, Waltham, MA, USA).
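The significance filter above (fold change >1.5 with P < 0.05) can be made concrete with a minimal sketch; the protein ratios and P values below are invented for illustration, not data from this study.

```python
# Keep proteins whose experimental/control ratio changes by more than
# 1.5-fold in either direction with P < 0.05.
def differential(proteins: dict[str, tuple[float, float]],
                 fold: float = 1.5, alpha: float = 0.05) -> dict[str, float]:
    """proteins maps a name to (ratio experimental/control, P value)."""
    return {name: ratio for name, (ratio, p) in proteins.items()
            if p < alpha and (ratio > fold or ratio < 1.0 / fold)}

tmt = {"ADRM1": (0.55, 0.01),   # down upon SIAH1 overexpression (hypothetical)
       "GAPDH": (1.02, 0.80),   # unchanged
       "UCHL5": (0.62, 0.03)}
print(differential(tmt))
```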
Establishment of the mouse xenograft model
The animal studies were conducted in accordance with the Institutional Animal Care and Use Committee of Xuzhou Medical University. Six male nude mice (Vital River Laboratory Animal Technology, Beijing, China) per group, with no significant difference in body weight (4 weeks of age), were reared in a sterile environment for 1 week. After skin disinfection, 100 μL of Huh7 cell suspension (10^6/mL) was injected into the dorsal side of the right hind limb. The mice were then returned to their cages and housed under the same conditions. Tumor length and width were measured every other day using Vernier calipers, and tumor volume was calculated as length × width^2 / 2. After 6 weeks, all mice were euthanized, their subcutaneous tumors were removed, and all tumors were used for protein extraction.
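The caliper-based volume formula used above is simple enough to state as code; the measurements in the sketch are invented for illustration.

```python
# Tumor volume = length x width^2 / 2 (lengths in mm give volume in mm^3).
def tumor_volume(length_mm: float, width_mm: float) -> float:
    return length_mm * width_mm ** 2 / 2.0

measurements = [(8.0, 6.0), (10.5, 7.2), (12.0, 9.1)]  # (length, width) over time
for length, width in measurements:
    print(f"{length} x {width} mm -> {tumor_volume(length, width):.1f} mm^3")
```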
Survival model of nude mice
Twenty-five male nude mice per group, with no significant difference in body weight (4 weeks of age), were reared in a sterile environment for 1 week. After tail disinfection, 100 μL of Huh7 cell suspension (10^6/mL) was injected into the caudal vein. The mice were then returned to their cages and housed under the same conditions until they died of natural causes. The time of death was recorded, and survival curves were drawn.
Establishment of the orthotopic model of liver cancer
The animal studies were conducted in accordance with the Institutional Animal Care and Use Committee of Xuzhou Medical University. Three male C57BL/6J mice (Vital River Laboratory Animal Technology, Beijing, China) per group, with no significant difference in body weight (7 weeks of age), were reared in a sterile environment for 1 week. After a 12-h fast, the abdomen was opened under conventional anesthesia to expose the left lobe of the liver. Hep1-6 cells (10^6 in 20 μL) were then injected beneath the liver capsule. Gentle pressure was applied to the puncture site with a cotton swab to prevent leakage. After closure of the abdominal cavity, the mice were kept at 37 °C for 2 h before regular feeding commenced. Two weeks later, the opened liver was photographed and subsequently subjected to follow-up analysis using H&E staining and immunohistochemistry.
Transwell migration and invasion assays
Transwell migration and invasion assays were carried out as previously described, with adjustments [43,45,46]. Briefly, transwell chambers (24-well, 8.0-μm pore membranes, NY) were used in the migration assay. In total, 1 × 10^4 cells/well were seeded in the upper chamber in 200 μL of serum-free medium, and 600 μL of complete medium was added to the lower chamber as a chemoattractant. After incubation for 48 h (HepG2) or 24 h (Huh7) at 37 °C, cells that had passed through the upper chamber membrane to the lower surface of the membrane were counted as migrated cells. After fixation with 4% paraformaldehyde for 30 min and staining with 0.3% crystal violet for 30 min, the migrated cells were photographed with an inverted microscope.
The transwell invasion assay was conducted as described above, except that 100 μL of 1× Matrigel (BD, Shanghai, China; diluted in phosphate-buffered saline [PBS] on ice) was added to the upper compartment. Three random fields of view in each chamber were selected for counting. The control group was set to 1 for statistical purposes.
Statistical analysis
Data represent the results of experiments repeated at least three times, and all quantitative data are expressed as the mean ± SD. Statistical analysis was performed using GraphPad Prism (v.10.0; GraphPad Software, USA). Student's t-tests were used to compare samples meeting the assumptions of normality, homogeneity of variance, and independence. Nonparametric tests were used to analyze measurement or count data that did not meet these requirements. P < 0.05 was considered statistically significant.
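The two-sample comparison described above can be reproduced outside GraphPad; a minimal sketch using scipy follows, with all values invented for illustration only.

```python
# Compare two groups with Student's t-test (assumes normality and equal
# variances) or a nonparametric alternative when those assumptions fail.
from scipy import stats

sh_control = [1.00, 0.95, 1.08]   # relative FASN level, 3 repeats (invented)
sh_fasn = [0.42, 0.51, 0.38]

t, p = stats.ttest_ind(sh_control, sh_fasn, equal_var=True)
print(f"t = {t:.2f}, P = {p:.4f}")  # P < 0.05 -> significant

u, p_mw = stats.mannwhitneyu(sh_control, sh_fasn)  # nonparametric fallback
print(f"Mann-Whitney U = {u}, P = {p_mw:.4f}")
```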
Fig. 1 FASN is upregulated in liver cancer and associated with poor prognosis. A, B FASN mRNA levels in human liver tumor and normal liver tissues in the TCGA (T, n = 374; N, n = 50) and GEPIA (T, n = 369; N, n = 50) databases. C Correlation between FASN mRNA level and overall survival in the TCGA database (high, n = 192; low, n = 173). D Correlation between FASN mRNA level and disease-free survival in the GEPIA database (high, n = 236; low, n = 236). E Protein levels of FASN in human liver tumor (n = 14) and normal liver (n = 14) tissues. The protein expression levels of target genes were normalized to those of GAPDH (loading control). F Representative blots and quantification of FASN expression in human liver cancer cells. G Correlation between FASN expression level and mouse overall survival (shControl, n = 25; shFASN, n = 25). *P < 0.05, **P < 0.01.
Fig. 2 FASN promotes filopodia formation in liver cancer cells by regulating FSCN1. A Representative images and quantification of the filopodia localization assay in HepG2 and Huh7 cells silencing FASN. Scale bar, 12.5 µm. B Representative images and quantification of the filopodia localization assay in HepG2 and Huh7 cells overexpressing FASN. Scale bar, 12.5 µm. C, D Representative blots and quantification of FSCN1 expression in HepG2 and Huh7 cells silencing or overexpressing FASN. E Representative blots of CDC42, RAC1 and RHOA expression in HepG2 and Huh7 cells silencing or overexpressing FASN. F Quantification of cell invasion and migration while silencing or overexpressing FASN. G Representative images and quantification of the filopodia localization assay of shControl, shFASN#2, and shFASN#2+Myc-FSCN1 groups in Huh7 cells. Scale bar, 12.5 µm. H Quantification of Huh7 cell invasion and migration. *P < 0.05, **P < 0.01, ***P < 0.001.
Fig. 3 FASN promotes invasion and migration of liver cancer cells and metastasis in vivo. A Representative images and quantification of the filopodia localization assay in Huh7 cells treated with BDP-13176 (10 μM for 24 h). Scale bar, 12.5 µm. B Quantification of cell invasion and migration of Huh7 cells treated with BDP-13176 (10 μM for 24 h). Scale bar, 200 µm. C, D Representative blots of E-cadherin, N-cadherin, MMP2 and MMP9 expression in HepG2 and Huh7 cells silencing or overexpressing FASN. E Mouse tumor tissues isolated from tumors initiated with cells infected with shControl or shFASN vectors, and representative blots of FSCN1 and FASN levels in mouse tumor tissues. F Mouse tumor mass. G Growth curve obtained by measuring mouse tumor size on the indicated days. H Representative blots of FASN expression in FASN-knockout Hep1-6 cells. I Image of livers; representative images of tissues stained with hematoxylin and eosin. Scale bar, 500 μm; representative IHC images of FASN and FSCN1 in tumor tissues. Scale bar, 50 μm. *P < 0.05, **P < 0.01, ***P < 0.001.
Fig. 9 SIAH1 inhibits filopodia formation in liver cancer cells by regulating the FASN-FSCN1 pathway. A, B Representative blots and quantification of FSCN1 expression in liver cancer cells overexpressing or silencing SIAH1. C, D Representative images of the filopodia localization assay in Huh7 cells overexpressing SIAH1. Scale bar, 12.5 µm. E Quantification of Huh7 cell invasion and migration while overexpressing SIAH1. F Representative blots and quantification showing that overexpression of FASN could block the degradation of FSCN1 in SIAH1-upregulated cells. G Representative images of the filopodia localization assay of Vector, His-SIAH1, and His-SIAH1 + 3×Flag-FASN groups in Huh7 cells. Scale bar, 12.5 µm. H Quantification of Huh7 cell invasion and migration. I Mouse tumor tissues isolated from tumors initiated with cells infected with Control or SIAH1 OE (SIAH1 overexpression) vectors. J Tumor mass. K Growth curve obtained by measuring tumor size on the indicated days. L Representative blots of protein levels in tumor tissues. M Image of livers; representative images of tissues stained with hematoxylin and eosin. Scale bar, 500 μm; representative IHC images of SIAH1, ADRM1, UCHL5, FASN, and FSCN1 in tumor tissues. Scale bar, 50 μm. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.
Fig. 10 Schematic illustration of this study.
Table 1. Clinicopathological correlation of FASN expression in human liver cancer.
The influence of aortic stiffness on carotid stiffness: computational simulations using a human aorta-carotid model
Increased aortic and carotid stiffness are independent predictors of adverse cardiovascular events. Arterial stiffness is not uniform across the arterial tree and its accurate assessment is challenging. The complex interactions between aortic and carotid stiffness, and the influence of the former on the latter, have not been investigated. The aim of this study was to evaluate the effect of aortic stiffness on carotid stiffness under physiological pressure conditions. A realistic patient-specific geometry was used, based on magnetic resonance images obtained from the OsiriX library. The luminal aorta-carotid model was reconstructed from the magnetic resonance images using 3D Slicer. A series of aortic stiffness simulations was performed at different regional aortic areas (levels). By applying a variable Young's modulus to the aortic wall under two pulse pressure conditions, we examined the deformation, compliance and von Mises stress of the aorta and carotid arteries. An increase of Young's modulus in an aortic area resulted in a notable difference in the mechanical properties of the aortic tree. Regional changes in deformation, compliance and von Mises stress across the aorta and carotid arteries were noted with an increase of the aortic Young's modulus. Our results indicate that increased carotid stiffness may be associated with increased aortic stiffness. Large-scale clinical validation is warranted to examine the influence of aortic stiffness on carotid stiffness.
Introduction
Carotid stiffness has been associated with cerebrovascular disease, cognitive impairment and, more recently, incident depressive symptoms [1][2][3]. Ageing is associated with generalized stiffening of the larger arterial vessels, and the degree of arterial stiffness has been shown to be a predictor of future cardiovascular events and all-cause mortality, independent of traditional risk factors [1].
The standard interpretation of the relationship between vascular stiffness and blood pressure is that increased stiffness raises blood pressure (BP). Pulse pressure (PP) increases pulsatile aortic wall stress, advancing elastic fibre degeneration [4,5]. Importantly, several studies have demonstrated that increased local carotid and aortic stiffness in normotensive individuals is associated with an increased risk of incident higher BP and progressive development of hypertensive BP over time [4,6,7]. Additionally, higher carotid-femoral pulse wave velocity in adolescents has been associated with obesity and hypertension later in life, suggesting a bidirectional relationship between hypertension and arterial stiffness [8,9].
The stiffness of the arterial wall is nonuniform along the arterial tree. Oxygenated blood is propagated through the elastic aorta toward stiffer muscular peripheral arteries, creating an impedance gradient ascending progressively from the heart to the peripheral arteries [10]. In the healthy and younger arterial system, the ascending impedance creates a wave reflection, reducing the distal energy entering the microcirculation [10,11]. With large vessel stiffening, the microvascular structures may become sensitive to PP and mean arterial pressure (MAP), resulting in either pathological end-organ cardiac hypertrophy or increased peripheral vascular resistance if MAP is elevated beyond normal healthy physiology [10].
In diabetic and hypertensive patients, the aorta stiffens significantly more with age than the carotid artery [12,13]. In recent years, new methods and techniques have been developed that allow the examination of aortic and local stiffness. However, these methods provide only a limited evaluation of the arterial stiffness in each segment of the arterial tree. For example, pulse wave velocity (PWV), the gold standard for the measurement of central aortic stiffness, provides only an estimate of aortic stiffness, as PWV represents a sum of the biomechanical and mechanical properties of the different vascular walls located between the two measurement points of reference [14]. The relationship between aortic and carotid stiffness also remains undetermined, despite emerging interest in their role in the pathogenetic mechanisms of cardiovascular diseases. Computer simulations changing aortic stress in real time for a case study have not been undertaken to understand biomechanical changes in the carotid artery wall. This study aims to address these gaps by evaluating the relationship between aortic and local carotid stiffness using software simulation models derived from actual patient imaging, with the ability to analyse this complex behaviour under physiological conditions.
Modelling of aorta and carotid artery
This study examined a realistic patient-specific vascular geometry of the aorta and carotid arteries, adapted from a case in the OsiriX library (https://www.osirix-viewer.com/resources/dicom-image-library/) for which magnetic resonance imaging (MRI) was available. Using 3D Slicer, a realistic geometry was reconstructed in the systolic phase of the cardiac cycle [15]. The lumen boundary of the vascular regions of interest was then extracted and reconstructed into a virtual vascular geometry, which included a patient-specific aorta and carotid arteries. The outcome was a vascular system for use in the following computational simulations (figure 1). From published clinical studies, we defined arterial wall thicknesses for the aorta, brachiocephalic artery, carotid arteries and subclavian arteries as 2.6, 1.5, 1.0 and 1.0 mm, respectively [16][17][18].
Simulation settings
The geometry was discretized with three-dimensional hexahedral elements to create a computational mesh with 36 158 elements in total [19]. This mesh was then re-examined and found to remain robust across the variable simulation parameters.
Firstly, a two-step series of simulations was performed across different Young's modulus set points for the entire aorta and carotid arteries, to investigate the effect of aortic stiffness on the carotid arteries, as shown in tables 1 and 2. To achieve this, a realistic pressure wave was first applied to the inner wall of the entire geometry, with fixed supports defined at all openings of the model. A value of 40 mmHg represents the pulse pressure of a healthy individual [21], and an increase of as little as 10 mmHg can raise cardiovascular risk by as much as 20% [20]. Two levels of the pressure wave (pulse pressure) were examined for each step of the simulation: PP1, indicating a normal pulse pressure of 5994 Pa (42 mmHg); and PP2, indicating an increased PP of 7639 Pa (57 mmHg). Linear transient simulations of a single-layered aorta-carotid model were then performed in ANSYS Workbench 2020 R2 (ANSYS, USA).
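As a small worked example, the quoted pulse pressures can be related to mmHg with the standard conversion 1 mmHg ≈ 133.322 Pa. The sinusoidal waveform and the diastolic baseline below are illustrative stand-ins for the realistic pressure wave actually applied in ANSYS.

```python
# Unit bookkeeping for the two pulse-pressure boundary conditions and
# an illustrative single-cycle pressure waveform.
import numpy as np

MMHG_TO_PA = 133.322
pp1 = 5994.0   # PP1 from the study (Pa)
pp2 = 7639.0   # PP2 from the study (Pa)
print(f"PP1 ≈ {pp1 / MMHG_TO_PA:.0f} mmHg, PP2 ≈ {pp2 / MMHG_TO_PA:.0f} mmHg")

t = np.linspace(0.0, 0.8, 200)        # one 0.8-s cardiac cycle (assumed)
p_dia = 80 * MMHG_TO_PA               # assumed diastolic baseline
wave = p_dia + 0.5 * pp1 * (1 - np.cos(2 * np.pi * t / 0.8))
```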
Structural analysis parameters
A variety of mechanical properties were analysed to examine the effect of aortic stiffness on carotid arteries.
Deformation-The deformation of the arterial wall was calculated with the following formula: U = √(Ux² + Uy² + Uz²), where U is the total deformation and Ux, Uy and Uz are its three component deformations.
Compliance-The classic definition of arterial compliance is the blood volume change relative to the distending pressure. However, the direct measurement of aortic compliance is challenging, as there are no simple clinical methods to estimate the local changes in blood volume.
To consider the compliance of the whole aortic system, the systolic-to-diastolic volume change (ΔV) for the entire geometry and for each separate section is calculated over the pulse pressure (ΔP): compliance = ΔV/ΔP [22].
Von Mises stress-The von Mises stress is used to predict the yielding of materials under complex loading from the results of uniaxial tensile tests: J₂ = K², where K is the yield stress of the material in pure shear.
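These quantities would normally be extracted from the solver output; the NumPy sketch below simply encodes the three definitions, with the von Mises value written in its standard principal-stress form (consistent with, though not spelled out in, the text).

```python
# Post-processing helpers for the three structural metrics.
import numpy as np

def total_deformation(ux, uy, uz):
    """U = sqrt(Ux^2 + Uy^2 + Uz^2), evaluated per node."""
    return np.sqrt(ux**2 + uy**2 + uz**2)

def compliance(v_sys, v_dia, pulse_pressure):
    """Compliance = dV / dP for the whole geometry or one section."""
    return (v_sys - v_dia) / pulse_pressure

def von_mises(s1, s2, s3):
    """Equivalent stress from the three principal stresses (Pa)."""
    return np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

# Example with made-up values: displacements (m), volumes (m^3), Pa.
u = total_deformation(1e-3, 2e-3, 2e-3)
c = compliance(v_sys=2.05e-4, v_dia=2.00e-4, pulse_pressure=5994.0)
s = von_mises(2.0e5, 0.5e5, 0.1e5)
print(u, c, s)
```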
Results
Clinically, it is common to refer to a patient's 'stiff arteries'. In simplified terms, stiffness is a general term describing the vessel's resistance to deformation. However, defining the stiffness of arterial blood vessels can also be challenging, as no single number or index can describe the complex mechanical behaviour of the vessel.
Variable stiffness for the entire aorta with fixed carotid compliance
Firstly, we developed a series of aorta and carotid artery simulations to examine the increasing effect of aortic stiffness on normal carotid arteries. Five different values of Young's modulus for the whole aorta were applied (0.8-1.8 × 10⁶ Pa), while the Young's modulus for both carotid arteries remained constant (1.0 × 10⁶ Pa).
We examined the aortic and carotid deformation, compliance and von Mises stress under a standard PP of 5994 Pa, followed by an increased PP of 7639 Pa, simulating normal and elevated PP in a physiological environment.
Figure 2 illustrates the deformation of the aorta and carotid artery for the maximum and minimum values of the Eaorta modulus set in our simulation under the two different PPs applied.
With the increase of Young's modulus, the deformation of the aorta and carotid arteries in our model decreased during the entire cardiac cycle. Specifically, we demonstrate that the less stiff aorta (Eaorta = 0.8 × 10⁶ Pa) experiences a maximal deformation of 9.14 × 10⁻² m, compared to the stiffer aorta (Eaorta = 1.6 × 10⁶ Pa), with a maximum deformation of 7.23 × 10⁻³ m.
We further investigated the deformation of the aorta and carotid arteries under increased PP. Similarly, as Young's modulus increases, the deformation of the aorta and carotid arteries in our simulation series decreases during the cardiac cycle. In particular, the aortic simulation with the lower value of Eaorta = 0.8 × 10⁶ Pa demonstrated a maximum deformation of 8.91 × 10⁻² m, and that with Eaorta = 1.6 × 10⁶ Pa demonstrated a maximum deformation of 7.23 × 10⁻³ m.
The distribution of maximum deformation across the aortic tree is presented in the electronic supplementary material (figure S1).
Compliance
A graphic illustration of the compliance of the different sections of the model affected by Young's modulus under the two pulse pressure conditions for the aorta is presented in the electronic supplementary material (figure S2). Across the five cases under both pressure conditions, aortic and carotid compliance decreases with the increase of the Young's modulus.
Figure 3 shows a marked loss of compliance between Case 1 and Case 5 of 56% and 52% under the two pulse pressure settings, respectively. This was most evident at the increased applied PP (7369 Pa), where the compliance of the aorta and of the combined aorta and carotid geometry declined by more than 50% from Case 1 to Case 5 (53% and 57%, respectively). Above we considered pulse pressure. We next applied a variable Young's modulus to the aorta across cases. This enabled us to explore whether a fixed aortic Young's modulus can affect either the left or right carotid arteries. We found a similar decrease in the local aortic compliance with an increase in the aortic Young's modulus. In both pressure settings, the left carotid artery was less compliant than the right carotid artery, with percentage decreases of 12% and 8%, respectively.
Having demonstrated decreased carotid compliance with increasing Young's modulus of the aorta, we further investigated the maximum local compliance changes in both carotid arteries during the normal cardiac cycle. When applying a normal pulse pressure wave of 5994 Pa, we observed a maximum compliance variation between 97 and 99% at the proximal segment of the right common carotid artery, near the bifurcation of the brachiocephalic trunk, as well as the distal segment of the right internal carotid artery. In addition, we observed a maximum compliance change for the left carotid artery between 83 and 88%, at the level of the aortic arch bifurcation and the bifurcation of the common carotid artery. Similarly, we observed a maximum compliance change of over 90% with an applied PP of 7369 Pa for both carotid arteries, with the highest compliance change noted at anatomical bends and bifurcations for both carotid arteries.
Regional stress changes for the entire aorta: von Mises stress
The three-dimensional distribution of von Mises stress is presented in figure 4. The von Mises stress distribution on the aortic wall for each case was plotted to provide a visual representation of the computational stresses across our simulated vascular system. The maximum wall stress in both simulation steps increased with the increase of the Young's modulus across the two levels of pulse pressure (electronic supplementary material, figure S3).
Variable stiffness across the aortic arch, ascending and descending aorta
The aortic PWV estimates an aortic stiffness that averages multiple branches of the aortic tree and does not consider the influence of regional differences in aortic stiffness and diameter. In clinical studies, PWV assessed with MRI demonstrated different age-related changes in the various segments of the aortic tree, including the aortic arch, thoracic and mid-descending, and abdominal aorta [23,24]. In silico and in vivo studies have found that aortic stiffness increases down the aortic tree, with abdominal aortic stiffness most affected by increasing age [25,26]. As aortic stiffness is not uniformly distributed, in the next step of our simulation model we aimed to explore the effect of increased aortic stiffness on different segments of the aortic tree, including the ascending aorta, descending aorta and aortic arch.
Regional areas of deformation of the aorta
Increasing stiffness for aortic segments is illustrated in figure 5.
Under an applied normal pulse pressure of 5334 Pa, as the Young's modulus increases for the aortic arch, ascending and descending aorta, the maximum deformation decreases, most notably from 9.73 × 10⁻² m for Case 1 to 6.71 × 10⁻² m for Case 5.
Similarly, when an increased pulse pressure of 7364 Pa was applied, the maximum deformation of all segments of the aorta decreased with the increase of the Young's modulus (Case 1 maximum deformation = 9.95 × 10⁻² m compared to Case 5 maximum deformation = 6.89 × 10⁻² m).
Compliance
The compliance of all segments of the aorta and carotid arteries at normal PP (5994 Pa) significantly decreases with the increase of the Young's modulus (electronic supplementary material, figure S5). The highest compliance difference is noted in the ascending aorta and the aortic arch, with 57% and 54% decreases between Case 1 and Case 5. When we applied an increased PP (7369 Pa), we noted maximum decreases in the ascending aorta and aortic arch of 57% and 54%, respectively (figure 6).
After adjusting for both pulse pressure settings as described above, we next applied a variable Young's modulus to the different segments of the aorta across cases. This allowed us to examine whether a fixed aortic Young's modulus for different regions of the aorta can affect either the left or right carotid arteries. Both left and right carotid arteries showed a decrease in local compliance with the increase of the aortic Young's modulus. Across both pulse pressures, the left carotid artery was less compliant than the right carotid artery, with percentage decreases of 11% and 7%, respectively. As in the first step of our simulation model, we further explored the effect of increasing the Young's modulus of the aorta on the maximal local compliance of both carotid arteries. Under a PP of 5994 Pa, the maximum local compliance variation of the right carotid artery was estimated between 96 and 99%, in the middle and distal segments of the common carotid artery and the proximal segment of the external carotid artery. For the left carotid artery, we observed a maximal local compliance variation between 95 and 99%, with the highest compliance observed in the proximal and distal segments of the common carotid artery. At an applied PP of 7369 Pa, the maximum compliance variation of the right carotid artery was between 95 and 99%, in the proximal common carotid artery and the distal segments of the external and internal carotid arteries. Interestingly, the maximum local compliance for the left carotid artery was 96-99%, presenting at the mid-level of the internal and external carotid arteries (data not presented).
Regional stress changes for aortic segments: von Mises stress
The von Mises stress distribution on the aortic wall for each case was plotted to allow the computational stress analysis results to be represented and interpreted easily (figure 7). The maximum wall stress across both PP values increased with the increase of the Young's modulus (electronic supplementary material, figure S6).
Discussion
Increased stiffness in ageing larger central arteries, such as the aorta and its branches, is associated with cardiovascular risk, including myocardial infarction, heart failure, atrial fibrillation, stroke and renal disease. Arterial stiffness parameters, such as PWV, are also commonly used as predictors of both all-cause cardiovascular mortality and non-fatal coronary events in diabetic, hypertensive, elderly and community populations [27][28][29].
While PWV is the gold standard for arterial stiffness measurement, this measure represents an average stiffness value for the entire aortic tree [26,27,30,31]. In addition, there are significant regional differences along the aorta and its branches with respect to variable levels of stiffness, which cannot be fully assessed with traditional and emerging clinical methods such as ultrasound and applanation tonometry.
Clinical studies have reported variable differences between aortic and carotid stiffness [12,32]. Paini et al. reported that the aorta stiffened more than the carotid artery with age [12]. However, the mechanisms underlying these differences between aortic and carotid stiffness have not been well examined, owing to a lack of standardized methods and of precise mapping of the differential effects of aortic stiffness in different segments of the aortic tree.
Our study analysed the complex interactions between different regions of the aortic tree and investigated the influence of aortic stiffness on mechanical behaviours and compliance in carotid arteries.
Specifically, we first evaluated the effect of increasing the Young's modulus of the entire aorta on the local compliance and deformation of the carotid arteries. In this first model, the carotid stiffness was kept constant at its baseline value. Secondly, we evaluated the influence of increasing the Young's modulus of the entire aorta and subsequently of different segments of the aortic tree, including the aortic arch, ascending and descending aorta, while in all models retaining the constant baseline value of the carotid Young's modulus. Finally, we changed the pulse pressure from 5994 Pa to 7639 Pa and explored its effect. For the variable stiffness across the entire aorta, our data indicate that: (1) with the increase of Young's modulus, compliance decreases for the aorta (53%) and for both carotid arteries (7% for the right and 12% for the left carotid artery); (2) maximum local compliance variations between Case 1 and Case 5 were 94-97% for the right and 83-88% for the left carotid artery; and (3) compared with baseline, compliance is reduced under both normal and increased applied pulse pressure.
For the variable stiffness across the aortic tree, our findings highlight the following: (1) with the increase of the Young's modulus for the ascending aorta, descending aorta and aortic arch, the von Mises stress increased and the deformation and compliance decreased for the aorta, all individual segments and both carotid arteries; (2) the maximum local compliance difference varied up to 99% for the right and 95-99% for the left carotid artery; and (3) again, compliance is comparatively reduced under both normal and increased applied pulse pressure.
Moreover, we observed decreases in local compliance around the bifurcations and anatomical bends of the carotid arteries when aortic compliance was reduced. Anatomical features such as branching bends and curvatures experience decreased shear stress and turbulent blood flow and have been identified as the sites most vulnerable to the development of atherosclerosis [30,31]. Interestingly, increased stiffness of the carotid wall has been reported in patients with carotid artery dissection, where diagnosis can pose a significant challenge [33][34][35].
If carotid stiffness increases with accompanying shear stress, one would expect an increase in pathological changes over time in these vulnerable areas.
Increased BP, particularly PP, increases pulsatile wall stress and is viewed by many authors as an accelerated form of arterial ageing, leading to aortic stiffening [36][37][38]. However, there is ongoing debate and disagreement regarding the role of arterial stiffness as a predecessor of arterial hypertension. Our results in both simulation steps indicate a similar tendency of increase in aortic and carotid stiffness under normal and increased levels of pulse pressure, and support the hypothesis that aortic stiffness precedes the increase of PP. In the Framingham Heart Study [39], the age-related behaviour of PP and PWV may not be consistent with the hypothesis that elevated BP is a precursor of aortic stiffening. In this cohort, cfPWV increases from a young age, which may be attributable to a concurrent increase in MAP prior to midlife [40], supporting the evidence of a reciprocal relationship between aortic stiffness and hypertension. By contrast, PP, which contributes to the fragmentation of the elastic fibres through repetitive strain, declines from early adulthood into midlife and then rises again significantly later in life [39]. This pattern of age-related changes suggests that, at the individual level, aortic wall stiffening contributes to a substantial increase of PP later in life, which is associated with predominantly systolic hypertension in the elderly [41,42].
To the best of our knowledge, this study is the first to comprehensively analyse the complex relationship between aortic and carotid stiffness and the effect of increased aortic stiffness on the mechanical properties and local compliance of the carotid arteries. Cuomo et al. conducted a fluid-solid interaction model to explore the effects of ageing, demonstrating an increase in stiffness down the aortic tree in humans aged 40, 60 and 75 years [25]. Xia et al. conducted a three-dimensional computational study to explore aortic stiffness and the corresponding haemodynamics on a fixed arterial model. Their results showed increased PWV and central PP, but reduced PP amplification, with increased stiffness [43].
Our study has certain limitations, particularly in representing the specific geometry of the aorta and its branches. The anatomy of the aorta, including its bends and bifurcations, can influence stiffness changes along the aortic tree. However, it is important to note that our study was not designed to provide individualized evaluations of aortic and carotid stiffness. In the future, more extensive models derived from diverse clinical cases may help validate our findings across a broader, more diverse population. A perceived limitation of our model is that we applied a linear material model to the simulations of aortic and carotid stiffness. While a linear model may run into complications for a healthy aorta, we were approximating stiffened vascular circuits. When the aorta becomes stiffer, the elasticity of the different layers of the aortic wall may vary in a complex way, depending on the individual, age and measuring approach [44,45]. A fixed linear material property presents a clear, simple and effective way to obtain a preliminary idea of the carotid response to changes in aortic stiffness and pressure conditions.
Figure 1. Three-dimensional simulation geometry of the patient-specific aorta and carotid arteries.
Figure 3. Comparison of change difference (%) in compliance of the aorta and carotid arteries in two pulse pressure settings. (a) Change difference in compliance at a PP of 5994 Pa. (b) Change difference in compliance at a PP of 7369 Pa. RCA, right carotid artery; LCA, left carotid artery.
Figure 4. Distribution of von Mises stress on the aorta and carotid arteries under two pressure conditions. (a) PP = 5994 Pa. (b) PP = 7369 Pa.
Figure 6. Comparison of change difference (%) in compliance of the entire aorta and carotid arteries in two pulse pressure settings. (a) Change difference in compliance at a PP of 5994 Pa. (b) Change difference in compliance at a PP of 7369 Pa.
Figure 7. Contour images of the von Mises stress on the arterial wall at systole and diastole under two pulse pressure conditions. (a) PP = 5994 Pa. (b) PP = 7369 Pa.
Table 2. Stiffness parameters for aorta and carotid arteries.
Relationship of Dapagliflozin With Serum Sodium: Findings From the DAPA-HF Trial
OBJECTIVES This study aimed to assess the prognostic importance of hyponatremia and the effects of dapagliflozin on serum sodium in the DAPA-HF (Dapagliflozin And Prevention of Adverse outcomes in Heart Failure) trial. BACKGROUND Hyponatremia is common and prognostically important in hospitalized patients with heart failure with reduced ejection fraction, but its prevalence and importance in ambulatory patients are uncertain. METHODS We calculated the incidence of the primary outcome (cardiovascular death or worsening heart failure) and secondary outcomes according to sodium category (≤135 and >135 mmol/L). Additionally, we assessed: 1) whether baseline serum sodium modified the treatment effect of dapagliflozin; and 2) the effect of dapagliflozin on serum sodium. RESULTS Of 4,740 participants with a baseline measurement, 398 (8.4%) had sodium ≤135 mmol/L. Participants with hyponatremia were more likely to have diabetes, be treated with diuretics, and have lower systolic blood pressure, left ventricular ejection fraction, and estimated glomerular filtration rate. Hyponatremia was associated with worse outcomes
Hyponatremia is common in patients hospitalized with decompensated heart failure (HF), occurring in 20% to 30% of such individuals. [1][2][3][4] In these patients, hyponatremia is an established predictor of adverse outcomes, associated with both inpatient and longer-term mortality. [1][2][3][4] The causes of hyponatremia in HF are complex, but they can be simplified into those causing impaired water excretion and those increasing sodium loss (both reduced water excretion and increased sodium loss can contribute to hyponatremia). [4][5][6][7] Renin-angiotensin-aldosterone system and sympathetic nervous system activation lead to a nonosmotically mediated release of arginine vasopressin, which inhibits free-water excretion and stimulates thirst, leading to increased water intake. [4][5][6][7] Reduced glomerular filtration (and, as a result, renal tubular flow) leads to an impaired ability of the kidney to excrete free water. [4][5][6][7] Large doses of diuretic agents may lead to excessive sodium loss, especially if coupled with restriction of sodium intake; thiazide diuretic agents may also inhibit urinary dilution. [4][5][6][7] Whether hyponatremia is causally related to mortality or is simply a marker of the severity of HF remains unknown, although low serum sodium concentration remains an independent predictor of mortality in adjusted models incorporating other prognostic variables. [4][5][6]8 Much less is known about the prevalence or the prognostic significance of hyponatremia in ambulatory patients with heart failure and reduced ejection fraction (HFrEF), especially in such individuals receiving contemporary treatments. [9][10][11] Sodium glucose cotransporter 2 (SGLT2) inhibitors have been recently introduced as a treatment for HFrEF. [12][13][14] SGLT2 inhibitors inhibit proximal renal tubular reabsorption of glucose, coupled with sodium, leading to an initial osmotic diuresis and natriuresis. The effects of these agents (added to conventional diuretic agents and mineralocorticoid receptor antagonists) on serum sodium concentration in HFrEF are unknown and probably complex. Therefore, we investigated these questions in the DAPA-HF trial. 12 We also examined whether sodium concentration at baseline modified the effects of dapagliflozin on clinical outcomes in the DAPA-HF trial.
HYPOTHESIS
This study was designed to investigate the prognostic significance of hyponatremia in ambulatory patients with HFrEF, the efficacy of dapagliflozin according to baseline serum sodium concentration, and the effect of dapagliflozin on serum sodium in the DAPA-HF trial.
METHODS
DAPA-HF was a prospective, randomized, double-blind, controlled trial in patients with HFrEF, which evaluated the efficacy and safety of dapagliflozin 10 mg once daily, compared with matching placebo, added to standard care in patients receiving optimal pharmacological and device therapy. 12 All analyses were conducted using Stata version 16.0 (StataCorp) and SAS version 9.4 (SAS Institute).
A value of P < 0.05 was considered statistically significant.
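Although the trial analyses were run in Stata and SAS, the baseline categorization and a crude incidence tabulation can be sketched in Python; the column names and values below are entirely hypothetical.

```python
# Categorize baseline sodium (<=135 vs >135 mmol/L) and tabulate the
# crude incidence of the primary outcome in each category.
import pandas as pd

df = pd.DataFrame({
    "sodium_mmol_l": [133, 138, 141, 134, 140, 136],
    "primary_event": [1, 0, 0, 1, 0, 1],   # CV death or worsening HF
})

df["sodium_group"] = pd.cut(
    df["sodium_mmol_l"],
    bins=[-float("inf"), 135, float("inf")],
    labels=["<=135 mmol/L", ">135 mmol/L"],
)
print(df.groupby("sodium_group", observed=True)["primary_event"].mean())
```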
RESULTS
A baseline serum sodium measurement was available in 4,740 patients and showed a normal distribution (Supplemental Figure 1); 398 (8.4%) participants had a value ≤135 mmol/L (Table 1; values are mean ± SD, n (%), or median [IQR]; anemia was defined as hemoglobin <130 g/L in males and <120 g/L in females).
The net result of these changes was that more patients in the dapagliflozin group had hyponatremia (n = 260, 11.3%) than in the placebo group (n = 218).
SAFETY AND ADVERSE EVENTS. Each of the adverse events of interest was uncommon. There was a higher rate of adverse events related to volume depletion and renal dysfunction in the low-sodium group compared with the normal-sodium group (Table 5).
The other adverse events of interest were very infrequent in each sodium subgroup. Baseline serum sodium did not notably modify the rate of adverse events in patients assigned to either placebo or dapagliflozin (Table 5).
DISCUSSION
In a contemporary, well-treated ambulatory cohort of patients with HFrEF, most of whom had mild symptoms, the prevalence of hyponatremia was low. Initially, compared with placebo, dapagliflozin led to a small, although statistically significant, decrease in sodium. However, after 2 weeks, the opposite pattern was observed.
Although hyponatremia is recognized as the most common electrolyte disorder among hospitalized patients with HF, there are few reports of its prevalence and prognostic significance in ambulatory patients.
FIGURE 2 Dapagliflozin Treatment Effect. Effect of dapagliflozin on key outcomes in patients with and without hyponatremia at baseline. CV = cardiovascular; other abbreviations as in Figure 1.
To our knowledge, no prior study has made such extensive adjustment, including for natriuretic peptide level, in ambulatory patients. [9][10][11] Moreover, most studies to date have only reported the association between hyponatremia and all-cause mortality, whereas we have also shown that low sodium was independently predictive of worsening HF events (principally HF hospitalization) and symptoms. 16,17 The prognostic importance of a single sodium measurement was remarkable. The action of SGLT2 inhibitors is believed to lead to a reduction in intravascular volume and blood pressure, and the increased delivery of sodium to the distal nephron results in a decline in eGFR by inducing tubuloglomerular feedback. [22][23][24][25] However, it has been hypothesized that SGLT2 inhibitors reduce blood volume less than conventional diuretics. 26 Although the initial decrease in sodium mirrors the early decline in eGFR after starting dapagliflozin, serum sodium concentration subsequently increased more in the dapagliflozin group than in the placebo group, to the extent that the mean concentration was eventually higher with dapagliflozin.
Values are n/N (%). The analysis was truncated at 16 months because there were fewer than 100 people in one or both treatment groups among those who had hyponatremia at baseline.
"year": 2022,
"sha1": "2783d1a12995946f57d207874394fc4fc540e33f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jchf.2022.01.019",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f257339100db2929527a5b0db8c106fede263175",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
A Multi-Wavelength Study of Low Redshift Clusters of Galaxies I. Comparison of X-ray and Mid-Infrared Selected AGNs
Clusters of galaxies have long been used as laboratories for the study of galaxy evolution, but despite intense, recent interest in feedback between AGNs and their hosts, the impact of environment on these relationships remains poorly constrained. We present results from a study of AGNs and their host galaxies found in low-redshift galaxy clusters. We fit model spectral energy distributions (SEDs) to the combined visible and mid-infrared (MIR) photometry of cluster members and use these model SEDs to determine stellar masses and star-formation rates (SFRs). We identify two populations of AGNs, the first based on their X-ray luminosities (X-ray AGNs) and the second based on the presence of a significant AGN component in their model SEDs (IR AGNs). We find that the two AGN populations are nearly disjoint; only 8 out of 44 AGNs are identified with both techniques. We further find that IR AGNs are hosted by galaxies with similar masses and SFRs but higher specific SFRs (sSFRs) than X-ray AGN hosts. The relationship between AGN accretion and host star-formation in cluster AGN hosts shows no significant difference compared to the relationship between field AGNs and their hosts. The projected radial distributions of both AGN populations are consistent with the distribution of other cluster members. We argue that the apparent dichotomy between X-ray and IR AGNs can be understood as a combination of differing extinction due to cold gas in the host galaxies of the two classes of AGNs and the presence of weak star-formation in X-ray AGN hosts.
INTRODUCTION
Galaxy formation and evolution has long been a subject of considerable interest, with early work dedicated to exploring the physical processes responsible for star formation (Whipple 1946), explaining the genesis of the Milky Way (Eggen et al. 1962), and examining the evolution of galaxies in clusters (Spitzer & Baade 1951). Models for the evolution of galaxies in clusters gained strong observational constraints with the discovery of an apparent evolutionary sequence among local clusters (Oemler 1974). The discovery that the fraction of blue, spiral galaxies in relaxed galaxy clusters increases from z = 0 to z ≈ 0.4 quickly followed (Butcher & Oemler 1978, 1984). The dearth of spiral galaxies in the high-density regions at the centers of galaxy clusters is known as the morphology-density relation (Dressler 1980; Postman & Geller 1984; Dressler et al. 1997; Postman et al. 2005). This relation places additional, strong constraints on evolutionary models for cluster galaxies. That star-forming galaxies are also rare in the centers of clusters had been previously suggested by the results of Osterbrock (1960) and was subsequently observed in other work (Gisler 1978; Dressler et al. 1985). The impact of environment on the frequency and intensity of star formation at a wide variety of density scales has been measured using numerous visible (Abraham et al. 1996; Balogh et al. 1997; Kauffmann et al. 2004; Poggianti et al. 2006, 2008; von der Linden et al. 2010) and mid-infrared (MIR; Saintonge et al. 2008; Bai et al. 2009) diagnostics. Star-forming galaxies are consistently found to be more common and to have higher star-formation rates (SFRs) in lower density environments and at higher redshift (Kauffmann et al. 2004; Poggianti et al. 2006, 2008). The observed trends in star formation with environment are usually attributed to variations in the sizes of gas reservoirs, either the existing cold gas or the hot gas that can cool to replenish the cold gas as it is consumed. Given that AGNs also consume cold gas to fuel their luminosity, similar patterns might be expected among AGNs. Indeed, recent work reveals strong dependencies of the luminosities and types of AGNs on environment (e.g. Kauffmann et al. 2004; Constantin et al. 2008; Montero-Dorta et al. 2009) for AGNs selected via visible-wavelength emission-line diagnostics. Von der Linden et al. (2010) find fewer 'weak AGNs' (primarily LINERs) among red-sequence galaxies near the centers of clusters compared to the field, but they find no corresponding dependence among blue galaxies. Intriguingly, while Montero-Dorta et al. (2009) independently report a decline in the fraction of low-luminosity AGNs toward the centers of low-redshift clusters, they find an increase in the fraction of LINERs in higher density environments. The difference is likely a result of evolution: Montero-Dorta et al. (2009) found qualitatively different behavior between their main z ∼ 1 sample and the result produced when they applied their analysis to SDSS clusters. These results indicate that the variation of galaxy properties with local environment may influence the types of AGNs observed and that evolution in the relationship between some AGN classes and their host galaxies is important. Understanding the environmental mechanism that transforms star-forming galaxies into passive galaxies in clusters may help relate gas reservoirs in cluster galaxies to galaxy evolution as well as to AGN feeding and feedback.
Several mechanisms to cause the transformation from star-forming to passive galaxies have been proposed.
These include ram-pressure stripping of cold gas (Gunn & Gott 1972; Quilis et al. 2000; Roediger & Hensler 2005), strangulation (Larson et al. 1980; Balogh et al. 2000; Kawata & Mulchaey 2008; McCarthy et al. 2008) and galaxy harassment (Moore et al. 1996). Each mechanism operates on a different characteristic timescale and has its greatest impact on galaxies of different masses and at different radii. In principle, the transition of galaxy populations from star-forming to passive as a function of environment can probe the relative importance of these processes. However, such approaches suffer from practical difficulties. For example, Bai et al. (2009) argue that the similarity of the 24µm luminosity functions observed in galaxy clusters and in the field suggests that the transition from star formation to quiescence must be rapid, which implies that ram-pressure stripping must be the dominant mechanism. Von der Linden et al. (2010), by contrast, find a significant trend of increasing star formation with radius up to 5R_200 from cluster centers. They conclude that preprocessing at the group scale is important, which is inconsistent with ram-pressure stripping as the driver of the SFR-density relation. Patel et al. (2009) find a similar trend of increasing average SFR with decreasing local density down to group-scale densities (Σ_gal ≈ 1.0 Mpc⁻²) near RX J0152.7-1357 (z = 0.83). The importance of preprocessing in group-scale environments reported by these authors suggests that strangulation rather than ram-pressure stripping drives the SFR-density relation. The starkly different conclusions reached by Bai et al. (2009) compared to Patel et al. (2009) and von der Linden et al. (2010), despite their common use of star-forming galaxies to examine the influence of environment, highlight the difficulties inherent in such studies.
Attempts to distinguish between the various environmental processes become still more difficult with cluster samples that span a wide range in redshift. The epoch of cluster assembly (0 ≤ z ≲ 1.5, e.g. Berrier et al. 2009) coincides with the epoch of rapidly declining star formation (e.g. Madau et al. 1998; Hopkins & Beacom 2006) and AGN activity (e.g. Shaver et al. 1996; Boyle & Terlevich 1998; Shankar et al. 2009), which makes it difficult to disentangle rapid environmental effects from the global reduction in the amount of available cold gas. Dressler & Gunn (1983) found early evidence for an increase in AGN activity with redshift, and the Butcher-Oemler effect had already provided evidence for a corresponding increase in SFRs. In the last decade, the proliferation of observations of high-redshift galaxy clusters at X-ray, visible and infrared wavelengths has yielded similar trends in the fractions of both AGNs (Eastman et al. 2007; Martini et al. 2009) and star-forming galaxies (Poggianti et al. 2006, 2008; Saintonge et al. 2008; Haines et al. 2009) identified using a variety of methods. These newer results have also examined cluster members confirmed from spectroscopic redshifts rather than relying solely on statistical excesses in cluster fields, which permits more detailed study of the relationships between galaxies and their parent clusters.
The wide variety of AGN selection techniques employed in more recent studies represents an important step forward in understanding the dependence of AGNs on environment. Several recent papers have used X-rays to study the frequency and distribution of AGNs in galaxy clusters (Martini et al. 2006, henceforth M06; Martini et al. 2007; Sivakoff et al. 2008; Arnold et al. 2009; Hart et al. 2009) and their evolution with redshift (Eastman et al. 2007; Martini et al. 2009). Martini et al. (2009) found that the AGN fraction among cluster members increases with decreasing local density and increases dramatically (f_AGN ∝ (1 + z)^(5.3±1.7)) with redshift. They also found that X-ray identification produces a much larger AGN sample than visible-wavelength emission-line diagnostics: only 4 of the 35 X-ray sources identified as AGNs by M06 would be classified as AGNs from their visible-wavelength emission lines. Similar results have been found when comparing radio, X-ray and mid-IR AGN selection techniques for field AGNs (e.g. Hickox et al. 2009).
The different AGN selection techniques identify different AGN populations and suffer from distinctive selection biases. Both X-ray and visible-wavelength techniques can miss AGNs due to absorption, either in the host galaxy or in the AGN itself; however, X-ray selection can find lower luminosity AGNs and AGNs behind larger absorbing columns compared to emission-line selection. Mid-infrared selection techniques suffer from relatively poor angular resolution, so they are mainly sensitive to AGNs that outshine their host galaxies in the band(s) used to perform the AGN selection. The X-ray and visible techniques can also be contaminated by emission from the host galaxy. While the identification of X-ray sources with L_X > 10⁴² erg s⁻¹ as AGNs is unambiguous, X-ray luminosities in the 10⁴⁰-10⁴² erg s⁻¹ range can be produced by low-mass X-ray binaries (LMXBs), high-mass X-ray binaries (HMXBs), and thermal emission from hot gas. Both visible-wavelength and MIR indicators are subject to contamination from young stars, which produce emission lines and heat dust near star-forming regions until it emits in the MIR. Even the interpretation of the well-established Baldwin-Phillips-Terlevich diagram (Baldwin et al. 1981) can be controversial in the transition region between star-forming galaxies and AGNs.
These difficulties motivate the use of multiple techniques to obtain a complete census of AGNs and to correctly identify potential imposters. In this paper, we extend the work of Martini et al. (2006, 2007) by supplementing their X-ray imaging and visible-wavelength photometry with MIR observations from the Spitzer Space Telescope. We use these data to select AGNs independent of their X-ray emission. We also measure the properties of AGN host galaxies by fitting their visible to MIR spectral energy distributions (SEDs). We discuss our visible and MIR data reduction and photometry in Section 2. Section 3 details our techniques for identifying AGNs and measuring galaxy properties, and we describe the results in Section 4. We discuss the implications for the relationship between AGNs and their host galaxies in Section 5. Throughout this paper we use the WMAP 5-year cosmology: a ΛCDM universe with Ω_m = 0.26, Ω_Λ = 0.74 and h = 0.72 (Dunkley et al. 2009).
OBSERVATIONS & DATA REDUCTION
We obtained MIR observations with the Spitzer Space Telescope of the X-ray sources identified as members of 8 low-redshift galaxy clusters by M06. The initial reduction of the Spitzer imaging is described in Section 2.1. Visible-wavelength photometry of these clusters was obtained at the 2.5m du Pont telescope at Las Campanas by M06. We provide a brief summary of these data in Section 2.2; further details are provided by M06. We then discuss the corrections for Galactic extinction and for instrumental effects in Section 2.3.
Spitzer Reduction
We obtained mid-infrared (MIR) observations from the Spitzer Space Telescope using the IRAC (λ_eff = 3.6, 4.5, 5.8, 8.0 µm; Fazio et al. 2004) and MIPS (λ_eff = 24 µm; Rieke et al. 2004) instruments from Spitzer program 50096 (P.I. Martini). Observations were carried out between 2008 November 1 and 2009 April 22. Spitzer pointings were chosen to image the X-ray point sources in 8 low-redshift galaxy clusters identified by M06. We supplemented these observations with data from the Spitzer archive for Abell 1689 and AC 114.
Spitzer's cryogen ran out before the MIPS observations of three clusters (Abell 644, Abell 1689 and MS 1008.1-1224) were carried out. In one of these clusters (Abell 1689) we extended our coverage to 24µm using observations from the Spitzer archive, leaving two clusters with no usable MIPS observations. The Astronomical Observation Request (AOR) numbers used to construct the MIR mosaic images of each cluster are listed in Table 1, along with the corresponding 3σ observed-frame luminosity limits at both 8 and 24µm. These limits are approximate because the image depth varies across the mosaics due to the changing number of overlapping pointings. Quoted limits correspond to areas with "full coverage" but without overlap from adjacent pointings.
The raw Spitzer data are reduced by an automated pipeline before they are delivered to the user, but artifacts inevitably remain in the calibrated (BCD) images. Preliminary artifact mitigation for the IRAC images was performed using the IRAC artifact mitigation tool by Sean Carey 1 . We inspected each corrected image after this step and determined whether the image was immediately usable, whether additional corrections were required, or whether it simply had too many remaining artifacts to be reliably corrected. The latter class primarily included images with extremely bright stars that caused artifacts too severe to be corrected. Where appropriate, additional corrections were applied using the muxstripe 2 and jailbar 3 correctors by Jason Surace and the column pull-down corrector 4 by Leonidas Moustakas. Artifacts in the MIPS images were removed by applying a flatfield correction algorithm packaged with the Spitzer mosaic software (MOPEX 5 ), as described on the Spitzer Science Center (SSC) website 6 .
Mosaic images for both IRAC and MIPS were constructed from the artifact-corrected images using MOPEX. Aperture photometry was extracted from the resulting mosaics using the apphot package in IRAF. We converted the measured fluxes to magnitudes in the Vega system after the photometric corrections described in Section 2.3 had been applied. All magnitudes quoted in this work, both visible and MIR, are calculated with respect to the Vega standard. The photometric apertures used by apphot were chosen to enclose a region of approximately 10 kpc projected radius at the redshift of each cluster. These large apertures yielded reduced S/N, but most cluster members were sufficiently bright that the uncertainties on the measured fluxes were dominated by systematic errors (5%) in the zero-point calibration, except at 24µm. The use of large photometric apertures also allowed galaxies to be treated as point sources for the purpose of computing aperture corrections, as recommended by the SSC. A smaller aperture could improve the S/N, but this gain would be outweighed by the systematic uncertainty introduced by the aperture corrections for the resulting flux measurements, as aperture corrections for IRAC extended sources remain highly uncertain (IRAC Instrument Handbook 7 ).
Visible Photometry
All 8 clusters in our sample have B-, V- and R-band imaging, and 4 of the 8 have I-band imaging. We extracted separate source catalogs for each of these bands using Source Extractor (SExtractor; Bertin & Arnouts 1996) and merged the catalogs using the R-band image as the reference image for astrometry and total (Kron) magnitudes. We correct from aperture to total magnitudes without altering the colors from the aperture photometry by applying the R-band aperture correction to all bands, m_corr = m_Ap + (m_Kron,R - m_Ap,R), where m_Ap and m_Kron are the aperture and Kron-like magnitudes, respectively, for the band being corrected. Rather than taking the published photometry from M06, we used the redshift-dependent apertures assigned to each cluster as described in Section 2.1. This maintains consistency with our IRAC photometry and results in relatively small aperture corrections, typically ∼0.1 mag. SExtractor returns R-band positions that are good to within a fraction of an arcsecond. However, the positions of sources in IRAC and MIPS images are less precise due to the poorer angular resolution and larger pixel sizes in these bands. We selected the best astrometric matches to each Spitzer source from the objects identified by SExtractor within a specified search radius, θ.
To determine the best value of θ, we scrambled the RA of the SExtractor sources and determined how many Spitzer sources were matched to a scrambled galaxy as a function of θ. We found the best balance between purity and completeness for θ ≈ 1.25 arcsec. This search radius yielded spurious matches for less than 2% of objects. The actual contamination of our catalog will be much lower, because a Spitzer object with a spurious match will usually be better matched to its 'true' counterpart, which has a median match distance d = 0.4 arcsec. The images used to perform the matching do not suffer from substantial confusion, even in the cluster centers, so erroneous photometry due to overlapping sources is unlikely to present a problem. Further details of the visible image reduction were described by M06.
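A sketch of this matching-plus-scrambling procedure is given below; the function names, the flat-sky small-angle approximation and the use of a k-d tree are our assumptions, not a description of the authors' actual code.

```python
# Nearest-neighbour matching of Spitzer sources to the R-band catalog,
# plus the scrambled-RA test used to estimate the spurious-match rate.
import numpy as np
from scipy.spatial import cKDTree

def match(ra1, dec1, ra2, dec2, theta_arcsec):
    """Index into catalog 2 of the closest counterpart of each source
    in catalog 1, or -1 when no match lies within theta_arcsec."""
    cosd = np.cos(np.deg2rad(np.median(dec2)))
    xy1 = np.column_stack([np.asarray(ra1) * cosd, dec1])
    xy2 = np.column_stack([np.asarray(ra2) * cosd, dec2])
    d_deg, idx = cKDTree(xy2).query(xy1)
    return np.where(d_deg * 3600.0 <= theta_arcsec, idx, -1)

def spurious_fraction(ra_sp, dec_sp, ra_opt, dec_opt, theta, seed=1):
    """Scramble the optical RAs; any match that survives is spurious."""
    rng = np.random.default_rng(seed)
    m = match(ra_sp, dec_sp, rng.permutation(np.asarray(ra_opt)),
              dec_opt, theta)
    return np.mean(m >= 0)
```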
Photometric Corrections
We estimated the Galactic reddening toward each of the 8 clusters in our sample from the dust map of Schlegel et al. (1998) and calculated extinction corrections assuming R_V = 3.1 and the Cardelli et al. (1989) reddening law. The resolution of the Schlegel et al. (1998) dust map requires us to use a common extinction correction for all cluster members. However, Galactic cirrus is apparent in some of our images, so this assumption is not always appropriate. This leads to additional uncertainty associated with the extinction corrections, but the total (visual) extinction toward our clusters is typically less than 0.1 mag. The associated uncertainties are therefore small. For the clusters with the highest extinctions (Abell 2104 and 2163, with A_V = 0.73 and 1.1, respectively), variations in extinction across the cluster represent an important source of systematic uncertainty. We account for this by adopting a 10% uncertainty in all extinction corrections and propagating this uncertainty to the corrected magnitudes. In Abell 2163, for example, this corresponds to an uncertainty of 0.11 mag in the dereddened V-band magnitude.
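A minimal sketch of this correction and its error propagation follows; the A_band/A_V ratios are illustrative round numbers for an R_V = 3.1 Cardelli-type law, not the exact coefficients used in the paper.

```python
# Galactic extinction correction with a 10% systematic uncertainty
# propagated to the corrected magnitude.
import numpy as np

EXT_RATIO = {"B": 1.32, "V": 1.00, "R": 0.81, "I": 0.60}  # assumed A_band/A_V

def deredden(mag, mag_err, band, a_v, frac_err=0.10):
    a_band = EXT_RATIO[band] * a_v
    corrected = mag - a_band
    err = np.hypot(mag_err, frac_err * a_band)  # add 10% of A in quadrature
    return corrected, err

# For A_V = 1.1 (Abell 2163) the extinction term alone contributes
# ~0.11 mag of uncertainty in V, as quoted in the text.
m0, e0 = deredden(mag=18.42, mag_err=0.03, band="V", a_v=1.1)
print(f"V_0 = {m0:.2f} ± {e0:.2f}")
```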
The raw fluxes measured from the MIR mosaics must be corrected for various instrumental effects, including aperture, array-location and color corrections, as described in the IRAC and MIPS 8 Instrument Handbooks. Aperture corrections are, in principle, required for all observations. In practice, even our smallest apertures (∼7 arcsec) are large enough that aperture corrections for visible-wavelength point sources are negligible. For MIR point sources, this is not the case. We apply aperture corrections from the IRAC Instrument Handbook appropriate for our redshift-dependent photometric apertures to the IRAC photometry. These corrections are not strictly appropriate due to the extended nature of our sources; however, we have chosen apertures that are large compared to the sources (∼3× larger than the FWHM of the largest galaxies; see Section 2.1). We therefore apply aperture corrections appropriate for point sources.
We determined aperture corrections appropriate for our MIPS images by averaging a theoretical point-source response function (PRF) from STinyTim with three bright, isolated point sources in the Abell 3125 and Abell 2104 mosaics. The PRFs of sources from the different clusters agree with one another and with the theoretical PRF to within a few percent over the range of aperture sizes relevant for our MIPS photometry. The dispersion between the individual PRFs at fixed aperture size provides an estimate of the uncertainty on the corrections and is included in the 24µm error budget. The MIPS images of the other clusters lack bright, isolated point sources with which to make a similar measurement, so we assume that the PRF appropriate for Abell 3125 and Abell 2104 gives reasonable aperture corrections for all clusters. This introduces some systematic error in our derived 24µm fluxes, but the agreement of the observed PRFs of point sources identified in Abell 3125 and Abell 2104 with the theoretical PRF indicates that this uncertainty is small.
The flatfield corrections applied to IRAC images by the automated image reduction pipeline are based on observations of the zodiacal background light, which is uniform on the scale of the IRAC field of view. It is also extremely red compared to any normal astrophysical source. The combination of scattered light due to the extended nature of the source and the color of the source illuminating the detector for the flatfield images results in different gains for point-sources and extended sources. It also requires an effective bandpass correction that varies with position on the detector. These effects can be corrected by applying a standard array-location correction image to a single IRAC image. For a mosaic, the magnitude of the required correction is significantly reduced by adding dithered images with different corrections at a given position on the sky. However, the residual effect can be a few percent or more depending on the number of overlapping IRAC pointings. We construct an array-location correction mosaic by co-adding the correction image for a single IRAC pointing shifted to the positions of each dithered image in the science mosaic. We measure the required array-location corrections in the same apertures used to measure the IRAC fluxes.
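The co-addition of the correction image can be sketched as follows; integer-pixel dither offsets are assumed for simplicity, whereas a real mosaic would require sub-pixel interpolation. All names here are hypothetical.

```python
# Schematic construction of an array-location correction mosaic: the
# single-pointing IRAC correction image is pasted at each dither position
# and overlapping values are averaged, mirroring the co-addition above.
import numpy as np

def arrayloc_mosaic(corr_image, offsets, mosaic_shape):
    """corr_image: (ny, nx) correction image for one IRAC pointing.
    offsets: iterable of non-negative integer (y0, x0) frame origins.
    Returns the mean correction at every mosaic pixel (NaN where uncovered)."""
    acc = np.zeros(mosaic_shape)
    nhit = np.zeros(mosaic_shape)
    ny, nx = corr_image.shape
    for y0, x0 in offsets:
        acc[y0:y0 + ny, x0:x0 + nx] += corr_image
        nhit[y0:y0 + ny, x0:x0 + nx] += 1
    return np.where(nhit > 0, acc / np.maximum(nhit, 1), np.nan)
```

The residual correction is then measured in the same apertures used for the IRAC fluxes, as described above.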
The Spitzer image reduction pipeline assumes a flat (νF_ν = constant) power-law SED to convert electrons to incident fluxes. Astrophysical sources typically do not show flat SEDs and therefore require color corrections to determine the true flux at the effective wavelength of a given band. This is especially important in star-forming galaxies, which show strong polycyclic aromatic hydrocarbon (PAH) emission features at 6.2 and 7.7µm (Smith et al. 2007). We determine color corrections to the measured fluxes from model SEDs (Section 3.1). We compute preliminary model SEDs for each cluster member from the photometry with all other corrections applied. We then integrate the model SED across the various MIR bandpasses and determine the appropriate color corrections following the procedures outlined in the instrument handbooks. The color correction, K, applied to an IRAC source is given by

K = ∫ (F_ν/F_ν0) (ν/ν0)^-1 R_ν dν / ∫ (ν/ν0)^-2 R_ν dν,

where F_ν is the model spectrum, R_ν is the response function of the detector in the appropriate channel, and ν0 is the nominal frequency of the band. The formalism for MIPS color corrections is similar but slightly more complicated; we refer interested readers to Section 3.7.4 of the MIPS Instrument Handbook.
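As a sketch of the procedure, the color correction can be evaluated by direct numerical integration of the model SED against the tabulated response curve. The implementation below follows the IRAC handbook convention of quoting fluxes for a nominal F_ν ∝ ν^-1 spectrum; the input arrays and their common, ascending frequency grid are assumptions.

```python
# Numerical sketch of the IRAC color correction defined above.
import numpy as np

def color_correction(nu, f_nu_model, r_nu, nu0):
    """K = int (F/F0)(nu/nu0)^-1 R dnu / int (nu/nu0)^-2 R dnu.

    nu: ascending frequency grid (Hz); f_nu_model: model SED on that grid;
    r_nu: system response on the same grid; nu0: nominal band frequency."""
    f0 = np.interp(nu0, nu, f_nu_model)          # model flux at nu0
    num = np.trapz((f_nu_model / f0) * (nu / nu0) ** -1.0 * r_nu, nu)
    den = np.trapz((nu / nu0) ** -2.0 * r_nu, nu)
    return num / den

# The color-corrected flux density at nu0 is obtained by dividing the
# pipeline-quoted flux by K (see the handbook for sign conventions).
```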
Optical and MIR photometry for each cluster member, after all relevant corrections have been applied, is listed in Table 2.
METHODS
We wish to identify cluster members hosting AGNs, determine the AGN luminosities, examine the properties of AGN host galaxies, and determine whether they differ in any appreciable way from "normal" cluster galaxies or from their counterparts in the field. This requires that we distinguish cluster members from foreground and background galaxies, fit model SEDs to the member photometry, and measure the rest-frame properties of the AGN host galaxies. We describe the model SEDs in Section 3.1. Using these models, we calculate K-corrections to the measured fluxes, estimate stellar masses and SFRs for cluster member galaxies, and identify AGNs.
We use redshifts reported in Martini et al. (2007) or extracted from the NASA Extragalactic Database to identify members of the galaxy clusters in our sample. We define a galaxy to be a cluster member if it falls within a circular field with radius

R_200 = √3 σ / [10 H(z)],

where σ is the cluster's velocity dispersion (Treu et al. 2003). We also require that members have spectroscopic redshifts within the ±3σ redshift limits prescribed in Table 1 of Martini et al. (2007), which were established using the biweight velocity dispersion estimator of Beers et al. (1990). This criterion yields a sample of 1165 cluster member galaxies. We eliminate many of these galaxies from our sample due to either limited photometric coverage or, in a few instances, because the spectroscopic redshifts in the literature are clearly in disagreement with the photometric redshifts obtained from the SED fits (Section 3.1). The final sample of "good" cluster members, those galaxies with detections in at least 5 bands and with apparently reliable spectroscopic redshifts, contains 488 galaxies.

Assef et al. (2010; hereafter A10) constructed empirical SED templates that can be used to determine photometric redshifts and K-corrections for galaxies and AGNs over a wide range of redshifts. The A10 templates include three galaxy templates (elliptical, spiral, and starburst or irregular) and a single AGN template, which can be subjected to variable intrinsic reddening. These templates were derived empirically across a long wavelength baseline (0.03-30µm), using 14448 apparently "pure" galaxies and 5347 objects showing AGN signatures. We fit two independent model SEDs to the photometry of each cluster member using the published codes of A10. The first model included only the three galaxy templates, while the second also included an AGN component. The χ² differences between the two fits can be used to identify AGNs (Section 3.2). Model SEDs for the M06 X-ray point sources included in our sample of "good" galaxies are shown in Figure 1. AGNs identified from their SED fits, but which have no X-ray counterparts, are shown in Figure 2. The fits to the X-ray point sources are representative of the fit quality returned for all cluster members, while the fits to photometrically-identified AGNs are, on average, poorer.
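For concreteness, the membership cuts described above can be written out as a short sketch. The cosmological parameters (H0 = 70 km/s/Mpc, Ω_m = 0.3) are conventional assumptions, not values stated in the text, and the function interfaces are hypothetical.

```python
# Sketch of the membership criteria: projected radius inside R_200 and a
# rest-frame velocity offset within +/-3 sigma of the cluster redshift.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology
C_KMS = 299792.458                      # speed of light, km/s

def r200_mpc(sigma_kms, z):
    """R_200 = sqrt(3) sigma / [10 H(z)] (Treu et al. 2003), in Mpc."""
    hz = cosmo.H(z).to_value(u.km / u.s / u.Mpc)
    return np.sqrt(3.0) * sigma_kms / (10.0 * hz)

def is_member(z_gal, r_proj_mpc, z_cl, sigma_kms):
    """Member if inside R_200 and within +/-3 sigma in velocity."""
    dv = C_KMS * (z_gal - z_cl) / (1.0 + z_cl)   # rest-frame velocity offset
    return (r_proj_mpc < r200_mpc(sigma_kms, z_cl)) & \
           (np.abs(dv) < 3.0 * sigma_kms)
```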
Model SEDs
The model SEDs fit to 25 of the 488 spectroscopically-identified cluster members are poorly matched to the measured photometry (χ² > 25). We determine photometric redshifts for all of the identified cluster members, and in cases where the measured photometric redshifts are more than 3σ away from the cluster redshift, we replace the spectroscopic redshifts with photometric redshifts and repeat the fit. In 11 cases, this procedure results in substantial improvements to the fits (Δχ² > 12, χ²_photo-z < 4). This suggests that some galaxies in the sample have erroneous spectroscopic redshifts. One such object is an X-ray source, identified as AC 114-5 by M06. The redshift for this object was reported by Couch et al. (2001; their galaxy #365). The spectra used by these authors covered a relatively narrow wavelength range (8350Å < λ < 8750Å) and had moderately poor S/N. We suspect that this combination of factors, in concert with a strong prior in favor of cluster membership in the presence of a putative Hα emission line at the correct redshift, led Couch et al. (2001) to mis-identify the [O ii]λ4354 and [O iii]λ4363 emission lines of a background quasar at z = 0.988 as the [N ii]λ6548 and Hα emission lines, respectively, at the cluster redshift. Four of the 5 objects flagged as having erroneous redshifts in AC 114 have redshifts from Couch et al. (2001). Two of the four have redshifts from only one emission line, and we have confirmed that both objects with redshifts from multiple emission lines have plausible pairs of lines near the photometric redshifts. Furthermore, all of the objects with apparently erroneous redshifts are quite faint, having V ≳ 22, which makes acquiring high-S/N spectra difficult. Our identification of objects with discrepant photometric and spectroscopic redshifts as interlopers appears to be reliable, and we eliminate the associated galaxies from further consideration. The absence of AC 114-5 from the X-ray AGN sample has important repercussions, which we discuss in Section 4.
AGN Identification
We consider AGNs selected based on their X-ray luminosities, the shapes of their SEDs, or both. X-ray sources with L_X > 10^42 erg s^-1 are unambiguously AGNs, but a number of processes can produce X-ray luminosities in the 10^40-10^42 erg s^-1 range, including low-mass X-ray binaries (LMXBs), high-mass X-ray binaries (HMXBs) and a galaxy's extended, diffuse halo gas. The integrated X-ray luminosities of LMXBs and the hot halo both correlate strongly with stellar mass, as measured by the galaxy's K-band luminosity (Kim & Fabbiano 2004; Sun et al. 2007), and the luminosity from HMXBs correlates with SFR (Grimm et al. 2003). These correlations allow us to predict the X-ray luminosity of a normal galaxy using only parameters that can be measured from the model SEDs. Similar analyses were performed by Sivakoff et al. (2008) and Arnold et al. (2009), who used K-band luminosities measured from 2MASS photometry rather than luminosities estimated from model SEDs.

Fig. 1. - Model SEDs of the M06 X-ray point sources (listed in Table 4). Objects also identified as AGNs from their SED fitting are labeled "IR." The heavy lines show the total model SED, while the solid, dotted, dashed and dot-dashed lines show the A10 AGN, elliptical, spiral and irregular templates, respectively. Not all components appear in all panels. See Section 3.1 for further details.
We measure K-band magnitudes from the model SEDs and determine SFRs from the K-corrected 8µm and 24µm luminosities of X-ray sources in each cluster. We use L_K and SFR in Eqns. 4, 5 and 6 to predict the expected X-ray luminosities from the host galaxies of the X-ray point sources identified by M06 (Kim & Fabbiano 2004; Grimm et al. 2003; Sun et al. 2007, respectively). The predictions for X-ray emission from a given galaxy due to LMXBs, HMXBs and the thermal halo are good to within ∼0.3 dex; in these relations, L_K and L_Ks are the galaxy's luminosities in the K and K_s filters. Each relation is given in a slightly different energy range, none of which coincides with the range used by M06. This problem is especially severe for Eqn. 5, because Grimm et al. (2003) take their X-ray fluxes from various sources in the literature without converting them to a common energy range. They claim that the resulting uncertainty is small because the scatter in the relation is much larger than the bandpass corrections. Fortunately, even if this were not the case, the HMXB contribution to the total predicted X-ray luminosity is small for the SFRs typical of cluster galaxies (< 10 M_⊙ yr^-1).

Fig. 2. - Model SEDs for objects identified as IR AGNs which are not also identified as X-ray AGNs. Line types and bandpasses shown are the same as in Figure 1. The object names indicated on each panel correspond to those in Table 1. See Section 3.1 for further details.

The contribution from thermal emission to the soft X-ray luminosity can be significant, dominating the LMXB contribution above L_X ≈ 6 × 10^40 erg s^-1. This transition luminosity depends on the specific form adopted in Eqn. 6. Mulchaey & Jeltema (2010) found that L_X(corona) ∝ L_K^(3.9±0.4) for field galaxies, which differs significantly from the results of Sun et al. (2007). While the Mulchaey & Jeltema (2010) relation is not strictly applicable to our sample, the difference between cluster and field galaxies suggests that the thermal X-ray emission from a galaxy's halo depends on its environment. Such a variation introduces a systematic uncertainty in L_X(corona) of up to 0.8 dex at L_K = 4 × 10^11 L_⊙. Hereafter we neglect this uncertainty, as its effect in a given cluster is impossible to quantify given the data presently available.
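To make the structure of this test concrete, the sketch below sums the three host components and flags sources whose measured L_X significantly exceeds the total. The functional forms follow the cited scalings (LMXBs and hot halos with L_K, HMXBs with SFR), but every normalization and slope below is a placeholder inserted for illustration; the actual coefficients are those of Eqns. 4-6.

```python
# Hedged sketch of the host-galaxy X-ray prediction; all constants are
# placeholders, not the published coefficients.
import numpy as np

def predicted_host_lx(l_k_lsun, sfr_msun_yr,
                      c_lmxb=1.0e29,    # placeholder: erg/s per L_K,sun (LMXBs)
                      c_hmxb=6.7e39,    # placeholder: erg/s per Msun/yr (HMXBs)
                      c_halo=1.0e40,    # placeholder: erg/s at L_K = 1e11 L_sun
                      halo_slope=1.6):  # placeholder hot-halo power-law slope
    lx_lmxb = c_lmxb * l_k_lsun
    lx_hmxb = c_hmxb * sfr_msun_yr
    lx_halo = c_halo * (l_k_lsun / 1e11) ** halo_slope
    return lx_lmxb + lx_hmxb + lx_halo

def is_xray_agn(lx_obs, lx_err, l_k_lsun, sfr_msun_yr, scatter_dex=0.3):
    """Flag a source whose measured L_X is more than 1 sigma above the host
    prediction, combining the measurement error with the ~0.3 dex scatter
    of the empirical relations quoted above."""
    lx_pred = predicted_host_lx(l_k_lsun, sfr_msun_yr)
    sigma = np.hypot(lx_err, lx_pred * (10 ** scatter_dex - 1.0))
    return lx_obs > lx_pred + sigma
```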
We convert Eqns. 4-6 to determine luminosities in the soft X-ray (0.5-2 keV) and hard X-ray (2-8 keV) bands, assuming a Γ = 1.7 power law for the LMXB and HMXB relations. We further assume that the Grimm et al. (2003) relation corresponds to luminosities in the 2-10 keV range and that the thermal emission from the kT = 0.7 keV halo gas is negligible in the hard X-ray band. The X-ray luminosities reported by M06 and our estimates of the systematic uncertainties in these luminosities associated with the choice of energy correction factor (ECF) are shown in Figure 3, along with the predicted luminosities from the host galaxies. Many of the reported point sources require an AGN component, but several of the M06 point sources have very massive host galaxies, and their observed fluxes may arise entirely from non-AGN sources.
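Because this soft/hard split recurs throughout the analysis, a worked example may help: for a power-law photon spectrum N(E) ∝ E^-Γ, the energy flux between E1 and E2 scales as E2^(2-Γ) − E1^(2-Γ), so the band fractions follow directly. The snippet below is illustrative arithmetic rather than code from the analysis.

```python
# Splitting a 0.5-8 keV luminosity into soft (0.5-2 keV) and hard (2-8 keV)
# bands for a Gamma = 1.7 power law, N(E) ~ E^-Gamma.
def band_fraction(e1, e2, e_lo=0.5, e_hi=8.0, gamma=1.7):
    p = 2.0 - gamma                      # energy-flux exponent (0.3 here)
    return (e2 ** p - e1 ** p) / (e_hi ** p - e_lo ** p)

soft = band_fraction(0.5, 2.0)   # ~0.40 of the 0.5-8 keV luminosity
hard = band_fraction(2.0, 8.0)   # ~0.60
```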
M06 selected 40 X-ray point sources with reliable detections above the extended emission from the surrounding ICM (N_count ≥ 5). Of these 40 sources, they identify 35 as probable AGNs. We have sufficient photometry to construct reliable model SEDs for 35 M06 X-ray point sources. The remaining 5 M06 point sources either lack enough data to produce a reliable model SED or fall outside the R-band field of view. We find that 23 of these 35 sources have X-ray luminosities more than 1σ greater than the predicted host luminosity. Henceforth, we will call these objects X-ray AGNs.

Fig. 3. - Comparison of the X-ray luminosities of X-ray point sources from M06 (y-axis) to the predicted X-ray luminosities of their host galaxies (x-axis). Points show the measured luminosities, and the "tails" connect each source to the luminosity estimated by separating its 0.5-8.0 keV X-ray luminosity into soft and hard components using a Γ = 1.7 power law. The length of the tail indicates how well the measured photon energies are described by a Γ = 1.7 power law, and consequently describes the systematic uncertainty on the quoted L_X. Long tails belong to objects poorly described by a Γ = 1.7 power law. Heavy lines mark the line of equality (L_X = L_host), and the dashed lines show the ±0.7 dex scatter about the empirical relations used to predict the X-ray luminosity of a given host galaxy. See Section 3.2 for the method used to predict X-ray luminosities of normal galaxies.

The systematic flux error estimates in Figure 3 indicate that many X-ray AGNs have photon energy distributions that are poorly matched to the Γ = 1.7 power law assumed by M06. Three such AGNs are close to the boundary separating probable AGNs from more ambiguous cases and have too large a soft X-ray flux compared to their hard X-ray flux to be consistent with a Γ = 1.7 power law. M06 did not correct for X-ray absorption, and in the cases where the ratio of soft to hard X-ray photons is too low for a Γ = 1.7 power law, absorption may explain the apparent discrepancy. However, objects whose soft X-ray fluxes are unexpectedly large compared to the total cannot be explained by absorption.
Many narrow-line Seyfert 1 galaxies (NLS1s) show excess soft X-ray emission (Arnaud et al. 1985). However, only one X-ray source identified by M06 is a NLS1 (their Abell 644 #1), so the soft X-ray excess common to NLS1s cannot explain the presence of excess soft X-ray emission in 13 X-ray sources with AGN-like luminosities. Alternative explanations include soft X-rays arising from gas that is photoionized by an obscured AGN (e.g. Ghosh et al. 2007), poor signal-to-noise in the X-ray, and thermal emission from hot gas. The ECF used to convert soft X-ray photons to incident fluxes for kT = 0.7 keV thermal bremsstrahlung (assumed by Sun et al. 2007) is larger than the ECF for a Γ = 1.7 power law by approximately 10%. This implies that two of the three suspect X-ray AGNs have luminosities sufficiently close to the threshold that they may reasonably be mis-classified galaxies. This yields a possible contamination in the X-ray AGN sample of approximately 10%, which is comparable to the estimated contamination of the IR AGN sample (see below).
In comparison to our sample of 23 X-ray AGNs from a parent sample of 35 X-ray point sources with complete photometry, M06 found that 35 of their 40 point sources had X-ray luminosities consistent with AGNs. The larger fraction of AGNs reported by M06 may be attributed to their use of L_X-L_B relations, which show larger scatter than the K-band relations. We also introduce some uncertainty by estimating L_K from the model SEDs, but this uncertainty is small (∼10%) compared to the scatter in the L_X-L_K relation. An additional difference is that M06 considered the two luminosity components separately and did not compare their sum to the measured luminosities; this comparison was made subsequently by Sivakoff et al. (2008) and Arnold et al. (2009) in their studies of AGNs in low-redshift groups and clusters of galaxies. Their analyses are much closer to our method, and their samples included some of the clusters in our sample (Abell 3128, 3125 and 644).
An alternative method to identify AGNs is to use the distinctive shape of their SEDs, particularly in the MIR (e.g. Marconi et al. 2004; Stern et al. 2005; Richards et al. 2006; A10). This approach can identify AGNs behind gas column densities large enough to obscure even the X-rays emitted by an AGN. Such an AGN sample has very different selection criteria and biases than an X-ray selected sample, and combining the two results in more complete AGN identification.
We identify AGNs from their SEDs by comparing the goodness-of-fit of two sets of model templates. The first set uses only the normal galaxy templates. The other also includes the AGN template. We determine whether a given galaxy requires an AGN component in its model SED by applying a threshold on the likelihood ratio,

ρ = exp{−[χ²(gal) − χ²(gal+AGN)]/2},

where χ²(gal) and χ²(gal+AGN) are the goodnesses-of-fit for a model with only the A10 galaxy templates and for a model that includes an additional AGN component, respectively. AGNs are those objects whose ρ is smaller than a pre-determined selection limit, ρ_max, established by Monte Carlo simulations of normal galaxies. We created artificial galaxy photometry to determine an appropriate ρ_max by combining the three galaxy templates of A10 in proportions that reflect the template luminosity distributions in real cluster members. We introduced Gaussian photometric errors comparable to the photometric uncertainties in our real data (0.07 mag) to the fluxes given by the model SEDs. We also allowed occasional catastrophic errors of up to 0.3 dex. The artificial galaxy photometry did not include upper limits, which we also neglected when constructing model SEDs of real galaxies. We fit the artificial galaxies with two models: the first excluded the AGN component from the fit, while the second included it. The likelihood ratio distributions computed from the goodness-of-fit results for the two different models are shown in Figure 4. These distributions show the probability that a pure galaxy will be erroneously classified as an AGN due to the presence of photometric errors. The similarity of the different distributions, even based on only 4 photometric bands, indicates that a single ρ_max can be used to select AGNs from among all galaxies in our sample.
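A minimal sketch of the resulting classifier is shown below; the ρ_max = 1.5 × 10^-3 threshold and the χ²_ν < 5 goodness-of-fit cut are the values adopted later in this section, and the Gaussian-likelihood form of ρ is an assumption consistent with the definition above.

```python
# Sketch of the likelihood-ratio AGN classifier described above.
import numpy as np

RHO_MAX = 1.5e-3   # 99.8% threshold adopted later in this section

def likelihood_ratio(chi2_gal, chi2_gal_agn):
    """rho = exp(-[chi2(gal) - chi2(gal+AGN)]/2); small rho favors an AGN."""
    dchi2 = np.asarray(chi2_gal) - np.asarray(chi2_gal_agn)
    return np.exp(-0.5 * dchi2)

def is_ir_agn(chi2_gal, chi2_gal_agn, chi2nu_gal_agn, rho_max=RHO_MAX):
    """AGN if rho < rho_max and the combined fit is acceptable (chi2_nu < 5)."""
    rho = likelihood_ratio(chi2_gal, chi2_gal_agn)
    return (rho < rho_max) & (np.asarray(chi2nu_gal_agn) < 5.0)

# For rho_max = 1.5e-3, the implied fit improvement when the AGN component
# is added is delta_chi2 > -2 ln(1.5e-3) ~ 13.
```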
We also identify AGNs based on the F-statistics of the two model SED fits described above. Figure 5 shows the F-statistic as a function of χ²(gal) for X-ray AGNs selected using Figure 3, AGNs selected using likelihood ratios, and "normal" cluster members. The F-statistic is given by

F = (Δχ²/Δν) / [χ²(gal+AGN)/ν],

where Δχ² is the (absolute) change in the total χ² after introducing the AGN component to the fit, Δν is the number of parameters added with the AGN component, and ν is the number of degrees of freedom of the combined fit. In addition to the galaxies that are well-fit by the galaxy-only model and not substantially improved by the addition of an AGN component, there are objects with large χ²(gal) but small F, and objects with large F but small χ²(gal). Neither of the latter two categories contains objects likely to be AGNs from the point of view of the model SEDs.
The most luminous X-ray AGNs have both large F and large χ²(gal). These are clearly identified as AGNs by the model SEDs, and less luminous X-ray AGNs can be found with increasing density toward the normal-galaxy locus at the origin of Figure 5. The dotted and dashed lines in the figure correspond to the ρ < ρ_max selection boundaries for N=6 and N=9 flux measurements, respectively. Some objects above the N=9 line are not selected as IR AGNs because they fail a cut on the overall goodness-of-fit, which requires χ²_ν(gal+AGN) < 5. We could define an AGN selection region in Figure 5, but due to the non-uniformity of our photometric data, this would result in different effective cuts in Δχ² between different clusters and between objects in individual clusters. Furthermore, we find that only 3 AGNs identified using likelihood ratios fall into the suspect part of Figure 5 with F ≈ 1. This level of contamination (∼10%) is consistent with the estimated purity of the X-ray AGNs, which we deem acceptable. Therefore, for the rest of this work, we rely on the simpler likelihood ratio threshold to identify AGNs.
Likelihood ratio selection of AGNs using SED fitting is most sensitive to the shape of the MIR SED, so we refer to AGNs so identified as IR AGNs. We find 29 IR AGNs using a selection boundary at the 99.8% confidence interval of the merged ρ distribution (ρ_max = 1.5 × 10^-3). Table 1 lists both X-ray and IR AGNs, their luminosities, and the basic parameters of their host galaxies. IR AGN selection recovers 5 of 7 AGNs (71%) identified via the Stern wedge (Stern et al. 2005; see Figure 6) and 8 of the 23 X-ray AGNs. The galaxies in the Stern wedge that are not selected from their SED fits fall just inside the boundary of the wedge, so they may be normal galaxies shifted into the wedge by photometric errors. Gorjian et al. (2008) find that 35% of X-ray sources in the Boötes field of the NOAO Deep Wide Field Survey (f_X > 8 × 10^-15 erg s^-1 cm^-2) with detections in all 4 IRAC bands fall outside the Stern wedge, and Figure 14 of A10 shows that a substantial fraction of the point-source (luminous) AGNs in their sample fall outside the wedge as well. Given the high luminosities in both of these samples, it is perhaps not surprising that most of the lower-luminosity AGNs common in galaxy clusters fall outside the Stern wedge. For ρ_max = 1.5 × 10^-3 and the size of our sample (488 galaxies), we expect on average one false-positive AGN identification and 3 or fewer false positives at 98% confidence, implying > 90% purity in our IR AGN sample.

Fig. 5. - The F-statistic versus χ²(gal) for cluster members, with IR AGNs marked as objects whose ρ falls below the thresholds of Figure 4. Open blue squares show X-ray AGNs, and solid black squares show "normal" cluster members. The dotted and dashed curves show the ρ thresholds for objects having N=6 and N=9 flux measurements, respectively. Objects above their corresponding selection boundaries are identified as AGNs, provided that they pass a χ²_ν cut.
We estimate the completeness of the IR AGN sample as a function of the reddening of the AGN template and luminosity using Monte Carlo simulations. We construct model AGN SEDs by injecting an AGN component with some luminosity and reddening into artificial galaxy photometry, which we generate using the Monte Carlo techniques described above. We estimate the completeness of the IR AGN sample from the fraction of such AGNs recovered. The completeness depends strongly on the luminosity of the AGN component: we only reliably identify AGNs with L_bol ≳ 7 × 10^10 L_⊙. The completeness depends only weakly on E(B−V), with measurable differences only for AGNs with E(B−V) > 2. For our observed wavelengths, AGN identification depends most strongly on the shape of the MIR SED, which is insensitive to modest amounts of reddening. The full dependence of completeness on L_bol and E(B−V) is listed in Table 2.
We caution that both our AGN identification and the analysis below were conducted using the fixed AGN template derived by A10. While this template is dominated by luminous AGNs, AGNs of all luminosities were used in its construction, and in some sense it represents the optimal median AGN SED. There is some evidence that AGNs with low Eddington ratios (L_bol/L_Edd) are systematically weaker in the UV and the MIR than higher L_bol/L_Edd AGNs; this appears to become important at L_bol/L_Edd ≈ 10^-3 (Ho 2008). However, the UV weakness of such objects remains a subject of debate (e.g. Ho 1999, 2008; Dudik et al. 2009; Eracleous et al. 2010), and the SEDs of AGNs appear to all be quite similar out to λ ≈ 20µm, even in AGNs with accretion rates as low as L_bol/L_Edd ≈ 10^-3 (Ho 2008, Figure 7). Furthermore, the variable reddening of the AGN component allowed by the models can account for differing UV/visible flux ratios, making the AGN component of the model SEDs flexible enough to mimic AGNs with a wide variety of Eddington ratios.
Intrinsic variations in the AGN SED are one possible cause of the absence of an important AGN component in the SEDs of many X-ray AGNs, despite their similar distributions in L_bol (Section 4). Another possible explanation is that the nuclear MIR emission from many X-ray AGNs is overwhelmed by star-formation in their host galaxies. We find that X-ray AGNs with L_X > 10^42 erg s^-1 that are also identified as IR AGNs have no measurable star-formation, while those not identified in the IR have ⟨SFR⟩ = 0.3 M_⊙ yr^-1. This may be a selection effect, since nuclear MIR emission is not subtracted before computing SFRs in galaxies not identified as IR AGNs. However, it appears that the balance between SFR and nuclear emission is an important factor in determining whether a given X-ray source will be identified as an IR AGN.
Also of concern is the MIR emission exhibited by some normal galaxies that is clearly not associated with star-formation (e.g. Verley et al. 2009; Kelson & Holden 2010). The strength of the diffuse interstellar dust emission relative to star formation varies from galaxy to galaxy depending on the populations of AGB stars, which can produce and heat dust (Kelson & Holden 2010), and field B stars (including HB stars), which produce UV light that can both heat dust grains and excite PAH emission in the diffuse ISM (e.g. Li & Draine 2002). These effects could mimic the presence of an AGN, particularly in passively-evolving galaxies, which the A10 templates predict should decline strictly as a νF_ν ∝ ν^2.5 power law. Given the limited data available to constrain MIR emission not associated with either an AGN or a star-forming region, and the as-yet uncertain magnitude of the associated variations, we neglect any potential effects on our AGN identification. However, potential sources of MIR emission not accounted for by the A10 templates, especially emission from dust heated by old stars in passive galaxies, remain a potentially important systematic uncertainty.
Stellar Masses
Stellar population synthesis modeling provides a means to estimate stellar masses in the absence of detailed spectra. Bell & de Jong (2001) construct model spectra of galaxies for a wide variety of stellar masses, SFRs, metallicities and stellar initial mass functions (IMFs) to convert colors to mass-to-light ratios (M/L). Their models assume a mass-dependent formation epoch with bursty star-formation histories, which is appropriate for the spiral galaxies they study. Figure 9 of Bell & de Jong (2001) makes it clear, however, that their results also robustly estimate M/L for passively evolving galaxies. In fact, the scatter about the mean M/L tends to decrease for redder systems because the stochasticity of the star-formation history becomes less important in galaxies that experienced their last burst of star-formation in the distant past.
Bell & de Jong provide a table of coefficients (a_λ, b_λ) relating the M/L of a galaxy to its color,

log10(M/L_λ) = a_λ + b_λ × color, (9)

where the color is measured in the bands for which a_λ and b_λ were determined. We adopt the coefficients appropriate for Solar metallicity computed with the Bruzual & Charlot (2003) population synthesis code and the scaled Salpeter IMF suggested by Bell & de Jong (2001), who report that a modified Salpeter IMF with total mass M' = 0.7 M_Salpeter yields the best agreement with the Tully-Fisher relation. Once we select an appropriate (a_λ, b_λ) pair, it is straightforward to compute stellar masses from the visible photometry. However, we must first subtract the AGN component of the model SED in sources identified as IR AGNs before computing colors. The uncertainty introduced by the AGN subtraction is a combination of the fractional uncertainty in the contribution of the AGN template to the model SED, which is determined by the fit, and the uncertainty in the AGN template itself. To measure the uncertainty in the template, we examined 1644 luminous quasars with spectroscopic redshifts from the AGN and Galaxy Evolution Survey (AGES; Kochanek et al. in prep) and determined the variation in their measured photometry about their best-fit model SEDs. Using these measurements, we constructed an RMS SED for AGNs and averaged it across each of the bandpasses we employ. The uncertainty in the AGN correction resulting from intrinsic variation about the AGN template is ∼10%, except at 24µm, where there are too few z ≈ 0 quasars to make a meaningful comparison. The uncertainty in the AGN correction at 24µm is therefore large, but it can be constrained by the relatively good agreement of the 8µm and 24µm SFRs (Figure 7).
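Returning to Eqn. 9, the mass estimate itself reduces to a few lines, sketched below. The solar R-band absolute magnitude and the (a, b) pair are placeholders for illustration; the actual coefficients are those tabulated by Bell & de Jong (2001).

```python
# Sketch of the color-based mass estimate of Eqn. 9, applied after any AGN
# component has been subtracted from the photometry.
M_SUN_R = 4.42   # assumed solar absolute magnitude in R (illustrative)

def stellar_mass(abs_mag_r, color, a=-0.5, b=1.0):  # (a, b) are placeholders
    """M* in solar masses from log10(M/L_R) = a + b * color."""
    lum_r = 10.0 ** (-0.4 * (abs_mag_r - M_SUN_R))   # L_R in solar units
    return (10.0 ** (a + b * color)) * lum_r
```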
In galaxies with no genuine nuclear activity, the AGN template can correct for variations in stellar populations relative to the templates, intrinsic extinction, or errors in the measured photometry. Subtraction of the AGN component under these circumstances would result in underestimated stellar masses and SFRs, while failure to subtract the AGN component in a genuine, low-luminosity AGN would bias the measured SFRs of their host galaxies toward higher values. However, the ambiguity between a genuine, low-luminosity AGN and an apparent AGN component introduced to correct for photometric errors (Section 3.2) renders any attempt to subtract the AGN component in such cases suspect. Therefore, in normal galaxies and in X-ray AGNs not identified as IR AGNs, no AGN correction is applied. We accept the inherent bias to avoid introducing ambiguous AGN corrections, which would be much more difficult to interpret.
The Bell & de Jong (2001) calibrations are reported for rest-frame colors, so we need K-corrections for each cluster member to convert the measured magnitudes to the rest frame. We calculate the K-corrections from the model SEDs returned by the A10 fitting routines. Uncertainties on K-corrections cannot be directly determined from the uncertainties in the model components because K-corrections depend non-linearly on these uncertainties. Therefore, we recombine the components of each model SED in proportion to the uncertainties in their contributions to the total model flux. This results in a series of temporary model SEDs. We then calculate the K-corrections implied by these temporary model SEDs and measure their dispersions to estimate the uncertainties in the K-corrections returned by the original model SED.
The systematic uncertainty on the stellar masses calculated from Eqn. 9 can be estimated by comparing the fiducial masses with masses derived using different assumptions. We estimate the typical systematic uncertainty in stellar mass, listed in Table 1, to be 0.2 dex. These uncertainties are derived by measuring the difference between the fiducial masses and those determined using coefficients appropriate for the Pégase population synthesis models with a Salpeter IMF. Conroy et al. (2009) studied the ability of different models to reproduce the observed colors of stellar populations in globular clusters and found that systematic uncertainties on stellar masses derived from population synthesis codes typically reach or exceed 0.3 dex.
Star-Formation Rates
We measure SFRs from our AGN-corrected MIR photometry using the empirical relations of Zhu et al. (2008), which have been determined for both the IRAC 8µm and the MIPS 24µm bands using the same calibration sample. While the contribution of the stellar continuum to the observed 24µm luminosity is negligible, the Rayleigh-Jeans tail of the stellar continuum emission can make an important contribution to the integrated flux at 8µm, especially in galaxies with the low SFRs typical in clusters. The method used to subtract this contribution is an important systematic uncertainty in the SFR calculation. Zhu et al. (2008) assume that the contribution of the stellar continuum at 8µm can be described by L^stellar_ν(8µm) = 0.232 L_ν(3.5µm), as derived from the models of Helou et al. (2004). Under this assumption, Zhu et al. (2008) derive the luminosity-SFR relations of Eqns. 10 and 11, appropriate for a Salpeter IMF, in which L^dust_ν(8µm) is determined by subtracting L^stellar_ν(8µm) from the measured 8µm luminosity. Simões-Lopes et al. (in preparation) find that L^stellar_ν(8µm) = 0.269 L_ν(3.5µm) provides a better estimate for their sample of nearby, early-type galaxies with no dust, and conclude that the difference of their result compared to Helou et al. (2004) is due to metallicity. Another important systematic uncertainty in SFRs derived from PAHs is the dependence of the PAH abundance on metallicity (Calzetti et al. 2007), because lower-metallicity systems have fewer PAHs and therefore weaker 8µm emission at fixed SFR. This second effect is negligible for the high-mass, and therefore metal-rich, galaxies we consider. We neglect both metallicity- and mass-dependent effects for the remainder of our analysis. Instead, we follow Zhu et al. (2008) and assume that L^stellar_ν(8µm) = 0.232 L_ν(3.5µm). We derive SFRs from Eqns. 10 and 11. For galaxies having measurable (> 3σ) SFRs from both IRAC and MIPS, we take the geometric mean of the two; otherwise, we use whichever SFR measurement is available. The resulting SFRs for AGNs are summarized in Table 1.
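A compact sketch of the 8µm estimator is given below. Only the 0.232 stellar-continuum factor is taken from the text; the final conversion constant c8 is a placeholder standing in for Eqn. 10, and the single-band interface is hypothetical.

```python
# Sketch of the 8 micron SFR with the stellar continuum removed following
# the Zhu et al. (2008) prescription quoted above.
C_UM_HZ = 2.998e14   # speed of light in micron * Hz

def sfr_8um(l_nu_8, l_nu_35, c8=1.0e-43):  # c8: placeholder for Eqn. 10
    """SFR (Msun/yr) from monochromatic luminosities L_nu (erg/s/Hz).

    Stellar continuum: L_nu_stellar(8um) = 0.232 * L_nu(3.5um); the final
    conversion constant is illustrative only."""
    l_nu_dust = max(l_nu_8 - 0.232 * l_nu_35, 0.0)
    return c8 * (C_UM_HZ / 8.0) * l_nu_dust   # c8 * nu L_nu(dust, 8um)
```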
Equations 10 and 11 were derived using the extinction-corrected Hα luminosity of the associated galaxies. The MIPS SFR determined from Eqn. 11 for a galaxy with νL_ν = 7.15 × 10^9 L_⊙ is ≈0.6 dex larger than the SFR derived from the Calzetti et al. (2007) relation, which was calibrated using the Paα emission line. Calzetti et al. (2007) used the Starburst99 IMF, and after accounting for this difference, the resulting discrepancy is reduced to 0.4 dex. The choice of SFR calibration therefore represents an important systematic uncertainty in the measured SFRs. The total systematic uncertainty in SFR is indicated by the significant scatter (0.2 dex) and the small but marginally significant offset (0.1 dex) between the IRAC and MIPS SFRs in Figure 7. Since the offset is smaller than both the scatter about the line of equality and the systematic uncertainty when comparing to the Calzetti et al. (2007) result, we neglect it below. However, we caution that there remains a ∼15% uncertainty in our results associated with the discrepancy between the IRAC and MIPS SFR indicators.
RESULTS
We identify 29 IR AGNs with likelihood ratios ρ < ρ max . We also confirm the presence of AGNs in 23 Xray point sources whose X-ray luminosities significantly exceed the luminosities expected from their host galaxies. Surprisingly, the X-ray and IR AGN samples are largely disjoint: only 8 AGNs appear in both. Only the more luminous IR AGNs appear in the X-ray AGN sample and vice-versa. While it is not surprising for faint X-ray AGNs to drop out of the IR AGN sample, the absence of X-ray emission associated with many IR AGNs, which require a moderately luminous AGN for a reliable detection, is unexpected. This may indicate either different selection biases in the two methods or genuine, physical differences between the AGNs selected by these techniques.
Bolometric AGN Luminosities
In order to conduct a meaningful comparison of X-ray and IR AGNs, we need to place them on a common luminosity system. The most obvious choice is the bolometric AGN luminosity (L_bol), which also allows us to examine black hole growth rates.
The A10 AGN template provides a natural means of determining the bolometric luminosity (L_bol) for IR AGNs, but the MIR luminosity in the template comes from reprocessed dust emission, which would result in double-counting the UV emission from the disk for AGNs viewed face-on (Marconi et al. 2004, hereafter M04; Richards et al. 2006). We instead determine L_bol using a piecewise combination of the AGN model SED and three power laws. We integrate the unreddened A10 AGN template from Lyα to 1µm; shortward of Lyα the template becomes uncertain due to absorption by the Lyα forest. We estimate the X-ray luminosity by integrating a Γ = 1.7 power law from 1-10 keV. We estimate the extreme-ultraviolet (EUV) luminosity by integrating L_ν ∝ ν^-α_ox from λ = 1216Å to 1 keV. The slope of the EUV SED (α_ox) is given by Eqn. 2 of Vignali et al. (2003), with L_ν(2500Å) taken from the AGN template SED. Finally, we eliminate reprocessed emission from dust by assuming a declining continuum, F_ν ∝ ν^2, for 1µm < λ < 30µm, following M04.
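The piecewise integration can be sketched as below. The template arrays, the positive-α_ox sign convention (L_ν ∝ ν^-α_ox with α_ox ≈ 1.5), and the anchoring of the EUV power law at Lyα are assumptions made for illustration; the actual calculation sets the slope via L_ν(2500Å) as described above.

```python
# Schematic of the piecewise L_bol integration: template from Ly-alpha to
# 1 micron, EUV power law to 1 keV, Gamma = 1.7 X-rays to 10 keV, and a
# declining F_nu ~ nu^2 continuum from 1 to 30 microns in place of the
# (reprocessed) template MIR emission.
import numpy as np

def l_bol(nu, l_nu_template, alpha_ox):
    """Integrate L_nu over the piecewise SED; nu in Hz (ascending),
    L_nu in erg/s/Hz. Returns L_bol in erg/s."""
    nu_lya = 2.998e18 / 1216.0             # Ly-alpha frequency
    nu_1um = 2.998e14                      # 1 micron
    nu_1kev, nu_10kev = 2.418e17, 2.418e18

    # 1) template between 1 micron and Ly-alpha
    sel = (nu >= nu_1um) & (nu <= nu_lya)
    l_uv_opt = np.trapz(l_nu_template[sel], nu[sel])

    # 2) EUV: L_nu ~ nu^-alpha_ox from Ly-alpha to 1 keV, anchored at Ly-alpha
    l_lya = np.interp(nu_lya, nu, l_nu_template)
    g = 1.0 - alpha_ox
    l_euv = l_lya * nu_lya * ((nu_1kev / nu_lya) ** g - 1.0) / g

    # 3) X-ray: Gamma = 1.7 photon index -> L_nu ~ nu^-0.7 over 1-10 keV
    l_1kev = l_lya * (nu_1kev / nu_lya) ** -alpha_ox
    l_x = l_1kev * nu_1kev * ((nu_10kev / nu_1kev) ** 0.3 - 1.0) / 0.3

    # 4) MIR replaced by F_nu ~ nu^2 between 30 and 1 microns (analytic)
    l_1um = np.interp(nu_1um, nu, l_nu_template)
    l_mir = l_1um * nu_1um * (1.0 - (1.0 / 30.0) ** 3) / 3.0

    return l_uv_opt + l_euv + l_x + l_mir
```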
To correct the X-ray luminosities of X-ray AGNs to bolometric luminosities, we fit a power law to the measured L_X(0.3-8 keV) and L_bol of the 8 IR AGNs identified separately in X-rays. A least-squares fit to the total X-ray and AGN luminosities yields the L_X-L_bol relation of Eqn. 13, where L_bol is the bolometric AGN luminosity integrated from 10 keV to 30µm. The AGNs used to determine Eqn. 13 show a scatter of 0.4 dex about the best-fit relation (Figure 8).

Fig. 8. - The measured X-ray (0.3-8 keV) and bolometric luminosities of AGNs identified by both the X-ray and IR selection criteria are shown by black points. L_bol is derived by integrating the A10 AGN model SED component from 1216Å to 1µm and assuming a declining continuum with F_ν ∝ ν^2 for λ > 1µm. L_X is determined assuming a Γ = 1.7 power law. See Section 4.2 for further information.

Figure 8 suggests that the slope returned by the fit may be strongly influenced by the highest-luminosity AGN. However, a fit to the other 7 AGNs returns an identical slope (0.9 ± 0.5), so Eqn. 13 is not significantly biased by the highest-luminosity object. The luminosity dependence of the bolometric corrections (BCs) derived from the fit is therefore robust. The slope is also consistent, within large statistical uncertainties, with the luminosity dependence derived by M04. The BCs derived from Eqn. 13 are fairly crude. For example, the fit does not account for uncertainties on L_X or L_bol. It also ignores upper limits, which will lead it to over-predict the true L_X at fixed L_bol. M04, by contrast, provide luminosity-dependent BCs in several energy ranges that account for X-ray non-detections (their Eqn. 21). We convert their BCs to 0.3-8 keV assuming Γ = 1.7 and estimate the expected X-ray flux from our IR AGNs. The predicted X-ray fluxes exceed those estimated using Eqn. 13, which we know over-estimates the intrinsic L_X-L_bol relation, by 0.7 dex or more. This might result if the M04 SED is a poor match to the A10 AGN template. M04 determine their X-ray BCs using the α_ox relation derived by Vignali et al. (2003) for a sample of SDSS quasars, including broad-absorption line quasars (BALQSOs). Given that our L_bol calculation is insensitive to the absorption in BALQSOs, it is possible that the M04 BCs over-estimate L_X at fixed L_ν(2500Å) when applied to our sample. In order to produce consistent results for the X-ray and IR AGNs, we therefore use the BCs implied by Eqn. 13 rather than the M04 BCs, despite the large uncertainties associated with Eqn. 13.
X-ray Sensitivity
With the L X -L bol relation provided by Eqn. 13, we can determine whether the X-ray non-detection of many IR AGNs results from some intrinsic difference between the two classes of AGNs or if it is merely a result of the sensitivity of the X-ray images used by M06. Eqn. 13 predicts that 9 (5) IR AGNs with no X-ray detections should be more than a factor of 3 (5) brighter than the faintest point source in their parent clusters (M06). The M04 BCs produce more X-ray flux at fixed bolometric luminosity than Eqn. 13 and yield 13 (12) IR AGNs with significant X-ray non-detections with the same flux limits. The lack of detectable X-rays from many IR AGNs is consequently easier to explain if we use Eqn. 13 rather than the M04 relations to predict their X-ray luminosities.
The minimum detected flux in a given cluster may not always be a fair representation of the sensitivity for a given IR AGN due to variations in the Chandra effective area with off-axis angle. However, the margins by which many IR AGNs in AC 114 exceed the minimum detected flux, sometimes more than a factor of 5, suggest that these AGNs should have been detected if they obeyed the L_X-L_bol relation of Eqn. 13. The non-detection of many IR AGNs in X-rays is qualitatively consistent with the results of Hickox et al. (2009), whose IR AGN selection relied upon the Stern wedge, and who found many strong IR AGNs that could not be identified in X-rays. At least some of the "missing" IR AGNs could be highly obscured. An intervening absorber with N_H = 10^22 cm^-2 would reduce the observed 0.5-2 keV flux by a factor of 3, which is sufficient to explain many of the missing IR AGNs. The missing AGNs could also result from the large scatter about the mean L_ν(2500Å)-α_ox relation. The AGN with the most significant X-ray non-detection exceeds the minimum reported flux by a factor of 7, which can be explained by Δα_ox ≈ 0.4. Vignali et al. (2003) report a large intrinsic scatter about their best-fit relation, and the combination of this scatter with in situ absorption could mask moderately luminous AGNs from detection in X-rays.
Finally, at least one IR AGN (A1689 #109) appears to be absent from the M06 sample due to X-ray variability rather than as a result of absorption, intrinsic X-ray faintness, or shallow Chandra imaging. This object is moderately luminous (L_bol = 2.1 × 10^10 L_⊙), AGN-dominated (f_AGN = 0.95), falls firmly in the middle of the Stern wedge, and is very robustly detected by our likelihood ratio selection (ρ = 4 × 10^-77). Nevertheless, there is no X-ray point source associated with this object in the Chandra image employed by M06. In a more recent observation (Chandra Obs ID 6930, PI G. Garmire), A1689 #109 is associated with an X-ray point source far brighter than the X-ray sources reported by M06. It therefore seems likely that the IR AGNs that require the most extreme values of α_ox could be accounted for by variability rather than by systematically weak X-ray emission compared to their visible-wavelength luminosities.

Fig. 9. - Cumulative stellar mass, SFR and sSFR distributions of the X-ray and IR AGN samples compared to the distributions for all cluster members. Neither of the AGN samples shows any significant difference in either M* or SFR compared to the full sample of cluster members, nor does the merged AGN sample. However, IR AGN hosts have higher sSFR than both X-ray AGN hosts and normal galaxies at 99% confidence, despite their similarities in M* and SFR.
Host Galaxies
We determine stellar masses and SFRs for AGN host galaxies after subtracting the AGN component from the SED. This introduces some additional uncertainty in the resulting masses and SFRs beyond the original photometric uncertainties, as discussed in Section 3.3. The uncertainty in the AGN contribution to the measured MIR fluxes can prevent detection of low-level star formation in IR AGNs; the SFR distribution among IR AGNs is therefore biased toward high SFR. Figure 9 shows the results of comparing galaxies hosting different types of AGNs to one another and to cluster galaxies as a whole. The stellar mass and SFR distributions of galaxies hosting X-ray and IR AGNs show no measurable differences from the distributions of all cluster members. Merging the X-ray and IR AGN samples likewise yields no measurable difference. However, the hosts of IR AGNs have high specific SFRs (sSFRs) compared to the hosts of X-ray AGNs and to all cluster members, at 98% and 97% confidence, respectively. The difference between the sSFRs of X-ray AGN hosts and the full galaxy sample is not significant, although X-ray AGN hosts appear to have lower sSFRs than the average galaxy in Figure 9, which is consistent with previous results using field galaxies (Hickox et al. 2009). We must also consider the effect of non-detections on the measured distributions. Many of the IR AGN hosts have upper limits on SFR that are smaller than the SFRs of the X-ray AGN host galaxies with the lowest measurable SFRs. Therefore, if the IR AGN hosts had a distribution of SFRs similar to the X-ray AGN hosts with measurable star-formation, star-formation would have been detected in most IR AGN hosts. This indicates that uncertainties in the AGN corrections alone cannot account for the higher sSFRs among IR AGN hosts.
The IRAC color-color diagram (e.g. Stern et al. 2005) probes the nature of AGN host galaxies independent of their model SEDs by identifying the dominant source of their MIR emission (Donley et al. 2008). The MIR colors of X-ray and IR AGNs, before their AGN components are subtracted, are compared to all cluster members in Figure 6. Galaxies hosting AGNs have unremarkable [5.8]−[8.0] colors but do not extend as far to the red as normal galaxies, which indicates that AGNs are seldom found in starbursts or luminous infrared galaxies (Donley et al. 2008). AGN hosts also show redder [3.6]−[4.5] colors than is typical for a red sequence galaxy, which may indicate a contribution of hot dust to the 4.5µm continuum. The colors of AGN hosts, especially IR AGN hosts, are influenced by the AGN continuum, but tests using the AGN and spiral galaxy templates indicate that only galaxies in the Stern wedge have more than 50% of their IRAC fluxes contributed by the AGN component. A two-dimensional KS test confirms that, after excluding objects in the Stern wedge, the IRAC colors of both X-ray and IR AGNs differ from normal galaxies at > 99.9% confidence, and the absence of X-ray AGNs among the most vigorously star-forming galaxies (those with the reddest [5.8]−[8.0] colors) is consistent with earlier indications that X-ray AGNs avoid the blue cloud in visible color-magnitude diagrams (CMDs; Schawinski et al. 2009; Hickox et al. 2009). The distribution of X-ray AGNs in Figure 6 also appears to be consistent with the results of Gorjian et al. (2008), who found that 16.8 ± 0.3% of X-ray-identified AGNs outside the Stern wedge had very red [5.8]−[8.0] colors consistent with vigorous, on-going star-formation. We found this population to be 20 ± 6% among our X-ray AGNs.
The visible CMD provides a means to estimate the nature of galaxies in the absence of measurable star-formation at MIR wavelengths. Figure 10 shows the CMD for each cluster after the AGN component has been subtracted. The fraction of cluster members hosting an X-ray AGN peaks on the red sequence, and the probability that the X-ray AGN hosts are drawn from the parent cluster population is less than 10^-3. This contrasts with AGN hosts in the field, where the X-ray AGN fraction typically peaks in the green valley (Hickox et al. 2009; Schawinski et al. 2009; Silverman et al. 2009, henceforth S09) for AGNs identified using either X-ray luminosity or emission-line diagnostics. IR AGN hosts, both in our sample of cluster AGNs and in the field sample of Hickox et al. (2009), conspicuously avoid the red sequence. Like the difference between X-ray AGN hosts and the parent cluster population, this result is significant at > 99.9% confidence. This indicates that the IR AGN sample has at most limited contamination by MIR-excess early-type galaxies of the sort studied by, e.g., Brand et al. (2009). Galaxies hosting IR AGNs in clusters also show an important difference compared to their counterparts in the field. While only 1.5% of field galaxies hosting the IR AGNs studied by Hickox et al. (2009) had ^0.1(u−g) colors redder than the median of the red sequence, more than 20% of IR AGNs in clusters have visible colors redder than the red sequence in their parent clusters.
We examined the SDSS g−r colors of very red galaxies ((V−R)_rest-frame > 0.8) in Abell 1689, which has the largest number of such objects, and found that most also appear red in SDSS colors. The most notable exception is Abell 1689 #192, which we have identified as an IR AGN; its discrepant colors suggest that they may change due to AGN variability. The qualitative agreement between the colors of very red galaxies in Figure 10 and their g−r colors from SDSS suggests that these objects are genuinely unusual and not the result of photometric errors. These galaxies also show substantial reddening of the AGN template in their A10 fit results, with ⟨E(B−V)⟩ = 0.4 and a trend for higher E(B−V) in galaxies with redder colors at 97% confidence. These results suggest that the unusually red galaxies in Figure 10 experience significant internal extinction that is not present in most galaxies.
Since the AGN component of the SED fit may account not only for a true AGN contribution but also for intrinsic variations about the normal galaxy templates, some or all of these very red AGNs, which represent approximately 1/3 of our IR AGN sample, may not be true AGNs. However, fewer than half (7/17) of the objects with (V−R)_rest-frame > 0.8 are identified as IR AGNs; this implies that IR AGNs must differ from normal galaxies not only in the visible but also in the MIR, and MIR fluxes are practically immune to extinction. Therefore, most of the IR AGNs identified in this region of color-magnitude space cannot be false positives selected due to their unusual visible colors but must have genuine nuclear activity contributing to their SEDs.
Accretion Rates
We use the bolometric luminosities of both X-ray and IR AGNs to measure the growth of their black holes and compare the black hole growth to the assembly of stellar mass in their host galaxies. The accretion rate of a black hole can be generically written as

Ṁ_BH = L_bol / (ε c²), (14)

where L_bol is the bolometric luminosity and ε is the efficiency of conversion between the rest mass energy (Ṁ c²) of the accreted material and the energy radiated by the black hole.

Fig. 11. - Comparison of accretion rates (Ṁ_BH; left panel) and SMBH growth rates relative to their host galaxies (Ṁ_BH/SFR; right panel). The various distributions compare different methods of estimating Ṁ_BH: directly from the SEDs of IR AGNs (red dashed), applying ad hoc BCs derived from AGNs with both X-ray and IR identifications (solid black), and applying the M04 BCs (blue dotted). M04 BCs return significantly lower Ṁ_BH than the other two methods, which are consistent with one another. Short black and red arrows mark upper limits for X-ray and IR AGNs, respectively. The dashed vertical line marks the ratio required to maintain the z = 0 M_BH-M_bulge relation (Marconi & Hunt 2003). See Section 4.4 for more on the various BCs.
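A quick numerical check of Eqn. 14, assuming the ε = 0.1 adopted below: the L_bol quoted for Abell 1689 #109 in Section 4.2 corresponds to an accretion rate of order 10^-2 M_⊙ yr^-1, as the snippet verifies.

```python
# Numerical check of Eqn. (14) with epsilon = 0.1.
L_SUN = 3.846e33        # erg/s
M_SUN = 1.989e33        # g
C_CM_S = 2.998e10       # cm/s
SEC_PER_YR = 3.156e7

def mdot_bh(l_bol_lsun, eps=0.1):
    """Black hole accretion rate in Msun/yr from L_bol in solar units."""
    mdot_g_s = l_bol_lsun * L_SUN / (eps * C_CM_S ** 2)
    return mdot_g_s * SEC_PER_YR / M_SUN

print(mdot_bh(2.1e10))  # ~1.4e-2 Msun/yr for Abell 1689 #109
```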
We assume ε = 0.1, appropriate for a thin accretion disk around an SMBH with moderate spin (Thorne 1974), and determine L_bol as described in Section 4.1. The accretion rates derived from Eqn. 14 for the X-ray and IR AGN samples are shown in Figure 11. The left panel suggests that X-ray and IR AGNs have similar accretion rates, and a KS test reveals that there is no significant difference between the two samples. This is surprising, since we would naïvely expect that the difference between X-ray and IR AGNs might be due to the different dependence of the X-ray and IR selection techniques on luminosity. Instead, the right panel of Figure 11 shows that the X-ray and IR AGN samples have ⟨Ṁ_BH/SFR⟩ = 3 × 10^-3 and ⟨Ṁ_BH/SFR⟩ = 2 × 10^-3, respectively. These ratios are comparable to the mean M_BH/M_bulge in the local universe (2 × 10^-3; Marconi & Hunt 2003), which indicates that the SMBHs in cluster AGNs are accreting at approximately the rate required to maintain the z = 0 M_BH-M_bulge relation. However, this is likely an artifact of our SFR detection thresholds, as the accretion rates of these objects are not large enough to produce outliers on the M_BH-M_bulge relation in a Hubble time.

Figure 12 compares black hole accretion rates with host mass and sSFR. We find no significant correlation between Ṁ_BH and sSFR, nor do we find a correlation of Ṁ_BH with stellar mass among the X-ray AGN sample. However, Ṁ_BH correlates with stellar mass among IR AGNs at 99.5% confidence, weakening to 98% confidence among the merged AGN sample. This correlation may be related to the ability of more massive cluster members to retain more cold gas.

Fig. 12. - Relationships of black hole accretion rates (Ṁ_BH) to stellar masses and sSFRs. Black points and arrows show Ṁ_BH inferred from X-ray luminosities using BCs from Eqn. 13, while red points and arrows mark Ṁ_BH inferred from integrating model SEDs. Stellar masses and SFRs are measured from SEDs and include the entire galaxy, not just the spheroidal component.

Figure 13 shows the relationship between black hole growth and stellar mass assembly in AGN host galaxies. The correlation of Ṁ_BH with SFR is extraordinarily strong (> 99.9% confidence), and both X-ray and IR AGNs appear to follow the same relation, with SFR ∝ Ṁ_BH^(0.46±0.06). Netzer (2009) studied emission-line selected AGNs from SDSS and also found a tight correlation between SFR and AGN luminosity across nearly 5 dex in L_bol. However, their SFR-Ṁ_BH relation (SFR ∝ Ṁ_BH^0.8) is steeper than ours at 5.7σ. Furthermore, Lutz et al. (2010) performed a stacking analysis of X-ray identified AGNs at z ∼ 1 and found no measurable correlation of SFR with L_bol for AGNs with L_2-10keV < 10^44 erg s^-1. However, the millimeter-bright, optically luminous QSOs studied by Lutz et al. (2008) appear to be consistent with both Netzer (2009) and Lutz et al. (2010). The qualitative similarity of our results to those of Netzer (2009) and Lutz et al. (2010) suggests that we are seeing the same underlying relationship. That X-ray, IR and emission-line selected AGNs all appear to show the same general trend toward higher Ṁ_BH in hosts with higher SFR suggests that accretion rates in all of these objects are set by the size of the global cold gas reservoir. Such a relationship is also predicted theoretically as a result of large-scale dynamical instabilities, which drive cold gas to the centers of galaxies where it can be accreted (Kawakatu & Wada 2008; Hopkins & Quataert 2010). However, the quantitative discrepancies between the various observational signatures of star-formation and gas accretion indicate that further work on the relationship between these phenomena is needed.

Figure 13 also compares star-formation and black hole growth among our AGN sample with the median ratio found by S09 and the ratio needed to maintain the z = 0 M_BH-M_bulge relation. In some cases, Ṁ_BH/SFR falls more than a dex below the ratio reported by S09 for field galaxies at z ≈ 0.8 and more than 0.3 dex below the rate needed to maintain the local M_BH-M_bulge relation. However, if we consider AGN hosts with no measurable star-formation, the disagreement in Ṁ_BH/Ṁ* between the cluster AGNs we measure and the field AGNs of S09 becomes far less pronounced. The upper limits in Figure 13 fill in much of the empty space between the S09 median relation and the cluster AGNs with measurable star-formation, but the fraction of galaxies with Ṁ_BH/SFR < 2 × 10^-3 is larger in Figure 13 than in Figure 13 of S09 (7/39 versus 9/67). This difference grows (7/27) if we consider only AGNs with Ṁ_BH < 10^-2 M_⊙ yr^-1, which is below the luminosity limit of the S09 sample. However, even the difference between the low-luminosity subsample and the S09 result is not statistically significant (90% confidence). Silverman et al. (2009) project the evolution in the median SFR of their AGN sample to z = 0 and find that it agrees with the SFRs measured in Type 2 AGNs with log(L_[OIII]) > 40.5 in SDSS. The median z = 0.2 SFR for the S09 AGN hosts is SFR ≈ 0.5 M_⊙ yr^-1, which is comparable to our detection threshold. As a result, the AGNs measured in Figure 13 are more comparable to a high-SFR subsample of the S09 AGNs. However, there is no significant difference in the Ṁ_BH/SFR of high-SFR versus low-SFR AGNs in S09.
We therefore conclude that the ratio of Ṁ_BH to SFR in our sample of low-z cluster AGNs is consistent with the ratios observed in high-z AGNs in the field.

Radial Distributions

Martini et al. (2007) found that luminous (L_X > 10^42 erg s^-1) X-ray AGNs were more centrally concentrated in R/R_200 than normal cluster members at 97% confidence. After pruning the AGN sample of suspect redshifts and applying improved K-corrections, we assemble the radial distributions of our AGN samples in Figure 14. Figures 14a and 14b, which consider the X-ray and IR AGN samples, respectively, have slightly different distributions of parent galaxies. This is because the Spitzer pointings cover only the fields around X-ray sources identified by M06 and not the full Chandra field of view. The IR AGNs are selected from the cluster member catalog after SED fitting has been performed, so the radial distribution of IR AGNs is guaranteed to be unbiased with respect to the cluster galaxy sample we used above, while X-ray AGNs must be compared to the distribution of all galaxies within the Chandra footprint. These different selection footprints lead to the different radial distributions shown by the solid red and black lines in Figure 14b. The difference is not significant, however, and has no impact on our conclusions.
We have determined that the host galaxy of the X-ray point source identified as the cluster AGN AC114-5 by M06 had an erroneous spectroscopic redshift reported in the literature (see Section 3.1 and Figure 1). Our SED fitting indicates that this source is a background QSO at z phot ≈ 0.99. Without this object, which is located at a projected distance R/R 200 ≈ 0.2 from the center of AC 114, the significance of the difference between the lumi- Fig. 13.-Relationship between black hole growth and star-formation in our AGN sample compared to the AGN sample from COSMOS examined by S09. Green circles and arrows mark the S09 AGNs. All other symbols are the same as in Figure 12. Lines mark theṀ BH /SF R relation measured by S09 (solid) and the ratio required to produce the z = 0 M BH -M bulge relation (Marconi & Hunt 2003;dashed). Our SFRs and those reported by Silverman et al. (2009) are galaxy-wide SFRs rather than bulge SFRs. nous X-ray AGN and control samples drops to 89% confidence with a luminosity-selected control sample and 92% confidence with a mass-selected control sample. Consistent with the results of Martini et al. (2007), we also find no significant difference between the radial distribution of the full X-ray AGN sample compared to the distribution of cluster members as a whole.
Following Martini et al. (2009), we also try a redshift-dependent luminosity threshold M_R,cut = M*_R(0) + 1 − z in place of a fixed value. The galaxy and AGN samples selected using this criterion show no significant differences in their R/R_200 distributions. Martini et al. (2009) chose this evolving threshold to select a sample of passively-evolving galaxies at fixed stellar mass. A mass threshold (M_* > 3 × 10^10 M_⊙) appropriate for an elliptical galaxy at z = 0 with M_R = M_R,cut again yields no measurable difference between the radial distributions of X-ray AGNs and all cluster members. We conclude that the radial distributions of both X-ray and IR AGNs in galaxy clusters are consistent with the distribution of cluster members, although the agreement between cluster members and IR AGNs is much better than between cluster members and X-ray AGNs.
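A standard way to compare two such radial distributions is a two-sample Kolmogorov-Smirnov test on the projected radii. The sketch below illustrates the general idea with hypothetical R/R_200 values; it is not the exact statistical machinery used for Figure 14.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical projected radii (R/R_200) for AGNs and for all members.
r_agn = np.array([0.12, 0.25, 0.31, 0.44, 0.58, 0.71])
r_members = np.random.default_rng(0).uniform(0.05, 1.5, size=200)

stat, p_value = ks_2samp(r_agn, r_members)
# Confidence that the samples are drawn from different distributions.
print(f"D = {stat:.3f}, p = {p_value:.3f}, confidence = {1 - p_value:.1%}")
```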
DISCUSSION
Identifying AGNs from their X-ray emission is widely considered to be among the most robust means of selecting AGNs (e.g. Ueda et al. 2003, S09, A10), because the measured hard X-ray luminosity of a given AGN is largely insensitive to absorption if N_H < 10^24 cm^−2. Furthermore, the fraction of Compton-thick AGNs (N_H > 10^24 cm^−2) is small, with 10% or less of all cosmic black hole growth taking place in Compton-thick systems. Alternatively, AGNs can also be robustly identified from their UV continuum emission after it has been absorbed by dust and re-emitted in the MIR. If these techniques are similarly immune to the effects of absorption, they should yield very similar AGN samples. Instead, we find that at most 15% of AGNs in galaxy clusters are identified by both X-ray and MIR techniques.
Furthermore, it is clear that this dichotomy does not result solely from the relative luminosities of X-ray and IR AGNs. The IR AGN sample contains 5-9 objects that should have been detected in X-rays if their SEDs were similar to those of AGNs identified using both selection methods. The most prominent of these is Abell 1689 #109, which has L_bol ≈ 8 × 10^45 erg s^−1 but was not detected in the Chandra image used by M06 to identify X-ray AGNs in Abell 1689. This AGN appears quite prominently in a subsequent Chandra image, indicating that its initial non-detection was most likely the result of X-ray variability. This example demonstrates that the absence of detectable X-ray emission from an AGN candidate, even a fairly luminous one, does not necessarily preclude the presence of an AGN. However, Abell 1689 #109 is not typical. The IR AGNs with significant X-ray non-detections are not necessarily the most luminous. Instead, they reside in the clusters with the deepest X-ray images. Indeed, all of the X-ray non-detections in AC 114 that fall within the Chandra image footprint are predicted to be at least 3 times brighter than the faintest reported X-ray point source. As a result, at least some of these non-detections could indicate contamination of the IR AGN sample by one or more of the effects discussed in Section 3.2, e.g. intrinsic variation in the AGN SED or dust heating by AGB carbon stars. More observational and theoretical work on the dust emission in old stellar populations is required before the potential of these sources of MIR emission to mimic an AGN-like SED can be quantified.
In the absence of detailed, calibrated models for "contamination" of MIR emission by old stars, we assume that this component is negligible. This implies that X-ray selection alone can miss a large fraction of moderate-to-low luminosity AGNs. This could have important implications for studies of star-formation in clusters using MIR luminosities (e.g. Saintonge et al. 2008; Bai et al. 2009; Geach et al. 2009). This is especially important if authors assume that AGNs can always be identified with X-rays alone or that the MIR emission from galaxies with X-ray excesses is always dominated by AGN emission. These assumptions imply that any MIR emission not associated with an X-ray AGN must be powered by star-formation and that no MIR emission from a galaxy hosting an X-ray AGN can be powered by star-formation.
Our results indicate that these assumptions may lead authors to overestimate the number of cluster galaxies with vigorous star-formation and to underestimate the number with moderate star-formation. Therefore, additional tests for AGN are needed to correctly interpret the MIR luminosities of cluster galaxies.
A difference between X-ray- and MIR-selected AGN samples also appears among field samples, which consist of more luminous AGNs than the ones we study and use a different MIR selection method (Hickox et al. 2009). The color distributions of IR AGNs selected using different techniques also differ from one another, but it is clear that galaxies hosting AGNs identified from their X-ray emission are dissimilar from galaxies hosting AGNs identified in the MIR. Most notably, IR AGN hosts have significantly higher sSFRs than the average cluster galaxy, while there is no significant difference between the sSFRs of X-ray AGNs and the cluster population as a whole. Since SFR correlates well with cold gas mass, the higher sSFRs among IR AGN host galaxies suggest that these galaxies have a larger fraction of their baryons in cold gas than X-ray AGN hosts. However, the differences discussed in Section 4.3 are determined only for galaxies with measurable star-formation. Several IR AGNs are found in host galaxies that have both visible and IRAC colors consistent with passively-evolving stellar systems.
The tight correlations between the accretion rates of both X-ray and IR AGNs and the SFRs of their host galaxies suggest that the two classes are fueled by the same mechanism and are therefore fundamentally similar. Subject to the caveat described above, the larger sSFRs found in IR AGN hosts might explain the apparent dichotomy of the two AGN classes despite their physical similarity. Larger gas fractions in IR AGN hosts could lead to larger average column densities in IR AGNs, depressing L_X/L_bol in these systems. The presence of at least 5 of the 8 IR AGNs with X-ray counterparts on the red sequence, where there is little cold gas to participate in X-ray absorption, tends to support this scenario (Figure 10). If the cold gas fractions of AGN host galaxies influence the detectability of X-ray AGNs, this might also explain the dearth of X-ray AGNs in the green valley in clusters compared to the field. The X-ray AGNs in our sample are weaker than the AGNs usually studied in field galaxy samples, and a modest cold gas reservoir in green valley galaxies could more easily absorb enough X-rays from an AGN with L_X = 10^41 erg s^−1 to make it undetectable. Doing the same for an AGN with L_X = 10^43 erg s^−1, which is more typical of the field samples studied by, e.g., Hickox et al. (2009) and S09, would require a larger gas column.
Just over half (58%) of the M06 X-ray point sources have detectable hard X-ray emission, and therefore many AGNs near the Chandra detection limits could be hidden by a sufficiently large absorbing column. Only 3 of the 9 IR AGNs in AC 114 whose bolometric luminosities imply that they should have been detected in X-rays, but were not, would remain detectable in the soft X-ray band behind a gas column with N_H = 10^22 cm^−2. This column density is large for Type I AGNs, but it is not unusual for Type II AGNs observed in X-rays (Ueda et al. 2003). Furthermore, X-ray and IR AGNs seem to obey the same relationship between SFR and accretion rate in AGN hosts whose SFRs are measurable. This is consistent with the hypothesis that the apparent dichotomy between X-ray and IR AGNs is false, and that the shape of an AGN's SED depends strongly on the amount of absorbing material between us and the central black hole.
The scenario we propose, in which absorption by cold gas in the host galaxy is responsible for the absence of detectable X-ray emission from IR AGNs, is consistent with the differences we find between the two samples. However, verifying that absorption by the host ISM is indeed the cause of this observed difference will require deeper X-ray observations to detect X-ray counterparts and estimate absorption columns. If this can be accomplished, the presence of spectral signatures of X-ray absorption would confirm that the host galaxy is responsible for hiding some IR AGNs from X-ray detection.
CONCLUSIONS
We have used Spitzer imaging of galaxy clusters to identify AGNs and to measure the masses and star-formation rates of their host galaxies. We find that AGNs identified by this technique have very little overlap with AGNs identified in X-rays. We compared the host galaxies of AGNs identified using the two methods and determined that, while their masses and SFRs are indistinguishable, IR AGNs reside in galaxies with higher sSFRs than both X-ray AGN hosts and the parent sample of cluster galaxies. The hosts of X-ray AGNs have sSFRs that are somewhat lower than but consistent with the sSFRs seen in cluster galaxies as a whole. The difference between X-ray AGN hosts and normal cluster galaxies is significant only when comparing their positions in visible color-magnitude and MIR color-color diagrams. X-ray AGN hosts are rarely found in the regions of both diagrams associated with vigorous star-formation.
We also find that the accretion rates of both X-ray and IR AGNs correlate strongly with the SFRs of their host galaxies. This suggests that X-ray and IR AGNs are physically similar and are fueled by the same mechanism. We hypothesize that the larger sSFRs seen in IR AGN hosts indicate larger cold gas fractions in these galaxies, and suggest that this could account for the apparent dichotomy between X-ray and IR AGNs. A moderately large cold gas column density of 10^23 cm^−2 could suppress the X-ray emission from the IR AGNs enough that we would be unable to detect them. The presence of IR AGNs but not X-ray AGNs in galaxies with very red optical colors, indicative of strong absorption, lends credence to this hypothesis. It might also be verifiable directly by deep X-ray observations of either AC 114 or Abell 1689 to search for X-ray emission from IR AGNs and to determine if such X-ray emission shows evidence for absorption intrinsic to the host galaxy. For example, the most luminous IR AGN with no X-ray counterpart in Abell 1689 could be detected by Chandra with S/N = 3 per resolution element at 4 keV (the energy cutoff for objects with N_H = 10^23 cm^−2) in 160 ks. This would allow a crude model spectrum to be constructed and the intrinsic absorption column to be measured. Finally, we have obtained NIR spectra of several IR AGNs in Abell 1689, which we will examine for high-ionization emission lines that would unambiguously indicate the presence of an AGN.
Following Martini et al. (2007), we compared the radial distributions of AGNs and all cluster members. We eliminated one AGN with a spectroscopic redshift from the literature that incorrectly identified a background quasar as a cluster member. Without this object, the significance of their result that luminous X-ray AGNs (L_X > 10^42 erg s^−1) are more concentrated than cluster members as a whole is reduced to ∼90% confidence. While this result is no longer significant, it would be worthwhile to extend the present sample using archival Chandra imaging of additional clusters to either confirm or refute that X-ray luminous AGNs are more concentrated than the galaxy populations of their parent clusters. It is unlikely, however, that a similar exercise using IR AGNs would yield a positive result, as the radial distribution of IR AGNs agrees very closely with the distribution of cluster galaxies.

Notes to the cluster sample table: (1) [description truncated] Martini et al. (2007), determined using the biweight estimator of Beers et al. (1990). (2) Velocity dispersions of cluster members estimated by Martini et al. (2009) using the biweight measure of Beers et al. (1990). (3) Total number of galaxies with both MIR and R-band coverage identified as cluster members by Martini et al. (2007) or extracted from the literature using their redshift limits. (4) Astronomical Observation Request (AOR) numbers of the Spitzer observations used to construct the IRAC mosaics. (5) The minimum detectable observer-frame 8 µm luminosity in each cluster, derived from the 3σ lower limit on measurable flux in a "typical" part of the 8 µm mosaic image; due to the variable coverage across each cluster, lower luminosities are detectable in some cluster members than in others. (6) AORs used to construct the 24 µm mosaics. (7) 3σ lower limits on detectable 24 µm luminosities, derived in a similar manner to the IRAC limits in column (5) and subject to the same caveats.

[Sample photometry rows for galaxies such as ms1008-001 and ac114-001 appeared here; the values are too garbled by extraction to reproduce.]

Note. - Visible and MIR photometry for a small selection of example galaxies. The full table is available from the online version of the journal. (1) The name of this object, constructed from a shorthand of its parent cluster and the order in which each object appears in the list of cluster members extracted from NED. (2-3) Positions of this object in J2000 coordinates, as derived from the R-band images. (4-7) Visible photometry for each object, where detectable, in Vega magnitudes. Fluxes are measured in the R-band Kron-like aperture. Objects with no quoted magnitudes in a given band have either no coverage or no detection in that band. No upper limits are quoted. (5-8) MIR fluxes measured in the R-band Kron-like aperture. Where appropriate, 3σ upper limits on measured MIR fluxes, derived from the appropriate uncertainty mosaic, are given. Galaxies with no quoted upper limit for a given band have no coverage in the corresponding image.

Note. - Brief sample table summarizing AGNs identified either by their X-ray luminosity or their SED shapes. The full table is available from the electronic edition of the journal. (1) The name of this object in Table ??. (2) The name given to the X-ray source by Martini et al. (2006). (3-4) Position of this AGN in J2000 coordinates, as derived from the R-band image. (5) The bolometric luminosity derived by integrating the direct component of the AGN contribution to the model SED. These luminosities are quoted only for IR AGNs. (6) Rest-frame X-ray luminosities in the 0.3-8 keV band from Table 4 of Martini et al. (2006). X-ray luminosities are given only for X-ray AGNs. (7) Stellar mass derived using the M/L coefficients appropriate for a solar-metallicity galaxy with a scaled Salpeter IMF and applying the Bruzual & Charlot population synthesis model (Bell & de Jong 2001, Table 4). Systematic errors are derived by applying the M/L coefficients appropriate for a Salpeter IMF and the Pégase population synthesis model. Upper limits are given at 3σ of the statistical error only. (8) SFR derived either from the 8 µm luminosity, the 24 µm luminosity, or by taking the geometric mean of the two, depending on the measurements available. Uncertainties include only statistical errors, and upper limits are quoted at 3σ in the more sensitive of the 8 µm and 24 µm bands.
"year": 2011,
"sha1": "aa191e63e3600ab69ef2899ffb3c9accbd8d7d17",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1101.0812",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aa191e63e3600ab69ef2899ffb3c9accbd8d7d17",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
BEACONDRIOD: AN AUTOMATED STUDENT ATTENDANCE SYSTEM
INTRODUCTION
Generally speaking, student attendance is mandatory in many institutes and universities; a course may even require a certain attendance rate in order to pass. Recording attendance takes a considerable amount of time and effort when the process is paper-based and/or the number of students is high, and it is prone to error. This leaves no choice but to record attendance in the best way possible, with less human involvement and/or time consumption, and with high accuracy. Although it is still arguable whether recording student attendance should be a requirement, studies show that attendance habits affect students' lives even after graduation in their place of work, and that absenteeism has an impact on their grades and knowledge of the class (Credé, Roch and Kieszczynka, 2010).
Several studies take advantage of smartphones for taking attendance, owing to the fact that smartphones have become a necessity of people's lives (Bhih, Johnson and Randles, 2016). All in all, the systems that have been developed for recording attendance fall into three categories based on what they focus on (A: accuracy, B: speed, C: cost), or some mixture of these. In this study, we propose the implementation of a smart, fully automated beacon-based system that is accurate, fast, and costless, to be used by instructors for taking student attendance.
Bluetooth Low Energy (BLE) is a Personal Wireless Area Network (PWAN) technology like classical Bluetooth. The main difference is that the former has significantly lower power consumption and costs less. The lower power consumption of BLE has made it popular in Internet of Things applications and in beacon technology. Beacons are small hardware devices that periodically broadcast BLE signals. iBeacon from Apple and Eddystone from Google are the two main standard protocols on which many companies have based their manufactured beacon devices. Both Android and iOS support acting as either peripheral or receiver, allowing the proposed system to use the instructor's smartphone as the receiver and the students' mobile devices as peripherals (Android Beacon Library, no date; Turning an iOS Device into an iBeacon Device | Apple Developer Documentation, no date). The proposed system uses the open-source AltBeacon Android Library in both applications, turning one into a beacon scanner and the other into a beacon transmitter (Android Beacon Library, no date).
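To make the beacon mechanics concrete, the sketch below parses the iBeacon advertisement payload, whose documented layout is a 0x02 0x15 prefix, a 16-byte proximity UUID, big-endian 2-byte major and minor fields, and a signed 1-byte calibrated TX power; the minor field is what the proposed system repurposes as a student ID. Python is used here purely for illustration; the actual apps use the AltBeacon Android Library.

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse Apple manufacturer-specific data of an iBeacon advertisement.

    Layout: type (0x02), length (0x15), 16-byte UUID,
    2-byte major, 2-byte minor (big-endian), 1-byte signed TX power.
    Returns (uuid, major, minor, tx_power), or None if not an iBeacon.
    """
    if len(mfg_data) < 23 or mfg_data[0] != 0x02 or mfg_data[1] != 0x15:
        return None
    beacon_uuid = uuid.UUID(bytes=mfg_data[2:18])
    major, minor = struct.unpack(">HH", mfg_data[18:22])
    tx_power = struct.unpack(">b", mfg_data[22:23])[0]
    return beacon_uuid, major, minor, tx_power

# Example frame: all-zero UUID, major=1 (hypothetical course number),
# minor=4217 (hypothetical student ID), TX power -59 dBm.
frame = bytes([0x02, 0x15]) + bytes(16) + struct.pack(">HHb", 1, 4217, -59)
print(parse_ibeacon(frame))
```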
After the instructor enters the classroom and opens the app installed on his/her device, s/he is presented with a list of classes s/he is teaching, from which s/he can select the one to take attendance for. Students need to switch on their devices' Bluetooth to be found by the instructor's app in the minimum amount of time.
To aid the reader, this paper is organized as follows: Section II highlights the related works; Section III discusses the proposed system and its operation; Section IV presents the requirements of the system along with its subsections; Section V explains the implementation of the system; and the final section concludes the paper.
RELATED WORKS
A fair number of studies have been conducted in this area, all of them aiming to facilitate the process of taking student attendance and to improve it in performance, accuracy, or cost, and sometimes in all of them. There are several approaches/techniques to solve this problem, such as RFID-based systems, Bluetooth-based systems, facial recognition, and beacons to transmit data.
Recording student attendance is not a new issue; in fact, it has been around for many years, which has led to much research being published to tackle it. One semi-automatic attendance system works on facial recognition; the downside of such an approach is the need for extra high-quality equipment and computational power, leading to extra cost, since "a facial recognition system needs a large image to be used for face detection" (Chintalapati and Raghunadh, 2013). The equipment cost is eliminated in a proposed system that uses a personal mobile phone camera to take a photo of the students in the class and send it to a server for the face detection and recognition process. This solution is great in terms of budget, as it does not require any extra devices; however, it is argued that pose variation, lighting conditions, or facial expression impact the result, let alone the time needed to prepare the list of present/absent students. The downsides of such systems are the cost and time to configure them and their limited speed during use (Samet and Tanriverdi, 2017; Akbar et al., 2019).
A different type of semi-automated system was proposed that uses RFID/QR techniques. The drawbacks of such an approach are the queues created at the entrance of the classroom, or at any place where the RFID reader is installed, its time consumption, and the possibility for students to carry their friends' cards without those friends being in the class physically (Verma and Gupta, 2013; Chennattu et al., 2019). Fingerprint technology is another technique used for taking attendance, but overall, such biometric systems are costly, and queues of students may again occur (Badejo et al., 2018; Koppikar et al., 2019).
Another approach that seems to be more popular for taking attendance is using beacons. In such approaches, a beacon device is installed in the classroom, and students are required to install a provided app that scans for the beacon's signal. Once the beacon is detected, the app sends the student's details together with the beacon ID to a server to mark the student as present. The cost of buying a beacon for each class is a major downside of this approach, a drawback it shares with the other approaches (Noguchi et al., 2015; Apoorv and Mathur, 2017; Azmi et al., 2019).
Overall, the systems that have been introduced and implemented so far are somewhat costly and time-consuming to set up, and recording attendance with them also takes time. There is another point to take into consideration: accuracy. The data that is recorded may not be 100% accurate, and that causes problems.
PROPOSED MODEL
One of the distinguishing advantages of the proposed model is that the need to buy a beacon device for every lecture room is eliminated, reducing the hardware cost to zero. In previous research, beacon devices were the central point of the proposed systems, requiring a considerable amount of money to be spent on buying such devices, let alone the time required to configure and install them.
In our approach, the instructor's smartphone acts as a beacon receiver (beacon central) that receives data from the students' (attendees') smartphones (peripherals). To take attendance, an instructor just needs to launch the app so that it starts scanning for nearby beacon transmitters (the students' smartphones) and automatically marks them as present/absent. Once satisfied with the results, the instructor saves the result of the scanning, which is sent to the cloud to be stored permanently.
The app itself requires no configuration by instructors during its usage, as all the data, such as student and course information and the lecture hall and date/time at which the instructor is currently teaching, is retrieved from a server. Both apps integrate with the already available Class Management Information System REST API to receive the necessary data.
IMPLEMENTATION
The proposed system consists of two mobile apps and a REST API, in which only the instructor's device needs to be connected to the internet, as shown in Figure 1 (overview of the system). One app is installed on students' mobile devices, turning them into beacon transmitters leveraging the BLE peripheral mode supported by the latest Android and iOS releases. The app uses the student ID as its beacon minor ID and is set up so that it transmits the beacon signal in the background only when the current time falls within the student's class schedule. It uses the Android AlarmManager to compare the current time with the list of class timetables that are retrieved and cached locally the first time the app runs. The first time the student opens the app, it asks for a username and password. After successful authentication, the student is redirected to the home screen, in which a list of the classes s/he is registered for is displayed. In this screen, the student can view the attendance rate for each of the courses s/he is registered for.
The other app is installed on the instructor's mobile device and scans for beacon signals. When a valid beacon ID is found, it is marked as present. Once satisfied, the instructor can save the result by sending it to the REST API. The instructor needs to open the app during every class, from which s/he can select the class for which attendance needs to be taken. The app then starts scanning for beacons, comparing each beacon's minor ID with the ID of each registered student in the class. Once a match is detected, the student is marked as present. The app allows manually marking a student as present, absent, or late in the event that the beacon scanner is unable to detect an actually present beacon.
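The marking logic itself is simple set membership. A minimal sketch, in Python for illustration only and with hypothetical IDs, of matching scanned minor IDs against the class roster:

```python
def mark_attendance(registered_ids, scanned_minor_ids):
    """Mark each registered student present if their ID was scanned.

    registered_ids: student IDs enrolled in the selected class.
    scanned_minor_ids: beacon minor IDs seen during the BLE scan.
    Returns a dict mapping student ID -> "present" or "absent";
    manual overrides (late, undetected beacons) would be applied after.
    """
    scanned = set(scanned_minor_ids)
    return {sid: "present" if sid in scanned else "absent"
            for sid in registered_ids}

roster = [4211, 4217, 4220, 4305]          # hypothetical student IDs
seen = {4217, 4305, 9999}                  # 9999 ignored: not enrolled
print(mark_attendance(roster, seen))
```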
Both mobile apps authenticate the user by sending the user credentials to a REST endpoint, the response to which is a JSON Web Token (JWT) that is cached on the mobile device and added to the header of every request made to the REST API thereafter. In this way, the users of the apps do not need to log in to the system on every app launch. This is particularly important for the instructor, as s/he needs to be connected to the REST API in order to be able to save the attendance. Figure 2 shows the diagram of the instructor's app, in which s/he logs in to the app inside the classroom and searches for the students' BLE signals, as extensively described above. Figure 3 shows the system diagram of the students' app; their phones' Bluetooth has to be switched on in order to be found by the instructor's mobile device. Figure 4 shows that the instructor has the option to change the range of scanning (iBeacon tutorial - Part 3: Ranging beacons - Estimote Developer, no date). As Figure 5 illustrates, the instructor can also manually take the attendance in the event of a beacon not being detected.
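A minimal sketch of this authentication flow, assuming a hypothetical endpoint layout and response field (the paper does not specify the URLs): credentials are exchanged once for a JWT, which is cached and attached to every subsequent request.

```python
import requests

BASE = "https://example.edu/api"   # hypothetical REST API base URL

def login(username: str, password: str) -> str:
    """Exchange credentials for a JWT to cache on the device."""
    r = requests.post(f"{BASE}/auth/login",
                      json={"username": username, "password": password})
    r.raise_for_status()
    return r.json()["token"]       # assumed response field name

def save_attendance(token: str, class_id: int, records: dict) -> None:
    """Send the attendance result with the cached JWT in the header."""
    r = requests.post(f"{BASE}/classes/{class_id}/attendance",
                      json=records,
                      headers={"Authorization": f"Bearer {token}"})
    r.raise_for_status()
```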
CONCLUSION
In this paper, we have discussed an automated system to record student attendance during lectures without using any extra devices. The idea is to make students' smartphones work as beacons and transmit data, which can be done without even having internet access. An upside of this system is that, since it works with students' smartphones, it is not very likely that students will hand their smartphones to others just to be marked present without being in the class physically; furthermore, there is no need to buy any extra equipment. The proposed system requires both the students and the instructor to turn on the Bluetooth on their mobile phones when taking attendance, which could be regarded as a disadvantage of the system, as having Bluetooth on adds extra power consumption.
"year": 2020,
"sha1": "7036b29c67844910b6a97fe86b4a66a97e25f6b5",
"oa_license": "CCBYNCSA",
"oa_url": "https://journal.uod.ac/index.php/uodjournal/article/download/945/677",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "43445749d5f58b9be3ab43e2f4a29ba1438fa4af",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Catalytically inactive Cas9 impairs DNA replication fork progression to induce focal genomic instability
Abstract

Catalytically inactive Cas9 (dCas9) has become an increasingly popular tool for targeted gene activation/inactivation, live-cell imaging, and base editing. While dCas9 was reported to induce base substitutions and indels, it has not been associated with structural variations. Here, we show that dCas9 impedes replication fork progression to destabilize tandem repeats in budding yeast. When targeted to the CUP1 array comprising ∼16 repeat units, dCas9 induced its contraction in most cells, especially in the presence of nicotinamide. Replication intermediate analysis demonstrated replication fork stalling in the vicinity of dCas9-bound sites. Genetic analysis indicated that while destabilization is counteracted by the replisome progression complex components Ctf4 and Mrc1 and the accessory helicase Rrm3, it involves single-strand annealing by the recombination proteins Rad52 and Rad59. Although dCas9-mediated replication fork stalling is a potential risk in conventional applications, it may serve as a novel tool for both mechanistic studies and manipulation of genomic instability.
INTRODUCTION
Cas9 is an RNA-guided endonuclease that cleaves double-stranded DNA at its target site, and the ease of designing single-guide RNA (sgRNA) has made it the most popular tool in genome editing. Catalytically inactive Cas9 (dCas9) bears mutations in its two nuclease domains and has enabled a variety of applications. For instance, dCas9 can inhibit the progression of RNA polymerase to suppress transcription of the gene to which it binds (CRISPRi) (1). Moreover, dCas9 has been fused or complexed with fluorescent proteins, transcriptional activation/repression or epigenetic modification domains, and adenosine/cytidine deaminases to enable live-cell imaging of genomic loci, targeted gene activation/inactivation, and base editing, respectively (2). These applications take advantage of the function of dCas9 as a programmable sequence-specific DNA-binding protein. Since dCas9 lacks nuclease activity, it was presumed to be non-mutagenic. However, it was also reported to promote mutagenesis at a frequency of ∼10^−5 via R-loop formation (3). Most of these mutations were base substitutions attributable to spontaneous cytosine deamination of the non-target DNA strand of the dCas9-induced R-loop, whereas others included homopolymer instability and trans-lesion synthesis (TLS) (3). While dCas9 induces base substitutions and small indels, it has not been demonstrated, to our knowledge, to induce large structural variations (SVs). As recent studies reported that Cas9 often induces unexpectedly large deletions around its target sites (4,5), the impact of dCas9 binding to genomic DNA in vivo should be carefully examined in terms of SVs.
Eukaryotic genomes harbor many repetitive sequences in the form of tandem or interspersed repeats (6). They occasionally induce genomic instability leading to the generation of SVs, including both pathogenic and adaptive copy number variations (CNVs). The most famous diseases related to CNV of tandem repeats are the triplet repeat diseases, which are evoked by the expansion of arrays comprised of very short units, such as CAG, GAA, CGG, and CCG trinucleotides (7). In contrast, facioscapulohumeral muscular dystrophy is caused by the contraction of D4Z4 macrosatellite repeats comprised of a 3.3-kb unit harboring the DUX4 gene (8). CNV-mediated environmental adaptation has been well documented in budding yeast (9). When exposed to high concentrations of copper ions, yeast cells amplify the resistance gene CUP1 to rapidly generate adapted progenies. Most yeast strains contain tandemly iterated copies of the CUP1 gene. Intriguingly, the presence of copper was shown to accelerate the intra/sister chromatid recombination rate at the CUP1 array (10). This enhanced recombination likely contributes to the generation of expanded CUP1 arrays and, hence, adapted progenies with enhanced copper resistance. Another example is rDNA, comprising ∼150 copies of tandemly iterated units, whose instability is involved in cellular senescence (11). Both examples notably involve replication fork stalling or collapse followed by its repair.
The replisome at the replication fork uses the Cdc45-Mcm2-7-GINS complex (CMG helicase) to unwind DNA for fork progression and DNA synthesis by the replicative polymerases. The replicative CMG helicase associates with proteins in the replisome progression complex (RPC), which includes the checkpoint mediator Mrc1, the Tof1-Csm3 complex, the replisome adaptor protein Ctf4, the histone chaperone FACT, and DNA topoisomerase I (12). Replication fork stalling occurs through the actions of these proteins upon encountering an obstacle. The accessory helicase Rrm3 removes DNA-bound proteins such as the origin recognition complex, the transcription regulator Rap1, and the replication fork-blocking protein Fob1, whereas the Tof1-Csm3 complex counteracts Rrm3 in the removal process (13,14). Depletion of these proteins induces the contraction or expansion of tandem repeats. For instance, ctf4 cells amplify the copy number of rDNA (15), and mrc1 or rrm3 cells destabilize both the CUP1 and rDNA arrays (14,16-18). Prolonged replication fork stalling results in fork collapse, with or without DNA double-strand breaks (DSBs). Cells have several pathways for coping with collapsed/broken forks, including homologous recombination (HR), non-homologous end joining (NHEJ), break-induced replication (BIR), TLS, template switching (TS), and single-strand annealing (SSA). During the repair process, repetitive sequences around the collapsed fork occasionally trigger the generation of SVs, including CNV of tandem repeat units. For the copper-accelerated CUP1 recombination described above, a model was proposed in which replication fork collapse induced by activated promoter activity is followed by BIR or fork restart using a homologous sequence on the sister chromatid (16).
Here, we report dCas9-induced CNV of tandem repeats. This finding led us to uncover that dCas9 impairs replication fork progression to induce focal genomic instability.
MATERIALS AND METHODS

Yeast strains
All yeast strains used in this study are derived from BY4741 (MATa his3Δ1 leu2Δ0 met15Δ0 ura3Δ0) (19) (Supplementary Table S1). Standard culture media and genetic methods were used in this study (20). We deleted a gene of interest by transforming yeast cells with a DNA fragment composed of the KanMX cassette sandwiched between the 5′- and 3′-flanking sequences of the open reading frame of the gene, which was amplified from the corresponding deletant strain in the Yeast Deletion Clones MATa Complete Set (Invitrogen) using the PCR primers listed in Supplementary Table S2. To construct a strain bearing a URA3 insertion at the boundary of two neighboring CUP1 repeat units, we transformed yeast cells with a DNA fragment composed of the URA3 cassette sandwiched between the 3′- and 5′-end sequences of the repeat unit, which was obtained by PCR with the primers VIII214253::URA3-F and VIII214253::URA3-R listed in Supplementary Table S2. Transformants selected on agar plates of synthetic complete medium lacking uracil (SC−Ura) supplemented with 2% glucose were used for nanopore sequencing to determine the integration site of URA3 in the CUP1 array.
Plasmids
All plasmids used in this study are listed in Supplementary Table S3. All primers for plasmid construction were purchased from Sigma-Aldrich and Eurofins Genomics. Plasmids were constructed by seamless cloning with the HiFi DNA Assembly or Golden Gate Assembly kits obtained from New England Biolabs (NEB).
The integrative plasmid YIplac128-pCSE4-dCas9-tADH1 (LEU2) harbors a gene encoding Streptococcus pyogenes dCas9 fused with the SV40 nuclear localization signal (NLS), as described previously (21), under the control of the CSE4 promoter. It was used for yeast transformation after NruI digestion, to be integrated at the CSE4 promoter on the genome.

The integrative plasmid YIplac128-pGAL1-dCas9-tADH1 (LEU2) harbors a gene encoding S. pyogenes dCas9 fused with the SV40 NLS under the control of the GAL1 promoter. It was used for yeast transformation after AgeI digestion, to be integrated at the GAL1 promoter on the genome.

The integrative plasmid pFA6a-pACT1-yGEV-tADH1-HphMX (Hyg^R) harbors a gene encoding the β-estradiol-responsive artificial transcription activator GEV (22) under the control of the ACT1 promoter. It was used for yeast transformation after CpsCI digestion, to be integrated at the ACT1 promoter on the genome. The GEV-coding sequence was codon-optimized for Saccharomyces cerevisiae.

The integrative plasmid pFA6a-pCUP2-yGEV-tADH1-HphMX (Hyg^R) harbors the codon-optimized GEV-coding gene under the control of the CUP2 promoter. It was used for yeast transformation after MfeI digestion, to be integrated at the CUP2 promoter on the genome.
Centromeric plasmids for sgRNA expression harbor the sgRNA gene under the control of the SNR52 promoter or the GAL1 promoter. The sgRNA scaffold sequence contains a base-flip and an extension of a stem-loop for stable sgRNA expression (23). To cut off an unnecessary sequence from the 5′-terminal portion of the sgRNA-containing transcript, each sgRNA sequence is preceded by a hammerhead ribozyme (Supplementary Table S3). To define the 3′-terminus, each sgRNA sequence is followed by the SUP4 terminator on the SNR52 promoter plasmid or by the HDV ribozyme on the GAL1 promoter plasmid (Supplementary Table S3). For designing sgRNAs, CRISPRdirect (24) was used to select target sites in the yeast genome listed in Supplementary Table S4.
Gene editing
For constructing rad52 rad59 strains, we performed enAsCas12a-based gene editing. All gene-editing plasmids used in this study are listed in Supplementary Table S3. Each gene-editing centromeric plasmid (URA3, CEN) harbors a gene encoding enAsCas12a (25) fused with the SV40 NLS and a gene encoding the CRISPR RNA (crRNA), both of which are under the control of the GAL1 promoter. To improve the efficiency of gene editing, a 9-mer sequence (U4AU4) was attached to the 3′-end of the crRNA (26). The crRNA is flanked by a hammerhead ribozyme and the HDV ribozyme at its 5′- and 3′-termini, respectively. For designing crRNAs, CRISPOR (27) was used to select the target sites listed in Supplementary Table S4.
Yeast cells transformed with the gene-editing plasmid were spread on agar plates of SC−Ura supplemented with 2% galactose. After incubation at 30 °C for 4-5 days, colonies were picked and streaked on a new plate. To examine successful gene editing at the target site on the genome, we performed PCR to amplify a region spanning the target site. The PCR products were sequenced to reveal the size and position of indels around the target site (Supplementary Table S1).
To eliminate the gene-editing plasmid, the successfully gene-edited strains were grown in yeast extract/peptone/dextrose (YPD) liquid medium at 30 °C overnight and streaked on a YPD agar plate to isolate single colonies. After incubation at 30 °C for 2-3 days, each colony was streaked on agar plates of YPD medium and SC−Ura medium supplemented with 2% glucose to confirm the loss of the gene-editing plasmid.
Cell growth rate measurement
Cell growth rate was measured using the RTS-1 personal bioreactor (Biosan, Riga, Latvia). The properties were set as follows: volume, 10 ml of SC medium supplemented with 2% glucose; temperature, 25 °C; rotation speed, 1500 rpm; measurement frequency, 10 times/min; and reverse spin duration, 1 s.
Cell culture for quantitative PCR (qPCR)
Yeast cells were grown at 25 °C overnight in 5 ml of SC−Ura or SC−His medium supplemented with 2% glucose. On the following day, the OD600 of each sample was recorded and 10-50 µl of the culture was inoculated into 5 ml of fresh medium containing 10 nM β-estradiol, supplemented with or without 5 mM nicotinamide (NAM). From the remaining overnight culture, genomic DNA was extracted with the Gentra Puregene Yeast/Bact. Kit (QIAGEN) for qPCR. The same process was repeated every day. The division number per day was calculated from the change in OD600.
qPCR
The concentration of genomic DNA was measured with the Qubit dsDNA BR assay on a Qubit 2.0 Fluorometer or Qubit Flex Fluorometer (Thermo Fisher Scientific). The DNA solution was diluted to a concentration of 0.5 ng/µl prior to qPCR. Each qPCR solution (20 µl) contained 2 µl of DNA (1 ng), 10 µl of TB Green Premix Ex Taq II (Tli RNaseH Plus) (Takara), 0.4 µl of ROX Reference Dye, and 2 pmol each of the forward and reverse primers. The primers used for qPCR are listed in Supplementary Table S2. Each qPCR assay was performed in duplicate, using a StepOnePlus or QuantStudio3 instrument (Applied Biosystems) according to the manufacturer's instructions. The amplification condition was initial denaturation at 95 °C for 30 s followed by 40 iterations of a 3-step thermal cycle composed of 95 °C for 10 s, 55 °C for 30 s, and 72 °C for 5 s. All qPCR runs included 10-fold serial dilutions to generate standard curves. The quantity of CUP1, ENA1, and URA3 was normalized to that of ACT1. The copy numbers of CUP1 and ENA1 in the standard curves were calibrated against the results of nanopore sequencing.
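As a sketch of the quantification step (with hypothetical Ct values; the actual analysis was performed with the instrument software), the copy number can be obtained by fitting a standard curve of Ct against log10 input quantity for each amplicon and normalizing the interpolated CUP1 quantity to that of ACT1:

```python
import numpy as np

def fit_standard_curve(log10_qty, ct):
    """Fit Ct = slope * log10(quantity) + intercept; returns (slope, intercept)."""
    return np.polyfit(log10_qty, ct, 1)

def quantity_from_ct(ct, slope, intercept):
    """Invert the standard curve to recover the input quantity."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series and measured Ct values.
dilutions = np.log10([1e2, 1e1, 1e0, 1e-1])
cup1_curve = fit_standard_curve(dilutions, [14.1, 17.4, 20.8, 24.2])
act1_curve = fit_standard_curve(dilutions, [15.0, 18.3, 21.7, 25.1])

cup1 = quantity_from_ct(18.0, *cup1_curve)   # hypothetical sample CUP1 Ct
act1 = quantity_from_ct(21.5, *act1_curve)   # hypothetical sample ACT1 Ct
# Relative quantity, to be calibrated against nanopore copy numbers.
print(f"CUP1/ACT1 = {cup1 / act1:.2f}")
```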
Genetic assay for the loss of URA3 inserted into the CUP1 array

Yeast cells were grown at 25 °C overnight in 5 ml of SC−His medium containing 2% glucose. On the following day, 15 µl of the culture was inoculated into 5 ml of fresh SC−His medium containing 2% glucose and 10 nM β-estradiol, supplemented with or without 5 mM NAM. The same process was repeated every day. After four days of cultivation, cells were appropriately diluted and spread onto SC glucose plates supplemented with 0.1% 5-FOA and onto YPAD plates to determine the frequency of 5-FOA-resistant clones.
Nanopore sequencing
Genomic DNA was extracted using the Gentra Puregene Yeast/Bact. Kit (QIAGEN) and purified with 0.4× AMPure XP (Beckman Coulter) or the Short Read Eliminator kit XL (Circulomics). To obtain high-molecular-weight DNA, we avoided vortexing and used mixing by gentle pipetting instead. DNA libraries were prepared using the ligation sequencing kit SQK-LSK109 (Oxford Nanopore Technologies) with or without barcoding. For barcoding, we used the native barcoding kit EXP-NBD104 or the rapid barcoding sequencing kit SQK-RBK004 (Oxford Nanopore Technologies) according to the manufacturer's instructions. We modified the protocol of the ligation sequencing kit as follows: DNA fragmentation, omitted; duration of the enzymatic repair steps at 20 and 65 °C, both extended from 5 min to 30 min; and duration of the ligation step, extended from 10 to 30 min. The library was sequenced with the flow cell FLO-MIN106D R9.4.1 using the MinION sequencer (Oxford Nanopore Technologies). MinKNOW software was used to control the MinION device. The run time was set to 72 h. Base calling was performed using Albacore v2.3.1, Guppy v3.6.0, or Guppy v4.0.14. The assessment of sequencing data was performed using NanoPlot (28).
Dot plot analysis of nanopore sequencing reads
We used nanopore sequencing data in FASTA format to draw dot plots using YASS (29). We first selected reads spanning the entire array, using the 1-kb upstream and 1-kb downstream sequences of the CUP1 or ENA1 array as queries for minialign (https://github.com/ocxtal/minialign), and then used these reads as the first input sequence for YASS. As the second input, we used the reference sequence of interest (CUP1 repeat unit, ENA1 repeat unit, or URA3) or the selected read itself. By manually counting the number of diagonal lines that appeared in each dot plot, we determined the CUP1 copy number, the ENA1 copy number, and the location of the URA3 insertion in the CUP1 array.
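The read-selection step can be sketched as follows; this toy version uses exact substring matching of short flank anchors, whereas the actual pipeline used minialign, which tolerates the substantial error rate of nanopore reads.

```python
def spans_array(read: str, up_anchor: str, down_anchor: str) -> bool:
    """True if the read contains both flanking anchors in order,
    i.e. it plausibly spans the entire tandem array."""
    i = read.find(up_anchor)
    return i >= 0 and read.find(down_anchor, i + len(up_anchor)) >= 0

# Toy reads and anchors; only the first read contains both anchors.
reads = ["AAAGGGGGGCCC", "GGGGGGCCC", "AAAGGGGGG"]
print([r for r in reads if spans_array(r, "AAA", "CCC")])
```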
Computational counting of tandem repeat units in nanopore sequencing reads

To computationally count the copy number of tandem repeat units directly from each nanopore read, we developed a Fourier transform-based program termed DNA Sequence Detector, which works as follows.
Seq1 is a long DNA sequence to be examined (i.e. a nanopore read), whereas Seq2 is a short, known DNA sequence (i.e. the reference sequence of interest). The nucleotide sequence of Seq1 is s_0 s_1 ... s_{n−1}, and the nucleotide sequence of Seq2 is r_0 r_1 ... r_{m−1}; note that each s_i and r_i is A, T, G, or C.

A matrix M with entries a_{i,j} is then created from Seq2. If s_i and a_{i,j} are the same nucleotide, a_{i,j} is replaced with '1'; if s_i and a_{i,j} are not the same nucleotide, a_{i,j} is replaced with '0'. Let the resulting matrix be M2.

Next, each column in M2 is scanned. If '1' appears k times consecutively, these '1's are replaced with 'k'. Let the resulting matrix be M3. Next, all numbers below X (where X is a natural-number constant) are replaced with '0'. Let the resulting matrix be M4.

From M4, a value c_i is computed for each position i, and these values form the vector v = (c_0, ..., c_{n−1}). The program then searches for a region of v in which the values are dense and larger than a certain degree. Suppose that a region R = [i ∼ j] is found as such a region; this R is the region in which copies of Seq2 are located.

Finally, the program determines the number of copies of Seq2 present in R. A smoothed vector v_ave is computed from v using values d_i, and by performing a discrete Fourier transform on the region R of v_ave, the number of copies of Seq2 in R is obtained.
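The sketch below illustrates the underlying idea in Python. It is a simplified reconstruction rather than the authors' DNA Sequence Detector itself (whose exact matrix definitions are not fully reproduced above): it scores every offset of the read against the repeat unit, smooths the resulting match profile (the analogue of v_ave), and reads the number of unit-to-unit periods off the dominant non-zero Fourier frequency within the repeat region.

```python
import numpy as np

def match_profile(read: str, unit: str) -> np.ndarray:
    """Fraction of bases matching `unit` at each offset of `read`."""
    m = len(unit)
    return np.array([sum(a == b for a, b in zip(read[i:i + m], unit)) / m
                     for i in range(len(read) - m + 1)])

def count_units(read: str, unit: str, min_score: float = 0.8) -> int:
    """Count tandem copies of `unit` via the dominant Fourier frequency
    of the smoothed match profile inside the repeat region."""
    m = len(unit)
    prof = match_profile(read, unit)
    hits = np.where(prof >= min_score)[0]
    if hits.size == 0:
        return 0
    if hits[-1] - hits[0] < m:                 # only one unit start found
        return 1
    region = prof[hits[0]:hits[-1] + 1]        # first to last unit start
    # Boxcar smoothing suppresses harmonics of the sharp match peaks.
    w = max(m // 2, 1)
    smooth = np.convolve(region - region.mean(), np.ones(w) / w, "same")
    k = int(np.argmax(np.abs(np.fft.rfft(smooth))[1:]) + 1)  # skip DC term
    return k + 1   # k periods between first and last start -> k+1 units

# Toy example: five tandem copies of a 20-bp unit with flanking sequence.
unit = "ACGTGCTAGCTTACGGATCC"
read = "TTTTT" + unit * 5 + "GGGGG"
print(count_units(read, unit))                 # expected: 5
```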
Two-dimensional agarose gel electrophoresis (2D-AGE)
Yeast cells were grown at 25 °C overnight in 5 ml of SC−Ura medium containing 2% glucose. Following the addition of 10 nM β-estradiol, the cells were cultivated for 2 h, diluted, and cultivated for 4 h. The genomic DNA was extracted with the Gentra Puregene Yeast/Bact. Kit (QIAGEN) using a modified protocol, in which all vortexing steps were replaced by mixing with gentle pipetting to maintain the integrity of replication intermediates. The 2D-AGE followed by Southern blot hybridization was conducted as described previously (30,31) with some modifications. In brief, two micrograms of genomic DNA was fully digested with KpnI (Takara) or XcmI (NEB), precipitated with 1/10 volume of 3 M sodium acetate and one volume of isopropanol, washed with 70% ethanol, air dried, and finally dissolved in 30 µl of 10 mM HEPES-NaOH (pH 7.2). The first-dimension electrophoresis was performed on a 0.55% agarose gel (11 × 14 cm) for 16 h at 22 V at room temperature. The second-dimension electrophoresis was performed on a 1.55% agarose gel (20 × 25 cm) containing ethidium bromide for 4 h at 260 mA at 4 °C. The gel was sequentially soaked in depurination buffer, denaturing buffer, and neutralizing buffer, and the DNA was blotted onto a Hybond N+ membrane (Cytiva). Following UV-crosslinking, the blot was hybridized with a CUP1 probe at 55 °C overnight. The probe was generated by PCR using the primers listed in Supplementary Table S4, followed by labeling with alkaline phosphatase using the labeling module of the AlkPhos Direct Labelling and Detection System kit (Cytiva). Following appropriate washing of the blot at 60 °C, chemiluminescent signals were generated using the CDP-Star Detection Reagent in the kit and detected with an ImageQuant LAS4000 (Cytiva). Gel images were processed with ImageJ software (National Institutes of Health) for presentation. The processing involved rotating, cropping, and altering window-level settings. The spot of interest was selected as a circle for quantification, and the total internal intensity was divided by its area. The background was defined as the mean of the area-normalized intensities of three randomly selected regions with no obvious signals.
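As a sketch of this quantification (hypothetical pixel values; the actual measurements were made interactively in ImageJ), the area-normalized spot intensity with background subtraction can be computed as follows:

```python
import numpy as np

def circular_mask(shape, center, radius):
    """Boolean mask of a circular region in an image array."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def spot_signal(image, spot_center, radius, background_centers):
    """Area-normalized spot intensity minus the mean background level."""
    spot = circular_mask(image.shape, spot_center, radius)
    signal = image[spot].sum() / spot.sum()            # intensity per pixel
    bg = np.mean([image[circular_mask(image.shape, c, radius)].mean()
                  for c in background_centers])        # three empty regions
    return signal - bg

# Toy blot image: uniform background with one bright square "spot".
img = np.full((100, 100), 10.0)
img[40:50, 40:50] += 200.0
print(spot_signal(img, (45, 45), 8, [(10, 10), (10, 90), (90, 10)]))
```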
Western blot
The expression of FLAG-tagged Rad52 was analyzed by western blotting. Proteins were extracted as described previously (32). Twenty micrograms of protein (2 µg/µl) was separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis using an Any kD Mini-PROTEAN TGX Precast Gel (Bio-Rad). Transfer to the membrane was performed with the iBind Western System (Thermo Fisher Scientific) according to the manufacturer's protocol. The primary and secondary antibodies used to detect Rad52-FLAG were the FLAG M2 mouse monoclonal antibody (1:1000, Sigma-Aldrich) and goat anti-mouse IgG-HRP (1:2000, Santa Cruz Biotechnology), respectively. The primary and secondary antibodies used to detect α-tubulin (loading control) were the anti-alpha Tubulin antibody [YOL1/34] (1:2000, GeneTex) and goat anti-rat IgG H&L (HRP) (1:2000, Abcam), respectively. Following incubation with Clarity Western ECL Substrate (Bio-Rad), chemiluminescent signals were detected with the ChemiDoc Touch system (Bio-Rad). Gel images were processed with ImageJ software. The processing involved cropping and altering window-level settings.
dCas9 induces copy number reduction of tandem repeat units
Cup1 is a metallothionein that buffers the concentration of intracellular copper in the budding yeast Saccharomyces cerevisiae (33,34). A ∼2.0-kb unit including the CUP1 gene (CUP1 repeat unit) is tandemly iterated more than 10 times in the reference strain S288c to compose the CUP1 array on chromosome VIII (35). The CUP1 array in the parental strain used in this study was composed of ∼16 repeat units (see below). The level of copper resistance linearly correlates with the CUP1 copy number (36-38). During experiments to target dCas9 to CUP1, we observed that the CUP1 copy number was decreased in a strain constitutively expressing CUP1-targeted dCas9 (Supplementary Figure S1A). The copy number was maintained in a control strain constitutively expressing dCas9 targeted to TEF1 on chromosome XVI (39) (Supplementary Figure S1A). Moreover, the former strain, but not the latter, showed a sign of further decrease in the copy number during cultivation (Supplementary Figure S1B).
To further investigate this phenomenon, we constructed a strain in which dCas9 can be induced using β-estradiol without affecting cell growth (Figure 1A and Supplementary Figure S1C). This strain utilizes the artificial transcription factor GEV (Gal4 DNA-binding domain, estrogen receptor, and VP16 transcription activation domain) (22), which translocates to the nucleus upon binding to β-estradiol and activates the GAL1 promoter to induce dCas9 expression. We grew the strain with daily dilution of the culture with fresh medium, extracted genomic DNA at various time points after β-estradiol addition, and measured the CUP1 copy number by qPCR. When TEF1-targeted dCas9 was induced, the copy number (∼16 copies) did not show any significant change throughout the experiment (Figure 1B). In contrast, induction of CUP1-targeted dCas9 rapidly decreased the copy number in a time-dependent manner (Figure 1B). The extent of the decrease varied from one sgRNA to another and was enhanced by the concurrent expression of three sgRNAs (CUP1a+b+c) (Figure 1B). The rate of decrease gradually slowed down, and the copy number appeared to reach a plateau in an extended culture (Supplementary Figure S1D).
As all three abovementioned CUP1 sgRNAs (CUP1a, b, and c) bind to the same DNA strand, we designed an sgRNA that binds to the opposite strand (CUP1d) to test whether dCas9 reduces the CUP1 copy number in a strand-specific manner. CUP1d reduced the copy number with an efficiency largely comparable to that of CUP1a (Figure 1C). When combined, the two sgRNAs accelerated the copy number reduction (Figure 1C). These results suggested that dCas9 targeted to either DNA strand likely reduces the CUP1 copy number.
We next sought to determine whether the effect described above was specific to the CUP1 array. The yeast genome has several tandem repeats other than the CUP1 array, including the ENA1 array encoding P-type ATPase sodium pumps (40). The ENA1 array comprises a tandem array of three paralogous genes in the S288c reference genome sequence, namely ENA1, ENA2, and ENA5, but some strains harbor four or more paralogs (34,41). Nanopore sequencing determined that the strain used in this study had five paralogs (Figure 1D and Supplementary Figure S1E). We designed three sgRNAs for ENA1 to examine whether dCas9 targeting affects the copy number of the ENA1 paralogs (Figure 1D). When dCas9 was targeted to ENA1 (ENA1a+b+c), the copy number decreased slowly (Figure 1E). This decrease was apparent in the presence of nicotinamide (NAM) (Figure 1E; see below).
Taken together, these results demonstrated that when targeted to tandem repeats, dCas9 reduces the copy number of repeat units in a sequence-specific manner.
NAM accelerates dCas9-induced copy number reduction of tandem repeat units
A previous study reported that NAM induces CUP1 CNV (16). NAM is an inhibitor of the NAD+-dependent histone deacetylase family, which includes Sir2, Hst1, Hst2, Hst3, and Hst4. Accordingly, concurrent deletion of SIR2, HST3, and HST4 destabilized the CUP1 array (16). Conversely, deletion of RTT109, encoding the sole histone acetyltransferase responsible for histone H3 acetylated at Lys-56 (H3K56ac), suppressed the NAM-induced CNV (16). In our study, the effect of NAM on the CUP1 copy number was not significant in the control strain with TEF1-targeted dCas9 (Figure 1F). In contrast, NAM substantially accelerated copy number reduction in the presence of CUP1-targeted dCas9 (Figure 1F). NAM also accelerated the copy number reduction of the ENA1 paralogs induced by ENA1-targeted dCas9 (Figure 1E). Consistent with the previous study (16), NAM failed to exert its effect on dCas9-induced CUP1 CNV in the absence of Rtt109 (Supplementary Figure S1F). These results suggest that NAM enhances dCas9-induced destabilization of tandem repeats through the elevation of H3K56ac.
Binding of a single dCas9 molecule can destabilize the CUP1 array
We wondered whether the binding of a single dCas9 molecule can affect the copy number of tandem repeat units. To address this issue, we deployed a classical genetic assay based on the loss of URA3 integrated into the CUP1 array. For this assay, we generated a strain carrying a URA3 cassette in the center of the CUP1 array comprising 16 repeat units (Figure 2A and Supplementary Figure S2A). Upon destabilization of the CUP1 array, a fraction of recombination events between the repeat units led to the loss of the URA3 cassette, conferring on cells resistance to 5-fluoroorotic acid (5-FOA). A four-day induction of CUP1-targeted dCas9 reduced the average CUP1 copy number (Figure 2B). Following this, we spread the cells onto agar plates supplemented with or without 5-FOA. As expected, the CUP1-targeted strain contained more 5-FOA-resistant cells than the control TEF1-targeted strain (51.7% versus 0.1%, 374.9-fold) (Figure 2C).

Confirming the performance of the genetic assay with CUP1-targeted dCas9, we next tested whether URA3-targeted dCas9 destabilizes the CUP1 array. We used four sgRNAs (URA3a, b, c, and d) to generate four strains with URA3-targeted dCas9 and subjected them to both the qPCR and genetic assays (Figure 2A). The qPCR assay failed to detect any significant decrease in the average CUP1 copy number in the four strains, presumably because copy number reduction occurred only in a limited fraction of the cell population (Figure 2B). However, in the genetic assay, the URA3-targeted strains generated 5-FOA-resistant clones much more frequently than the control TEF1-targeted strain (URA3a, 0.6%; URA3b, 4.3%; URA3c, 2.6%; URA3d, 1.4%; TEF1, 0.1%) (Figure 2C). Point mutations in URA3 could also confer 5-FOA resistance, and dCas9 was reported to induce base substitutions and indels (3). However, qPCR using DNA isolated en masse from 5-FOA-resistant colonies confirmed deletion of the URA3 cassette, indicating that the contribution of point mutations was negligible (Supplementary Figure S2B). We thus concluded that even a single molecule of dCas9 can destabilize the CUP1 array, albeit much less efficiently than multiple dCas9 molecules targeted to individual repeat units.
dCas9 both contracts and expands the CUP1 array
The qPCR assay using an aliquot of liquid culture revealed the population average copy number but did not demonstrate the cell-to-cell variation. To determine this variation, we isolated single colonies from the cells cultivated in a liquid medium supplemented with -estradiol and determined the CUP1 copy number of each clone by qPCR ( Figure 3A). As expected from the decreased population average, most of the 38 clones examined had reduced copy numbers. However, three clones appeared to have higher copy numbers than the original strain (#36-#38, Figure 3A).
The observed increase in the CUP1 copy number does not necessarily indicate the expansion of the CUP1 array, as the CUP1 repeat unit has been shown to exist as an extrachromosomal circular DNA (42). Furthermore, aneuploidy may include chromosome VIII bearing the CUP1 array. We thus performed long-read sequencing using the MinION nanopore sequencer to reveal the CUP1 array structure in the three clones (Figure 3A). We selected reads containing both 5′- and 3′-flanking regions of the CUP1 array (i.e. reads spanning the entire array), generated a dot plot between each read and the reference sequence of the CUP1 repeat unit, and manually counted the number of units from each plot (Figure 3B). We also developed a Fourier transform-based algorithm to calculate the copy number directly from nanopore reads to validate the results of manual counting (Supplementary Figure S3A). Consequently, the clones #36, #37 and #38 were demonstrated to harbor CUP1 arrays composed of 22, 19 and 21 repeat units, respectively (Figure 3B). We next examined the population structure of the CUP1 array by sequencing genomic DNA prepared from CUP1- and TEF1-targeted strains at days 0 and 5 of induction. In the TEF1-targeted strain, the CUP1 copy number in the array did not change during the culture (16.0 and 15.7 copies on average at days 0 and 5, respectively, by both manual and computational counting) (Figure 3C and Supplementary Figure S3B). In contrast, in the CUP1-targeted strain, the copy number distribution was obviously different between days 0 and 5. The copy number at day 0 showed a relatively homogenous distribution within a range of 14-16 copies (15.4 and 14.9 copies on average by manual and computational counting, respectively). This low level of heterogeneity was presumably attributable to leaky expression of dCas9, as it was observed only in the presence of CUP1 sgRNA. The copy number at day 5 exhibited a significantly heterogenous distribution (5.0 and 4.7 copies on average by manual and computational counting, respectively) (Figure 3C and Supplementary Figure S3B). While 16 reads contained only a single copy of the CUP1 repeat unit, two reads spanned arrays comprising 19 and 28 units (Supplementary Figure S3C). Note that longer arrays are underrepresented in population analysis by nanopore sequencing compared to that by qPCR, as a longer array will have fewer reads spanning the entire array (Supplementary Figure S3D). Nevertheless, nanopore sequencing unequivocally demonstrated the expansion of the CUP1 array. Interestingly, it also revealed CUP1 arrays with interstitial deletions (Supplementary Figure S3C), which led to the slight difference between the copy numbers estimated by manual and computational counting (Figure 3C and Supplementary Figure S3B).

(Figure caption fragment: copy numbers measured as in Figure 1B (n = 3 biological replicates); statistical significance of copy number alteration examined between day 0 and day 4 using t-test (*P < 0.05). (C) Frequency of URA3 loss: following four-day induction of dCas9 with the indicated sgRNAs in the presence or absence of 5 mM NAM, cells were spread on agar plates supplemented with or without 5-FOA; data are mean ± standard deviation (n = 3 biological replicates); statistical significance examined between the TEF1-targeted strain and each of the other strains, and between conditions with and without NAM, using t-test (*P < 0.05).)
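As an illustration of the Fourier-transform idea, the toy sketch below recovers the repeat periodicity from the spectrum of a sliding match-score profile on synthetic, error-free sequence. It is not the released DNA Sequence Detector code, which additionally has to tolerate nanopore sequencing errors and interstitial deletions.

```python
import numpy as np

def copy_number_fft(read, unit):
    """Toy estimate of repeat copy number from the dominant periodicity of a
    sliding match-score profile between a read and the repeat-unit sequence."""
    L = len(unit)
    score = np.array([sum(a == b for a, b in zip(unit, read[i:i + L]))
                      for i in range(len(read) - L + 1)], dtype=float)
    score -= score.mean()                        # drop the zero-frequency term
    spectrum = np.abs(np.fft.rfft(score))
    freqs = np.fft.rfftfreq(score.size)
    # The fundamental is the lowest strong frequency (its harmonics are strong too)
    strong = np.where(spectrum[1:] >= 0.5 * spectrum[1:].max())[0] + 1
    period = 1.0 / freqs[strong[0]]
    return round(len(read) / period)

# Synthetic check: a "read" spanning seven perfect copies of a 60-bp unit
rng = np.random.default_rng(0)
unit = "".join(rng.choice(list("ACGT"), size=60))
print(copy_number_fft(unit * 7, unit))  # expected output: 7
```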
Taken together, dCas9 contracts and expands the CUP1 array in the majority and minority of cells, respectively, thereby inducing heterogeneity in the array structure.
dCas9 blocks replication fork progression in vivo
We hypothesized that dCas9-induced destabilization of the CUP1 array stems from dCas9-mediated impairment of replication fork progression. Indeed, CUP1-targeted dCas9 failed to alter the copy number when the culture was saturated to terminate DNA replication (Figure 4A). To test this hypothesis directly, we conducted neutral-neutral 2D-AGE (43) and analyzed the status of DNA replication intermediates including the CUP1 repeat unit by Southern blot hybridization (Figure 4B).
Prominent spots appeared on Y-arcs upon the induction of CUP1-targeted dCas9 (Figure 4C and D, blue arrows). These results indicated that dCas9 induced replication fork stalling in the CUP1 unit. When KpnI-digested fragments were analyzed, the spot was observed approximately at the apex of the Y-arc, indicating replication fork stalling around the midpoint of the restriction fragment. Moreover, another spot appeared at the tip of the X-spike, suggesting an accumulation of highly branched X-shaped molecules (Figure 4C, red arrow). It could be interpreted as two replisomes colliding around the midpoint of the fragment. When XcmI-digested fragments were analyzed, a spot was detected in the descending part of the Y-arc upon induction of CUP1-targeted dCas9 (Figure 4D), suggesting replication fork stalling near an end of the fragment. Considering the dCas9-bound sites in the XcmI fragment, we speculate that the stalled replication fork likely originated from ARS813, located ∼33 kb upstream of the CUP1 array (Figure 4B). Although each CUP1 repeat unit has a weak replication origin (ARS810/811) (44,45), we failed to observe the bubble arc corresponding to the DNA replication bubble. This was presumably because ARS810/811 fires much less frequently than the closest neighboring replication origin for the CUP1 array. In summary, dCas9 impairs replication fork progression in the vicinity of its binding sites.
The RPC, accessory helicase and recombination proteins modulate dCas9-induced CUP1 CNV
A stalled replication fork may either resume progression or collapse. In the latter case, the cell exploits recombinational repair pathways to rescue the collapsed fork. We hypothesized that dCas9-induced destabilization of tandem repeats is attributable to a repair process of replication forks stalled by dCas9. To obtain genetic evidence for and mechanistic insights into this process, we evaluated dCas9-induced reduction of the CUP1 copy number in a series of strains in which genes involved in replication fork stability and major DNA damage repair pathways were deleted (Figure 5A and Supplementary Figure S4A-C). We used qPCR to estimate the CUP1 copy number at days 0 and 2 of CUP1-targeted dCas9 induction. To normalize the effects of differential growth rates among the strains, we evaluated the effects of gene deletions using a variation index (VI), defined as the copy number change (%) per cell division (Supplementary Figure S4A, B). None of the 29 deletants examined abolished the dCas9-induced copy number alteration. The response to dCas9-mediated replication fork stalling may thus be redundant, with one pathway likely serving as a back-up for another. Nevertheless, certain strains showed significant changes in VI (Figure 5A).

(Figure 3 caption fragment: samples as in Figure 1B were used for qPCR. (B) CUP1 array structure revealed by nanopore sequencing: DNAs prepared from clones #36, #37 and #38 in Figure 3A were sequenced with MinION; dot plots were generated between the reference sequence of the CUP1 repeat unit (vertical axis) and nanopore sequencing reads (horizontal axis); two representative reads are shown for each clone. (C) Population structure of the CUP1 array: DNAs prepared from the cells with the indicated sgRNAs at days 0 and 5 of dCas9 induction (Figure 1B) were sequenced with MinION; CUP1 copy numbers determined from dot plots are shown.)
These results collectively indicated critical roles for the RPC components, accessory helicase, and recombination proteins in dCas9-induced CUP1 array contraction.
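The variation index used above lends itself to a one-line computation. Below is a minimal sketch, assuming (as stated in the figure legend) that cell divisions are approximated by absorbance doublings; all input numbers are illustrative.

```python
import math

def variation_index(cn_start, cn_end, od_start, od_end):
    """VI = percent change in CUP1 copy number per cell division; divisions are
    approximated by the number of OD doublings of the culture."""
    pct_change = 100.0 * (cn_end - cn_start) / cn_start
    divisions = math.log2(od_end / od_start)
    return pct_change / divisions

# Illustrative run: copy number 16 -> 8 while the culture grows from OD 0.05 to 3.2
print(variation_index(16, 8, 0.05, 3.2))  # -50% over 6 divisions -> -8.33 %/division
```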
dCas9-induced CUP1 CNV involves SSA by Rad52
As RAD52 deletion had the largest impact on the VI (Figure 5A and Supplementary Figure S5A), we sought to determine how Rad52 contributes to dCas9-induced CUP1 array contraction. To this end, we took advantage of reported separation-of-function rad52 alleles. Rad52 is composed of three domains (Figure 5C). The N-terminal domain is evolutionarily conserved and mediates interactions with DNA, Rad52 (self-oligomerization), and Rad59. Class C mutants bearing mutations in this domain (rad52-Y66A, -R70A, -W84A, -R85A, -Y96A, -R156A, -T163A, -C180A and -F186A) are defective in SSA but proficient in HR (50-52). The central domain is required for binding to the single-stranded DNA (ssDNA)-binding protein RPA and formation of the Rad52 repair center. A mutant allele of this domain, rad52-QDDD308-311AAAA, encodes a protein that is defective in RPA binding and in the mediator activity that exchanges RPA for Rad51 on ssDNA, but is proficient in DNA binding, Rad51 binding, and SSA in vitro (53). The Rad52 C-terminal domain is also required for mediator activity. A mutant allele of this domain, rad52-Y376A, encodes a protein incapable of binding to Rad51 (54-56).

(Figure 5 caption fragment: (A) each strain with three sgRNAs (CUP1a+b+c) was cultivated for two days in the presence of β-estradiol; the percentile change of the CUP1 copy number estimated by qPCR was divided by the number of cell divisions estimated from absorbance to calculate the VI; mean ± SD (n ≥ 3 biological replicates); t-test versus the wild-type (WT) strain (*P < 0.05; **P < 0.01; ***P < 0.001). (B) Replication fork stalling in the wild-type and rrm3Δ strains: 2D-AGE patterns of KpnI-digested DNA; blue and red arrows indicate the spots on the Y-arc (stalled replication fork) and the X-spike (highly branched X-shaped molecule), respectively; the wild-type samples are identical to those in Figure 4C but the exposure time for signal detection was 1 h. (C) Schematic of the Rad52 domain structure with separation-of-function mutations. (D) Suppression of the defective dCas9-induced CUP1 CNV in the rad52Δ strain by wild-type and separation-of-function alleles; mean ± SD (n ≥ 3 biological replicates); t-test between strains harboring the WT and separation-of-function alleles on YCpRAD52 (*P < 0.05).)
The wild-type and separation-of-function alleles were expressed in the rad52Δ strain under the control of the RAD52 promoter on a centromeric plasmid vector. Immunoblot analysis confirmed comparable expression levels among the wild-type and mutant proteins except for Rad52-C180A (Supplementary Figure S5E). The RAD52 allele suppressed defects in the rad52Δ strain, elevating the VI to a level comparable to that of the wild-type strain (Figure 5D and Supplementary Figure S5F-H). However, class C mutant alleles (rad52-Y66A, -R70A and -C180A) only partially suppressed the defect, indicating the involvement of the N-terminal domain and hence SSA (Figure 5D and Supplementary Figure S5F-H). Intriguingly, the rad52-QDDD308-311AAAA allele barely suppressed the defect (Figure 5D and Supplementary Figure S5F-H). In contrast, rad52-Y376A suppressed the defect as efficiently as RAD52, demonstrating the dispensability of Rad51 binding and hence mediator activity (Figure 5D and Supplementary Figure S5F-H). Interestingly, depletion of Rad59, a C-terminally truncated Rad52 paralog with SSA but not mediator activity, exerted a decelerating effect second only to that of Rad52 (Figure 5A). Furthermore, Rad51 depletion did not affect destabilization (Figure 5A). Thus, the data on the separation-of-function alleles were consistent with the deletant data.
Taken together, Rad52 likely contributes to dCas9-induced destabilization of tandem repeats through its involvement in the annealing of RPA-coated ssDNA, but not via its mediator activity.
dCas9 as a replication fork barrier
The highly plastic nature of the CUP1 array structure has been attracting attention for its role in environmental adaptation. A previous study reported critical roles for promoter activity and H3K56ac in CUP1 CNV (16). Here we showed that dCas9 induces CNV in the CUP1 and ENA1 arrays (i.e., array contraction and expansion), especially in the presence of NAM (Figures 1 and 3). Notably, dCas9 rapidly decreased the copy number even in the absence of transcriptional CUP1 induction (Figure 1). Although RTT109 deletion stabilizes the CUP1 array even in the presence of active transcription (16), CUP1-targeted dCas9 still destabilized the array in the rtt109Δ strain (Figure 5A). These results imply a high efficiency of dCas9 in inducing focal genomic instability. To our knowledge, this study is the first demonstration of dCas9-induced SVs. Intriguingly, NAM enhanced dCas9-induced CUP1 CNV in an H3K56ac-dependent manner (Supplementary Figure S1F). Since H3K56ac regulates replication-coupled nucleosome assembly (57), its hyperelevation likely alters chromatin status, thus affecting the stability of stalled replication forks and the accessibility of recombinational repair proteins and/or dCas9. These possibilities remain to be examined in future studies.
As both transcription and dCas9 induce CUP1 CNV, dCas9 may serve as a mimic of RNA polymerase through its R-loop formation, with dCas9 generating an even longer R-loop than RNA polymerase. Notably, R-loop formation is a major threat to genome stability (58). We thus hypothesized that dCas9 and replisomes induce a conflict similar to that observed between transcription and replication. Consistent with this scenario, we found that dCas9 impedes replication fork progression in vivo (Figure 4). Interestingly, a recently published report demonstrated the ability of dCas9 to block replisome progression in vitro (59).
Proteins tightly bound to DNA can impair replication fork progression, and some of them serve as physiological blocks (60). Examples of fork-blocking proteins include Tus binding to the replication terminator Ter of Escherichia coli, Fob1 binding to the replication fork barrier in the rDNA of budding yeast, and Rtf1 binding to RTS1 in the mating locus of the fission yeast Schizosaccharomyces pombe (60). While these proteins function in an orientation-dependent manner, dCas9 appears to block the replication fork approaching from either side in vivo (Figure 1), consistent with the previous finding in vitro (59). The accumulation of highly branched X-shaped molecules at the tip of the X-spike in 2D-AGE appeared to be consistent with replication fork stalling at both sides of dCas9-bound sites, although we should note that it may also reflect entanglements between sister chromatids (Figure 4C). It remains to be seen in future studies whether dCas9 is equally effective in blocking replication forks approaching from either direction.
Aside from these professional fork-blockers, proteins such as LacI and TetR can impair replication fork progression when bound to highly iterated arrays of lacO and tetO, respectively (60). We investigated whether even a single molecule of dCas9 can serve as a sufficiently strong barrier to induce genomic instability. To address this issue, we repurposed the classical genetic assay using a URA3-bearing CUP1 array for the sensitive detection of focal genomic instability induced by a single molecule of URA3-targeted dCas9 (Figure 2). The results showed that the binding of even a single molecule of dCas9 can destabilize the array. If each dCas9 molecule independently destabilizes the CUP1 array, then targeting of dCas9 to each repeat unit multiplies the destabilizing effects, thereby leading to rapid contraction of the array.
Whether the replisome and dCas9 collide on the genome and make direct contact remains unclear. As the replisome approaches dCas9 in vivo, the torsional stress between their binding sites likely increases and may finally prevent replisome progression. This could explain why 2D-AGE indicated replication fork stalling around the midpoint of the KpnI fragment, although the dCas9-bound sites were slightly off-centered (Figure 4). A higher-resolution method for mapping the position of the stalled replication fork is required to address this issue in future studies.
Cellular responses to dCas9-mediated replication fork stalling
When the replication fork encounters an obstacle on DNA, the stability of the former and the removal of the latter should be critical. Ctf4 was demonstrated to protect arrested replication forks against breakage to suppress genome rearrangements, including hyper-amplification of rDNA (47). Mrc1 interacts with Tof1-Csm3 to form the heterotrimeric fork protection complex. Deletion of CTF4 and MRC1 significantly accelerated dCas9-induced reduction of the CUP1 copy number (Figure 5A). In the ctf4Δ and mrc1Δ strains, dCas9-mediated replication fork stalling appeared to be diminished compared to the wild-type strain (Supplementary Figure S4E-G), presumably reflecting the breakage of destabilized replication forks. The accessory helicase Rrm3 is responsible for the removal of obstacles in front of the replisome. In the rrm3Δ strain, the VI was significantly increased and dCas9-mediated replication fork stalling appeared to be enhanced (Figure 5A, B and Supplementary Figure S5G). Consistently, depletion of Tof1, which counteracts Rrm3, resulted in a modest but significant decrease of the VI (Figure 5A). These data collectively underscored the importance of replisome protection and dCas9 removal in tandem repeat stability.
Despite the activities for fork protection and obstacle removal, stalling is occasionally prolonged, resulting in fork collapse. Cells have various mechanisms to cope with collapsed forks, which likely induce CUP1 CNV. Our genetic analysis indicated that RAD52 and its paralog RAD59 have the largest and second-largest contributions to CUP1 CNV, respectively (Figure 5A). Genetic analysis using separation-of-function alleles indicated that Rad52 destabilizes tandem repeats via its SSA activity, but not via its mediator activity to exchange RPA for Rad51 (Figure 5D). Although both Rad52 and Rad59 mediate SSA, Rad52 but not Rad59 can perform this function in the presence of RPA (61). Based on the result of the rad52 allele encoding a protein defective in RPA binding, we assume that Rad52 mediates the annealing of RPA-coated ssDNA. Note that Rad52 was dispensable for the restoration of rDNA copy number in the absence, but not the presence, of the histone chaperone Asf1 (62). It would be intriguing to examine whether the requirement of Rad52 for dCas9-induced CUP1 CNV is mitigated in the absence of Asf1.
Conventional SSA occurs after a DSB and subsequent end resection. However, qPCR failed to provide evidence for DSBs around dCas9-bound sites. Similarly, time-lapse imaging failed to reveal a significant difference in Rfa1 focus formation, indicative of DSBs, between the strains with CUP1-targeted dCas9 and with no sgRNA. Thus, we have so far not obtained clear evidence for dCas9-induced DSBs. The decrease of VI in the exo1Δ strain defective in end resection was less significant compared to the rad52Δ and rad59Δ strains (Figure 5A). It is conceivable that dCas9 induces destabilization without forming prominent DSBs, such as those generated by replication fork breakage. In this context, a new mechanism termed inter-fork strand annealing (IFSA) has attracted our attention. IFSA explains the inter-repeat recombination induced by the Rtf1/RTS1 system in fission yeast, involves Rad52 and Exo1 but not Rad51, and occurs without replication fork breakage (63). An IFSA-like mechanism may operate in budding yeast to mediate dCas9-induced CNV of tandem repeat units. Alternatively, the tandem repeat structure may help SSA-mediated DSB repair to proceed too quickly to be detected with conventional approaches. Interstitial deletions occasionally found by nanopore sequencing may indicate at least a limited involvement of DSBs in copy number alterations (Supplementary Figure S3C).
Less prominent but significant effects on destabilization were observed in the mms2Δ and rad6Δ strains (Figure 5A). Both MMS2 and RAD6 encode components of the error-free TS pathway (64). However, the mms2Δ and rad6Δ strains exerted mutually opposite effects. Moreover, depletion of the other components of this pathway (Rad18, Ubc13 and Rad5) failed to have significant effects on the VI. Since all these proteins are involved in the ubiquitination of proliferating cell nuclear antigen (PCNA), it is intriguing to examine the ubiquitination-defective PCNA mutant (Pol30-K164R). In any case, we assume that TS makes little if any contribution to dCas9-induced tandem repeat destabilization. Similarly, HR, NHEJ, BIR and TLS did not appear to play major roles, because no significant change of the VI was observed in the rad51Δ, dnl4Δ, pol32Δ and rev1Δ strains, respectively (Figure 5A).
Taken together, it remains to be seen in future studies how Rad52 and Rad59 mediate the cellular response to dCas9-mediated stalling of the replication fork. It is also intriguing to examine the response in other species, including mammals, in which the preference in the choice of recombinational repair pathways may differ from that in budding yeast.
Potential risk and application of dCas9-mediated replication fork stalling
This work has identified a potential risk of dCas9, distinct from the previously reported mutagenicity of the R-loop (3). For instance, for the sake of sensitivity, live-cell imaging studies often target dCas9 fused or complexed with fluorescent proteins to tandem repeats. Extended cultivation of such cells may result in contraction of the targeted tandem repeats, leading not only to compromised sensitivity but also to unexpected outcomes. Even at a single-copy target site, dCas9 can impede replication fork progression and may thus induce SVs. This is especially true when recombinogenic genomic features are present around the target site, as was the case for the URA3 cassette integrated in the CUP1 array (Figure 2). In this context, it is intriguing to note that the results of our genetic analysis (Figure 5) suggest a potential utility of Rad52 inhibitors (65,66) in reducing the risk of dCas9-induced focal genomic instability, albeit at the expense of general defects in various types of recombination.
Conversely, our findings imply that dCas9 provides a versatile tool for impeding replication fork progression at a genomic site of interest in vivo. Indeed, controlled replication fork stalling can accelerate mechanistic studies on genome stability. For this purpose, the Tus/Ter system has been successfully used in both yeast and mammalian cells (67,68). However, this system requires its users to integrate Ter sequences into the regions of interest. In contrast, dCas9 is readily targetable to virtually any genomic region by simply designing appropriate sgRNAs. Moreover, since dCas9-mediated replication fork stalling works without modifying the genomic sequence, it would enable recapitulation of natural SV generation, thus providing a novel approach for modeling evolution and pathogenesis. Initial amplification of a single-copy gene likely involves mechanisms such as re-replication-induced gene amplification (69) and origin-dependent inverted repeat amplification (70). In both mechanisms, the borders of amplified regions are defined by the positions of replication fork collapse. We therefore expect that dCas9-mediated replication fork stalling provides a versatile tool to manipulate SVs, including gene duplication, a critical driver of evolution.
DATA AVAILABILITY
The source code of the DNA Sequence Detector used in this study is available at GitHub (https://github.com/poccopen/DNA_Sequence_Detector). Nanopore sequencing data used in this study were deposited in DRA under accession number DRA010708.
Fundamental Responsiveness in European Electricity Prices
Abstract: We estimate fundamental pricing relationships in selected European day-ahead electricity markets. Using a fractionally integrated panel data model with unobserved common effects, we quantify the responsiveness of hourly electricity prices to two fundamental leading indicators of day-ahead markets: the predicted load and renewable generation. The application of fractional cointegration analysis techniques gives further insight into the pricing mechanism of power delivery contracts, enabling us to measure the persistence of fundamental shocks.
Introduction
From personal devices and computers to large industrial equipment, electricity is pivotal to modern life. It is also a commodity possessing unique characteristics. Electricity cannot be stored at a reasonable cost, nor can it be easily transmitted over very long distances. For its steady supply, a continuous equilibrium between power production and consumption is required. The demand for electricity heavily depends on business activity (weekdays vs. weekends/holidays) and regional climate/weather factors (e.g., sunshine hours, temperature, precipitation and wind speed).
In the past thirty years, the global electricity industry has experienced continuous deregulation due to the political willingness to open power production/retailing to competition. Deregulation has created the need for structural pricing models [1]. Despite the fact that in each country the transition to a fully competitive market has passed through several phases (legislation reform, initial deregulation, privatization, third-party entrance, re-regulation, etc.), each step has resulted in a more complex state of the electricity market (see [2] for a detailed discussion). No doubt, contemporary electricity markets form a complex network of zonal/regional/national exchanges, each with its own specificities and regulations.
The developments mentioned above have infused special properties into electricity prices. A typical price series is characterized by volatility, periodicity, spikes, negative values and long memory (persistence) [3,4]. The latter property, which receives special attention in this study, refers to the fact that price autocorrelation decays slowly with the time distance between two observations [5,6]. The complex behavior of power prices, originating from the very nature of the commodity and the design of power markets, makes it difficult to derive accurate forecasts both in the short and medium term. The participants of the electricity wholesale market are particularly concerned about these complexities. As Joskow [7] claims, utilities can experience huge financial losses in periods of excessive price volatility and spikes. An example is the California electricity power crisis in 2000-2001, during which wholesale electricity prices climbed to USD 300/MWh (almost ten times higher than where they were in previous years) but retail prices remained at the same level. Understanding power price dynamics becomes of paramount importance for determining the optimal participation strategy in an electricity market and managing price risk.
The purpose of this study is to estimate fundamental pricing laws in six European day-ahead electricity markets. We investigate the extent at which temporal price variations can be attributed to shifts in two fundamental indices of power markets: the predicted load and the renewable energy (RE) generation prognosis. The rest of the text is structured as follows: Section 2 reviews existing literature on electricity price modelling, with a focus on fundamental valuation. Section 3 states the purpose and the contribution of the study. Section 4 describes the modelling techniques and Section 5 presents empirical results. Section 6 concludes the study and discusses directions for future research.
Literature Review
According to Weron [8], there are two main communities of researchers interested in modelling electricity prices (electrical engineers and statisticians) but a plethora of approaches. Researchers have tried to categorize published studies into two literature streams. Niimura [9] reviewed more than 100 published papers and tagged them as either simulation or statistical/econometric studies. Aggarwal et al. [10] evaluated the practical relevance of a rich group of simulation models and statistical techniques (moving average, regression analysis, artificial intelligence models) for the analysis of electricity markets. They argue that there is no technique that clearly dominates others in terms of faithfully reproducing the statistical properties of power prices and providing accurate forecasts.
In the statistical literature, there is a variety of model specifications that promise forecasting superiority. Contreras et al. [11] apply ARIMA techniques to hourly price data from the Spanish and the Californian power markets and report average forecasting errors of 10% and 5%, respectively. A seasonal ARIMAX approach to the modelling of Nord Pool prices is also presented in Kristiansen [12]. Using data from the Californian electricity market, Knittel and Roberts [13] employ a combined ARMA model (with seasonal indicators) for the conditional mean and an EGARCH parametrization for the conditional variance. They claim that an improvement of the forecasting performance is not possible without explicitly considering some stylized facts of power prices, such as volatility clustering and long-range dependence. An interesting additional empirical property highlighted in the aforementioned paper is the inverse leverage effect, a term which describes the higher responsiveness of price volatility to a positive price shock.
Many researchers advocate fundamental valuation analysis, i.e., the use of market variables or weather indicators to explain the properties of price time series. According to Weron [8], this requirement can be met in practice by supplementing AR models with "exogenous" variables, such as load, ambient temperature, generation fuel prices, the value of CO 2 allowance rights, etc. (see also [1,12,14]). Woo et al. [15] employ an iterated SUR model to quantify the solar and wind merit-order effect in regional (day-ahead and real-time) Californian markets. Macedo et al. [16] provide empirical evidence for the wind merit-order effect in a Swedish bidding zone (SE3-BZ), using a mixture of seasonal ARMA and GARCH modelling techniques. Adopting a similar modelling approach, Papaioannou et al. [17] measure the impact of fundamental drivers under the recent regulatory reforms in the Greek day-ahead market. Afanasyev et al. [18], using data from two pricing zones of the Russian market and the English APX exchange, investigate the responsiveness of electricity price to three fundamental factors (load, coal/natural gas price). Employing a flexible specification univariate modelling framework, they conclude that price responsiveness to fundamentals is time dependent. In all markets, the coal and the natural gas supply cost shapes the medium-or long-term course of power prices, but load can also cause significant variations in the short run; particularly in the UK market and in one of the two Russian pricing zones (European Russia and Ural).
De Menezes et al. [19] investigate the convergence of European electricity prices and long-run linkages with the value of CO2 emission rights (obtained from the EU Emissions Trading System) and other fossil fuels (coal prices from the API2 Coal index and natural gas prices from the UK NBP Natural Gas index). Using recent daily data from the English, Nordic and French electricity markets, they detect a cointegrating relationship between electricity price and generation fuel cost in the UK market, while in France and Scandinavia this long-run linkage is only observed with price developments in adjacent interconnected markets. Collecting hourly data from the Spanish day-ahead market, Ballester and Furió [20] analyze the impact of renewable energy generation on electricity price. They conclude that increasing levels of clean energy supply are typically associated with a significant reduction in prices and a lower probability of observing positive spikes. Still, in periods of large renewable energy in-feed, prices become more volatile. Another example of a fundamental approach to power valuation is the work of Vahviläinen and Pyykkönen [21]. They use a rich set of endogenous and exogenous indicators (nuclear power production, unregulated hydro-generation, temperature-dependent components, precipitation, etc.), which affect both the demand and supply side of the Nordic market and potentially drive monthly price changes. Finally, Karakatsani and Bunn [1] employ regime-switching regression techniques to perform a fundamental analysis of half-hourly British electricity prices. They conclude that the inclusion of market indicators improves the predictive ability of their model.
A theoretical justification of the fundamental approach to electricity price modelling is given by Geman and Roncoroni [22]. They argue that fundamentals determine the equilibrium level of electricity prices. Still, the equilibrium is not static but varies in response to the course of fundamental factors, which is also shaped by unanticipated events. This implies that many of the proclaimed causal relationships between prices and fundamentals are often the result of latent (unobserved) factors that influence both dependent and explanatory variables.
The rapid increase in computing power has given many researchers the opportunity to investigate more complex power pricing laws using multivariate modelling paradigms. In the context of day-ahead electricity markets, multivariate models are not just an alternative approach but often the right way to proceed with the modelling of hourly price series. Participants in the day-ahead market set the price for power delivery in each of the 24 hourly frames of the following trading day simultaneously, based on information collected up to the gate closing time. This discontinuity in the trading process invalidates many of the univariate paradigms that merge hourly prices into a single time series (see also [23-25] for a discussion). In fact, in the case of the German EEX market, Cuaresma et al. [26] report an improvement in the predictive ability of ARMA specifications if a separate model is used for each hourly trading session. Similar evidence in favor of the multivariate treatment of hourly prices is provided by Weron and Misiorek [14] in the context of the Californian electricity market.
Purpose of the Study
The purpose of this study is to quantify the impact of load and renewable generation on electricity prices across several European day-ahead markets (Belgian, French, Italian, Polish, Portuguese and Spanish). These markets reflect the diversity of Continental power systems in terms of generation technologies, grid connectivity and load. The assessment of the importance of the previously mentioned fundamental indicators is conducted for each hourly trading session, in accordance with how information flows into electricity markets. The literature reviewed in the previous section stresses the need for a multivariate (system) approach to the modelling of electricity prices. In this spirit, recent studies [27-29] advocate the use of panel data models, which mitigate many of the weaknesses of univariate specifications and other multivariate paradigms (such as vector autoregressions). Panel data models can vary the pricing relationship between hourly trading sessions (the so-called slope heterogeneity property) by using regressors that are pertinent to each session (hour-specific regressors). Recent developments in panel data techniques, such as the common correlated effects estimator of Pesaran [30] and its refinements proposed by Ergemen [31] and Thomaidis and Biskas [29], make it possible to consistently estimate price responsiveness to fundamental variates, taking also into account unobserved cross-correlation, long-range dependence and short-memory dynamics in innovations. Although the focus of interest may be the derivation of empirical fundamental pricing laws, all "secondary" aspects of the system dynamics can adversely affect the above task, often leading to inconsistent estimators for the slope coefficients. The panel data model employed in this study (assuming long memory and interactive fixed effects) manages to deliver consistent estimates under such rich data-generating processes. It also allows us to investigate possible equilibrium relationships between price and fundamentals and measure variations in the cointegration strength between peak and off-peak hours.
This paper is an empirical investigation of fundamental pricing laws across European markets. The selected markets differ not only in terms of the total supply capacity but also in their generation mix. Later in this paper, we attempt to explain the estimated price elasticities to various fundamental drivers on the grounds of the involved generation technologies and their relative share in the total energy supply. Other studies adopt a similar design ( [28,29]) but use older data and focus on a single market. Afanasyev et al. [18] also examine pricing laws in three European markets employing univariate modelling techniques, with all the risks mentioned earlier in this paper. The methodological weaknesses of the univariate approach are also likely to explain the conflicting empirical evidence presented in de Menezes et al. [19] as to the existence of cointegrating relationships. Similarly to Ballester and Furió [20], we study the relationship between renewable generation and electricity prices. Still, our study is an improvement over [20], as it takes into account more markets (on top of the Spanish), adopts a multivariate framework and considers a second fundamental driver (load) to identify more accurately the sources of price fluctuations.
Methodology
In our study, we apply a heterogeneous panel data model with common correlated effects to unveil systematic determinants of electricity prices in each national market (due to space restrictions, we are confined to a high-level, operational description of the core methodology; readers interested in the mathematical details can consult references [29,31]). In the style of [28,29,31], we assume that hourly prices follow the law

Y_ti = a_yi + β_yi' D_t + b_i' X_ti + u_yti,    (1)
u_yti = λ_yi' F_t + Δ_+^(−δ_i) v_yti,    (2)

where Y_ti denotes the log-price of the contract delivering power in hour i = 1, 2, ..., N = 24 of the operational day t + 1, D_t is a K_D × 1 vector of deterministic variables, X_ti is a K_X × 1 vector of individual regressors and F_t is a K_F × 1 vector of common unobserved factors. Two individual regressors of potentially high explanatory power are employed in this study: the forecasted load and the renewable generation prognosis pertaining to trading session i of each national market. For the sake of uniformity, we also use the logarithms of the above variables as individual regressors. Further details are given in the empirical section. Our model assumes the following data-generating process for the individual regressors:

X_ti = a_xi + B_xi D_t + u_xti,    (3)
u_xti = Λ_xi F_t + Δ_+^(−θ_xi) v_xti.    (4)

The vector X_ti is thus exposed to the same deterministic (observed) and latent (unobserved) common factors. In the terminology of panel data econometrics, the term a_yi captures individual fixed effects, whereas b_i is the vector of slope coefficients, which measure how sensitive prices are to variations in the load and renewable generation prognosis. Since both the dependent variable and the individual regressors enter in logarithms, each slope coefficient b_ik is perceived as the price elasticity to the fundamental covariate X_ik. Coefficients b_i and other model parameters are estimated separately for each electricity market. As Equations (2) and (4) show, our model assumes the existence of common latent effects F_t in the innovations of the price and the individual regressors (denoted by u_yti and u_xti, respectively). Additionally, we postulate that the vector v_ti = (v_yti, v_xti')' of system errors (shocks) is characterized by both contemporaneous and lagged cross-dependences, which can be sufficiently modelled using a VAR(q) structure (see [29,31] for details).

Another interesting aspect of the dynamics implied by Equations (1)-(4) is the fact that system shocks do not die out instantly but have a long-lasting effect on prices and hour-specific regressors. This is modelled using fractional integration techniques (see [6] for a thorough discussion). A process {y_t} is said to be fractionally integrated of order θ if y_t = Δ^(−θ) e_t ≡ (1 − L)^(−θ) e_t for a stationary ARMA process {e_t} of finite order (L is the lag operator). In empirical applications, we often use the truncated fractional difference operator Δ_+^(−θ), which can be expressed through ratios of Gamma functions:

Δ_+^(−θ) e_t = Σ_{j=0}^{t−1} [Γ(j + θ) / (Γ(θ) Γ(j + 1))] e_{t−j}.    (5)

It can be shown that if 0 < θ < 0.5, {y_t} is mean and covariance stationary, while if 0.5 ≤ θ < 1, the process retains the mean-reversion property but becomes non-stationary in covariance. We assume that (after removing deterministic effects) the variables Y_ti, X_ti and F_t are fractionally integrated of orders θ_yi, θ_xi and θ_f, respectively. The coefficient δ_i in Equation (2) is of particular interest in our study, as it determines, among other things, the strength of the coupling between Y_ti and X_ti. This can be perceived as the tendency of the price to revert to a notional fundamental level dictated by load and renewable generation.
The intuition is that if Y_ti (corrected for common latent and deterministic effects) has a memory length well above δ_i, much of its long-range dependence can be attributed to slow-cycling fundamentals. In this case, Y_ti and X_ti are considered fractionally cointegrated.
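A minimal numpy sketch of the truncated operator in Equation (5) follows: it builds the Gamma-ratio weights in log-space and applies them as a causal convolution to simulate a long-memory series.

```python
import numpy as np
from scipy.special import gammaln

def frac_weights(theta, T):
    """Weights pi_j = Gamma(j + theta) / (Gamma(theta) Gamma(j + 1)) from
    Equation (5), computed in log-space to avoid overflow."""
    j = np.arange(T)
    return np.exp(gammaln(j + theta) - gammaln(theta) - gammaln(j + 1))

def frac_integrate(shocks, theta):
    """Simulate y_t = Delta_+^(-theta) e_t as a causal convolution of the shocks."""
    pi = frac_weights(theta, len(shocks))
    return np.convolve(shocks, pi)[: len(shocks)]

rng = np.random.default_rng(42)
y = frac_integrate(rng.standard_normal(2000), theta=0.4)  # stationary long memory
```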
The confluence of interactive fixed effects, long memory and cross-dependence in system innovations makes common regression techniques (such as OLS) inappropriate for the estimation of fundamental pricing relationships. In particular, OLS would result in inconsistent estimators of the true price elasticities b_i. To avoid the adverse consequences of enriching the model specification, we applied Ergemen's [31] version of the common correlated effects estimator, originally proposed in [30,32].
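To fix ideas, the sketch below implements the basic common-correlated-effects step, augmenting each hour's regression with cross-sectional averages that proxy the latent factors F_t. It is a plain Pesaran-style CCE under simplifying assumptions (static regressors, no fractional corrections), not the full Ergemen estimator.

```python
import numpy as np

def cce_slopes(Y, X):
    """Hour-by-hour OLS augmented with cross-sectional averages (the CCE idea).
    Y: (T, N) log-prices; X: (T, N, K) hour-specific regressors in logs.
    Returns the (N, K) slope estimates b_i, i.e., the price elasticities."""
    T, N, K = X.shape
    proxies = np.hstack([np.ones((T, 1)),               # intercept (fixed effect)
                         Y.mean(axis=1, keepdims=True), # cross-sectional average of Y
                         X.mean(axis=1)])               # cross-sectional averages of X
    b = np.empty((N, K))
    for i in range(N):
        Z = np.hstack([X[:, i, :], proxies])            # own regressors + factor proxies
        coef, *_ = np.linalg.lstsq(Z, Y[:, i], rcond=None)
        b[i] = coef[:K]
    return b
```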
Data
Sample data include hourly quotes of electricity prices, forecasted load and predicted renewable production for six day-ahead European electricity markets (Italian, Belgian, Portuguese, French, Spanish and Polish). Our data source is the Transparency Platform (https://transparency.entsoe.eu/dashboard/show, accessed on 10 April 2021) of the European Network of Transmission System Operators for Electricity (ENTSO-E). (The structure of the Italian market is not fully represented in our dataset. Even though generating units are remunerated at the zonal prices, load representatives buy energy at the National Single Price (called PUN), which is defined as a weighted average over all Italian bidding areas; source: https://www.mercatoelettrico.org/En/Mercati/MercatoElettrico/MPE.aspx, accessed on 5 April 2021. The Transparency Platform of ENTSO-E only reports zonal prices, not the PUN. In our study, we solely used data for Southern Italy, which has a generation mix similar to the other markets in discourse, as opposed to Northern Italy, which has hardly any wind generating capacity. In this bidding zone, hydroelectric and solar power generators are the two main suppliers of renewable energy; source: https://download.terna.it/terna/PROVISIONAL%20DATA%20OF%20THE%20ITALIAN%20ELECTRICITY%20SYSTEM_2019_EN_WEB_8d7f8db3334aef3.pdf, accessed on 7 April 2021.) Sample observations span the period 01/01/2015 to 31/12/2020. The Transparency Platform stacks hourly quotes on top of each other, so raw data had to be arranged in panels to meet the requirements of our methodology. After the removal of […]. (Day-ahead prices are released at D-1 at a different local time in each market; e.g., 12:00 in France, 14:00 in Belgium and 14:30 in Poland; see https://ec.europa.eu/energy/sites/default/files/documents/overview_of_european_electricity_markets.pdf, accessed on 6 April 2021.) In the estimation of our models, the vector D_t of deterministic components is composed of one holiday, eleven monthly (February to December) and six weekday (Monday to Saturday) dummy variables (the holiday dummy is constructed separately for each country according to the list of national holidays).
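The panel arrangement itself is a simple pivot; the pandas sketch below uses hypothetical column names for a long-format extract of the Transparency Platform data.

```python
import pandas as pd

# Hypothetical long-format extract: one row per (delivery_date, hour) quote.
df = pd.DataFrame({
    "delivery_date": ["2020-01-01", "2020-01-01", "2020-01-02", "2020-01-02"],
    "hour": [1, 2, 1, 2],
    "price": [30.1, 28.4, 33.0, 31.2],
})
# One row per operational day, one column per hourly trading session (H01, H02, ...)
panel = df.pivot(index="delivery_date", columns="hour", values="price")
print(panel)
```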
Preliminary Analysis
All countries share a similar intraday (forecasted) load profile, which is briefly presented here due to space limitations. Typically, the load peaks in the afternoon (between hourly sessions H14 and H15) and early at night (between H20 and H21), with its lowest value attained in the early morning hours (H04-H05). The predicted renewable generation intraday profile is also common amongst all countries. The renewable energy in-feed is maximized in the afternoon (between H14 and H15), as solar energy production reaches its peak, levels off at night (H22-H03) and hits its lowest point between H04 and H06, when solar power stations hardly produce any energy and the wind blows mildly. The Polish and the Portuguese markets deviate from the norm. Neither ENTSO-E nor the Polish transmission system operator (PSE) reported the forecasted or the actual solar generation for Poland until 10 April 2020, which results in reduced levels of renewable energy production between H08 and H14. On the contrary, the renewable energy in-feed increases at night (between H22 and H03), which is mainly due to the operation of wind farms. In the case of Portugal, the renewable generation prognosis has two local maxima (located at H16 and H22). This feature may be attributed to the increasing share of wind power generation. In 2020, the wind energy supply in Portugal accounted for 24.43% of the total electricity generation [33]. Figure 1 shows the time series of price, load and renewable generation for the H15 trading session of the Italian market. A similar time evolution of the basic model variables is observed across all markets and reference hours. The displayed time series have a clear seasonal signature. The price exhibits far more outliers, although most of the quotes range from 0.2 to 4.5 (in logarithmic scale). Load forecasts show lower dispersion, varying between 7.5 and 8.4, while the typical logarithmic range of the renewable generation prognosis is 3.0-8.5.
Fractional Integration and Persistence
A preliminary analysis of model residuals revealed substantial remaining autocorrelation in all reference hours and markets. In many cases, significant sample autocorrelations extend up to the 14th lag. In response to this finding, we assumed that system shocks follow a vector autoregressive process of order q = 7 in all markets except France, where a more parsimonious VAR(4) filter proved successful in removing serial and lagged cross-correlation in system errors.
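For orientation, the whitening step can be sketched with statsmodels as follows; the input is a synthetic stand-in for the system errors of one market, whereas in the actual procedure the VAR filter is embedded in the joint panel estimation rather than applied in isolation.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Synthetic stand-in for the system errors: columns could be the shocks of the
# price, load and RE equations of a given trading session.
rng = np.random.default_rng(3)
errors = rng.standard_normal((1000, 3))
fit = VAR(errors).fit(7)   # VAR(q) filter with q = 7
whitened = fit.resid       # filtered errors should be serially uncorrelated
```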
In Section 4, we briefly reviewed the concepts of fractional integration and cointegration. In this section, we empirically assess the persistence of electricity prices across the selected European markets. As in [29], we compared the estimates of θ_yi and δ_i to indicate whether the chosen covariates (load and renewable generation) are responsible for the drifting behaviour of power prices. (For the estimation of θ_yi, we applied the conditional-sum-of-squares technique to the seasonally adjusted and defactored power prices; for details, see [29], Section 5.4. As the panel model employed in this study does not include common stochastic regressors, we were able to approximate the latent factor structure using cross-sectional averages of the individual regressors and the dependent variable.) A large value of θ̂_yi − δ̂_i is evidence for the existence of a rational stochastic trend in price (i.e., price drifts because it follows slow-cycling fundamentals).
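A self-contained grid-search version of the conditional-sum-of-squares idea for a purely fractional process is sketched below; the paper's procedure additionally involves the seasonal adjustment and defactoring steps, which are omitted here.

```python
import numpy as np

def frac_diff(y, theta):
    """Truncated fractional difference Delta_+^(theta) y via the binomial
    recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - theta) / j."""
    pi = np.empty(len(y))
    pi[0] = 1.0
    for j in range(1, len(y)):
        pi[j] = pi[j - 1] * (j - 1 - theta) / j
    return np.convolve(y, pi)[: len(y)]

def css_memory(y, grid=np.linspace(0.01, 1.49, 149)):
    """Grid-search CSS estimate: the order theta whose fractional difference
    of y leaves the smallest residual sum of squares."""
    sse = [np.sum(frac_diff(y, th) ** 2) for th in grid]
    return grid[int(np.argmin(sse))]

rng = np.random.default_rng(7)
rw = np.cumsum(rng.standard_normal(800))  # a pure unit-root (random walk) series
print(css_memory(rw))                     # expected: close to 1.0
```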
The estimated memory length coefficients of the (seasonally adjusted and defactored) price series ranged between 0.92 and 1.08 across all hours and countries, indicative of a process that is non-stationary both in mean and covariance. Following Ergemen et al. [28] and Thomaidis and Biskas [29], we also calculated the reduction in the estimated price persistence levels (i.e., the difference θ̂_yi − δ̂_i). Through bootstrapping, we were able to calculate confidence intervals for the estimated cointegration gap and thus gauge its statistical significance. Figure 2 shows the 5th, 50th and 95th percentiles of the cointegration gap bootstrap distribution across reference hours and countries. Overall, our results are supportive of the existence of fractional cointegration between prices and fundamental covariates. In all graphs, the intraday curve of the 5th percentile lies above 0, implying a significant reduction in price persistence when prices are corrected for time variations in load and renewable generation. The biggest gap is typically observed in working hours, while for near-midnight and early morning prices, the reduction in the persistence levels is less prominent. Notably, in many cases, the estimated p_5 exceeds 0.5, which allows us to conclude that the non-stationarity of prices is largely a property inherited from fundamentals (i.e., fundamental shocks are persistent). This holds true for all Belgian power delivery contracts and Italian morning contracts. After removing load or renewable generation cycles, prices become mean reverting with bounded variance.
Price Elasticities

Table 1 shows the estimated slope coefficients for the hourly regressors of our model (load forecast and RE generation prognosis). The symbols *, ** and *** indicate statistical significance at 10%, 5% and 1%, respectively. The level of significance of each estimate was determined through a resampling scheme (following a design similar to [29]). Overall, the results are consistent with economic theory and empirical findings. In almost all power delivery contracts, load is positively correlated with price. In some countries, load elasticity estimates have a mixed sign, although negative estimates are not statistically significant. This result can be attributed to the effect of outliers, which tend to "pull" slope coefficient estimates into the negative segment of the real line. Still, these extreme values do not show up in repeated samplings, hence the insignificance of the estimate. The estimates for the renewable generation elasticity are stably negative across all hours and countries. In all but two hourly power delivery contracts, the hypothesis that the true value of the RE slope coefficient is zero is rejected with 95% confidence (in most cases the null hypothesis can be rejected with higher statistical certainty). The importance of fundamental covariates to the shaping of price varies across markets. In Poland, the average impact of a 1% increase in the load forecast on the hourly electricity price is 0.75%. The full range of significant slope estimates for load is 0.32 to 1.11. On the contrary, a 1% increase in the renewable generation prognosis is expected to drive the hourly electricity price down by 0.07% on average (hourly significant estimates vary between −0.09 and −0.05). In the French market, the average price elasticity to load is much higher (1.79), although significant slope estimates vary from 0.87 to 4.78 across trading hours.
We also find that a 1% increase in the renewable energy supply (uniformly across all trading hours) is expected to reduce the French electricity price by 0.17% on average. Significant estimates for the renewable generation elasticity vary between −0.50 and −0.07. The Spanish market has an average price elasticity to load equal to 1.21 (the minimum and maximum of the significant load coefficient estimates are 0.75 and 2.22, respectively), while the renewable energy coefficients range from −0.33 to −0.12, with an average of −0.21.
The estimates of the load slope coefficients are indicative of a complex pricing relationship in particular hours and/or countries. Theory dictates that the sensitivity of price to load variations should increase when electricity demand is high. In this system condition, electricity demand crosses the energy offer curve at a steep point, dominated by units of high marginal generation cost. Figure 3 depicts each country's generation mix per power delivery hour. The vertical space between two consecutive lines shows the average share of a generation technology in the hourly energy supply. Raw data were obtained from the ENTSO-E platform (Aggregated Generation per Type, codes 16.1.B and C). As markets are in continuous balance, the top line in each graph also reflects the intraday profile of the (actual) load. In the Spanish market, load elasticity estimates are relatively elevated in sessions H15-H19 (ranging from 1.28 to 2.22). As the bottom-right panel of Figure 3 shows, the demand for electricity in these hours is also high. In the French market, exceptionally large estimates for the load coefficient are reported for H13-H15, which are also peak hours according to the top-right panel of Figure 3. Table 1 shows that in most countries the intraday curve of load elasticities has two local maxima that are not necessarily located in hours of increased demand. As opposed to load, the intraday pattern of price elasticity to renewable generation varies significantly across countries. Table 1 shows that in Belgium and France the merit-order effect is more notable in the afternoon (H13-H15), while in other countries large (absolute) values of the renewable generation elasticity are observed later (H16-H17) or earlier (H02-H05) in the day. These variations can be explained by the intraday generation shares of wind and solar power stations in each country. Figure 3 illustrates that solar power generation is almost absent in all countries early in the morning and late at night. The contribution of hydroelectric energy is also relatively low in these trading sessions, while nuclear, fossil-fuel-burning and wind power plants become the main sources of energy. The flat proportion of wind energy in the generation mix is significantly larger in Portugal/Spain compared to France/Italy. Solar energy is underrated in the Portuguese generation mix. This is also the case with Poland, but as we pointed out in Section 5.2, the small share of solar energy production can be attributed to the incompleteness of data for this country. Solar energy in Italy, Belgium and Spain occupies a large portion of the average energy supply in daylight hours. Fossil generation fuels play a decisive role in the Polish market and a less important part in the Italian, Portuguese and Spanish generation mix. Their contribution is much lower in Belgium and France, where nuclear energy is the dominant source across all hourly sessions. (The ENTSO-E Transparency Platform does not provide further information on the source tagged as "Other" in the Italian generation mix. We conjecture that it refers to imported energy. TERNA reports that in 2019 domestic power generation covered 88% of the annual demand, with the remaining 12% met by imports. See the "2019 Provisional Data on Operation of the Italian Electricity System", available from https://download.terna.it/terna/PROVISIONAL%20DATA%20OF%20THE%20ITALIAN%20ELECTRICITY%20SYSTEM_2019_EN_WEB_8d7f8db3334aef3.pdf, accessed on 7 April 2021.)
A closer inspection of the Table 1 estimates reveals that load and RE generation elasticities typically attain their maxima in adjacent trading sessions. Representative of this pattern is the H13 power delivery contract of the French market. The price of this contract is exceptionally elastic to load (the slope coefficient estimate is 3.17) but also very reactive to RE generation variations (the elasticity estimate is −0.50). In the H13 trading session of the French market, shifts in the forecasted load can have a high impact on price, unless more renewable energy is fed into the grid. Renewable energy in-feeds tend to shift the supply curve to the right. Even if demand is high, it will cross the power supply curve at a less steep point, resulting in a lower (net) price disturbance. A similar "dipolar" profile of elasticities is observed in the Belgian and Spanish markets, although large absolute values for the slope coefficients do not always coincide with peaking load.
Discussion and Future Research
The aim of this study was to estimate and assess the importance of fundamental price drivers in several European day-ahead electricity markets (Belgian, French, Italian, Polish, Portuguese and Spanish). Among the many possible indicators that determine the evolution of hourly prices, we examined the load forecast and the predicted in-feed from wind and solar power plants. Applying new panel data techniques that take into account slope heterogeneity and common unobserved effects, we found that the renewable generation prognosis has a negative impact on hourly prices, uniformly across all power delivery hours and countries. This finding is supportive of the merit-order-effect hypothesis, which has also been a topic of research in other studies on electricity markets. The impact of the load forecast is more concentrated; load seems to be a decisive determinant of electricity price in selected hourly trading sessions. Empirical results show that in many markets renewable energy supply can largely offset the load effect, thus reducing price risk.
An additional objective of this study was to explore the long-memory property of electricity prices. We found that the price of contracts for power delivery in all hours of the subsequent day and across all selected markets is very persistent. The degree of persistence remains high (similar to a random walk) even if one purges seasonality and unobserved commonality. In many contracts, non-stationarity can be attributed to swings in the postulated fundamental drivers; price residuals appear to be mean-reverting and have bounded unconditional variance. This finding may be indicative of the inability of market players to exert power in these trading sessions, as price shocks do not seem to have a long-lasting effect. Still, the mean-reversion rate varies significantly with the reference hour, which does not allow us to draw safe conclusions on the degree of market competitiveness.
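The persistence and mean-reversion language above can be illustrated with a simple AR(1) fit on (deseasonalized) price residuals: the closer the autoregressive coefficient is to 1, the longer a shock lingers, and the half-life of a shock is ln(0.5)/ln(phi). This is a generic sketch, not the panel estimator used in the paper, and the input series below is synthetic.

```python
# Sketch: measure shock persistence in price residuals via an OLS AR(1)
# fit and report the implied half-life of a shock. Synthetic data only.
import numpy as np

def ar1_half_life(x: np.ndarray) -> float:
    """OLS AR(1) slope on a demeaned series; returns periods until a
    shock decays to half its size, ln(0.5) / ln(phi)."""
    x = x - x.mean()
    phi = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    return np.log(0.5) / np.log(abs(phi))

rng = np.random.default_rng(0)
# Synthetic mean-reverting residuals with true phi = 0.8 (half-life ~3.1)
e = rng.normal(size=1000)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = 0.8 * x[t - 1] + e[t]
print(round(ar1_half_life(x), 2))  # close to 3.1 trading days
```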
Our study attempts to highlight the forces that shape the intraday price curve dynamics, thus assisting in the development of more efficient strategies for managing market risks. The unique characteristics of electricity prices have motivated market participants to seek hedging strategies that offer protection against power price shifts. In light of the merit-order effect, these strategies need to be extended to the control of volumetric risk (associated with fluctuations in the output of variable generators), which until now has only been the concern of renewable energy traders.
There are several directions in which our study can be extended in the future. The recent COVID-19 pandemic has initiated a literature stream that investigates possible effects on the operation of electricity markets around the globe. No doubt, COVID-19 has caused disruptions in both commercial and industrial activity, altering the patterns of electricity consumption. Ghiani et al. [34] attempt to quantify the effect of the COVID-19 pandemic on electricity demand in Italy. They claim that although more people have started working from home, the increase in residential consumption could not balance the negative demand shock coming from the lockdown of industries. Their study reports a 37% decrease in electricity consumption (compared to the years preceding the pandemic outbreak) and a parallel 30% reduction in the wholesale electricity price. According to the authors, the observed decline in the wholesale price cannot be solely attributed to the (net) negative demand shock, as during the pandemic the renewable energy generation share also increased. Santiago et al. [35], using data from the period 2015-2020, estimated an average reduction of 13.49% in electricity consumption during the COVID-19 pandemic period (the reduction is 14.53% on working days and 10.62% on weekends). The authors also report significant changes in the intraday demand profile. The largest decline has occurred in the evening and morning hours when electricity consumption normally peaks. Agdas and Barooah [36] present opposite evidence on the effect of the pandemic on U.S. electricity consumption. Based on power system data from three states (Florida, California and New York) spanning a period of two years (2019 and 2020), they conclude that there is no clear indication of a shift in electricity demand. All in all, the available literature does not firmly suggest that the COVID-19 pandemic has introduced a structural break in the operation of electricity markets. Yet this constitutes a promising area for future research. | 2021-11-17T16:11:40.173Z | 2021-11-15T00:00:00.000 | {
"year": 2021,
"sha1": "749f6dbb5dcc6114ab20bc7b87d90528c73dca48",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/14/22/7623/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "485b2cfff0f16c93fc2b439717ec315c5ac25a32",
"s2fieldsofstudy": [
"Economics",
"Engineering"
],
"extfieldsofstudy": []
} |
266877679 | pes2o/s2orc | v3-fos-license | Surgical Management of Hook of Hamate Fractures: A Systematic Review of Outcomes
Purpose This review aimed to compare the postoperative outcomes of open reduction internal fixation (ORIF) versus excision in the surgical treatment of hook of hamate fractures. Methods A systematic review of PubMed and EMBASE databases from 1954 to 2023 was performed using the search term “hook of hamate fracture” to identify all publications regarding the use of ORIF or excision in the treatment of hook of hamate fractures. Outcomes included a return to sport, pain, ulnar nerve dysfunction, flexor tendon dysfunction, union rate, wrist range of motion (ROM; % of contralateral hand), grip strength (% of contralateral hand), and quick disabilities of arm, shoulder, and hand scores. Results Twenty-seven of the 705 total screened articles were included. Excision of the hook of hamate (n = 779) resulted in a shorter return to sport time (6 vs 7.8 weeks), lower rates of postoperative pain (6.1% vs 33.3%), higher rates of ulnar nerve sensory dysfunction (4.2% vs 0%), and higher rates of ulnar nerve motor dysfunction (1.5% vs 0%) relative to ORIF (n = 51). Chronic fractures had a longer return to sport time (7.2 vs 5.7 weeks) relative to nonchronic injuries. Conclusions Both surgical procedures appear to yield acceptable outcomes in the treatment of hook of hamate fractures. However, based on the sparsity of available data, we are unable to determine a consistent difference between hook of hamate excision and ORIF. Clinical relevance To our knowledge, no current consensus on the optimal surgical treatment for hook of hamate fractures exists. Our findings emphasize the need for a large prospective cohort study using standardized outcomes to provide strong evidence as to whether surgical excision or ORIF yields greater outcomes in the treatment of hook of hamate fractures.
Hook of hamate fractures comprise approximately 2% to 4% of all carpal fractures.1-3 These fractures frequently occur in sports that involve repeated impact exerting a direct force against the hamate, such as tennis, baseball, and golf.4-6 The peculiar anatomy of the hook of the hamate places it at risk of fracture. A fracture of this area can result in weakness of grip and persistent ulnar-sided wrist pain, hindering everyday tasks and sports.7,8 If not optimally treated, hook of hamate fractures can cause chronic pain, nonunion, ulnar nerve irritation, degenerative changes, and tendon rupture.5,9 These potential complications have made injuries to the hook of the hamate historically challenging to manage.
Multiple reports of nonunion have been noted, even in patients in whom the correct diagnosis and proper immobilization were initiated early on.10 As such, current treatment modalities are directed toward early surgical intervention in the form of excision or ORIF. Although extensive methods of surgical management have been described, few studies provide direct comparisons among techniques,5,9 and there remains a lack of consensus on the best approach to fracture treatment, particularly among athletes.5,9 The purposes of this study were to review the current literature on hook of hamate surgical treatment (excision vs ORIF) and analyze validated clinical and functional outcomes. We hypothesized that excision would be superior to ORIF in both clinical and functional outcomes.
Study selection
A literature search was performed on July 31, 2023, via the electronic databases PubMed and EMBASE using the search term "hook of hamate fracture." The search covered articles published from 1954 through July 2023. All articles subsequently underwent a 2-step review process by 2 independent reviewers as follows: (1) article title and abstract were reviewed, and (2) those articles meeting eligibility criteria underwent a full-text review. This systematic review of the relevant literature was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines.11
Eligibility criteria
Inclusion criteria for articles included the following: English-language articles, levels of evidence I through IV, specification of a fracture, specification of treatment or lack thereof, and inclusion of clinical outcomes data. Exclusion criteria included the following: non-English articles, review articles, case reports, studies evaluating hamate body fractures, studies evaluating newly implemented surgical procedures or technique guides, studies including multiple fracture types, and studies not reporting data separated by type of treatment.
Data abstraction/analysis
Two independent reviewers examined the selected full-text articles after abstract review for inclusion. Data, including patient characteristics, treatment methodology, and functional outcomes, were extracted from the articles selected for inclusion. The outcomes commonly reported were return to sport, pain, union, and ulnar nerve dysfunction.
Posttreatment range of motion (ROM) was reported as the percentage of the contralateral uninjured wrist ROM. Grip strength was reported as a percentage of injured hand strength relative to the contralateral uninjured extremity. The quick disabilities of arm, shoulder, and hand (QuickDASH) is a patient-reported questionnaire comprising 11 questions pertaining to disability and severity of symptoms, totaling a maximum score of 100, with higher scores correlating with increased disability and symptoms. Weighted averages were calculated using studies that provided 1 or more of the above metrics. A combined time to surgery/diagnosis was calculated because the majority of studies reported either time to surgery or time to diagnosis; in the one study that reported both, time to surgery was used. All studies were available for synthesis, and variables of interest that were not reported were left blank for the respective study. A meta-analysis could not be performed because of the heterogeneity of the literature collected. The protocol for this systematic review was not registered.
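The weighted-average pooling described above amounts to weighting each study's outcome value by the number of hands it contributed and skipping studies that did not report that outcome. A minimal sketch follows; the study sample sizes and values are hypothetical, not figures from this review.

```python
# Sketch of sample-size-weighted averaging of an outcome across studies,
# as described above. Study sample sizes and values are hypothetical.

def weighted_average(studies):
    """studies: list of (n_hands, outcome_value); studies that did not
    report the outcome carry value None and are skipped."""
    reported = [(n, v) for n, v in studies if v is not None]
    total_n = sum(n for n, _ in reported)
    return sum(n * v for n, v in reported) / total_n

# Hypothetical return-to-sport times (weeks) from three excision studies
studies = [(12, 5.5), (30, 6.2), (8, None)]  # third study did not report
print(round(weighted_average(studies), 2))   # -> 6.0
```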
A preplanned risk-of-bias assessment was not performed within this systematic review because of the lack of high-quality evidence investigating the clinical outcomes of surgical management of hook of hamate fractures. The systematic review consists of case series and retrospective cohort studies, which contain inherent biases and lack a control group.
Results
A total of 705 articles were identified, and 216 duplicates were removed. Subsequently, 60 articles met eligibility requirements in the first review of titles and abstracts. After full-text assessment, 27 articles were included in the current review.3-5,7,8,10,12-31 The literature review process is detailed in a Preferred Reporting Items for Systematic Reviews and Meta-Analyses flowchart (Figure). Additionally, of the 27 articles included in the final review, 17 articles provided either time to diagnosis or time to procedure. In addition to stratification based on the type of operation, these articles were stratified based on the chronicity of the injury into 2 groups: chronic (time to operation or time to diagnosis >12 weeks) and nonchronic (time to operation or time to diagnosis ≤12 weeks).
Patient characteristics
Appendix A (available on the Journal's website at www.jhsgo.org) lists patient and study characteristics in more detail. Of the 27 articles included, data were available for 830 hands from 827 patients. In the ORIF group (51 patients; 51 hands; 8 papers), 15.6% of the patients were women, and the average age for all patients was 40 years. The average time to surgery/diagnosis was 5.1 weeks, and the average time for follow-up was 25.1 months. Pain was present in 100% of the patients presenting with hook of hamate fractures. Sports-related injury was the cause of fracture in 30% of the patients who underwent ORIF. In the excision group (776 patients; 779 hands; 19 papers), 2.2% of the patients were women, and the average age for all patients was 23.2 years. The average time to surgery/diagnosis was 16.2 weeks, and the average time for follow-up was 13 months. Pain was present in 88% of the patients presenting with hook of hamate fracture. Sports-related injury was the cause of fracture in 93.7% of the patients who underwent hook of hamate excision.
Categorizing studies by chronicity of injury, in the chronic fracture group (135 patients; 135 hands; 8 papers), 7.1% of the patients were women, and the average age was 30.2 years. The average time to surgery/diagnosis was 20.8 weeks, and the average time for follow-up was 49.4 months. Pain was present in 100% of patients presenting with hook of hamate fractures. Sports-related injury was the cause of fracture in 77.8% of the patients with chronic injury. Hook of hamate excision was performed in 93.4% of these patients. In the nonchronic group (106 patients; 109 hands; 9 papers), 7.5% of the patients were women, and the average age was 26.4 years. The average time to surgery/diagnosis was 6.2 weeks, and the average time for follow-up was 23.1 months. Pain was present in 100% of the patients presenting with hook of hamate fractures. Sports-related injury was the cause of fracture in 41.5% of the patients with nonchronic injury. Hook of hamate excision was performed in 68.8% of these patients.
Clinical outcomes
The full list of outcomes was split by surgical intervention (Table 1) and chronicity of injury (Table 2). In ORIF studies, the ROM and grip strength were reported or calculated in 2 of 8 papers (19 hands), return to sport was reported in 1 of 8 papers (6 hands), final QuickDASH (average 13 months post-ORIF) score was reported in 3 of 8 papers (16 hands), pain was reported in 6 of 8 papers (39 hands), ulnar nerve and tendon dysfunction were reported in 3 of 8 papers (14 hands), and union was reported in 7 of 8 papers (38 hands). In excision studies, the ROM was not reported or calculable in any papers, grip strength was reported in 2 of 19 articles (32 hands), return to sport was reported in 14 of 19 papers (751 hands), final QuickDASH (average 19 months after excision) was reported in 1 of 19 papers (12 hands), and pain was reported in 13 of 19 papers.

In chronic fractures, the ROM was not reported or calculable. Grip strength was reported or evaluated in 2 of 8 papers (32 hands), return to sport was evaluated in 5 of 8 papers (119 hands), final QuickDASH (average 6 months posttreatment) was reported in 1 of 8 papers (4 hands), pain was reported in 7 of 8 papers (118 hands), ulnar nerve and tendon dysfunction were reported in 6 of 8 papers (110 hands), and union was reported in 2 of 8 papers (38 hands). In nonchronic fractures, the ROM and grip strength were reported or calculated in 2 of 9 papers (19 hands), return to sport was reported in 5 of 9 papers (74 hands), final QuickDASH (average 18 months posttreatment) was reported in 3 of 9 papers (24 hands), pain was reported in 6 of 9 papers (89 hands), ulnar nerve and tendon dysfunction were reported in 4 of 9 papers (67 hands), and union was reported in 3 of 9 papers (21 hands).
Excision of the hook of the hamate resulted in a return to sport time of 6 weeks, postoperative pain in 6.1% of the patients, ulnar nerve sensory dysfunction in 4.2% of the patients, and ulnar nerve motor dysfunction in 1.5% of the patients. Treatment with ORIF was associated with a return to sport time of 7.8 weeks and postoperative pain in 33.3% of the patients; neither ulnar nerve sensory nor motor dysfunction was reported in the ORIF group.
Treatment of chronic fractures resulted in a return to sport time of 7.2 weeks, postoperative pain in 2.5% of the patients, ulnar nerve sensory dysfunction in 1.8% of the patients, and ulnar nerve motor dysfunction in 0.9% of the patients. Treatment of nonchronic fractures was associated with a return to sport time of 5.7 weeks, postoperative pain in 15.7% of the patients, and ulnar nerve sensory dysfunction in 4.5% of the patients; no ulnar nerve motor dysfunction was reported in the nonchronic group.
Discussion
This study provides an extensive literature review of treatment approaches in the setting of hook of hamate fracture care. We found that neither ORIF nor excision yielded consistently improved outcomes. Additionally, ORIF yielded the greatest average grip strength and lowest rates of ulnar nerve and flexor tendon dysfunction after surgery. Excision yielded a decreased return to sport time, postoperative QuickDASH score, and rates of postoperative pain. The cause of injury was much more likely to be related to sport in those who underwent hook of hamate excision.
Additionally, this study analyzed postoperative outcomes among acute versus chronic fractures of the hook of the hamate. Chronic fractures were much more likely to be treated with excision relative to nonchronic fractures. Chronic fractures had lower postoperative QuickDASH scores, lower rates of postoperative pain, and lower rates of ulnar nerve sensory dysfunction. Treatment of acute fractures resulted in greater grip strength, lower return to sport time, lower rates of ulnar nerve motor dysfunction, and lower rates of flexor tendon dysfunction.
Two studies have examined postoperative results among hook of hamate management treatments, comparing the use of immobilization, hook of hamate excision, and ORIF.9,32 Neither surgical approach resulted in superior postoperative measures, whereas immobilization resulted in nonunion rates of 24% and 83% in each study. This is in accordance with the current review of the literature, in which neither surgical intervention proved to be consistently superior.
The goal of hook of hamate excision is removal of the fractured fragment while avoiding injury to ulnar nerve structures. Indications for surgical treatment of these injuries include chronic nonunion fractures in addition to acute fractures in younger athletes. By contrast, ORIF is generally indicated for acute hook of hamate fractures in older adults. Although each method has general indications as noted above, great variability exists for each fracture based on the fracture characteristics and surgeon preference and experience. Postoperative rehabilitation after hook of hamate fracture also varies greatly by surgeon and institution; however, general mainstays of treatment include early passive ROM exercises as allowed by the stability of the fixation construct. Disparity in postoperative rehabilitation is a potential source of variation in the results of this review.
This literature review has several limitations. Although only validated data were included, the heterogeneous composition of the articles reviewed made direct comparisons challenging. In particular, the reported outcome measures displayed an enormous range of variation between articles, making it difficult to draw definitive conclusions regarding the superiority of either surgical technique. The lack of standardized outcome measures resulted in variability not only in reporting but also in the presentation of reported information. The relatively small sample sizes, because of data missingness, within the ROM, grip strength, and QuickDASH score variables made these values unreliable for the comparison of surgical techniques. Direct comparisons were made even more challenging by the relatively low volume of studies investigating ORIF in the treatment of hook of hamate fractures. In addition, upon comparing nonchronic and chronic fractures, we found that the timing of surgery strongly influenced the type of surgery with which these patients were treated. As a result, our findings regarding the outcome differences between nonchronic and chronic fractures are confounded by the relatively high number of hook of hamate excision procedures performed in the chronic fracture cohort relative to the nonchronic fracture cohort. We elected not to exclude ORIF patients from this comparison because of the already low availability of data. The selection of only English-language articles introduces a selection bias that may limit the generalizability of our findings. This review is also limited to a literature body of predominantly case series, with inherent biases and no control group. Because of this, we are unable to determine the causality of our findings. Therefore, future controlled studies are required to assess beyond correlation.
In this review, we report that neither surgical approach to hook of hamate fracture management yielded consistently better average postoperative outcomes. Treatment of this fracture is ultimately based on fracture chronicity, patient functional status, and ulnar nerve involvement. However, our data were limited by missingness and demonstrated that no robust comparison of hook of hamate excision and ORIF is feasible with the existing literature. We recommend a robust prospective cohort study or randomized trial with standardized outcomes, such as average postoperative pain, return to sport, ulnar nerve dysfunction, flexor tendon dysfunction, ROM, grip strength, and QuickDASH score.
Figure. Preferred Reporting Items for Systematic Reviews and Meta-Analyses flowchart depicting literature review methodology.
Table 1
Clinical Outcomes Stratified by Surgery Type | 2024-01-10T16:28:29.996Z | 2023-12-27T00:00:00.000 | {
"year": 2023,
"sha1": "287017e1d5060bb9c4e46877cef5c1cf0adc9d96",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jhsgo.org/article/S2589514123001962/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "72aa4e587345fe78496114435313ebbec1951a65",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
8683541 | pes2o/s2orc | v3-fos-license | Carotid endarterectomy versus carotid angioplasty for stroke prevention: a systematic review and meta-analysis
Background This meta-analysis aimed to evaluate the efficacy of carotid endarterectomy (CE) compared with carotid angioplasty (CA) in preventing stroke. Whether the use of CE is more efficient in preventing stroke than CA is a matter of debate. Methods Data were gathered from randomized controlled trials to evaluate the effect of CE compared with CA on the risk of stroke. Electronic searches in PubMed, Embase, and the Cochrane Library were performed to identify studies till November 2014. Only randomized controlled trials performed on patients who received either CE or CA for stroke prevention were included. Results Nine relevant trials (n = 7163) that met the inclusion criteria were identified. In a pooled analysis, CE resulted in 35 % reduction in relative risk (RR) for short-term stroke [RR, 0.65; 95 % confidence interval (CI): 0.47–0.89; P = 0.007)] and 22 % reduction in RR for long-term stroke (RR, 0.78; 95 % CI: 0.66–0.93; P = 0.006) relative to CA. However, CE also increased the risk of 30-day myocardial infarction by 114 % compared with CA (RR, 2.14; 95 % CI: 1.30–3.53; P = 0.003). Sensitivity analyses suggested that CE might influence the risk of 30-day major vascular events and 1-year major vascular events compared with CA. Conclusions CE could reduce the risk of stroke (whether short term or long term), but resulted in a relative increase in the risk of myocardial infarction. This study might guide appropriate judgments about treatment approach. It also provided evidence to justify general guidelines for patients with carotid artery stenosis.
Background
Cerebrovascular disease, either ischemic or hemorrhagic stroke, is the leading cause of premature mortality and morbidity worldwide for both men and women [1][2][3]. Asian countries have a higher incidence of stroke compared with Western countries [4]. Over the past few years, many studies have shown a strong correlation between carotid artery stenosis and stroke [5,6]. It has been suggested that carotid artery stenosis should be corrected as a therapeutic approach to prevent stroke events. However, the use of carotid endarterectomy (CE) compared with carotid angioplasty (CA) for preventing stroke has not been shown consistently to be beneficial.
CE was recommended as the standard therapy, which could reduce the risk of stroke in patients with carotid artery stenosis [7]. However, in many cases, a high residual risk of stroke persists after CE. Hence, it is necessary to explore additional effective preventive therapies [8]. Recently, endovascular treatments [9] (CA with or without stenting) have been increasingly used as an alternative to CE. However, whether endovascular treatments are more effective than surgery in patients with carotid artery stenosis remains unclear. This led to uncertainty over the presence and magnitude of any protective effects of endovascular treatments and surgery on stroke, and also difficulties in interpretation of the results. Therefore, this systematic review and meta-analysis was conducted to evaluate the possible effect of CE compared with CA on stroke in patients with carotid artery stenosis.
Methods
Data sources, search strategy, and selection criteria
This review was conducted and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Statement [10] issued in 2009. Data were gathered from randomized controlled trials to evaluate the effect of CE compared with CA on the risk of stroke. Trials comparing CE with CA were included, excluding any studies with a sample size of less than 50, to alleviate systematic error and resulting bias, hence ensuring the reliability of the conclusions.
The English literature was systematically searched to identify all relevant randomized controlled trials regardless of publication status (published, in press, and in progress). Relevant trials were identified using the following procedures: (1) Electronic searches: PubMed, Embase, and the Cochrane Central Register of Controlled Trials were searched for randomized controlled trials of CE compared with CA, using "endarterectomy," "angioplasty," "stenting," "stenosis," "carotid," "human," "English," and "randomized controlled trials" as search terms. All reference lists from reports on nonrandomized controlled trials were searched manually for additional eligible studies. (2) Other sources: Authors were contacted to obtain any possible additional published or unpublished data, and the site http://www.ClinicalTrials.gov was searched for ongoing randomized controlled trials that had been registered as completed but not yet published, using the aforementioned terms. Medical subject headings, methods, patient populations, interventions, and outcome variables of these studies were used to identify relevant trials.
The literature search, data extraction, and quality assessment were undertaken independently by two authors (CZ and FLC) using a standardized approach, and any discrepancy was settled by group discussion. Studies were eligible for inclusion if: (1) the study was a randomized controlled trial; (2) sample size was more than 50; (3) the number of events for stroke that occurred during the study was more than 10; (4) the trials assessed the effects of CE compared with CA; and (5) patients had carotid artery stenosis.
Data collection and quality assessment
All data from eligible trials were independently abstracted in duplicate by two independent investigators (CZ and FLC) using the standard protocol and reviewed by a third investigator (AJH). Any discrepancy was resolved by group discussion. Data extracted from the included trials were as follows: name of first author or study group, publication year, number of patients, percentage of males, mean age, history of disease, intervention, control, duration of follow-up, and primary outcome (the number of incident cases for each treatment group). One author (AJH) entered the data into a computer, and the primary author (YHZ) checked it. The study quality was assessed using the Jadad score [11], which was based on the following five subscales: randomization (1 or 0), concealment of the treatment allocation (1 or 0), blinding (1 or 0), completeness of follow-up (1 or 0), and the use of intention-to-treat analysis (1 or 0). A "score system" (ranging from 0 to 5) was developed for assessment. In this meta-analysis, a study with a score of 4 or more was considered to be of high quality.
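The quality score described above reduces to summing five binary subscales and applying the ≥4 cutoff. A minimal sketch of that rule follows; the example trial ratings are hypothetical.

```python
# Sketch of the five-subscale quality score described above: each subscale
# contributes 1 or 0, a total of 4 or more counts as high quality.
# The example ratings are hypothetical.

def jadad_score(randomization, allocation_concealment, blinding,
                follow_up_complete, intention_to_treat) -> int:
    return sum(map(int, (randomization, allocation_concealment, blinding,
                         follow_up_complete, intention_to_treat)))

score = jadad_score(True, True, False, True, True)
print(score, "high quality" if score >= 4 else "lower quality")  # 4 high quality
```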
Statistical analysis
The results of each randomized controlled trial were compiled as dichotomous frequency data. Individual study relative risks (RRs) and 95 % confidence intervals (CIs) were calculated from event numbers extracted from each trial before data pooling. The overall RRs and 95 % CIs of stroke incidence, major vascular events, myocardial infarction, and any possible adverse events were also calculated. Both fixed-effects and random-effects models were used to assess the pooled RR for CE compared with CA. Although both models yielded similar findings, the results reported are from the random-effects model, which assumes that the underlying effect varies among the included trials [12,13]. The heterogeneity of the treatment effects between studies was investigated visually using scatter plot analysis as well as statistically using the heterogeneity I 2 statistic [14,15]. A sensitivity analysis was also performed by removing each individual trial from the meta-analysis. An Egger's test [16] was used to check for potential publication bias. All the reported P values were two-sided, and a P value less than 0.05 was regarded as statistically significant for all included studies. All analyses were performed using STATA software (version 10.0).
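The pooling procedure described above (inverse-variance weighting of log RRs, a random-effects adjustment, and the I² heterogeneity statistic) can be sketched in plain Python. This is a generic DerSimonian-Laird illustration, not the authors' STATA code, and the per-trial event counts below are hypothetical.

```python
# Sketch: DerSimonian-Laird random-effects pooling of relative risks with
# the I^2 heterogeneity statistic. Event counts are hypothetical.
import math

# (events_CE, n_CE, events_CA, n_CA) per trial -- illustrative numbers only
trials = [(10, 300, 18, 310), (7, 250, 12, 240), (20, 600, 28, 590)]

log_rr, var = [], []
for e1, n1, e2, n2 in trials:
    log_rr.append(math.log((e1 / n1) / (e2 / n2)))
    # approximate variance of log RR for binomial counts
    var.append(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)

w = [1 / v for v in var]                      # fixed-effect weights
fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
Q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))
df = len(trials) - 1
tau2 = max(0.0, (Q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

w_re = [1 / (v + tau2) for v in var]          # random-effects weights
pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled RR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f}), I^2 {I2:.0f}%")
```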
Results
Of the 19 trials retrieved for detailed assessment, 10 were excluded because they lacked data on stroke, reported on the same study population [17], had a small sample size, or had been stopped early. The final analysis included nine randomized controlled trials [18-26] consisting of 7163 patients with carotid artery stenosis (Fig. 1). These trials compared CE with CA, with stroke reported as one of the endpoints. Table 1 summarizes the characteristics of these trials and the important baseline information of the included 7163 patients. Of the nine trials, two were performed in the USA [18,20], four in European countries [19,21,22,24], one [23] in Germany, Austria, and Switzerland, one [25] in the USA and Canada, and one [26] in Europe, Australia, and Canada. The number of patients ranged from 87 to 2502. The percentage of previous cases with cardiovascular disease ranged from 11.9 to 80.7 %. The duration of follow-up ranged from 0.3 to 5.4 years. The inclusion criteria were restricted to randomized controlled trials with more than 50 patients to ensure that high-quality literature was included in the study. Although the included trials scarcely reported on the key indicators of trial quality, the quality of the included trials was also evaluated according to the predefined criteria using the Jadad score [11]. Overall, five [20-23,25] of the included trials scored 4, two trials [19,26] scored 3, and the remaining two trials [18,24] scored lower.

Data on the effect of CE on 30-day major vascular events were available from 7 trials, which included 6911 patients and reported 424 major vascular events. Figure 2 shows the effect of CE on 30-day major vascular events compared with CA. The pooled RR showed a 22 % reduction in 30-day major vascular events, but with no evidence showing that CE protected against the risk of vascular events (RR, 0.78; 95 % CI: 0.57-1.06; P = 0.11). Some evidence of heterogeneity across the included studies was found. A sensitivity analysis indicated that CE was associated with a reduction in the risk of 30-day major vascular events, which was decreased by 28 % (RR, 0.72; 95 % CI: 0.54-0.94; P = 0.02, Fig. 2) when excluding the SAPPHIRE trial [18]. This trial specifically added an embolic protection device to carotid artery stenting, which was more efficient in preventing major vascular events. Similarly, no evidence was found to show that CE protected against 1-year/within 1-year major vascular events (RR, 0.69; 95 % CI: 0.40-1.18; P = 0.18, Fig. 2). Substantial heterogeneity was observed in the magnitude of the effect across the included trials, according to a sensitivity analysis. It was concluded that CE was associated with a reduction in the risk of 1-year/within 1-year major vascular events, which was decreased by 44 % (RR, 0.56; 95 % CI: 0.42-0.75; P < 0.001, Fig. 2) when excluding the SAPPHIRE trial [27]. Finally, the pooled analysis showed no significant differences in the influence of CE and CA on long-term (more than 1-year) major vascular events (RR, 1.00; 95 % CI: 0.87-1.14; P = 0.95, Fig. 2).

Pooled data on 30-day stroke showed that CE reduced the risk of short-term stroke by 35 % compared with CA (RR, 0.65; 95 % CI: 0.47-0.89; P = 0.007, Fig. 3). Despite some evidence of heterogeneity across the included studies, a sensitivity analysis indicated that the results were not affected by sequential exclusion of any particular trial from the pooled analysis. Furthermore, although CE reduced the risk of 1-year/within 1-year stroke by 36 %, this was not associated with a statistically significant difference (RR, 0.64; 95 % CI: 0.39-1.04; Fig. 3).
A sensitivity analysis indicated that CE was associated with a reduction in the risk of 1-year/within 1-year stroke, which was decreased by 48 % (RR, 0.52; 95 % CI: 0.35-0.76; P = 0.001, Fig. 3) when excluding the SAPPHIRE trial [27]. Finally, when patients received CE, the risk of long-term stroke was significantly reduced by 22 % compared with CA (RR, 0.78; 95 % CI: 0.66-0.93; P = 0.006, Fig. 3).
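The sensitivity analysis applied here and above is a leave-one-out procedure: the effect is re-pooled after excluding each trial in turn. A minimal sketch follows, using simple fixed-effect pooling for brevity and the same hypothetical event counts as in the earlier sketch.

```python
# Sketch of leave-one-out sensitivity analysis: re-pool the log RR after
# excluding each trial in turn. Fixed-effect pooling for brevity;
# the trial data are the same hypothetical counts used earlier.
import math

def pooled_log_rr(trials):
    ys = [math.log((e1 / n1) / (e2 / n2)) for e1, n1, e2, n2 in trials]
    ws = [1 / (1 / e1 - 1 / n1 + 1 / e2 - 1 / n2) for e1, n1, e2, n2 in trials]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

trials = [(10, 300, 18, 310), (7, 250, 12, 240), (20, 600, 28, 590)]
for i in range(len(trials)):
    subset = trials[:i] + trials[i + 1:]
    print(f"excluding trial {i + 1}: pooled RR "
          f"{math.exp(pooled_log_rr(subset)):.2f}")
```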
The effect of CE on the risk of 30-day myocardial infarction was reported in 5 trials, which included 5509 patients and recorded 70 events of myocardial infarction. Overall, it was noted that CE increased the risk of 30-day myocardial infarction by 114 % compared with CA (RR, 2.14; 95 % CI: 1.30-3.53; P = 0.003; Fig. 5). Only three trials provided data on 1-year/within 1-year myocardial infarction. It was noted that CE increased the risk of 1-year/within 1-year myocardial infarction by 104 %, but this was not associated with a statistically significant difference (RR, 2.04; 95 % CI: 0.90-4.61; P = 0.09, Fig. 5). Furthermore, only the SAPPHIRE trial [27] provided data on long-term myocardial infarction. No effect of CE on the risk of long-term myocardial infarction events was observed (RR, 1.56; 95 % CI: 0.69-3.49; P = 0.28).
The Egger test [16] was used to check for potential publication bias. It showed no evidence of publication bias for the outcomes of 30-day major vascular events (P value for Egger test, 0.889), 30-day stroke (P value for Egger test, 0.902), and 30-day myocardial infarction (P value for Egger test, 0.376). However, evidence was found of publication bias for 30-day mortality (P value for Egger test, 0.025). The conclusions were not changed after adjustment for publication bias using the trim-and-fill method [28].

Fig. 3 Effect of carotid endarterectomy on the risk of stroke compared with carotid angioplasty
Discussion
A direct relationship was observed between the degree of carotid artery stenosis and the risk of stroke. Although CE has been considered the gold standard for the treatment of carotid stenosis for decades, evidence from large-scale randomized controlled trials [6] has shown that CA has emerged as an alternative therapy for this common disorder. The results of this meta-analysis showed that CE reduced the risk of 30-day stroke and long-term stroke. However, it also significantly increased the risk of myocardial infarction compared with CA. Furthermore, sensitivity analyses suggested that CE might influence the risk of 30-day and 1-year/within 1-year major vascular events.
The SAPPHIRE trials [18,27] suggested that CA was not inferior to CE in preventing the risk of stroke, whether short term or long term. The present results were inconsistent with these large-scale randomized controlled trials, probably because this trial specifically added an embolic protection device to carotid artery stenting, which might have contributed more efficacy in preventing the risk of stroke. Furthermore, the EVA-3S trials [21,29] indicated that in patients with symptomatic carotid stenosis of 60 % or more, the rates of death and stroke at 1 and 6 months were lower with CE than with CA. The SPACE trials [23,30] failed to prove the noninferiority of CA compared with CE in terms of the periprocedural complication rate. Furthermore, they suggested a similar effect on the risk of ipsilateral ischemic strokes between CE and CA. The present study found that benefits could be achieved when patients with carotid artery stenosis underwent CE. However, it was also noted that CE significantly increased the risk of short-term myocardial infarction. A significant effect on the risk of 1-year/within 1-year and long-term myocardial infarction was not observed, probably because fewer trials provided data for these outcomes.

Fig. 4 Effect of carotid endarterectomy on the risk of mortality compared with carotid angioplasty
CE might play an important role in mortality, although a significant difference was not observed. The reason for this could be that CE also significantly increased the risk of myocardial infarction, translating into an increased risk of life-threatening events. The CAVATAS trial [26,31] indicated that more patients had stroke during long-term follow-up in the endovascular group than in the surgical group. However, the rate of ipsilateral non-perioperative stroke was low in both groups, with no differences in the stroke outcome measures. This conclusion is in accordance with the findings of the present meta-analysis. A strength of the present study is that inclusion was restricted to randomized controlled trials, with the aim of providing the best evidence for a causal relationship.
A previous meta-analysis [32] illustrated that carotid artery stenting was inferior to CE with regard to the incidence of stroke or death for periprocedural outcomes, especially in symptomatic patients. Furthermore, it also suggested that carotid artery stenting was associated with a lower incidence of myocardial infarction. The present study also confirmed that patients who underwent CE had a high risk of myocardial infarction. However, it also indicated that CE was associated with a statistically significant reduction in the risk in stroke.
The major limitation of this study was the inherent assumptions made for the meta-analysis. The analysis used pooled data, whether from published papers or provided by individual authors. Individual patient data and original data were not available, which precluded more detailed analyses and more comprehensive results. Additionally, insufficient data were available on the detailed effects of CE on the risk of different types of stroke. Furthermore, during the planning stages, the intention was to perform subgroup analyses on the basis of other confounders that might affect the treatment effect. However, the results of subgroup analyses on the basis of follow-up duration might be unreliable because of the smaller cohorts included. This study attempted to provide a comprehensive review of the comparison between the efficacy of CE and CA.
Conclusions
In conclusion, CE could reduce the risk of stroke (whether short term or long term) and might influence the risk of major vascular events compared with CA. However, it also increased the risk of 30-day myocardial infarction. The present study might guide appropriate judgments about the treatment approach. It also provided evidence to justify general guidelines for patients with carotid artery stenosis. It is suggested that the following factors be improved in future research: (1) adverse events in clinical trials should be recorded and reported in a standardized manner, and (2) myocardial infarction events should be taken into consideration for patients who undergo CE. | 2018-04-03T00:05:43.384Z | 2016-09-08T00:00:00.000 | {
"year": 2016,
"sha1": "e3ba892b767e6affa5c87db6b63f9d20239a7046",
"oa_license": "CCBY",
"oa_url": "https://cardiothoracicsurgery.biomedcentral.com/track/pdf/10.1186/s13019-016-0532-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3ba892b767e6affa5c87db6b63f9d20239a7046",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258795098 | pes2o/s2orc | v3-fos-license | Immunomodulatory Macrophages Enable E-MNC Therapy for Radiation-Induced Salivary Gland Hypofunction
A newly developed therapy using effective-mononuclear cells (E-MNCs) is reportedly effective against radiation-damaged salivary glands (SGs) due to anti-inflammatory and revascularization effects. However, the cellular working mechanism of E-MNC therapy in SGs remains to be elucidated. In this study, E-MNCs were induced from peripheral blood mononuclear cells (PBMNCs) by culture for 5–7 days in medium supplemented with five specific recombinant proteins (5G-culture). We analyzed the anti-inflammatory characteristics of macrophage fraction of E-MNCs using a co-culture model with CD3/CD28-stimulated PBMNCs. To test therapeutic efficacy in vivo, either E-MNCs or E-MNCs depleted of CD11b-positive cells were transplanted intraglandularly into mice with radiation-damaged SGs. Following transplantation, SG function recovery and immunohistochemical analyses of harvested SGs were assessed to determine if CD11b-positive macrophages contributed to tissue regeneration. The results indicated that CD11b/CD206-positive (M2-like) macrophages were specifically induced in E-MNCs during 5G-culture, and Msr1- and galectin3-positive cells (immunomodulatory macrophages) were predominant. CD11b-positive fraction of E-MNCs significantly inhibited the expression of inflammation-related genes in CD3/CD28-stimulated PBMNCs. Transplanted E-MNCs exhibited a therapeutic effect on saliva secretion and reduced tissue fibrosis in radiation-damaged SGs, whereas E-MNCs depleted of CD11b-positive cells and radiated controls did not. Immunohistochemical analyses revealed HMGB1 phagocytosis and IGF1 secretion by CD11b/Msr1-positive macrophages from both transplanted E-MNCs and host M2-macrophages. Thus, the anti-inflammatory and tissue-regenerative effects observed in E-MNC therapy against radiation-damaged SGs can be partly explained by the immunomodulatory effect of M2-dominant macrophage fraction.
Introduction
Treatment of head and neck cancers with radiation therapy, including in combination with chemotherapy and/or surgery, causes a loss of fluid-producing acinar cells, leading to progressive tissue atrophy. This adverse effect irreversibly damages the function of the salivary glands (SGs) [1-3]. Affected patients typically suffer considerable morbidity, as SG dysfunction results in a serious condition of xerostomia [1-4]. Xerostomia not only causes swallowing disorders but also promotes dental caries and aggravates periodontal disease.

As a prerequisite to clarifying the cellular mechanism of E-MNC treatment, this study focused on the CD11b-positive cell fraction among E-MNCs, which includes both M1- and the specifically induced M2-macrophage-like cells, to investigate how this cell fraction contributes to the amelioration of damage in irradiated SGs. This study not only reveals the mechanism of E-MNC therapy but may also uncover a novel direction for cell therapies to treat atrophic diseases of the SGs.
Mice
Animal experiments in this study were carried out according to protocols approved by the Nagasaki University Animal Care and Use Committee (approval numbers: 1605271307, 1610051411). As described in our previous study [8], female and male C57BL/6JJcl mice (eight weeks old) were used as recipients and donors, and C57BL/6-Tg (CAG-EGFP) male mice were used as donors in some experiments to trace the transplanted E-MNCs.
Methods for 5G-Culture of PBMNCs
Mice Cells
To prepare E-MNCs, PBMNCs were cultured using the 5G-culture system, which was described in our previous work [8] (Figure 1). In brief, isolated PBMNCs were plated into wells of Primaria plates (BD Biosciences, San Jose, CA, USA) under specific conditions using Stem Line II medium (Sigma Aldrich, St. Louis, MO, USA) and 5 mouse recombinant proteins [8] (Supplemental Table S1). After 5 days, when CD11b/CD206-positive cells were maximally enriched, E-MNCs were collected for the following experiments.
Human Cells
Experiments were performed in compliance with the Helsinki Declaration. Blood samples were collected with permission from the Nagasaki University Ethics Committee (17082131). Written informed consent was obtained from donors. Healthy volunteer donors were four males aged 28 to 39 years. Human E-MNCs were generated using a specific 5G-culture method established by CellAxia Inc. (Tokyo, Japan), as described in our previous article [9]. Briefly, after density gradient centrifugation, separated PBMNCs were seeded to wells of Primaria plates and cultured in 5G-culture medium with 5 human recombinant proteins (Table 1). After 6-7 days, when CD11b/CD206-positive cells were maximally enriched, E-MNCs were harvested.
Analysis of Gene Expression in PBMNCs, E-MNCs, and Submandibular Glands
The mRNA expression of samples was explored by real-time quantitative PCR as described in our previous study [8]. CD206, IL-1β, IL-6, IL-10, IGF1, TGF-β, TLR2, and TLR4 gene expressions were determined in mouse submandibular glands transplanted with mouse E-MNCs or CD11b(−) cells (n = 3/group at 10 days and 2 weeks post-IR). The mRNA expression of IL-1β, IFN-γ, and TNF-α in human PBMNCs stimulated with CD3/CD28 antibodies was also analyzed. Primer sets for mouse and human genes are shown in Supplemental Table S3.
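Relative expression from qPCR of this kind is commonly quantified with the 2^(-ΔΔCt) method against a reference (housekeeping) gene. The paper does not state its exact quantification method, so the sketch below is a generic illustration, and the gene choice and Ct values are hypothetical.

```python
# Sketch of the 2^(-ddCt) method for relative qPCR quantification. This is
# a generic illustration (the source does not state its quantification
# method); Ct values are hypothetical.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical: IL-6 vs a housekeeping gene, treated vs IR-only glands
print(round(fold_change(26.0, 18.0, 24.5, 18.0), 2))  # -> 0.35 (downregulated)
```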
Anti-Inflammatory Effects of Macrophages in E-MNCs under Co-Culture Conditions with CD3/CD28-Stimulated PBMNCs
Human PBMNCs were suspended in RPMI with 10% FBS and stimulated with CD3 and CD28 antibodies (Invitrogen, Waltham, MA, USA) coated on wells of a 24-well plate (1 × 10⁶ cells/600 µL/well) at 37 °C for 1 h (Supplemental Figure S1). Prior to cell seeding, CD3 (15 ng/mL) and CD28 (5 ng/mL) antibodies were diluted, added at 250 µL/well to a 24-well plate, and incubated overnight. To evaluate the anti-inflammatory effects of the E-MNC, CD11b(+), and CD11b(−) cell fractions on stimulated PBMNCs, the cells suspended in RPMI with 10% FBS were co-cultured in a Transwell upper chamber (Corning, Corning, NY, USA) for 1 h at 37 °C. Finally, total RNA of the PBMNCs was extracted.
Phagocytosis Assay
Phagocytosis assays using fluorescent beads were performed according to the manufacturer's instructions (Cayman Chemical latex beads; Cayman Chemical, Ann Arbor, MI, USA). Human E-MNCs suspended in RPMI were seeded into the wells of a 24-well plate at a density of 1 × 10⁵ cells/500 µL/well. E-MNCs were then incubated with fluorescent beads for 1, 2, or 3 h, after which phagocytosis of latex bead-rabbit IgG-FITC complexes was measured by flow cytometry. After fixation, nuclei of E-MNCs in the 1 h group were counterstained with DAPI-containing fluorescence mounting medium.
Irradiation and Time Course of Transplantation
Irradiation of mouse SGs was performed as previously described in our study [8]. Briefly, female mice were anesthetized and restrained in a specific container, and then given a single dose of 12 Gy of gamma rays to the head and neck area. This dose was determined based on our preliminary study examining 10, 12, 15, and 18 Gy of gamma-ray irradiation, which aimed to induce more than a 50% reduction of salivary secretion without compromising health (as indicated by decreasing body weight). Doses of 15 and 18 Gy were too harsh to maintain health, while a dose of 10 Gy reduced salivary flow by only 50%. A dose of 12 Gy induced approximately a 77% reduction of salivary flow without compromising health up to 5 weeks post-IR, and mice survived more than 6 months post-IR.
Salivary Flow Rate after Irradiation
To evaluate the saliva secretion function (salivary flow rate; SFR) of SGs, saliva was collected as described previously [8]. Saliva volume was measured gravimetrically (mg of collected saliva per minute over a 10-min collection, normalized per 100 g of body weight). SFR was determined at 0 (non-IR), 1, 2, 4, 5, 9, and 13 weeks post-IR. At each time point, body weight was also measured.
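The gravimetric normalization just described reduces to a one-line calculation; a minimal sketch follows, with hypothetical measurement values.

```python
# Sketch of the gravimetric SFR normalization described above: collected
# saliva weight (mg) per minute over a 10-min collection, scaled per
# 100 g of body weight. Measurement values are hypothetical.

def salivary_flow_rate(saliva_mg: float, minutes: float,
                       body_weight_g: float) -> float:
    """mg of saliva per minute per 100 g body weight."""
    return (saliva_mg / minutes) / (body_weight_g / 100.0)

# Hypothetical mouse: 45 mg saliva in 10 min at 22 g body weight
print(round(salivary_flow_rate(45.0, 10.0, 22.0), 2))  # -> 20.45
```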
Histological Analysis
After harvesting the submandibular glands, sections of tissue samples were prepared and observed after staining with hematoxylin and eosin (HE) and Masson's trichrome, as described in our previous work [8]. Fibrosis was assessed under a microscope at ×200 magnification by analyzing three random fields per section, in 5 sections per specimen and 3 specimens per group, in the IR, E-MNC, and CD11b(−) groups at 9 and 13 weeks post-IR.
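Fibrosis assessment of this kind is often quantified as the stained-area fraction per field. The paper does not detail its image analysis, so the sketch below is an illustrative assumption: it counts pixels where the blue (collagen) channel of a Masson's trichrome image clearly dominates red.

```python
# Sketch of stained-area quantification for Masson's trichrome fields:
# fraction of pixels where the blue (collagen) channel dominates red.
# The thresholding rule is an illustrative assumption, not the paper's
# method. Requires numpy; the test image is synthetic.
import numpy as np

def fibrosis_fraction(rgb: np.ndarray, margin: int = 30) -> float:
    """rgb: HxWx3 uint8 image; returns fraction of collagen-like pixels."""
    r = rgb[..., 0].astype(int)
    b = rgb[..., 2].astype(int)
    collagen = b > r + margin   # blue-dominant pixels
    return float(collagen.mean())

# Synthetic 100x100 field: left half "blue" (fibrotic), right half "red"
field = np.zeros((100, 100, 3), dtype=np.uint8)
field[:, :50, 2] = 200   # blue half
field[:, 50:, 0] = 200   # red half
print(fibrosis_fraction(field))  # -> 0.5
```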
Microarray Analysis
Microarray analysis was carried out using the DAVID Bioinformatics Database functional-annotation tools (http://david.abcc.ncifcrf.gov/, accessed on 1 September 2020), specifically the functional annotation clustering analysis. The analysis assessed differences in characteristics between mouse PBMNCs and E-MNCs, and between submandibular gland samples from the E-MNC group and the IR group at 2 weeks post-IR.
Statistics
Experimental values are presented as the mean ± standard error. Differences between means were analyzed by one-way analysis of variance, followed by Tukey's multiple-comparison test to identify significant differences between groups. p values < 0.05 were considered statistically significant.
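The one-way ANOVA plus Tukey comparison described above can be reproduced in a few lines with SciPy (tukey_hsd requires SciPy 1.8 or later). The three groups of measurements below are hypothetical SFR-like values, not data from the study.

```python
# Sketch of the statistics described above: one-way ANOVA followed by
# Tukey's HSD for pairwise comparisons. Requires SciPy >= 1.8; the group
# measurements are hypothetical.
from scipy import stats

ctrl = [21.0, 19.5, 22.3, 20.8]
ir = [5.1, 6.0, 4.7, 5.5]
emnc = [15.2, 16.8, 14.9, 17.1]

f_stat, p_anova = stats.f_oneway(ctrl, ir, emnc)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")

tukey = stats.tukey_hsd(ctrl, ir, emnc)  # all pairwise group comparisons
print(tukey)
```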
Transplantation of E-MNCs or CD11b-Negative E-MNCs into a Prevention Model
To examine the in vivo efficacy of CD11b-positive cells among E-MNCs, E-MNCs depleted of CD11b-positive cells (CD11b[−] cells) were prepared, and flow cytometric analysis confirmed that the macrophage fraction (CD11b+ cells), especially M2-macrophages (CD11b+/CD206+ cells), was nearly depleted from this fraction (Figure 4A). Saliva secretion decreased markedly for 1 week after IR and then continued to decline up to 9 weeks post-IR (IR group) (Figure 4B). In contrast, when E-MNCs were transplanted at 1 week post-IR (E-MNC group), saliva output gradually recovered to ~80% of normal mouse (Ctrl group) levels up to 13 weeks. However, when depleted of CD11b-positive cells (CD11b[−] group), the efficacy of E-MNCs was reduced to a level ~50% lower than that of nondepleted E-MNCs at 5, 9, and 13 weeks post-IR (4, 8, and 12 weeks post-transplantation), respectively (Figure 4B). Meanwhile, the body weight of mice in the E-MNC group increased gently to the same level as that of mice in the Ctrl group, whereas mice in the CD11b(−) group gained only a slight amount of body weight, similar to mice in the IR group (Supplemental Figure S5A). Finally, EGF and HGF concentrations in harvested saliva were significantly elevated in mice treated with E-MNCs at 9 and 13 weeks post-IR but not in mice treated with E-MNCs depleted of CD11b-positive cells (Figure 4C).
Fibrosis in Submandibular Glands after Irradiation
At 9 and 13 weeks, HE and Masson's trichrome staining revealed fibrosis in the submandibular glands of the IR group (Figure 5A-D). Meanwhile, the fibrosis area in E-MNC-treated submandibular glands was perceptibly diminished compared to the IR group (0.08-fold at 9 weeks and 0.39-fold at 13 weeks post-IR) (Figure 5B,D). In contrast, the suppression of fibrosis was limited when CD11b-positive cells were depleted from E-MNCs (CD11b[−] group) (0.61-fold at 9 weeks and 0.83-fold at 13 weeks compared to the IR group) (Figure 5B,D).
Gene Expression in Submandibular Glands at 10 Days and 2 Weeks of IR
At 2 weeks of IR (1 week post-transplantation), saliva output had already begun to recover, but only in the E-MNC group (Figure 4B). Therefore, as E-MNCs appeared to work effectively in damaged tissues from the initial stage of transplantation, SG specimens at 10 days and 2 weeks post-IR (3 days and 1 week post-transplantation) were analyzed. Gene expression analyses showed that E-MNCs and E-MNCs depleted of CD11b-positive cells (CD11b[−] cells) downregulated the proinflammatory gene expressions (IL-1β, IL-6) at 10 days, but no effect on Toll-like receptor (TLR) 2 and 4 mRNA expressions was observed at this stage (Figure 6A). However, while E-MNCs continued to inhibit IL-1β and IL-6 gene expressions at 2 weeks and also suppressed TLR2 and TLR4 mRNA expression, CD11b(−) cells did not inhibit mRNA expression of these genes, except for IL-6 (Figure 6B). IL-10 and CD206 mRNAs, which are associated with polarization of M2-macrophages, were upregulated in E-MNC-treated SGs at 10 days and 2 weeks of IR but were inhibited in SGs treated with CD11b(−) cells (Figure 6A,B). Consistent with these phenomena, E-MNC treatment increased IGF1 gene expression over time but downregulated the mRNA expression of TGF-β, which is associated with fibrotic activity, particularly after 2 weeks post-IR (Figure 6A,B). Microarray analyses of SGs at 2 weeks post-IR indicated that the expressions of proinflammatory, matrix metalloprotease, and apoptosis-associated genes decreased overall, and upregulation of angiogenic and tissue regenerative genes was induced after E-MNC treatment compared with no cell transplantation (Supplemental Figure S6A). Among these genes, MMP9 mRNA expression was significantly suppressed by E-MNC treatment (0.15 ± 0.0076-fold compared with the Ctrl group) but not by CD11b depletion (Supplemental Figure S4B). Expression of mRNAs encoding NGF and Car3, which are involved in SG development/maintenance, was increased and maintained in E-MNC-treated submandibular glands versus IR or IR+CD11b(−) mice (NGF: 1.56 ± 0.16-fold over the IR group, 1.79 ± 0.16-fold over CD11b[−] cells; Car3: 9.56 ± 0.13-fold over the IR group, 3.44 ± 0.13-fold over CD11b[−] cells) (Supplemental Figure S6B).
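The fold-change values quoted above are typical outputs of relative qPCR quantification. As an illustration only (the quantification scheme and reference gene are not specified here), a minimal sketch of the widely used 2^-ΔΔCt method with hypothetical Ct values:

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    Each argument is a list of replicate Ct values for the target gene or
    the reference (housekeeping) gene in the treated or control condition.
    """
    dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    return 2.0 ** (-(dct_treated - dct_control))

# Hypothetical Ct values for a suppressed gene vs a housekeeping gene:
fc = fold_change_ddct([28.1, 28.3, 28.0], [18.2, 18.1, 18.3],
                      [25.4, 25.5, 25.3], [18.2, 18.3, 18.1])
print(f"fold change vs control: {fc:.2f}")  # ~0.15, i.e., strong suppression
```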
Immunohistological Observations in Submandibular Glands at 10 Days and 2 Weeks of IR
Localization and quantification of specific subpopulations of macrophages in damaged tissue were visualized by immunohistochemistry. At 10 days post-IR, while there were no differences between groups in the number of F4/80 (a pan-macrophage marker)-positive cells, E-MNC-treated mice had a higher number of CD206/F4/80 (M2-macrophage)-positive cells than mice in the other groups (Figure 7A,C). In particular, CD206-expressing host cells were seen at the periphery of EGFP-labeled E-MNCs containing CD11b-positive cells (Supplemental Figure S7A). Subsequently, at 2 weeks post-IR, E-MNC-treated mice still maintained a number of CD206-positive cells, whereas IR or IR+CD11b(−) mice largely exhibited a decrease in positive cells from 10 days post-IR (Figure 7B,C). In investigating the presence of phagocytic M2-macrophages that function in clearing DAMPs from damaged tissues, increased numbers of Msr1 (a scavenger receptor of M2-macrophages)-positive cells were found at 10 days and 2 weeks of IR, but only in E-MNC-treated mice (Figure 8A-E). In addition, some Msr1-positive E-MNCs (EGFP-expressing cells) seemed to have internalized HMGB1 (as a representative DAMP) in E-MNC-treated tissues (Figure 8B,D). In contrast, abundant extracellular HMGB1 was detected in damaged tissues of IR or IR+CD11b(−) mice (Figure 8A,C). Indeed, at 5 weeks, the HMGB1 concentration in E-MNC-treated glands was significantly reduced to levels similar to those of normal control glands compared with levels in IR or IR+CD11b(−) mice (0.31-fold and 0.13-fold, respectively) (Figure 8F).
An assessment of tissue-regenerative activity revealed that some Msr1-positive E-MNCs (EGFP-expressing cells) expressed IGF1 in E-MNC-injected specimens at 10 days post-IR, and scattered IGF1-expressing host cells were then observed at the periphery of Msr1-positive E-MNCs after 2 weeks (Figure 9A,B). However, few IGF1- and Msr1-positive host cells were detected during this period in specimens transplanted with CD11b(−) cells. Quantitative analyses revealed significant numbers of IGF-1-expressing cells in both the E-MNC and CD11b(−) groups, but the number in the E-MNC-treated group was higher than in the CD11b(−) group (approximately two- to fivefold) (Figure 9C). Consistent with these observations, c-Kit and Sca-1 double-expressing cells (as ductal stem/progenitor cells) were conspicuously recognized in duct cells of E-MNC specimens at 2 weeks post-IR (Figure 10A). Double-positive cells in the E-MNC-injected specimens were significantly higher in number than in the other groups, while those cells decreased in specimens of the IR and CD11b(−) groups compared to the Ctrl group (Figure 10B).
Transplantation of E-MNCs or CD11b-Negative E-MNCs into a Mouse Model with Established Radiogenic Atrophic Salivary Glands
To further examine the efficacy of CD11b-positive cells among E-MNCs, E-MNCs (at two injected cell doses) and CD11b(−) cells were transplanted into damaged SGs at 5 weeks post-IR; this time point was chosen because hyposecretion of saliva induced by irradiation was established in mice (IR group) at this time point (Figure 11A). At 9 weeks post-IR (4 weeks post-transplantation), the 2 × 10^5 and 1 × 10^6 E-MNC groups recovered saliva output to levels of approximately 55% and 100% of that of normal mice (Ctrl group), respectively, whereas saliva output in nontransplanted mice (IR group) declined to a level approximately 17% of that of normal mice (Figure 11A). However, injection of 1 × 10^6 E-MNCs maintained saliva output at a level of 78.25% of that of normal mice at 13 weeks post-IR (Figure 11A). In contrast, when depleted of CD11b-positive cells (11b[−] group), the injection of 1 × 10^6 cells yielded saliva output at a level of only ~25% of that of normal mice at 9 and 13 weeks post-IR. Overall, E-MNCs containing CD11b-positive cells were effective in restoring saliva production in damaged SGs. In terms of body weight, that of mice in the E-MNC group increased gradually but did not reach the level of Ctrl group mice, whereas mice in the IR and CD11b(−) groups exhibited stable or slightly lower body weight from 5 to 13 weeks post-IR (Supplemental Figure S7B).
Fibrosis in submandibular glands was observed in samples of the IR group stained with HE and Masson's trichrome at 9 and 13 weeks after IR (Figure 11B-E). However, the fibrosis area in submandibular glands injected with 1 × 10^6 E-MNCs was clearly lower compared to that of nontreated glands (IR group) (0.18-fold at 9 weeks and 0.56-fold at 13 weeks post-IR) (Figure 11B,D). In contrast, the suppression of fibrosis was limited when CD11b-positive cells were depleted from E-MNCs (CD11b[−] group) (0.46-fold at 9 weeks and 0.85-fold at 13 weeks when compared to the IR group) (Figure 11C,E).
Discussion
This study showed that CD11b-positive cells play a vital role in determining the efficacy of E-MNC therapy against radiation-damaged SGs. The positive outcomes were as follows: (1) a CD11b-positive cell fraction composed of M1- and specifically induced M2-macrophages exhibited phagocytic and immunomodulatory activities in culture; (2) when depleted of this cell fraction, E-MNCs did not show sufficient efficacy in preventing fibrosis and stimulating recovery of saliva secretion in damaged glands; (3) CD11b-positive cells among E-MNCs and host M2-macrophages worked synergistically with regard to HMGB1 clearance and IGF1 secretion, and these actions appear to induce ductal stem/progenitor activation. These outcomes indicate that an M2-dominant macrophage fraction is an essential component of E-MNC-based cell therapy.
Regarding the phenotype and function of CD11b-positive cells among E-MNCs, CD206-positive M2-like macrophages were specifically induced in both human and mouse PBMNCs via 5G-culture. This culture method significantly reduced the M1/M2 ratio in the CD11b-positive cell fraction of PBMNCs (more than 50- and 10-fold lower in human and mouse E-MNCs, respectively). Particularly, in human E-MNCs, we found that CD11b-positive cells, which contained an abundance of M2-like macrophages (approximately 80% of CD11b-positive cells), were highly enriched (approximately 25% of E-MNCs). Indeed, similar to human E-MNCs, the CD11b-positive cells substantially inhibited the proinflammatory gene (IFN-γ, IL-1β, TNF-α) expressions in PBMNCs stimulated by T cell activation molecules in co-culture, whereas CD11b-negative cells among E-MNCs were unable to reduce the expression of these factors. In mouse E-MNCs, the abundance ratio of the M2-dominant CD11b-positive cell fraction was lower than that in human E-MNCs, but this cell fraction contained an abundance of Msr1- or galectin3-positive cells that could exhibit high phagocytosis ability associated with M2-macrophage polarization [29,33,34]. Meanwhile, Th2 cells were predominantly induced in T cell subsets after 5G-culture (from 0.1% in PBMNCs to 17% in CD3/CD4-positive E-MNCs), and such T cell differentiation over 5 days of culture appears to support the promotion of M2-macrophage polarization in CD11b-positive cells [30,35]. Indeed, expression of IL-10 and VEGF mRNA was upregulated in E-MNCs, whereas that of Th1-associated genes (IFN-γ and IL-1β) was downregulated. We then recognized that CD11b-positive cells produce IGF1, which is a primary factor in the resolution of inflammation and polarization of M2-macrophages [36,37]. Moreover, M2-macrophages were recently shown to be a considerable source of IGF1 in immune metabolism and tissue regeneration [38]. These findings strongly suggest that CD11b-positive cells among E-MNCs exhibit high phagocytic and immunomodulatory activities in sterile inflamed tissues after radiation therapy. A certain polarization state of M2-macrophages in this CD11b-positive cell fraction, which in humans comprises approximately 80%, may govern the resolution of inflammation and stimulation of tissue regeneration [28,30].
With regard to the efficacy of E-MNC transplantation in treating radiation-damaged SGs, E-MNCs promoted recovery of saliva secretion and prevented the development of tissue fibrosis when transplanted not only in the early stage after irradiation but also in the late stage (after damage was established). E-MNCs depleted of CD11b-positive cells could not sufficiently rescue SG function. In particular, when transplanted in the late stage, the therapeutic effect was markedly reduced. As mentioned above, the abundance ratio of CD11b-positive cells was approximately 10% among mouse E-MNCs. Hence, the isolation and transplantation of only CD11b-positive cells has been technically challenging. However, once this cell fraction was depleted from E-MNCs, the efficacy of E-MNC transplantation was markedly impaired. Therefore, the CD11b-positive cell fraction appears to be essential for the therapeutic efficacy of E-MNC treatment. Indeed, E-MNCs functioned to maintain EGF and HGF concentrations in saliva at high levels for a long period and suppressed the development of tissue fibrosis, whereas E-MNCs depleted of CD11b-positive cells did not. EGF stimulates the proliferation of epithelial cells, and treating radiation-damaged SGs with MSCs or with growth factors such as keratinocyte growth factor 1 (KGF1) leads to increased EGF secretion by acinar cells and higher resulting concentrations in saliva [13,39,40]. Likewise, HGF exerts anti-inflammatory and antifibrotic functions and was shown to protect SG cells from radiation-induced cell death [41]. We have shown that E-MNCs induce the proliferation of acinar and ductal cells during the regenerative process in radiation-damaged SGs [8]. Therefore, CD11b-positive macrophages among E-MNCs appear to affect the release of these growth factors from regenerated salivary epithelial cells.
To better understand the cellular mechanism of the in vivo efficacy of E-MNC treatment, this study explored how CD11b-positive macrophages work in damaged SGs. In particular, as we previously demonstrated that E-MNCs can be detected in damaged SG tissues up to 3 weeks after transplantation [8], we first analyzed the behavior of E-MNCs in the early stage of transplantation. We found that F4/80/CD206-positive cells (M2-macrophages) were significantly induced in E-MNC-treated SGs from 3 to 7 days post-transplantation. In contrast, these cells decreased over time in SGs treated with E-MNCs depleted of CD11b-positive cells. Interestingly, the total number of F4/80-expressing cells (M1- and M2-macrophages) did not change in any group. Therefore, the abundance ratio of M2-macrophages increased robustly in damaged SGs in the early stage after E-MNC treatment. This phenomenon was likely caused by CD11b-positive cells among the E-MNCs, because many host M2-macrophages clustered around donor cell (CD11b-positive cells among E-MNCs)-derived CD206/EGFP-positive cells. As mentioned above, CD11b-positive cells among E-MNCs produce several cytokines, such as IL-10, VEGF, and IGF1. These proteins are associated with polarization of infiltrated monocytes/macrophages to the M2-phenotype at injury sites [30]. Meanwhile, we also found that the expression of HMGB1, a DAMP molecule, was increased in radiation-damaged SGs, and E-MNC treatment effectively decreased HMGB1 expression. HMGB1 was originally identified as a nuclear protein, but it is also known to play an essential role in mediating sterile inflammation when released into the extracellular environment from dead cells as a DAMP [29]. With regard to sterile inflammation, recent studies have reported that radiation-induced innate immunity acts as an immune modulator [22,23,42]. Specifically, radiation leads to HMGB1 cytoplasmic translocation and extracellular release, and the released HMGB1 in turn mediates radiation-induced damage in normal tissues such as those in the lung via the TLR4 pathway [23]. Indeed, we preliminarily confirmed increased expression of TLR4 in ductal epithelial cells of mouse SGs after irradiation, accompanied by the extracellular release of HMGB1. Macrophages (both M1- and M2-macrophages) internalize extracellular DAMPs by binding them with their class A scavenger receptors (such as Msr1) [29]. However, HMGB1 is taken up efficiently by M2-macrophages rather than M1-macrophages. Such scavenger receptors on M2-macrophages primarily function in HMGB1 clearance, whereas M1-macrophages secrete inflammatory cytokines in response to HMGB1 via the TLR4 pathway [29]. Indeed, internalization of HMGB1 by Msr1-positive cells among E-MNCs could be observed from the early stage of E-MNC transplantation. These cells showed the M2-phenotype and induced host Msr1-positive cells in damaged tissues, as indicated by the fact that not only Msr1-positive cells but also IGF1-expressing host cells increased around IGF1/Msr1-positive E-MNCs for up to 7 days post-transplantation. M2-macrophages are an important source of IGF1 production during tissue regeneration [34,38]. Previous studies have reported an aging-related decrease in IGF1 synthesis in SGs, resulting in lower levels of cell proliferation and tissue regeneration in oral tissues, and IGF1 injection prior to irradiation was shown to suppress apoptosis in acinar cells and prevent SG dysfunction in mice [43-45].
Therefore, these activities of the fraction of CD11b-positive cells among E-MNCs appear to induce the proliferation of c-Kit/Sca-1 expressing ductal progenitor cells. Additionally, the upregulated expression of mRNAs associated with SG tissue recovery, such as NGF and Car3, was recognized 7 days post-transplantation. Promotion of SG stem/progenitor cell activation from the early phase of sterile inflammation likely facilitates tissue regeneration without inducing the development of fibrosis.
Conclusions
This study demonstrated that CD11b-positive cells, composed of M1- and specifically induced M2-macrophages, among E-MNCs contribute to suppressing sterile inflammation and promoting tissue regeneration by eliminating extracellular HMGB1 and inducing IGF1 production in radiation-damaged SGs (Figure 12). Overall, the mechanism underlying the effectiveness of E-MNC treatment for radiation damage in SGs can be explained in part by these findings. E-MNCs are a readily available cell source that can be obtained minimally invasively and produced within a week; thus, this therapy for treating radiogenic xerostomia can be easily performed in the clinic. However, the cellular mechanism should be investigated in greater detail for future clinical applications. This study was unable to clarify the detailed function and appropriate abundance of both M1- and M2-macrophages in the CD11b-positive cell fraction of E-MNCs; yet, CD11b-positive cells among human E-MNCs are composed mostly of M2-macrophages. Therefore, a predominant state of polarization toward the M2-phenotype in the macrophage fraction appears to be essential for the function of E-MNC therapy. We are currently carrying out additional experiments to determine how significantly resolving sterile inflammation caused by DAMPs can serve as a therapeutic target for radiation-damaged SGs. In parallel, we are also investigating the intercellular interactions that initiate ductal stem/progenitor cell activation and acinar cell proliferation, required for tissue regeneration, via IGF1 production by donor and host M2-macrophages during the subsidence of sterile inflammation in damaged SGs. Another limitation of the present study is that single-dose irradiation, which does not correspond to actual regimens of radiation therapy, was employed to reveal the exact mechanism of atrophied gland reconstruction by E-MNC treatment. Moreover, in this study, we employed a single transplantation at a fixed dose to the submandibular glands, which may not be sufficient to ensure actual treatment effects such as long-term efficacy. Therefore, future analytical experiments should employ proper models of clinical therapy using fractionated-dose irradiation with or without chemotherapy. Optimal therapeutic conditions for clinical effectiveness, such as the frequency or dose of E-MNC transplantation, should then be determined. This study found that an M2-dominant CD11b-positive macrophage fraction plays an essential role in E-MNC therapy. Our results suggest that host Msr1-positive macrophages are a potential target for newly developed therapeutics to facilitate atrophied SG regeneration.
Figure 12. Schematic diagram of the suggested mechanism of E-MNC treatment. CD11b-positive cells among E-MNCs might contribute to converting the condition of damaged tissue from a proinflammatory to an anti-inflammatory state and promote tissue regeneration by mediating DAMP clearance and IGF1 production in cooperation with host macrophages.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cells12101417/s1, Table S1: Recombinant proteins of mouse 5G culture medium; Table S2-1: Mouse antibodies for flow cytometric analysis; Table S2-2: Human antibodies for flow cytometric analysis; Table S3-1: Mouse primers; Table S3-2: Human primers; Figure S1: Schematic diagram describing the experimental design for co-culture of E-MNCs and T cell-activated PBMNCs; Figure S2: Characteristics of human E-MNCs; Figure S3: Characteristics of human E-MNCs; Figure S4: Microarray analysis of PBMNCs and E-MNCs; Figure S5: Changes in body weight in the prevention mouse model after transplantation; Figure S6: Microarray and qPCR analyses of transplanted specimens at 2 weeks post-IR; Figure S7: Immunohistological observations at 10 days post-IR and changes in body weight after transplantation.
Institutional Review Board Statement:
The study was carried out in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Nagasaki University Graduate School of Biomedical Sciences (17082131). All animal experiments were performed in accordance with the protocols accepted by the Animal Care and Use Committee of Nagasaki University (1605271307 and 1610051411).
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The datasets are available upon reasonable request to the corresponding author. | 2023-05-20T15:17:41.309Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "36a9a2db4d1b54d1243799b26dba79cdf2a54319",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/cells12101417",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "daca6b22a41321f52ddb0c5660814acfabb6a108",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
256832754 | pes2o/s2orc | v3-fos-license | Stress and displacement patterns during orthodontic intervention in the maxilla of patients with cleft palate analyzed by finite element analysis: a systematic review
Objective To review, in the context of what is already known, the evaluation of stress and displacement patterns using finite element analysis in the maxilla of patients with cleft palate after orthodontic intervention. Methods This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA). The protocol for this systematic review was registered with PROSPERO (CRD42020177494). The following databases were screened: Medline (via PubMed), Scopus, Embase, and Web of Science. Results The search identified 31 records. Fifteen articles were retrieved for full-text assessment, and 11 of them were considered eligible for inclusion by 2 authors. Eventually, 11 articles were included in the qualitative analysis. Conclusions Finite element analysis is an appropriate tool for studying and predicting force application points for better controlled expansion in patients with UCLP.
Introduction
Cleft lip and palate (CLP) is known to be the most common congenital disorder in the craniofacial region, with an estimated incidence of approximately 1:1000 live births [1]. Usually, the medical staff involved in the treatment process is organized into a cleft team consisting of many specialists. Orthodontic treatment in patients with cleft palate involves various types of therapies/applications; it can begin shortly after birth with naso-alveolar moulding (NAM) and can be performed until the end of growth (potential orthognathic surgery procedures) [2,3]. The possibility of accurately simulating the movement of the clefted parts of the maxilla with the use of different orthodontic appliances might be an important point in treatment planning.
One of the methods that can predict the outcome of the rehabilitation carried out is the finite element method (FEM) [4]. Finite element analysis (FEA) is a modern computer simulation method that has found application in orthodontics in recent years as a way of simulating the force applied to bone/teeth and predicting the resulting displacement [5,6]. In FEA, virtual models of anatomical structures are created based on patient CT scans. The most significant step is to evaluate the characteristics of the tissues (cartilage, bone, soft tissues, teeth) [5]. In the FEA approach, a single 3D model undergoes simulation to characterize the mechanics of orthodontic treatment, in particular the distribution of stress in TADs and archwires, the distribution of strain and stress in biological tissues (bones, teeth, PDL), the conditions of root resorption, and the directions and size of displacement for positioning of an orthodontic appliance [7]. An important issue in FEA concerns individual variation in response to the applied forces, e.g., different rates, amounts, and directions of canine distal movement.
The approach in FEA orthodontic studies mostly covers the biomechanics of the skull, the assessment of (bio)mechanical loads of the skull, and the mechanism of tooth movement. There are different aspects of using FEA in dentistry, for example, data on displacement, tension during chewing, and the distribution of mechanical forces on craniofacial structures [8]. FEA makes it possible to model and reproduce the anatomy of the craniofacial skeleton, which helps to understand the parameters that influence bone remodelling [9].
FEA aims to simulate conditions under which analysis would not be possible by clinical means. However, this technique has some disadvantages: the complexity of craniofacial structures and their relationships, inconsistent information on tissue characteristics, and the limited anatomic detail of reproduced models.
Geometric methods of analysis have limitations due to the irregular and complex structure and shape of human bone. 3D FEA enables such investigations, e.g., reconstruction of alveolar bone and modifications of soft tissue, making it useful, and measurement errors can be reduced. However, it is important to link the model, the prototype, and the clinical treatment implemented.
Aim of the study
The reason for the review lies in the context of what is already known about the evaluation of stress and displacement patterns using finite element analysis in the maxilla of patients with cleft palate after orthodontic intervention.
Protocol
This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA). The protocol for this systematic review was registered with PROSPERO (CRD42020177494).
Eligibility criteria
The Population, Intervention, Comparison, Outcome and Study Design (PICOS) framework was followed. The population was defined as patients with cleft palate; the intervention was orthodontic treatment, including fixed or removable appliances; the comparison was stress and displacement pattern evaluation in the maxilla using finite element analysis. The primary outcome was the evaluation of the results of the orthodontic intervention performed using finite element analysis. The secondary outcome was the diagnostic precision of the prediction of the 3D finite element model in the clefted maxilla. Regarding study design, the following exclusion criteria were applied: (1) papers describing patients with no clefts, (2) studies without orthodontic intervention, (3) repetitive publications, (4) animal studies, (5) reviews.
Search strategy and study selection
The following databases were screened: Medline (via PubMed), Scopus, Embase and Web of Science (from 1970.01.01 to 2021.12.31) (Fig. 1). The search strategy combined MeSH heading words with free text words. The main search terms used were "finite element analysis and cleft palate and model (stress or displacement pattern)". A manual search was carried out in selected orthodontic journals: American Journal of Orthodontics and Orthopedics, Angle Orthodontist, European Journal of Orthodontics (from 1995 to 2021.12.31). The titles and abstracts were read to find eligible studies, and their full texts were then obtained. The references in the retrieved studies were checked.
Outcome of the search process
The search identified 31 records. Screening of titles and abstracts excluded 16 articles. Where necessary, the articles were screened in more detail. Fifteen articles were retrieved as full texts, and 11 of them were considered eligible for inclusion by 2 authors. Eventually, 11 articles were included in the qualitative analysis. Figure 1 describes the search strategy using a PRISMA flow chart.
Material properties for structures used in the studies
The material properties for structures such as cortical bone, cancellous bone, tooth/enamel, periodontal ligament, suture, and palatal mucosa are presented in Table 1.
Value of the Poisson ratio used in the studies
The Poisson ratio used for each tissue was similar across almost all studies; for cortical bone, cancellous bone, and tooth/enamel it was 0.30 (except in one paper), and only 4 articles provided values for the periodontal ligament. Values for the suture ranged from 0.30 to 0.49. The values for the palatal mucosa ranged from 0.28 to 0.45. This information is summarized in Table 2.
Number of nodes used in the simulation
The number of tetrahedral elements and nodes used for the creation of the model varied between papers. For 3 articles, information about tetrahedral elements was not available. In the others, the number ranged from 151,142 to 1,277,568. As for nodes, information was not available for 2 papers; in the rest, the number varied from 33,902 to 1,801,945. This information is presented in Table 3.
Clinical findings
All the information is summarized in Table 4.
Discussion
The era of modern orthodontics began when Edward H. Angle published a book in 1900 entitled "Treatment of Malocclusion of the Teeth and Fractures of the Maxillae: Angle's System" creating the foundations for modern diagnostics and orthodontic treatment [20]. He is considered the father of orthodontics. Today, innovative technologies and intelligent materials have become part of the tools used by the orthodontist: digital technologies, transparent wires and brackets, aligners, robots, and computer modelling.
Another driving force seems to be Artificial Intelligence (AI), which is used in diagnostics, planning, and monitoring of treatment [21]. Digital techniques such as intraoral scanning are becoming increasingly important in orthodontic treatment. Currently, work is underway on a comprehensive digitization of the orthodontic treatment process in terms of time, cost, and patient comfort. To shorten and simplify the treatment process, comprehensive practice management software (PMS) systems have been developed for appointment planning, visibility of patient information, and optimized patient communication. Digital methods allow for more innovative, patient-friendly, and time-saving treatment and are based on the direct-to-consumer business model. Although everything has recently moved in the direction of virtuality, clinical examination remains an indispensable element of therapy [22]. Therefore, it is anticipated that traditional orthodontic treatment will continue to be practiced. With the use of FEM in orthodontics, any appliance structure (wires, brackets, rings) or maxillofacial structure (bones, ligaments) can be analyzed. The main assumption of FEM is the division of a larger (complex) structure into smaller sections with strictly defined physical properties (ligament, different types of bone, enamel, dentine). In this way, the response of the entire structure to an applied force, e.g. an orthodontic force, is generated [23].
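To make the division-into-elements principle concrete, a minimal one-dimensional sketch (illustrative only, not taken from any of the reviewed studies): an elastic bar fixed at one end is split into linear elements, the element stiffness matrices are assembled into a global matrix, and the nodal displacements under a tip force are solved for.

```python
import numpy as np

# Minimal 1D finite element model of an elastic bar fixed at one end.
E = 2.0e9       # Young's modulus (Pa), hypothetical
A = 1.0e-6      # cross-sectional area (m^2)
L = 0.01        # bar length (m)
n_el = 10       # number of elements
F_tip = 1.0     # axial force at the free end (N)

le = L / n_el
k_el = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):                  # assemble the global stiffness matrix
    K[e:e + 2, e:e + 2] += k_el

f = np.zeros(n_el + 1)
f[-1] = F_tip                          # load applied at the last node

u = np.zeros(n_el + 1)                 # node 0 is fixed (u[0] = 0)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
print(f"tip displacement: {u[-1]:.3e} m; analytical FL/EA: {F_tip*L/(E*A):.3e} m")
```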
FEM is a theoretical technique useful in the analysis of the biomechanics of maxillary protraction in UCLP patients. Maxillary hypoplasia is three-dimensional and involves the sagittal, vertical, and transverse directions, hence the aim of combining maxillary protraction with maxillary expansion. The biomechanics of maxillary protraction with and without maxillary expansion were compared. There is not much information about the biomechanical response of the maxillary complex when it is loaded with a protraction force that is greater on the cleft side of UCLP than on the non-cleft side. Consequently, the existing fissure of the dental arch is enlarged [14]. There are only a few reports on the biomechanical effect of maxillary protraction on the craniofacial skeleton in patients with clefts, and it has not been well explained clinically and experimentally using finite element methods (FEM). The basic mechanism of finite element analysis (FEA) in patients with UCLP is still unknown. There is only some information about maxillary protraction in patients with UCLP [14]. In FEA analyses, the exceptional properties of craniofacial sutures cannot be expressed, since it was assumed that all craniofacial synchondroses have the same histological and mechanical characteristics as the surrounding bones [19]. Two methods of FEM creation are distinguished: (1) the tissue-sectioning technique, which generates very thin slices and is destructive; (2) spiral CT, which allows a smaller scan interval and is non-destructive. A skull sample with a cleft palate is usually inaccessible. The computed tomography technique was therefore used to obtain data from a patient with UCLP [19]. Because the original computed tomography image was not sufficient to obtain a clear skeleton border for the generated FEM, the window technique was used to obtain more readable CT images. Data from CT scans were found to be more reliable in the preparation of a digital image for FEM than data from the sectioning method [19]. In a model of cleft lip and palate, FEA shows the biomechanical characteristics of rapid maxillary expansion. It is possible to determine differences in tissue response in patients versus healthy individuals [17].
The results of some studies implied that a patient with a unilateral cleft would be expected to have asymmetric skeletal development between the non-cleft and cleft sides as a consequence of an asymmetric functional loading pattern. Pan et al. [19] investigated physiological changes and the distribution of orthopaedic force stress on the craniofacial structures of the maxillary first premolar and the crown of the first molar.

Table 4. Clinical findings
[5] Variations in the trajectory patterns in the cleft skull model compared with the normal skull on occlusal loading
[10] Asymmetric and nonuniform stress distribution within the cleft model between the cleft and non-cleft sides due to the asymmetric skeletal maxillary defect
[11] The transverse expansion forces from rapid palatal expansion are distributed to the 3 maxillary buttresses
[12] Maximum displacement in the midpalatal cleft area in the BBPE; true skeletal expansion at the alveolar level without any dental tipping when compared with the conventional HYRAX expander
[13] The protraction force alone led the craniomaxillary complex to move forward and counterclockwise, accompanied by lateral constraint in the dental arch; the additional rapid maxillary expansion resulted in a more positive reaction, including both a greater sagittal displacement and an increase in the width of the dental arch
[14] The best effect after loading maxillary protraction force; resorption in the lower region of the grafted bone showed a better effect than resorption in the upper region of the grafted bone
[15] It is more advantageous to perform maxillary protraction using a facemask with a miniplate anchorage than a facemask with a tooth-borne anchorage, and after the alveolar bone graft rather than before the alveolar bone graft, regardless of the type of cleft
[16] Need for customizing expansion therapy for patients with clefts depending on the patient's age, the type of cleft present, and the desired expansion area
[17] Visualization of bone and suture structures to explain the function and mechanism of RME on the skull with UCLP
[18] A unilateral cleft would be expected to lead to asymmetric skeletal development between the non-cleft and cleft sides as a consequence of an asymmetric functional loading pattern
[19] RPE caused asymmetric pyramid-like displacement and deformation in UCLP; fan-like expansion of the upper dental arch; asymmetric expansion and dispersed stress distribution on the maxilla (inferior border of the nasal cavity)

Several clinical implications arise from the conducted research. Asymmetric and nonuniform stress and strain distributions were found when comparing the cleft and non-cleft sides [10,18,19,23], with the non-cleft side showing higher stress and strain levels [1]. The RPE procedure in UCLP patients reveals a pyramid-shaped displacement of the nasomaxillary complex along with a fan-shaped expansion of the upper dental arch [3]. In the model of the patient with UCLP after ABG, the best effect was obtained after loading the maxillary protraction force [14,15]. Rapid palatal expansion forces are transmitted along three vertical buttresses [11]. In the craniofacial complex with UCLP, with the missing mid-palate and the deformity of the maxillary bone, the resistance to maxillary expansion mainly came from the connection between the maxilla and the pterygoid plates of the sphenoid bone [17].
However, when using FEM to assess the displacement/response of specific maxillofacial structures to an applied (orthodontic) force, many additional aspects must be taken into account. In the case of reconstruction (model creation) of the maxilla or mandible, computed tomography is used. The computed tomography data used should have an appropriate resolution (sections of at least 0.25 mm, DICOM format). In the absence of appropriate contrast and resolution, it becomes impossible to determine the boundaries of structures such as enamel, dentine, and the periodontal ligament. The transformation of a solid model into a model composed of a mesh of nodes and elements is the basis of FEM analysis. During the mesh refinement process, the convergence of the results is verified as the numbers of nodes and elements are gradually increased, so that the difference in stress peaks between successive mesh refinements is 5% or less. Specialized software requires a correct representation of the mechanical properties (Young's modulus and Poisson's ratio) for each mesh component [24]. Expansion therapy should be personalized according to the patient's age, the type of cleft present (primary or secondary palate), and the desired expansion area (anterior or posterior).
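A short sketch of the convergence check described above: the mesh is refined until the peak stress changes by 5% or less between successive refinements (the peak values below are placeholders for solver output):

```python
def is_converged(peak_values, tol=0.05):
    """True if the last two mesh refinements differ by at most tol (relative).

    peak_values: peak stress (or any scalar of interest) from each mesh
    refinement level, coarsest first.
    """
    if len(peak_values) < 2:
        return False
    prev, last = peak_values[-2], peak_values[-1]
    return abs(last - prev) / abs(prev) <= tol

# Hypothetical peak von Mises stresses (MPa) from three refinement levels:
print(is_converged([12.4, 14.1, 14.6]))  # True: (14.6 - 14.1)/14.1 ~ 3.5%
```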
The results obtained by the use of FEM are based on modeling software; therefore, it is extremely important to enter all the structural data (bones, teeth, ligaments) as well as the applied force and boundary conditions correctly.
Limitations
The limitations of the present review were due to heterogeneity between the studies, so a meta-analysis of the included studies could not be performed. Due to the disparate nature of the studies, only simple descriptive and stratified comparisons were reported.
Conclusions
Despite the limitations related to the heterogeneity of the studies included in the review, it can be concluded that finite element analysis is an appropriate tool to study and predict the points of force application for better controlled expansion in patients with UCLP. | 2023-02-14T15:33:38.029Z | 2023-02-13T00:00:00.000 | {
"year": 2023,
"sha1": "ce3cb9f76eee10bec48d4b857caada597df68408",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "ce3cb9f76eee10bec48d4b857caada597df68408",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248640461 | pes2o/s2orc | v3-fos-license | Manufacturing of a Magnetic Composite Flexible Filament and Optimization of a 3D Printed Wideband Electromagnetic Multilayer Absorber in X-Ku Frequency Bands
With the multiplication of electronic devices in our daily life, there is a need for tailored wideband electromagnetic (EM) absorbers that could be conformed to any type of surface, such as antennas for interference attenuation or military vehicles for stealth applications. In this study, a wideband flexible flat electromagnetic absorber compatible with additive manufacturing has been studied in the X-Ku frequency bands. A multilayer structure has been optimized using a genetic algorithm (GA), adapting to the restrictions of additive manufacturing and exploiting the EM properties of loaded and non-loaded filaments, the elaboration of which is described. After optimization, a bi-material multilayer absorber with a thickness of 4.1 mm has been designed to provide a reflectivity below −12 dB between 8 and 18 GHz. Finally, the designed multilayer structure was 3D-printed and measured in an anechoic chamber, achieving −11.8 dB between 7 and 18 GHz. Thus, the development of dedicated materials has demonstrated the strong potential of additive technologies for the manufacturing of thin wideband flexible EM absorbers.
Introduction
Electromagnetic (EM) wave problematics are becoming more crucial and ubiquitous due to the development of electronic circuits and the rise of the Internet of Things. There is thus a prominent need for EM absorbers to reduce the impact of generated signals for various applications in different domains. For instance, the radar cross section (RCS) reduction of military aircraft and boats is vital for stealth purposes against enemy radar systems [1]; antenna devices require the mitigation of parasitic interferences from external and internal sources to operate properly [2]; and high-frequency circuits require the use of microwave loads to dissipate energy [3]. Besides, the performance and constraints of the absorbers may vary in terms of operating frequencies, reflectivity at various angles and polarizations, thickness, and stability in harsh environments. As such, different topologies and materials have been investigated to fabricate EM absorbers. Usually, they are designed to limit the reflection of incident waves by impedance matching, either by working on geometry, like dielectric pyramids [4], or by gradually modifying their EM properties in the case of composite absorbers [5]. Moreover, the absorption has to be increased by introducing a way to mitigate the waves inside the absorbers through different types of losses, multi-reflections, and interferences with the incident waves, such as in the Dallenbach screen [6]. To increase the bandwidth, multilayer structures can be used by working on the nature and thickness of each layer, allowing for a gradual matching of the incoming waves and the introduction of various losses. These multilayer structures have the advantage of being simple to optimize by using the recursive formulas proposed by Chew [7]. Many studies have used different types of algorithms to find the optimal stacking of layers, resulting in various performances while keeping a thin structure, such as a genetic algorithm [8] or particle swarm optimization [9], while some [10] optimize the composites' composition in a multilayer structure to achieve the same results. In order to design an ultra-wideband absorber, Li et al. [11] have managed to combine metamaterials in a multilayer structure to reach an absorption above 90% from 7.2 GHz to 35.7 GHz with a thickness of 3.8 mm. Another paper by Pan et al. [12] also achieved a three-layer Jaumann microwave absorber by controlling the thickness of graphene oxide laminated on Kapton film, which is then placed on various dielectric substrates (foams, flexible materials), resulting in a reflection of −10 dB covering the L, S, and C bands. Finally, Kim et al. [13] have proposed a double-layer carbon metapattern using surface plasmon polaritons and coherent absorption to achieve 90% absorption from 6.3 to 30.1 GHz with a total thickness of 4 mm. Besides, 3D printing technologies have also been investigated to realize EM absorbers using different absorbing designs. For instance, a multilayer metamaterial absorber using graphene composites has been achieved by 3D printing [14], with an absorption above 90% between 4.5 GHz and 40 GHz. Other studies by Ren et al. [15] have managed to fabricate a broadband absorber by designing dielectric resonators with a carbon-loaded acrylonitrile butadiene styrene polymer (ABS), absorbing 90% of the energy from 3.9 to 12 GHz. Without using a composite filament, research by Ghosh et al.
[16] led to the design of a 3D-printed honeycomb made with a PLA filament, on top of which a resistive film was added, in order to achieve a broadband absorber (at least 90%) from 5.52 to 16.96 GHz. Nevertheless, while the potential of broadband 3D-printed EM absorbers has been illustrated, the flexible flat absorber has not been investigated with this manufacturing process.
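Before describing the present work, it may help to sketch how the reflectivity of a metal-backed multilayer is computed. At normal incidence, the recursive formulation referenced above [7] is equivalent to the standard transmission-line impedance transformation: starting at the metal backing, the input impedance is propagated through each layer, and the reflection coefficient is evaluated at the air interface. A minimal sketch (not the authors' code; the material values in the example are hypothetical):

```python
import numpy as np

ETA0 = 376.73   # free-space wave impedance (ohm)
C0 = 2.998e8    # speed of light (m/s)

def reflectivity_db(layers, f):
    """Normal-incidence reflectivity (dB) of a metal-backed multilayer.

    layers: list of (eps_r, mu_r, thickness_m), first entry adjacent to
    the metal backing; eps_r and mu_r may be complex at frequency f (Hz).
    """
    z_in = 0.0 + 0.0j                         # perfect electric conductor
    k0 = 2 * np.pi * f / C0
    for eps, mu, d in layers:
        eta = ETA0 * np.sqrt(mu / eps)        # layer wave impedance
        gamma = 1j * k0 * np.sqrt(mu * eps)   # propagation constant
        th = np.tanh(gamma * d)
        z_in = eta * (z_in + eta * th) / (eta + z_in * th)
    r = (z_in - ETA0) / (z_in + ETA0)
    return 20 * np.log10(abs(r))

# Hypothetical 2-layer stack: lossy magnetic composite next to the metal,
# low-loss matching layer on top.
stack = [(12.0 - 3.0j, 2.0 - 1.2j, 2.0e-3), (2.5 - 0.05j, 1.0, 2.1e-3)]
print(f"{reflectivity_db(stack, 10e9):.1f} dB at 10 GHz")
```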
Our current goal is to obtain a flexible, flat, and thin EM absorber that could be fixed on any surface, with the highest possible absorption (at least 10 dB) in the X-Ku frequency bands. The maximum total thickness has been fixed at 5 mm, which corresponds to the average thickness used in the optimization of multilayer structures [8,9]. In this article, a genetic algorithm has been developed to determine the composition of each layer of a multilayer absorber, depending on the objective and the total thickness of the structure, which will then be 3D-printed by fused deposition modeling. The fabrication of a flexible loaded filament and the characterization of its EM properties from 4 GHz to 18 GHz, used in the optimization, are also described.
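A simplified sketch of such a genetic algorithm, reusing reflectivity_db() from the sketch above: each chromosome encodes the material index and thickness of every layer, fitness is the worst-case reflectivity over 8-18 GHz, and truncation selection with one-point crossover and mutation evolves the stack. The material constants are hypothetical and non-dispersive; in practice, the measured frequency-dependent (eps, mu) of each filament would be used.

```python
import random

MATERIALS = [                    # hypothetical (eps_r, mu_r) per material
    (2.5 - 0.05j, 1.0),          # dense flexible polymer
    (1.9 - 0.03j, 1.0),          # porous polymer (air inclusions)
    (12.0 - 3.0j, 2.0 - 1.2j),   # CIP-loaded magnetic composite
]
N_LAYERS, T_TOTAL = 4, 4.1e-3
FREQS = [f * 1e9 for f in range(8, 19)]           # 8 to 18 GHz

def normalize(chrom):
    """Rescale thicknesses so the stack meets the total-thickness budget."""
    s = sum(t for _, t in chrom)
    return [(m, t * T_TOTAL / s) for m, t in chrom]

def random_chrom():
    return normalize([(random.randrange(len(MATERIALS)), random.random())
                      for _ in range(N_LAYERS)])

def fitness(chrom):
    """Worst-case reflectivity over the band (dB); lower is better."""
    layers = [MATERIALS[m] + (t,) for m, t in chrom]
    return max(reflectivity_db(layers, f) for f in FREQS)

def evolve(pop_size=60, generations=200, p_mut=0.2):
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_LAYERS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:           # mutate one layer's material
                i = random.randrange(N_LAYERS)
                child[i] = (random.randrange(len(MATERIALS)), child[i][1])
            children.append(normalize(child))
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(fitness(best), best)
```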
Materials and Filament Preparation
In this study, our aim is to demonstrate the feasibility of designing and fabricating 3D-printed wideband absorbers. The fabrication will use the fused deposition modeling method (FDM), which consists of depositing a fused polymer layer by layer. Flexible filaments with low loss are available, but flexible magnetic composites, which could improve impedance matching while providing high absorption levels, have to be developed. Thus, composite filaments with high magnetic losses were prepared from a commercial ester-based thermoplastic polyurethane matrix, selected because of its low shore hardness and its melt temperature close to 160 °C. The 3D printer used in this study is the A2V4 (from 3ntr, Oleggio, Italy), shown in Figure 1. This machine has a maximum printing temperature of 450 °C. All the objects were manufactured at the minimum layer thickness (i.e., 0.1 mm). Thus, the tolerance in the manufacture of the layers in the 3D printer will be of the order of ±0.1 mm. The filler used in this study was carbonyl iron particles (CIP). Studies by Abshinova et al. [17] have shown the possibility of fabricating composites with CIP and different polymers for volume concentrations from 10 to 52%. Besides, magnetic materials have better impedance matching because their permeability is higher than unity and should enhance the absorbing properties at low frequencies. While CIP could be considered too heavy for 3D printing, they present a higher permeability at low frequencies and broader magnetic losses than ferrites, which are slightly lighter. Moreover, this study aims to design a multilayer absorber that will not only use these magnetic composites, but also pure and porous polymers. Thus, the surface density of the absorber will be evaluated at the end of the fabrication process.
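As a side note on the weight concern raised above, a rule-of-mixtures estimate gives a first idea of the composite density and of the areal (surface) density of a stack; the volume fraction, stack composition, and handbook densities below are assumptions for illustration only:

```python
# Rule-of-mixtures estimate of composite density and of the areal (surface)
# density of a multilayer stack; densities are nominal handbook values.
RHO_CIP = 7.86   # carbonyl iron, g/cm^3 (approx.)
RHO_TPU = 1.20   # ester-based TPU, g/cm^3 (approx.)

def composite_density(v_cip):
    """Density of a CIP/TPU composite at CIP volume fraction v_cip."""
    return v_cip * RHO_CIP + (1 - v_cip) * RHO_TPU

def areal_density(layers):
    """Surface density (g/cm^2) of a stack of (density g/cm^3, thickness mm)."""
    return sum(rho * t / 10 for rho, t in layers)

# Hypothetical 4.1 mm stack: 2 mm of 30 vol% CIP composite + 2.1 mm pure TPU
stack = [(composite_density(0.30), 2.0), (RHO_TPU, 2.1)]
print(f"{areal_density(stack):.2f} g/cm^2")   # ~0.89 g/cm^2
```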
The filaments were prepared by simultaneously melt mixing all the components, polymer matrix and CIP, using a DSM Xplore laboratory twin-screw extruder (Amsterdam, The Netherlands) at a wall temperature of 160 °C and a screw rotational speed of 150 rpm (step 1). The mixing residence time did not exceed 1 min. Samples were then freeze-fractured to obtain pellets (step 2). These pellets were incorporated in a Filabot Ex 6 filament extruder operating at a temperature of 160 °C and a fixed screw speed to control the diameter of the filament close to 2.85 mm (step 3). The same filament melt-forming process was then extrapolated to a semi-industrial and industrial scale (screw profile composed of conveying elements only, at a wall temperature of 200 °C and a screw speed of 20 rpm; coiling linear velocity of 1.6 m/min) with control of the filament diameter by a laser system. Fairly good quality for the entire filament coil was obtained with an accuracy of 50 µm on the diameter. For the sake of clarity, the filament elaboration process is illustrated in Table 1.
Microstructure Observation
The microstructure was observed, both before and after 3D printing, using a Hitachi S 3200 N Scanning Electron Microscope (SEM) (Naka, Japan), operating at an accelerating voltage of 15 kV. These images will be analyzed to observe the distribution of CIP in the matrix, and to note the presence, or not, of agglomerates likely to impact, in a harmful way, the electromagnetic properties of the composite. The influence of the 3D printing process on the microstructure will also be evaluated.
Electromagnetic Characterization
EM characterization was carried out by extracting the permittivity and permeability from the analysis of measured scattering parameters of different rectangular waveguides loaded with the samples being tested. The measurements of the scattering parameters were done with an N5245A PNA-X microwave network analyzer by Agilent Technologies (Figure 2). Then, the scattering parameters are converted using an NRW-NIST iterative method to determine both the permittivity and the permeability [18]. The samples were obtained by printing a plate with the printing parameters (layer thickness, temperature, and pattern) planned for the absorber, to keep the same induced porosity as the printed parts and to take into account the possible effects of variations on the EM properties. The different printing parameters, such as printing speed or printing temperature, as well as the dilatation or contraction effects on the object dimensions, have been adjusted for each filament with a calibration sample. The lossless filament is printed at a temperature of 230 °C, while the lossy magnetic filament is printed at 180 °C, both at a speed of 10 mm/s to ensure better control of the dimensions. The printed plate was then cut into rectangular samples to fit the dimensions of standard rectangular waveguides in C to Ku frequency bands, especially WR187, WR137, WR90, and WR62. The experimental setup is shown in Figure 2. While magnetic materials could present advantages at low frequencies thanks to their high permeability, other frequency bands have not been studied in this paper.
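The S-parameter-to-(eps_r, mu_r) conversion can be illustrated with the classical non-iterative Nicolson-Ross-Weir relations for a filled rectangular waveguide; the NRW-NIST iterative method cited above refines this scheme, notably around half-wavelength sample thicknesses. A minimal sketch (the branch choice of the complex logarithm is ignored here, which is only valid for electrically thin samples):

```python
import numpy as np

def nrw_waveguide(s11, s21, f, L, lam_c):
    """Non-iterative NRW extraction of (eps_r, mu_r) in rectangular waveguide.

    s11, s21: complex S-parameters de-embedded to the sample faces.
    f: frequency (Hz); L: sample thickness (m); lam_c: cutoff wavelength (m)
    of the TE10 mode (2a, e.g., 45.72 mm for WR90).
    """
    lam0 = 2.998e8 / f
    X = (s11**2 - s21**2 + 1) / (2 * s11)
    gam = X + np.sqrt(X**2 - 1 + 0j)
    if abs(gam) > 1:                          # keep the physical root, |gam|<=1
        gam = X - np.sqrt(X**2 - 1 + 0j)
    T = (s11 + s21 - gam) / (1 - (s11 + s21) * gam)
    inv_lam2 = -(np.log(1 / T) / (2 * np.pi * L))**2   # 1/Lambda^2
    inv_lam = np.sqrt(inv_lam2 + 0j)
    mu_r = inv_lam * (1 + gam) / ((1 - gam) * np.sqrt(1/lam0**2 - 1/lam_c**2))
    eps_r = lam0**2 * (inv_lam2 + 1/lam_c**2) / mu_r
    return eps_r, mu_r
```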
Besides the printing parameters used to print a fully dense object, different forms of air inclusions have been investigated. By varying the porosity, we expand the range of EM properties. This allows us to artificially increase the number of materials during the optimization, and to reach better impedance matching with the air. The size of the inclusions must be small enough, compared to the wavelength, to consider that this layer has a homogeneous material, while still being printable with a nozzle of 0.8 mm. Moreover, the air cavities must not be connected for sealing purposes. Thus, only one set of porous patterns will be investigated, to avoid printing different porous layers with various sizes or forms of the cavity on top of each other, and to guarantee the impermeability of the absorber. To not increase the printing difficulty, we only use one filament to make the porous layers. The chosen pattern is a square hole grid, with a side length of 1.8 mm and a wall thickness of 1 mm between each cell, thus adding 41% of air inclusions to the layer.
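The quoted 41% follows directly from the unit-cell geometry of the grid; a one-line check using only the stated dimensions:

```python
hole, wall = 1.8, 1.0                              # mm, "Grid" pattern dimensions
cell = hole + wall                                 # side of the periodic unit cell
print(f"air fraction = {hole**2 / cell**2:.1%}")   # -> 41.3%
```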
The lossless filament is selected to reach a lower permittivity and to obtain better impedance matching between the air and absorber during the optimization to decrease the total reflectivity of the model. The composite filament can also be used to make porous layers, but has shown difficulties in accurately printing complex structures, and is thus not studied in this paper. A sample with the "Grid" pattern has been sliced by Ultimaker Cura (Figure 3) and printed. Then, the porous plate has been cut into rectangular samples for the EM characterization in standard waveguides.
Figure 3. Image of a sample with the "Grid" pattern sliced in Ultimaker Cura, with an "infill line pattern" at 2.8 mm (1.8 mm for the size of the holes and 1 mm for the wall).
Measurement of the Reflectivity of the Absorber
The reflectivity measurements were done in a bistatic configuration inside an anechoic chamber, as illustrated in Figure 4, using a vector network analyzer, and wideband antennas from 4 to 18 GHz. The bistatic configuration imposes a minimum angle between the antennas of 10° in order to minimize direct coupling. Normalization was done by the measurement of a metallic plate, of which the size is the same as the sample under test. Then, the absorber was placed on the metallic plate to determine its reflection coefficient between 4 and 18 GHz. Time-gating was applied to S-parameters, in order to reduce the multi-path reflections in the anechoic chamber.
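For illustration, this kind of time-gating can be reproduced in a few lines. The sketch below is a generic numpy implementation, assuming a uniformly sampled frequency grid and a rectangular gate; it is not the network analyzer's built-in routine, which typically also applies a window to reduce gating ripple.

```python
import numpy as np

def time_gate(s11, freqs, t_start, t_stop):
    """Rectangular time gate applied to uniformly sampled S11(f)."""
    n = len(s11)
    df = freqs[1] - freqs[0]
    h = np.fft.ifft(s11)                 # (aliased) impulse response
    t = np.fft.fftfreq(n, d=df)          # time bins matching the ifft output
    keep = (t >= t_start) & (t <= t_stop)
    return np.fft.fft(h * keep)          # gated spectrum
```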
Structure Optimization
In order to reduce the RCS of an object, this paper describes the optimization of a multilayer structure. This absorption method has the advantage of combining non-loaded material for impedance matching with the incident EM wave, and materials with dielectric/magnetic losses to dissipate the energy. EM characterization will allow us to extract the permittivity ε and the permeability µ of each material:

ε = ε0(ε′r − jε″r), µ = µ0(µ′r − jµ″r) (1)

with ε′r, ε″r, µ′r and µ″r being the real and imaginary parts of the relative permittivity and permeability, and ε0, µ0 those of the vacuum. With the data from the EM characterization, a genetic algorithm (GA) has been used to determine the nature of the different layers in our multilayer EM absorber. However, the configuration of the machine for printing flexible materials only allows the use of a maximum of 2 nozzles, and thus we have decided to start the optimization by using 2 materials: the lossless filament (with or without air inclusions) and the lossy magnetic filament. By considering the resolution and by fixing the thickness of the objective structure, it is possible to slice the system into a fixed number of layers (for instance, a total thickness of 10 mm with a maximum resolution of 0.1 mm will result in 100 sublayers). Different thicknesses are tested to find the thinnest structure to fulfill the goal. We set the following rules for the GA optimization of the multilayer structure. The material library will be composed of the EM properties of:
1. The lossless filament when fully dense;
2. The magnetic loaded filament when fully dense;
3. The lossless filament with the "Grid" porous pattern.
Moreover, the first and last layers are constrained to be made of a material without porosity to ensure the impermeability of the absorber. Since better impedance matching leads to better microwave absorption, the first layer is fixed to be made by the non-loaded filament, while the last layer is fixed to be made by the magnetic loaded filament. As for the rest of the layers, the GA algorithm creates a vector population determining different combinations of layers, based on the Z resolution and the thickness chosen, where each individual contains the nature of each layer as "genes". The genes of each vector will be adjusted after each iteration to find the optimal structure. The Z resolution of a 3D printer refers to the minimum layer thickness that can be printed. Each vector will thus contain a number of genes equaling the total thickness of the absorber without the first and the last layers, divided by the resolution of the 3D printer (0.1 mm in this paper). The size of the population was fixed at 100. The multilayer is backed by a perfect electric conductor (PEC), as shown in Figure 5a. The fitness function, also called the figure of merit, allowing the sorting of the different possibilities, is the difference between the area of the reflectivity calculated with each vector, and that of the goal reflectivity, which is first set at −10 dB in the X-Ku frequency bands at 10°. To calculate the global reflectivity coefficient R0,1, we need the reflection coefficient Ri at each interface i of the multilayer, using these equations with the EM properties of each layer, µi for the permeability and εi for the permittivity at the layer i:

Ri = (µi+1 ki − µi ki+1)/(µi+1 ki + µi ki+1) for TE polarization, (2)

Ri = (εi+1 ki − εi ki+1)/(εi+1 ki + εi ki+1) for TM polarization, (3)

where ki = ω√(µiεi − µ0ε0 sin²(θ)), ω being the angular frequency, θ the angle of incidence, µ0 the vacuum permeability and ε0 the vacuum permittivity.
Then, we can use the recursive formula of Chew with these coefficients and the thickness di of each layer to find the generalized reflection coefficient between each layer and of the entire structure:

R̃i,i+1 = (Ri,i+1 + R̃i+1,i+2 exp(2jki+1di+1)) / (1 + Ri,i+1 R̃i+1,i+2 exp(2jki+1di+1)) (4)

where R̃i,i+1 denotes the generalized reflection coefficient at the interface between layers i and i+1, and Rn,n+1 = −1 for TE (transverse electric) polarization and Rn,n+1 = +1 for TM (transverse magnetic) polarization at the interface between the last layer of material and the PEC. As mentioned by Dib et al. [19], because Equation (4) represents the reflection coefficient for the magnetic field for TM instead of the electric field for TE, Rn,n+1 has to be set to +1 to reach the same magnitude with both equations at normal incidence.
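To make the recursion concrete, here is a short Python sketch of the reflectivity computation for a PEC-backed multilayer. It is a minimal reimplementation of Equations (2)-(4), not the authors' code; the e^(jωt) sign convention (lossy materials written as ε′ − jε″, giving a decaying propagation factor) is an assumption made for self-consistency.

```python
import numpy as np

EPS0, MU0 = 8.8541878128e-12, 4e-7 * np.pi

def reflectivity_db(freq, theta, layers, pol="TE"):
    """|R0,1| in dB for a PEC-backed multilayer at one frequency and angle.

    layers: list of (eps_r, mu_r, d) ordered from the air side to the PEC,
            with complex relative properties (eps' - j*eps'') and d in metres.
    """
    w = 2 * np.pi * freq
    kt2 = MU0 * EPS0 * np.sin(theta) ** 2          # transverse part, set by air
    eps = [EPS0] + [EPS0 * er for er, _, _ in layers]
    mu = [MU0] + [MU0 * mr for _, mr, _ in layers]
    d = [0.0] + [t for _, _, t in layers]
    k = [w * np.sqrt(m * e - kt2 + 0j) for m, e in zip(mu, eps)]
    n = len(layers)

    def r(i):
        # Fresnel coefficient at the interface between regions i and i+1
        p = mu if pol == "TE" else eps
        return (p[i + 1] * k[i] - p[i] * k[i + 1]) / (p[i + 1] * k[i] + p[i] * k[i + 1])

    R = -1.0 if pol == "TE" else +1.0              # R(n,n+1) at the PEC backing
    for i in range(n - 1, -1, -1):                 # Chew recursion, PEC -> air side
        phase = np.exp(-2j * k[i + 1] * d[i + 1])  # decaying with the chosen convention
        R = (r(i) + R * phase) / (1 + r(i) * R * phase)
    return 20 * np.log10(abs(R))
```

For instance, reflectivity_db(10e9, np.radians(10), [(2.8 - 0.03j, 1.0, 2.9e-3), (10 - 1j, 1.5 - 1.0j, 1.2e-3)]) evaluates a two-layer stack at 10 GHz and 10°; these property values are placeholders of the same order of magnitude as the measurements, not the paper's data.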
After the selection, the best vectors are kept and combined to create new combinations, creating another population with more optimal solutions. The algorithm proceeds until it fulfills the objective criteria, or stops after meeting stopping criteria such as converging toward a local minimum or reaching a fixed generation (Figure 5b). As for the other parameters of the GA algorithm, every function is set up with default options in the Matlab optimization toolbox [20]. For instance, the stochastic uniform option is used for the selection function, the Gaussian mutation is set at a rate of 0.01 between each generation, a scattered crossover option is chosen to create new combinations of layers, and the maximum generation is limited to 100× the number of genes.
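A bare-bones illustration of this optimization loop is sketched below, reusing the reflectivity_db function from the previous snippet. It follows the paper's encoding (one material index per 0.1 mm sublayer, first layer fixed to the lossless filament and last to the magnetic one), but the Matlab toolbox operators (stochastic uniform selection, Gaussian mutation, scattered crossover) are replaced by simple stand-ins, and the material properties are placeholder constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder material library (eps_r, mu_r), indices matching the three rules above
LIB = [(2.8 - 0.03j, 1.0 + 0j),      # 1: lossless filament, fully dense
       (10.0 - 1.0j, 1.8 - 1.2j),    # 2: magnetic loaded filament, fully dense
       (1.8 - 0.02j, 1.0 + 0j)]      # 3: lossless filament with "Grid" porosity
RES = 1e-4                           # 0.1 mm Z resolution -> one gene per sublayer
FREQS = np.linspace(8e9, 18e9, 41)   # X and Ku bands
GOAL_DB = -10.0
THETA = np.radians(10)

def fitness(genes):
    """Area of the reflectivity curve above the goal (0 means goal met)."""
    stack = [(*LIB[0], RES)] + [(*LIB[g], RES) for g in genes] + [(*LIB[1], RES)]
    r = np.array([reflectivity_db(f, THETA, stack) for f in FREQS])
    return np.trapz(np.maximum(r - GOAL_DB, 0.0), FREQS)

def ga(n_genes, pop_size=100, max_gen=100, p_mut=0.01):
    pop = rng.integers(0, len(LIB), size=(pop_size, n_genes))
    for _ in range(max_gen):
        pop = pop[np.argsort([fitness(ind) for ind in pop])]
        if fitness(pop[0]) == 0.0:                  # goal reached over the band
            break
        elite = pop[: pop_size // 2]
        # single-point crossover between randomly paired elite parents
        idx = rng.integers(0, len(elite), size=(pop_size - len(elite), 2))
        cut = rng.integers(1, n_genes, size=len(idx))[:, None]
        kids = np.where(np.arange(n_genes) < cut, elite[idx[:, 0]], elite[idx[:, 1]])
        mut = rng.random(kids.shape) < p_mut        # crude stand-in for Gaussian mutation
        kids[mut] = rng.integers(0, len(LIB), int(mut.sum()))
        pop = np.vstack([elite, kids])
    return pop[0]

best = ga(n_genes=39)   # 39 free sublayers + 2 fixed ones ~ 4.1 mm total thickness
```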
Microstructure Observation
The CIP has a spherical aspect with a characteristic size of a few microns, as shown on the scanning electron micrograph in Figure 6. In particular, the average diameter, estimated from about 300 individual measurements using SigmaScan® Pro 5 image analysis software, is close to 1.8 µm, with a standard deviation of 0.8 µm, as illustrated on the diameter distribution curve in Figure 7. This value is close to the one given by the supplier. In the paper by Abshinova et al. [17], different concentrations of CIP have been tested from 10% to 52% vol. However, the composite filament must show a permanent flow at the exit of the extruder to ensure the potential ability to process the samples. If the flow is not permanent, the 3D printing deposition cannot be controlled, and the printed object will suffer from dimensional inaccuracies. Thus, we have to find a compromise between having the highest concentration to ensure the best absorbing performances, while being printable without flowing issues. After testing different composites, a high filling ratio of carbonyl iron particles (75% wt or 30% vol) was chosen to obtain a flexible elastomer/CIP composite filament with high magnetic losses. A higher concentration could be tested in the future.
As shown in Figure 8a,b, most CIP are well dispersed within the matrix without a significant difference between the microstructure of the 3D-printed sample (Figure 8b), and that of the filament used for 3D printing (Figure 8a). Besides, the absence of aggregates is attributed to the high frequency of collisions between CIP during the extrusion process, estimated from the Smoluchowski equation [21,22]:

C = γ̇ϕ

where C is the frequency of collisions, γ̇ the shear rate, and ϕ the volume fraction of CIP. If we consider that the average shear rate is equal to 2200 s−1, estimated from the technical data, during mixing, then C ~ 200 collisions per second at ϕ = 10% vol, and C reaches 1700 collisions per second at ϕ = 75% wt. The frequency of collisions is the number of collisions between particles per unit time. A high number of collisions allows one to disrupt the formation of aggregates.
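A quick sanity check on those figures, assuming the simple proportionality written above (which is what the quoted values reproduce; published statements of the Smoluchowski result carry prefactors of order unity, and the 75% wt figure is used as-is, as in the text):

```python
gamma_dot = 2200.0                 # average shear rate during mixing, s^-1
for phi in (0.10, 0.75):           # 10% vol and 75% wt, as quoted in the text
    print(f"phi = {phi:.2f} -> C ~ {gamma_dot * phi:.0f} collisions per second")
# phi = 0.10 -> C ~ 220   (quoted: ~200)
# phi = 0.75 -> C ~ 1650  (quoted: ~1700)
```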
Microwave Characterization
The measured EM properties for the two filaments are presented in Figure 9. According to the EM characterization, the filament with carbonyl iron filler shows a high magnetic loss tangent of around 0.5-1 (Figure 9d), and can thus act as a microwave absorber. The real part of the permeability decreases from 2.4 to 1.1 in the 4-18 GHz frequency band. The real permittivity (Figure 9c) is around 10 and is very stable among the frequency band under study. The lossless filament has a permittivity between 2.5 and 3 that slightly decreases with increasing frequency (Figure 9a). As illustrated in Figure 9e, the air inclusion addition effectively reduces the permittivity of the porous structure compared to the filament when fully printed, from around 2.8 to 1.8 for the real part of permittivity. These values of EM properties were used for the optimization of the multilayer microwave absorber. To evaluate the repeatability of the printing process, the tolerance was calculated by characterizing different samples for each material. In the case of the non-loaded filament, the tolerances of the permittivity are ±0.3 for the real part and ±0.02 for the imaginary part. For the magnetic-loaded filament, the tolerances of the permittivity are ±0.8 for the real part and ±0.3 for the imaginary part, and those of the permeability are ±0.08 for the real part and ±0.08 for the imaginary part.
Structure Optimization
Several levels of absorption have been tested in the X-Ku band for multilayer structures with a maximum thickness of 5 mm. Figure 10 presents the simulated results of a multilayer structure made from the lossless material and the material with magnetic losses, with a goal of −12 dB in the X-Ku frequency bands at 10°, with and without a fixed number of alternating layers. This angle of incidence was chosen because it corresponds to the closest angle to the normal incidence of our bistatic measurement setup. Optimizations of a higher level of absorption (more than 13 dB) in these bands require a higher thickness and will not be described. Using the optimization method as described in Section 2.3, a multilayer structure with 15 alternating layers was found, which could be difficult to print, especially because of the fact that porous layers have to be well defined, and most of the layer thicknesses are thin. Thus, another optimization has been carried out by fixing the number of layers at 3 to ease the printing process, and allow us to obtain a first experimental demonstration of a multilayer absorber before printing more complex structures. The nature of each layer for these structures, as well as each thickness, is described in Tables 2 and 3.
Figure 9. Characterization of the EM properties, with Eps Re = real permittivity, Eps Im = imaginary permittivity, Mu Re = real permeability, Mu Im = imaginary permeability: (a) the permittivity of the non-loaded filament, (b) the permeability of the non-loaded filament, (c) the permittivity of the magnetic loaded filament, (d) the permeability of the magnetic loaded filament, (e) the permittivity of the printed sample with a "Grid" pattern, (f) the permeability of the printed sample with a "Grid" pattern.
The thinnest structure to fulfill the goal of −12 dB was found to be 4 mm. However, the optimization method leads to a multilayer with 15 different layers that might be difficult to print accurately. Moreover, this optimized multilayer absorber has alternating very thin layers of porous and non-porous non-loaded materials that can be replaced by thicker alternating layers to achieve impedance matching. Thus, by fixing the number of layers at 3, a similar result can be obtained with a thickness of 4.1 mm. The structure being quite simple, it seems that a wideband thin absorber over the X and Ku frequency bands might be feasible to print based on the simulation. In this case, the 3D printing process will allow for a single-step process to manufacture a microwave absorber, as well as controlling the porosity of one of the layers to impact its EM properties.
3D Printing
Based on the optimization of the multilayer structure, a square model with a size of 100 mm × 100 mm of the EM absorber was sliced by Ultimaker Cura and printed. Because of the flexibility of the composite filament, the printing speed has been limited to 10 mm/s, thus reducing the time to print the multilayer to under 24 h. Small extensions on the sides have been made to estimate the thickness of each layer with a caliper; they are then removed before measuring the reflectivity inside an anechoic chamber. After measurements of the absorption, the sample is sliced to measure the thickness of each layer with a microscope for better accuracy. Models and pictures of the printed objects are presented in Figure 11.
The surface density was 5.9 kg m −2 , which is lighter compared to the Eccosorb SLJ by Laird Technologies (8 kg m −2 ), a commercial flexible wideband absorber with a thickness of 6.7 mm which does not use magnetic materials. Since the magnetic layer only represents 30% of the total thickness of the optimized absorber, the surface density of the absorber is not considerably heavier than other commercial absorbers.
The measurement in the anechoic chamber of the absorber was made in a bistatic configuration, at an angle of incidence of 10°, for both linear polarizations TE and TM, and can be seen in Figure 12, as well as the simulations with the same configuration. The measurement was made on a sample with the measured values for each of the layers given in Table 4. Based on the measurement, the absorber follows the expected behavior and reaches −11.8 dB between 7 GHz and 18 GHz (or −10 dB between 6.5 GHz and 18 GHz).
Discussion
The result demonstrates the potential of additive manufacturing for the development of flexible broadband absorbers. However, we observe a shift towards lower frequencies of around 2 GHz compared to the optimization, even though the structure seems to be thinner than the optimized one. It is likely that the dimension of each layer does not exactly match what has been simulated, and a study of the sensitivity is required to assess this hypothesis. Concerning the bottom layer, the slight thickness differences are surely due to the offset between the nozzle and the 3D printer bed, and the other slight variations might be caused by the porosity of the layer and the flexibility of the structure. Moreover, the concentration of the air inclusion seems difficult to evaluate, but we will have to consider possible variations of the EM properties of this layer.
As for the study of the sensitivity, the error impact caused by the printing resolution of the structure has been studied. We have also considered the variation of the diameter of the filament (50 µm), which can change the material flow through the nozzles and impact the thickness of each layer. Finally, we have also evaluated the possibility of a variation of the filler concentration or uncertainty of the EM properties, which can impact the performances of the structures. To evaluate the influence of such parameters, several parametric studies have been performed on the optimized structure using ANSYS HFSS software.
First, we have looked at the impact of a dimensional error on each layer thickness. As such, we choose to modify the thickness of each layer by more or less 0.1 mm, which is equal to the resolution of the machine, and thus will be considered as the maximum error that we will first accept. Based on the simulations (Figure 13a-c), it seems that the main parameter affecting the sensitivity of the system is the thickness of the last layer (magnetic loaded material). If the bottom layer is thicker, the bandwidth will be larger, but the maximum reflectivity within the band will degrade to up to −11 dB. Otherwise, we should observe the opposite and expect a smaller bandwidth, but a more absorbing structure. However, because the overall structure, as well as the thickness of the bottom layer, are thinner than the simulation, this should result in a frequency shift toward higher frequencies. The deviation should be explained by the other factors.
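The same kind of parametric check can be reproduced with the reflectivity_db sketch introduced earlier, sweeping the bottom-layer thickness by ±0.1 mm. The layer values below are again placeholders; the paper's own study was carried out in ANSYS HFSS.

```python
import numpy as np

base = [(2.8 - 0.03j, 1.0, 2.0e-3),          # lossless, dense (placeholder values)
        (1.8 - 0.02j, 1.0, 0.9e-3),          # lossless, "Grid" porous
        (10.0 - 1.0j, 1.8 - 1.2j, 1.2e-3)]   # magnetic loaded bottom layer
for delta in (-1e-4, 0.0, +1e-4):            # +/- 0.1 mm on the last layer
    stack = base[:-1] + [(base[-1][0], base[-1][1], base[-1][2] + delta)]
    r = [reflectivity_db(f, np.radians(10), stack)
         for f in np.linspace(8e9, 18e9, 11)]
    print(f"d3 = {(base[-1][2] + delta)*1e3:.1f} mm -> worst R = {max(r):.1f} dB")
```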
Then, we looked at the influence of the filler concentration variation in the material. Because making a new filament for every concentration would be too cumbersome, we decided to modify the EM properties (permittivity and permeability at the same time) of the magnetic filament by a fixed percentage (more or less 10%, EM properties shown in Figure 13e,f) to evaluate the impact of such variation on the reflectivity. According to the results in Figure 13d, EM properties' variations of the magnetic loaded filament have a non-negligible impact, with a possible frequency shift of 1 GHz, with variations of the EM properties by only 10%. If the filler concentration is higher, then we should expect a shift of the absorption bandwidth toward lower frequencies and a reduction of the absorption in the X and Ku frequency bands, and we should see the opposite in the other case.
Other than this, we consider the variation of the porous layer EM properties. Indeed, the EM characterization has been done without any material printed on top of the grid, which could lead to different results compared to the printing, where the top layer might fill the air inclusion. As such, the EM properties of the intermediate layer may be between those of the lossless material when fully printed, and those with air inclusions. To evaluate such an issue, we consider the worst case being the porous lossless layer replaced by the fully printed lossless layer. By looking at Figure 13g, it seems that we are expecting a frequency shift toward the lower frequencies, as well as a degradation of the absorption if the porous layer is not printed correctly. Finally, by studying the influence of an air gap under the structure due to internal stresses or the roughness after printing, we can see in Figure 13h that this issue leads to a frequency shift and the degradation of the global absorption. As such, it will be important to check during the measurements that the structure is as flat as possible, without using any tape or glue to fix the structure, since they will increase the gap.
According to the above study, a combination of a higher magnetic loss from the loaded material, a possible reduction of the air inclusion in the porous layer, and an addition of a small air gap due to the roughness of the structure might explain the frequency shift compared with the optimization. Thus, it is necessary to search for a way to control the dimension of each layer thickness and the air inclusion of the porous layer, especially with flexible filament.
Conclusions
In summary, the manufacturing of a 3D-printed multilayer broadband EM absorber has been described, from the creation of the filaments enabling magnetic EM losses to the optimization with a genetic algorithm, taking into account the possibilities offered by FDM 3D printing, but also its limitations. A 4.1-mm-thick absorber with a bandwidth from 7 to 18 GHz, in which return losses are lower than −11.8 dB, has been optimized and measured in an anechoic chamber, thus demonstrating the potential of this dedicated filament and of the 3D printing technique to fabricate an efficient flexible EM absorber. Future studies will consist of improving the current 3D printing parameters so that three different flexible materials can be used instead of the current two, as well as enhancing the accuracy of the thickness of the different layers, which will be crucial with more complex structures. However, the preferred route initially will be the use of materials with higher iron filling content, which will necessarily require further studies on the thermomechanical behavior of the composite thus obtained.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the collection, analyses, or interpretation of data, or in the decision to publish the results. | 2022-05-10T15:41:36.755Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "e5bcf6201ba6cc358acd726b2dc465bb754f5092",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/9/3320/pdf?version=1651826194",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f85bc0fcbf523c9000e8f20d07320f4ae2a831e9",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13704283 | pes2o/s2orc | v3-fos-license | Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality
Neuroanatomy can be challenging to both teach and learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there has now been a progression towards alternative digital models and interactive 3D models to engage the learner. However, digital innovations in the curriculum have typically involved the medical curriculum rather than the veterinary curriculum. Therefore, we aimed to create a simple workflow methodology to demonstrate how readily a mobile augmented reality application of basic canine head anatomy can be created. Using canine CT and MRI scans and widely available software programs, we demonstrate how to create an interactive model of head anatomy. This was applied to augmented reality for a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges and resolutions for the creation of a highly accurate, data-based anatomical model that could potentially be used in the veterinary curriculum. This proof of concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this into other areas of veterinary education and beyond.
Introduction
The practice of medicine and veterinary medicine relies heavily on clinicians' thorough, working knowledge of 3D anatomy. Skills needed in a clinician's repertoire include physical examination, interpretation of imaging data (including advanced imaging), correct diagnosis, and procedures such as surgery, all requiring in-depth anatomical knowledge [1].
Traditionally, this knowledge acquisition has been a mainstay of medical education and clinical training. Students in the past have relied on lengthy didactic lectures, cadaveric dissection, textbook figures and simplified models to develop their knowledge and anatomy skills [1,2]. Indeed, anatomy has been viewed as the foundation of medical training and budgets have, in the past, been funding dissection programs [1,3]. However, over recent years, there has been a reduction in the amount of time allocated to anatomy teaching, including dissection, and a lack of qualified staff to teach clinically applied anatomy [4][5][6].
As a consequence of this, and the rapidly progressing field of digital products in anatomical and medical education, there has been an explosion onto the market of a wide range of products [7][8][9][10]. Ever since the development of the first major game-changer in digital anatomy, the Visible Human Project, a plethora of tools has become available [11].
However, unlike human anatomy and medical training using digital products, there has been a serious lack of progress in this field from the veterinary perspective. Certainly, there have been some attempts to develop educational and training materials for the veterinary community. These have however been around isolated cases including the rat brain, frog and limbs of the horse [12][13][14]. In addition, they did not have sufficient levels of detail that would be required for veterinary students to embed into the curriculum effectively. A more accurate representation was based on the Visible Animal Project (VAP), which attempted to create a 3D database of anatomical items of the dog trunk [15]. However, it lacked the detail, as so many do, of the cranial anatomy of the dog.
Similarly, there has been a rise in the popularity of virtual reality (VR) in human anatomy education [16]. However, the challenge here is to make the invisible visible. Even the most skilful dissection of specimens can only reveal certain aspects of structural relationships, and only after significant investment of time and resources [17][18][19]. VR, however, can make many aspects of the invisible visible to a user in an immersive way, as quickly as navigating to a website or accessing a mobile application. If the VR is convincing enough, students can not only interact with structures and concepts in ways that expand their 3D reasoning, but they can begin to practice clinical skills, such as assessment sequences and problem solving, all at their own pace [17][18][19][20]. One of the key advantages to using VR is that a vast variety of structures or scenarios can be generated with more ease than any dissection or didactic lecture, and be explored and repeated as a student wishes. Additionally, VR applications involving 3D models of anatomical structures are often generated using medical imaging scans, which can account for a plethora of anatomical variances and structures [17-19; 21].
VR may have many advantages, but it still only immerses users in a completely alternate 'reality.' Some have argued that difficulties bridging the gap between VR and real world applications may temper the benefits [17][18][19]. But what happens when virtual and digital aspects are layered over real-time reality, when real-world images and scenarios are augmented with digital features and objects?
Augmented reality (AR) applications seek to do exactly this: to enhance real world experience with virtual aspects [22,23]. This can be done in a variety of ways, but the most prominent style of AR involves scanning real-time images of the world from a device camera and displaying those images overlaid with digital elements, such as information, 2D graphics, and 3D models that the user can interact with [22][23][24]. These digital components bear significance on whatever is scanned, such as providing information, showing relevant aspects, or in some cases making a sort of game out of the real-world scenario. AR has recently been used in many capacities involving education, training, simulations, and even in enhancing surgical procedures [18-20; 22-26].
Therefore, the purpose of this study was to harness new technologies but develop them in a unique manner. We wanted to take advantage of many 3D modelling techniques, and the benefits of ubiquitous digital learning, by attempting to create an effective, novel modality for veterinary students to learn 3D canine head anatomy using highly accurate models generated from MRI and CT scans in an engaging augmented reality (AR) format. Since the canine skull and brain represent some of the more complicated areas for veterinary students to study, and are currently under-represented in available resources, they also provide a unique opportunity to trial new educational approaches. Therefore, the goal of this project was to explore methodologies for segmenting MRI and CT scans, generating and refining models of key elements of the canine skull and brain from them, and making them available in an interactive, intuitive AR platform.
Data
A variety of software and hardware was used in this study to process the data, generate 3D models and integrate them into the final AR application. The following details each medical scan used, software package utilized and the apparatus needed to execute each stage of the study, and is summarised in Tables 1 and 2. This study was considered as sub-threshold for specific ethical approval by the convenor of the School of Veterinary Medicine ethics committee, as the work involved only analysis of data routinely recorded from normal and necessary clinical procedures.
In addition, the following hardware was used in this study:
• PC (HP Z230 Workstation)
• HTC One M8 mobile phone
• Samsung Tablet Galaxy 10"
Methods
The methodology utilized in this project involved three stages: data extraction, development of accurate 3D anatomical models, and integration of those models into an interactive AR platform. The data extraction process entailed segmentation of the acquired CT and MRI scans of the canine head in 3D Slicer to highlight structures of interest and generate basic models. These models were then refined using a variety of methods in both 3DS Max and Zbrush. Finally, an interactive AR platform was built using Unity, in which a user can interact with each model set in an exciting AR experience. The workflow methodology is summarised in Figs 1-3 and explained below.
Data extraction
To obtain anatomically accurate models, segmentation was performed on both CT and MRI scans, to reconstruct the canine skull and brain in 3D. Segmentation was performed in 3D Slicer, via a Digital Imaging and Communications in Medicine (DICOM) stack. Manual slice-by-slice segmentation was used with the Threshold Paint to ensure an accurate reconstruction. The Model Maker in 3D Slicer was used to create the 3D skull, balancing between smoothing of natural ridges and the thinness of the slices used. The Laplacian smoother set to 63 and the Decimation at 0.11 were deemed most appropriate.
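For readers who wish to script this step outside 3D Slicer, a minimal Python equivalent of the threshold-plus-Model-Maker stage is sketched below, assuming pydicom, scikit-image and trimesh are installed. The bone threshold of 300 HU and the file paths are illustrative assumptions, not the values used in this study.

```python
import glob
import numpy as np
import pydicom
import trimesh
from skimage import measure

# Load and sort the CT series into a 3D volume in Hounsfield units
slices = [pydicom.dcmread(p) for p in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
vol = np.stack([s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept)
                for s in slices])

# Voxel spacing (z, y, x) so the mesh comes out in millimetres
dz = abs(float(slices[1].ImagePositionPatient[2]) -
         float(slices[0].ImagePositionPatient[2]))
dy, dx = (float(v) for v in slices[0].PixelSpacing)

# Threshold at roughly bone density and extract an isosurface (marching cubes)
verts, faces, _, _ = measure.marching_cubes(vol, level=300, spacing=(dz, dy, dx))
trimesh.Trimesh(vertices=verts, faces=faces).export("skull_raw.stl")
```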
The MRI dataset for extraction of the brain had the best resolution in the T2 dorsal plane for segmentation. To ensure accurate representation of the brain, manual slice-by-slice segmentation was used to identify the larger structures like the forebrain, cerebellum and brainstem, but also to differentiate the different sulci and gyri of the brain. However, due to a number of issues related to the resolution of these scans, it was decided that a T2 CISS MRI would be better suited. The latter is a 3D scan of 1 mm³ voxels. Although it does not always give good differentiation between soft tissues like grey and white matter, it does have a high contrast between cerebrospinal fluid and soft tissue (Fig 4). Laplacian smoothing set to 38 and Decimation at 0.21 generated a satisfactory model with recognisable contouring of sulci, gyri, lobes and fissure, and proportionality of the cerebrum. An accurate volume of the cerebellum could be obtained, but neither its surface texture nor its lobes could be precisely recreated using the T2 CISS MRI data. The same was true for the brainstem, with accurate volume but not sufficiently precise data to reconstruct its exact surface.
Retopping and refining skull. The skull model generated by 3D Slicer's Model Maker was smooth, but had many holes. Some of these holes were inherent to the structure of the skull itself, whilst others were artefacts from the scan and shortcomings in the model making. In addition, the skull contained 964,354 polygons, far in excess of what is suitable for rendering in a mobile application. Therefore, manual retopology in 3DS Max was chosen to clean the mesh for mobile applications, yet at the same time remove false holes from the model. "Draw on" was initially selected for the surface of the skull and "conform" was used to gently blanket the plane onto the contour of the skull. However, due to over-sensitivity in this method, a full manual approach to retopology was used. The "strip" and "extend" options were utilised to draw a line of connected square polygons along a contour of the original mesh, e.g. along the edge of the mandible, zygomatic arch, midline of the skull and nasal bones. Irregularities were smoothed and regulated using the "Relax" tool, making the spacing between polygons more regular. This was carried out until a clean retopologized mesh was obtained over half of the skull down the midline (Fig 5). As the skull is generally symmetrical, only half of the original mesh needed to be refined and retopologized in this manner. The "Symmetry" modifier in 3DS Max was utilised to create a symmetrical mirrored mesh, altered and adjusted until the mirrored items lined up, and joined together with the "Bridge" feature of the "Extend" tool. This resulted in a fully retopologized mesh; the original model from 3D Slicer had a polygon count of 964,354, whereas the retopologized skull and mandible had a polygon count of 128,653, 13% of the original count. The final step here was then to use "OpenSubdiv", thus subdividing the existing polygons to smooth the mesh, interpolating the lines, and giving a smoothing effect on all edges (Fig 6).
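Where manual retopology is not practical, an automated quadric decimation pass can reach a comparable polygon budget. This is a different technique from the manual 3DS Max workflow described above, shown here only as a scriptable alternative; it assumes Open3D, continues from the hypothetical skull_raw.stl of the previous sketch, and simply mirrors the ~13% face-count reduction reported above.

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("skull_raw.stl")
target = int(len(mesh.triangles) * 0.13)          # ~13% of faces, as achieved manually
simple = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
simple = simple.filter_smooth_laplacian(number_of_iterations=10)  # gentle smoothing
o3d.io.write_triangle_mesh("skull_decimated.stl", simple)
```

Note that automated decimation does not produce the regular quad flow of manual retopology, so it trades mesh cleanliness for speed.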
Sculpting and remeshing the brain. To ensure the brain was corrected for minor anomalies, and to give it a professional appearance, Zbrush was used. There were no major issues with the brain, unlike the skull, and following import, a simple smoothing tool was used over each gyrus and elements of the brainstem and cerebellum. The "Dam" tool helped to sharpen the grooves of the sulci, and the "Clay Build-Up" tool was used to thicken areas which had lost mass or developed gaps through the carving process. At this stage, any minor anatomical adjustments could be made, ensuring anatomical accuracy of the model. Finally, the brain needed to be retopologized by using the "Remesher" facility in the "Geometry" menu. This allowed for reduction of the polygon count from 706,036 to 342,294 polygons, yet still maintaining a high degree of anatomical accuracy and model cleanliness. The final model, once shaded materials were applied, can be seen in Fig 7.

Interface development

Trial version. To create an interface, a simplified PC platform was created first with Unity, prior to the augmented reality (AR) platform. This involved creating the basic functionality of three key scenes initially. These were a "Start" screen, a scene for the skull and one for the brain. The "Start" scene included a simple title panel with two buttons, one for the brain and one for the skull. A "Back" button was also included which returned to the opening scene. Functionality was then added, including rotation, highlighting sections and user interface (UI) elements linked to sub-object selections. Initially "MouseOrbitZoom" was used as a trial platform to use the camera function to orbit around and zoom in on a selected target during the game. In addition, a Generic script and a Particular script were used to enable smooth movement between scenes in the interactive application.
Following this trial build for the platform, the AR element was developed using Vuforia development kit for Android in a new Unity project. An AR camera and an Image Target were added to each of the scenes with the updated version of Javascript installed and applied to the HTC One M8 for functionality testing.
Final version. As with the experimental trial version, three scenes were created: "Start", Skull" and "Brain". The "Start" scene was created using the same AR camera setup as the interactive scenes, with a semi-transparent panel to allow the user to get live camera feed behind the menu. The "Skull" and "Brain" options were adapted from the AR trial scene, this time including a semi-transparent "Top Panel" with title information and a back button for returning to the "Start" menu. Sagittal sections of the CT skull and MRI brain were used for the final image targets in the Vuforia Developer Portal. Trial builds were then created to test navigation, model placement and model rendering. Test Game Objects were created to trial the functionality from the PC version for the AR. Generic and Particular scripts were imported from the previous trial project, and utilised on trial objects within the AR scene with colours and semi-transparent custom panels corresponding to each object.
Functional development. This was created in an alpha version using the final models with colliders. Information panels were created for each sub-object in each scene, with different colours for better visual differentiation of items. A scrollbar was also installed that would work on touch, and scroll freely. A brief description of the sub-object, highlighting important landmarks and features, together with links to further discussion and resources, was applied to each panel as appropriate.
Simple sphere and capsule colliders were created for each sub-object, and Generic and Particular classes were applied to Empty Game Objects and sub-objects, as trialled initially. In addition, rotation functionality needed to be embedded into the AR application. Trialling demonstrated numerous issues: bugs, rotation that was poorly "clamped", and an inability to adjust the rotation for a moving camera. Therefore, a custom-made rotation facility was created. This was designed to be applied directly to the object needing rotation, so that no camera was involved. Conditional statements were applied that prevented rotation on a simple touch only. Rotation only occurred when there was a change in position on the touch, or a "delta position".
For rotation, clamping was trialled, with the notion that simply adjusting the rotation direction based on the object's orientation would be enough to establish completely intuitive rotation. While this concept proved worthy when the mobile device was held directly in front of the image target, it failed to adjust this performance to odd angles between the image target and the mobile device, as is necessary for a full AR experience.
So instead of clamping the movement, the rotation script was made to retrieve information about the Main Camera's relative axis, and then to adjust the way the rotation of the object behaves based on these vectors and its local axis. In this manner, the rotation behaviour would always feel intuitive and behave as expected, no matter the angle at which the mobile device is being held around the target.
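The camera-relative mapping described above can be expressed compactly outside Unity. The following Python sketch (numpy only) shows the underlying math; the sensitivity factor, axis names and example camera orientation are illustrative assumptions, not the application's actual C# code.

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues rotation matrix about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def drag_rotation(delta, cam_right, cam_up, sensitivity=0.01):
    """Map a touch delta (dx, dy) to an incremental object rotation.

    Horizontal drags rotate about the camera's up vector and vertical drags
    about its right vector, so the gesture feels the same at any viewing
    angle around the image target.
    """
    dx, dy = delta
    R_yaw = rotation_matrix(cam_up, -dx * sensitivity)
    R_pitch = rotation_matrix(cam_right, -dy * sensitivity)
    return R_pitch @ R_yaw   # combined rotation to apply to the model

# Example: a small rightward drag, seen from a camera tilted 30 degrees about x
cam_up = np.array([0.0, np.cos(np.radians(30)), np.sin(np.radians(30))])
cam_right = np.array([1.0, 0.0, 0.0])
print(drag_rotation((40, 0), cam_right, cam_up))
```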
Once all these elements had been refined, the overall aesthetic of the application was regulated and beautified, to maximize the user's intuitive feel and enjoyment of the application, while optimizing the potential educational output by ensuring clean, attractive, informative simplicity.
Results
The methods employed in this study produced the following augmented reality application which allows the user to explore and interact with anatomical models of the canine skull and brain, utilising the functionality depicted in Fig 8A and 8B.
The "Start Up" screen employs a semi-transparent menu panel through which the user gets live camera feed. The buttons included navigate to the following scenes, for the skull, brain or acknowledgements page. The image targets needed for each scene have been rendered on to each button (Fig 9).
The "Skull" button navigates to the skull scene where the user hovers over the image target created from the canine CT. This allows the skull model to be visualised as depicted din Fig 10. The user can rotate the model in all three axes by touching and dragging on the screen. The "Reset" button returns the model to its original position. Selecting the main part of the skull highlights it and triggers a pop up panel providing key anatomical information, which can be scrolled through (Fig 11). Selecting individual anatomical territories will reveal further information related to that site.
From the "Start Up" menu, the user can also select the "Brain" button, navigating to that topic. The user then utilises the image target, created from the canine MRI. This allows the canine brain model to appear, which the user can rotate and reset in the same fashion as the skull. Selection of specific anatomical regions (e.g. forebrain, cerebellum, brainstem etc.) will then reveal further information (Fig 12). In addition, an acknowledgments page was also created.
Discussion
The aim of this project was to create a workflow methodology for development of a mobile augmented reality application, potentially to be used for veterinary students learning basic canine head anatomy, both of the skull and brain, in an exciting and intuitive interface. We have shown, through adoption of a variety of commonly available software packages and imaging, how simple it is to create a mobile AR application to potentially be used in future veterinary education.
Anatomy within the veterinary curriculum represents a significant foundation on which clinical training is built, and forms the basis for communication of diagnosis and treatment to owners and other professionals alike [27,28]. Indeed, teaching within a modern-day veterinary curriculum can include a number of modalities, similar to a medical degree programme [29][30][31]. However, like any curriculum, there are always areas that students find more challenging than others.
Students undertaking veterinary education, like medical training programmes, find the concept of the nervous system, and all related aspects of it, difficult [29,32]. Nowadays, there are a plethora of digital technologies available to aid learning in the medical anatomical field [7-10], but perhaps not so much within the veterinary educational arena [33][34][35]. Some are emerging; however, they are not advancing at the rate that they are within human anatomical education and training.
Therefore, it is timely that we have developed a clear methodology for the creation of digital veterinary education-related products. Given the inadequacies around the teaching of veterinary neuroanatomy, and issues related to visualising structures, modern alternatives are much needed. Indeed, some of the first work in this area was related to computer-assisted learning and the development of learning modules via digital lectures, online tutorials and question and answer packages [34]. Certainly this was innovative for its time, but technology and our understanding of its educational uses have improved significantly.
Previously the "Visible Animal Project" represented the first 3D anatomical animal model designed specifically for veterinary training. However, the fine detail of canine anatomy was not realised, and lacked detail. From then, "Virtual Canine Anatomy: The Head" was designed and implemented into the first year of the veterinary dissection curriculum within Colorado State University. However, it was built upon 2D views and the illusion was portrayed of a 3D object, but was not as engaging as anticipated.
Thus far, to our knowledge, there is no canine computer-aided learning package that offers interactivity and immersion, hence this study using modern-day technologies. To enable this, we followed the advice of Clark and Mayer [36], who argued that for digital technologies to be effective they should use good visuals and text and segment the different aspects of learning. Within this AR workflow, we have applied these elements to canine neuroanatomy to engage the learner in visualising the detailed anatomy using accessible technology, in this case a popular smartphone. Indeed, the application also provides an inviting and interesting environment, is responsive (in that it is visually active), and gives feedback and information related to each of the anatomical areas [37]. Therefore, what we have created has the potential to be engaging in the learning process; however, the next stage of this study would be formal testing with the end users, veterinary students.
Limitations
In relation to the CT scan data, the only slight drawback was that it had to be manually segmented. Although this can take a little longer than more automated techniques, it does allow clear distinctions between bony structures to be identified.
The MRI dataset that was initially to be used, however, did not offer such accessibility and ease of use. The resolution in the T2 dorsal plane showed excellent definition of the larger structures (e.g. cerebrum and cerebellum), but the "out-of-plane" resolution was problematic. The original plan was to segment both the larger structures and the boundaries between grey and white matter; from an educational perspective, this would have allowed these clinically relevant areas to be shown. Grey-versus-white-matter distinction also seemed logically possible under T1 weighting. However, segmenting the full 3D brain and its components produced a cubic and completely unrecognisable model. Such scans are not recommended for indirect volume rendering, i.e. the generation of 3D polygonal meshes to be used in other formats. While applying direct volume rendering modules to the voxel stack itself can provide fascinating insight (giving the user a 3D understanding of internal structures), and can be done in both 3D Slicer and OsiriX, generating clean polygonal meshes representing many structures is not feasible. Clinical MRI datasets are therefore of value for an initial assessment of a technique like the one described here, giving the student an appropriate and accurate volume relationship between the forebrain, brainstem and cerebellum. However, better quality scans (research-grade datasets) would be required if more detailed neuroanatomy is needed.
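To make the indirect volume rendering step concrete, the following minimal sketch extracts a polygonal mesh from a segmented voxel stack with the marching cubes algorithm and writes it to a Wavefront .obj file for downstream cleaning in a 3D package. This is only an illustration in Python with scikit-image: the synthetic sphere volume, the iso-level of 0.5 and the .obj export are assumptions standing in for the real CT/MRI segmentation pipeline used in the project.

```python
# Indirect volume rendering in miniature: binary segmentation -> triangle mesh.
import numpy as np
from skimage import measure

# Synthetic binary segmentation: a sphere inside a 64^3 voxel grid,
# standing in for a segmented skull or brain structure.
z, y, x = np.mgrid[:64, :64, :64]
volume = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(float)

# Marching cubes extracts the iso-surface at level 0.5 as a triangle mesh.
# `spacing` would normally come from the scan's voxel dimensions in mm.
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5,
                                                  spacing=(1.0, 1.0, 1.0))
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")

# Export as Wavefront .obj so the mesh can be refined in Blender or Unity.
with open("segmentation.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for tri in faces:
        f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")  # .obj is 1-indexed
```

Applied to a well-defined CT segmentation, this step yields usable per-structure meshes; applied to an MRI with poor out-of-plane resolution, it produces the blocky, unrecognisable geometry noted above.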
Future work
Existing models in this field are rather rudimentary and serve to illustrate what is possible rather than fulfilling a full educational function. However, with careful refinement and investment in these models, there is much opportunity to advance their anatomical accuracy through, for example, fine detail of the smaller facial bones. These could be further developed with micro-CT data to ensure a more accurate representation of the skeletal anatomy.
A difficulty with this type of work, which merits further research, is refining the accuracy and detail of the canine brain for a mobile augmented reality application. The potential here is for higher resolution MR datasets, dissections and photogrammetry combined to provide a photorealistic and highly accurate reconstruction.
This full incorporation of anatomy, and potentially also of neuroanatomy and neuroscience, would need to be educationally validated by veterinary surgeons who specialise in neurosurgery and clinically applied research. It would ultimately also need validation from the end users, the students. A well-designed trial with both alpha- and beta-phase testing would be necessary to support the application of this type of teaching tool.
Conclusion
The purpose of this project was to establish a workflow methodology for creating an augmented reality application for basic canine head anatomy. We have clearly identified the advantages and drawbacks of different approaches in creating a robust and interactive augmented reality tool for veterinary education. Of course, further validation is needed, both from specialist neurological and neurosurgical veterinary clinicians and from the students themselves. Nevertheless, this process clearly sets out a workflow methodology for creating a novel, innovative and cutting-edge tool that enhances learning opportunities in a visual, tactile and engaging manner. We have shown a basic recipe that those involved in veterinary education can tailor to their own teaching, learning and assessment methods, whether applied locally, nationally or internationally. This type of technological advance is not limited to veterinary education, but can be opened up to provide an immersive learning environment for any subject requiring visual and tactile learning. | 2018-04-29T23:25:43.093Z | 2018-04-26T00:00:00.000 | {
"year": 2018,
"sha1": "5dabf0eb17de8ba82650839c78d45e2bb932d5ac",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0195866&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5dabf0eb17de8ba82650839c78d45e2bb932d5ac",
"s2fieldsofstudy": [
"Computer Science",
"Education",
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
232384833 | pes2o/s2orc | v3-fos-license | Rapid Molecular Tests for Detecting Respiratory Pathogens Reduced the Use of Antibiotics in Children
Multiplex polymerase chain reaction (mPCR) is increasingly being used to diagnose infections caused by respiratory pathogens in pediatric inpatient facilities. mPCR assays detect a broader array of viruses, with higher specificity and sensitivity and faster turnaround than previous assays. We adapted the FilmArray Respiratory Panel (FA-RP) for diagnosing respiratory infections. FA-RP is an in vitro mPCR assay that simultaneously and rapidly (in about 1 h) detects 20 pathogens directly from respiratory specimens. Here, we studied the clinical efficacy of FA-RP in children who underwent testing for respiratory pathogens at Yeungnam University Hospital from November 2015 to August 2018. From November 2015 to June 2016, routine mPCR testing was performed on nasopharyngeal swabs using the routine mPCR kit. From November 2016 to July 2018, mPCR testing was performed using FA-RP. A total of 321 tests by routine mPCR and 594 tests by FA-RP were included. The positive detection rates for routine mPCR and FA-RP were 71.3% and 83.3%, respectively. FA-RP reduced the lead time, waiting time, turnaround time, intravenous (IV) antibiotic use, and length of hospital stay for pediatric patients. The decreased use of antibiotics is expected to reduce antibiotic resistance in children.
Introduction
Acute respiratory infection (ARI) is a leading cause of hospitalization for young children. Among infants in the United States, the median cost of hospitalization due to infectious disease was 2235 USD, with total annual hospital costs of approximately 690 million USD [1]. Respiratory viruses are the most infectious pathogens in humans, with worldwide distribution and broad diversity in type, antigenicity, and patterns of infection, which makes understanding them difficult [2]. ARI is the most common illness, regardless of age or sex [3]. The burden of disease caused by ARIs is substantial, and ARIs are the third most common cause of death worldwide [4]. Infants and young children are particularly vulnerable to respiratory disease. Among them, pneumonia is the predominant cause of childhood mortality, causing nearly 1.3 million deaths per year. Early childhood respiratory infection or environmental exposures may lead to chronic disease in adulthood [5,6].
Although bacteria were previously considered the principal etiological agents of severe respiratory tract infections, the global impact of respiratory viruses has been increasingly recognized in recent years [7][8][9][10]. Between 2001 and 2010, an estimated 43 million emergency department (ED) visits by patients less than 5 years old resulted in diagnoses of ARI (354 per 1000 ED visits) [11]. Over the same period, there were 126 million ED visits with diagnoses of ARI, and antibiotics were prescribed in 61% of these cases. Significant progress has nevertheless been made toward reducing antibiotic use in pediatric patients with ARI: between 2001 and 2010, antibiotic use decreased for patients aged <5 years presenting with antibiotic-inappropriate ARI.
The emergence of antibiotic-resistant bacteria is a serious global challenge. Antibiotic resistance develops when bacteria adapt and grow in the presence of antibiotics. As the development of resistance is linked to the frequency of antibiotic use, the misuse and overuse of antibiotics hastens the development of bacterial drug resistance, rendering existing antibiotics less effective. In Korea, antimicrobials were most frequently prescribed to children younger than 10 years of age. The Korean government has implemented a series of healthcare policies that have resulted in the decrease of antibiotic prescription for the treatment of upper respiratory infections because the causative agents were mostly viral [12].
Multiplex polymerase chain reaction (mPCR) for the diagnosis of respiratory pathogens is increasingly being used in pediatric inpatient facilities [13,14]. The use of mPCR testing for respiratory viruses among hospitalized patients has been significantly associated with decreased healthcare resource utilization, including decreased use of antibiotics and chest radiography and increased use of isolation precautions [15]. The availability of mPCR results for respiratory pathogens within a clinically actionable time can influence antibiotic prescription; in other words, antibiotic use might be reduced if the results of virus testing are obtained rapidly. The FilmArray Respiratory Panel (FA-RP) (BioFire Diagnostics, Inc., Salt Lake City, UT, USA) is the first FDA-cleared assay for the qualitative detection of nucleic acid targets from 20 respiratory pathogens with a turnaround time (TAT) of 1 h [16]. The sensitivity and specificity of this rapid test were similar to those of the existing diagnostic test. The workflow of FA-RP is simple, making it suitable for use as an emergency test. We have previously reported that the use of FA-RP increases diagnostic efficiency and reduces TAT in laboratories [17].
The aim of this study was to determine the real-world clinical impact of FA-RP results on antibiotic use and hospital stay, and in particular whether antibiotic use was reduced as a result of the decreased lead time.

Results

Figure 1 shows the flowchart of patient enrollment. The numbers of patients included in the study in periods I, II, and III were 321, 264, and 330, respectively. The age and sex distributions of patients did not differ significantly between periods. Among the laboratory findings, white blood cell count, hemoglobin levels, aspartate aminotransferase levels, and alanine aminotransferase levels were not significantly different between periods. The platelet count was elevated during period III but was still within the normal range (322,000 ± 124,000 vs. 358,000 ± 128,000; p = 0.007). C-reactive protein levels were not significantly different between periods, and lactate dehydrogenase levels were elevated during period III; however, these results were not clinically significant (Table 1). The percentages of patients with upper respiratory tract infection (URI), lower respiratory tract infection (LRI), and both URI and LRI (URI + LRI) were 28.3%, 64.4%, and 7.2%, respectively.
Virus Detection
The positive detection rates of the routine test and FA-RP were significantly different (71.3% vs. 83.3%, respectively; p < 0.0001). For routine tests performed during period I, negative results were obtained in 28.3% of samples, while 56.7% of samples tested positive for one virus and 15.0% for two viruses. With FA-RP, viruses were detected in 83.3% of samples: one virus in 33.8% of all tests, two viruses in 27.9% of samples, and three or more viruses in 20.2% of samples. Bacteria were detected in 3.5% of samples, and virus-bacteria co-infection was seen in 13 (2.2%) of the samples tested by FA-RP (Table 2).
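As a rough illustration, the detection-rate comparison can be reproduced with a chi-square test on the 2 x 2 contingency table. The counts below are back-calculated from the reported percentages and totals (71.3% of 321 routine tests; 83.3% of 594 FA-RP tests), so they are approximations rather than the study's raw data.

```python
# Chi-square test of positive-detection rates: routine mPCR vs FA-RP.
from scipy.stats import chi2_contingency

routine_pos, routine_n = 229, 321  # ~71.3% positive, back-calculated
farp_pos, farp_n = 495, 594        # ~83.3% positive, back-calculated

table = [[routine_pos, routine_n - routine_pos],   # routine: pos / neg
         [farp_pos, farp_n - farp_pos]]            # FA-RP:   pos / neg
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.1e}")  # p well below 0.0001
```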
Test Time, Antibiotic Use, and Hospital Stay
Lead time, waiting time, and TAT were shorter in the FA-RP group (p < 0.001) (Figure 3). The frequency of intravenous (IV) antibiotic use during period I, using the routine test, was 51.7%, and the rates of IV antibiotic use during periods II and III, using FA-RP, were 52.7% and 39.4%, respectively. In particular, the frequency of IV antibiotic use decreased significantly during period III (p = 0.002). There was no significant difference in the frequency of oral antibiotic use during the study period. There was also no significant difference in the frequency and duration of IV antibiotic use between period I and period II. However, between period I and period III, the frequency of IV antibiotic use, the duration of use, and the duration of IV + oral antibiotic use decreased significantly (Table 4, Figure 4). The duration of IV antibiotic use was significantly reduced in the FA-RP group compared to the routine test group (p = 0.015). The length of hospital stay was significantly reduced in period III compared to period I (p = 0.004). Sixty-nine patients were discharged without hospitalization after mPCR testing in the ED. Of these, 59 patients underwent FA-RP testing, 49 (83.1%) of whom tested positive and 10 (16.9%) of whom tested negative. Ten discharged patients underwent routine testing, six of whom tested positive and four of whom tested negative.
Discussion
We compared two respiratory pathogen tests performed during a three-year period and compared the clinical management and patient characteristics with respect to the method of pathogen testing. We reported in a previous study that the use of FA-RP significantly reduces the mean waiting time, TAT, and lead time compared to the routine test [17]. Although routine mPCR methods that combine nucleic acid extraction and amplification are commonly used, they cannot serve as emergency tests because they take a long time to yield results. As routine mPCR testing was run in batches, review of the results and follow-up actions were performed only after initial isolation or treatment. Even when a sample was delivered and received quickly, the result was not available quickly, so rapid sample handling offered little benefit. Previous studies on the usefulness of FA-RP as a detector of respiratory viruses found that the test was effective in detecting viruses that went undetected by previous routine mPCR methods. This is of interest as it indicates the usefulness of the technique in reducing both unnecessary antibiotic use and invasive investigation. However, the role mPCR can play in the diagnosis of ARI remains unclear, because its use has not yet been recommended in national and international guidelines due to a lack of research on its cost-effectiveness [18].
The impact of reverse transcription (RT)-PCR testing on clinical management, antibiotic use, and the length of hospital stay for children with respiratory infections has been studied, with respiratory sample results being reported to clinicians within 12-36 h for the study group and after 4 weeks in the control group. In this case, rapid reporting did not result in a change in patient care. There were no significant differences between the groups with respect to hospital admission, length of hospital stay, or duration of antibiotic use. The authors concluded that although RT-PCR testing had a high yield of viral diagnoses, rapid communication of these results did not lead to shorter hospital stay, decreases in hospital admissions, or decreased antibiotic use for children with ARIs [14].
In a similar randomized trial in adults, the mPCR results were available in 24 h for the study group and in 7 days for the control group. mPCR testing for respiratory viruses with results available within 24 h did not reduce the consumption of antibiotics or the length of hospital stay in adults in the emergency unit [19]. However, Brendish et al. reported that a point-of-care test with a TAT of ≤1.6 h was associated with earlier hospital discharge and earlier discontinuation of antibiotics compared to tests with longer TATs [20]. In our study, the TAT was less than 3 h, shorter than in the previous studies. Once the mPCR results were obtained, the physician could apply them to clinical practice, leading to a decrease in the frequency and duration of antibiotic use. Thus, early notification of the test results led to significant changes in clinical practice. Although the frequencies of detected viruses and bacteria differed between periods, the resulting differences in the duration and frequency of antibiotic use are unlikely to be substantial, because the incidence of pertussis and of Mycoplasma and Chlamydia infections during periods II and III was low.
In a previous randomized controlled trial on the impact of early and rapid diagnosis of viral infections in children with febrile respiratory tract illness in the ED, children were randomly assigned to either undergo mPCR or receive routine care [21]. mPCR samples were collected by the research nurses at triage and hand-delivered to the laboratory. The results were added to the patient chart as soon as they became available, and as such, this trial intervention did not lend itself to blinding. Patient treatment was otherwise comparable for both groups. For the control patients, the mPCR tests could be ordered by the physician after assessment and performed at the bedside by the usual bedside nurses. The TAT of the mPCR tests was 30-150 min. In that study, no statistically significant differences were found in the ED visit length, rate of ancillary testing, or antibiotic prescription rate during the ED visit between the study groups. There was, however, a significant reduction in antibiotic prescription after ED discharge. As in our study, the short TAT seems to have resulted in reduced antibiotic use.
The Infectious Disease Society of America guidelines suggest the use of rapid viral testing for respiratory pathogens as a way to reduce the inappropriate use of antibiotics. However, as of now, it is only a weak recommendation due to low-quality evidence. Although rapid viral testing has the potential to reduce the inappropriate use of antibiotics, the results have been inconsistent [22]. Our results suggest that mPCR testing with a TAT of no more than 3 h is important for reducing antibiotic use as a step toward antibiotic stewardship.
Kreitmeyr et al. conducted a prospective study on antibiotic stewardship programs (ASPs) that aim to reduce antibiotic consumption. ASPs targeted the infectious diseases (ID) ward rounds (prospective audit with feedback), ID consultation services, and internal guidelines on empiric antibiotic therapy. The study concluded that the implementation of an ASP was associated with a profound improvement in rational antibiotic use, and no adverse effects regarding the length of hospital stay or in-hospital mortality were observed [23].
In our study, with the introduction of the FA-RP test, the test time was shortened, but there was no significant difference in the frequency and duration of antibiotic use between periods I and II. However, over time, in period III, the frequency and duration of antibiotic use were significantly decreased. This suggests that despite the introduction of effective tests, it takes time to implement ASPs and to change clinical practice.
In a study of young Italian doctors' knowledge, attitudes, and practice regarding ASPs, only 20-40% of doctors answered questions on vancomycin-resistant enterococci (VRE), carbapenem-resistant Enterobacteriaceae (CRE), extended-spectrum β-lactamase-producing enterobacteria (ESBL), and methicillin-resistant Staphylococcus aureus (MRSA) correctly. In addition, 81% of participants said that ASPs were not properly addressed during their medical training, and 71% said they did not learn appropriate examples from their tutors. Proper ASP education for medical doctors is therefore urgently needed [24].
In this study, we analyzed the usefulness of two viral tests performed at different times in patients who visited the hospital for respiratory disease and fever. Since the emphasis was placed on comparing the effectiveness of the tests, the patients' clinical characteristics were not analyzed in detail.
In our study, there were viral-viral co-infections and viral-bacterial co-infections. Several studies have been conducted on the clinical significance of viral co-infection, and both co-infection and interference between viruses are known to occur. In particular, RSV is known to account for most childhood infections [25]. Although this is beyond the scope of the present study, as various co-infections are being confirmed through the wider use of mPCR testing, further studies on this aspect are warranted.
While complex policy changes and management are an important part of ASPs, they constitute a challenging task, requiring time, effort, and careful implementation. Our findings are significant as they demonstrate that simply changing to an mPCR kit with a faster TAT resulted in reduced antibiotic use. Careful management of antibiotics, the use of ASPs, and targeted medical education for pediatricians and pediatric hospitals are important for the overall reduction of antibiotic use [22,23,26]. By providing evidence for an efficient test for pediatric ARIs, we have improved the ability of clinicians to implement good ASPs.
In conclusion, despite the limitations of the study, TAT was shortened through rapid mPCR testing, and we confirmed that the use of antibiotics could be reduced. In the future, rapid mPCR testing should be applied to pediatric patients with febrile illness and respiratory disease for whom antibiotic prescription is being considered, in order to reduce the duration of antibiotic use, the length of hospitalization, and medical costs.
Materials and Methods
We conducted a retrospective cohort study of children hospitalized at Yeungnam University Hospital from November 2015 to August 2018 who underwent testing for respiratory pathogens either in the ED prior to admission or within the first 2 days of hospitalization. Virus tests were performed on patients with fever accompanied by diseases of the respiratory system, such as URI, bronchiolitis, pneumonia, croup, and tonsillitis. We excluded patients with chronic disease requiring treatment for more than 3 months and infants under 29 days of age. From November 2015 to June 2016 (period I), routine mPCR testing was performed on nasopharyngeal swabs using the Anyplex II RV16 detection kit (RV16; Seegene, Seoul, Korea). From July 2016 to July 2018 (periods II and III), the newly adopted mPCR testing was performed on nasopharyngeal swabs using the FA-RP (BioFire Diagnostics, Inc., Salt Lake City, UT, USA). We divided this period into periods II (1 July 2016 to 30 June 2017) and III (1 July 2017 to 31 July 2018). After the introduction of the rapid mPCR test method, we divided the period into one-year intervals to examine whether medical practices, such as antibiotic prescription and length of hospital stay, changed.
The medical records of enrolled patients were retrospectively reviewed to confirm laboratory results, identify patient characteristics, and determine the duration of oral and intravenous (IV) antibiotic use and that of hospitalization. mPCR testing was used to detect viruses and bacteria. Positive detection rates of viruses and bacteria were determined. The waiting time (time from prescription to submission of a specimen to the laboratory), TAT (time from submission of a specimen to the final result), and lead time (time from prescription to the final result) of the routine test and FA-RP test were analyzed. The indications for antibiotic use were a negative result on the mPCR test; unstable clinical symptoms before the virus test result; or elevated levels of inflammatory markers such as erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), and procalcitonin.
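The three interval metrics follow directly from the order and laboratory timestamps; a minimal sketch is shown below, where the timestamp values are invented for illustration rather than taken from the study records.

```python
# Deriving waiting time, TAT and lead time from three timestamps.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M"
prescribed = datetime.strptime("2017-01-10 09:10", fmt)  # test ordered
received = datetime.strptime("2017-01-10 09:40", fmt)    # specimen at the lab
resulted = datetime.strptime("2017-01-10 11:05", fmt)    # final report issued

waiting_time = received - prescribed  # prescription -> specimen submission
tat = resulted - received             # submission -> final result
lead_time = resulted - prescribed     # prescription -> final result

for name, delta in [("waiting time", waiting_time), ("TAT", tat),
                    ("lead time", lead_time)]:
    print(f"{name}: {delta.total_seconds() / 3600:.2f} h")
```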
This study was approved by the institutional review boards (IRB) of Yeungnam University Hospital (IRB approval number: YUMC 2019-11-026). Informed consent was waived due to the retrospective nature of this study.
Statistical Analyses
The statistical package SPSS version 25.0 (SPSS Inc., Chicago, IL, USA) was used to analyze the data. Categorical variables, including patient characteristics, pathogen detection frequency, and antibiotic use, were compared using Pearson's chi-square test. The t-test was used to compare laboratory values, duration of hospital stay, and duration of antibiotic use. Statistical significance was defined as p < 0.05. | 2021-03-29T05:23:12.829Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "aeef34d6c69f7b2d13567b45c577f9b14b02e68b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/10/3/283/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aeef34d6c69f7b2d13567b45c577f9b14b02e68b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256561001 | pes2o/s2orc | v3-fos-license | Identification and Characterization of Genomic Predictors of Sarcopenia and Sarcopenic Obesity Using UK Biobank Data
The substantial decline in skeletal muscle mass, strength, and gait speed is a sign of severe sarcopenia, which may partly depend on genetic risk factors. So far, hundreds of genome-wide significant single nucleotide polymorphisms (SNPs) associated with handgrip strength, lean mass and walking pace have been identified in the UK Biobank cohort; however, their pleiotropic effects on all three phenotypes have not been investigated. By combining summary statistics of genome-wide association studies (GWAS) of handgrip strength, lean mass and walking pace, we have identified 78 independent SNPs (from 73 loci) associated with all three traits with consistent effect directions. Of the 78 SNPs, 55 polymorphisms were also associated with body fat percentage and 25 polymorphisms with type 2 diabetes (T2D), indicating that sarcopenia, obesity and T2D share many common risk alleles. Follow-up bioinformatic analysis revealed that sarcopenia risk alleles were associated with tiredness, falls in the last year, neuroticism, alcohol intake frequency, smoking, time spent watching television, higher salt, white bread, and processed meat intake; whereas protective alleles were positively associated with bone mineral density, serum testosterone, IGF1, and 25-hydroxyvitamin D levels, height, intelligence, cognitive performance, educational attainment, income, physical activity, ground coffee drinking and healthier diet (muesli, cereal, wholemeal or wholegrain bread, potassium, magnesium, cheese, oily fish, protein, water, fruit, and vegetable intake). Furthermore, the literature data suggest that single-bout resistance exercise may induce significant changes in the expression of 26 of the 73 implicated genes in m. vastus lateralis, which may partly explain beneficial effects of strength training in the prevention and treatment of sarcopenia. In conclusion, we have identified and characterized 78 SNPs associated with sarcopenia and 55 SNPs with sarcopenic obesity in European-ancestry individuals from the UK Biobank.
Introduction
Sarcopenia is an age-associated condition characterized by the loss of skeletal muscle strength and muscle mass, and in severe cases, followed by reduced physical performance (e.g., slower gait speed) [1]. In most cases (53-84%), sarcopenia co-exists with obesity [2,3]. Older adults are usually identified as having sarcopenic obesity if low muscle mass and strength, as well as increased adiposity are present [4][5][6]. Furthermore, individuals with sarcopenic obesity can be stratified into stage I (absence of clinical complications) or stage II (presence of clinical complications) [7].
The prevalence of sarcopenia and sarcopenic obesity depends on the diagnostic criteria used to describe these conditions, with a range of 10-27% for sarcopenia [6] and 10-23% for sarcopenic obesity [8]. Both sarcopenia and sarcopenic obesity are related to negative health outcomes, such as increased risk of falls, disability, frailty, osteoporosis, type 2 diabetes (T2D), metabolic syndrome, poor glycaemic profiles (i.e., hyperglycaemia, high HbA1c, insulin resistance, etc.), cardiovascular diseases, dyslipidaemia, poor neurocognitive functioning and quality of life, decreased health span, and mortality [9][10][11][12][13][14]. Importantly, while declines in lean mass could contribute to further gains in fat mass, high fat mass may also lead to accelerated loss of lean mass [10].
There is a wide range of individual variability in skeletal muscle quantity and quality, even under the same biological (age, sex) and environmental (level and type of physical activity, macro- and micronutrient intake, etc.) conditions. This variability partly depends on genetic factors, with high heritability for muscle strength (49-56%) [15] and fat-free mass (45-76%) [16]. So far, hundreds of genome-wide significant (p < 5 × 10⁻⁸) single nucleotide polymorphisms (SNPs) associated with sarcopenia-related traits, such as handgrip strength (170 SNPs) [17][18][19], appendicular lean mass (1059 SNPs) [20] and walking pace (70 SNPs) [21], have been individually identified in the UK Biobank cohort. However, their pleiotropic effects on all three traits have not been investigated at a systemic level.
Since muscle strength has been shown to be positively correlated with muscle mass and walking speed [22,23], we hypothesized that these phenotypes might be positively associated at a genetic level as well. One might also suggest that alleles associated with all three sarcopenia-related traits (i.e., low muscle strength, low lean mass and slow walking pace) can be considered as the most robust predictors of sarcopenia. On the other hand, risk alleles for both sarcopenia and increased adiposity can be considered as genomic predictors of sarcopenic obesity.
The aims of the present study were threefold: (1) to identify SNPs with pleiotropic effects on handgrip strength, appendicular lean mass, usual walking pace and fat percentage using summary statistics from the UK Biobank cohort; (2) to identify the potential mechanism of action of each SNP on sarcopenia-related traits by searching for intermediate phenotypes and using mice knockout models; and (3) to investigate the effect of resistance exercise on the expression of sarcopenia-related genes using bioinformatic tools.
UK Biobank Study
The UK Biobank is an open-access large prospective study with phenotypic and genotypic data from more than 500,000 participants (>90% of participants are of white ethnicity) with an age range for inclusion of 40-69 years when recruited in 2006-2010 [24]. UK Biobank has approval from the North West Multi-centre Research Ethics Committee (MREC) as a Research Tissue Bank (RTB) approval (reference 11/NW/0382). Full written informed consent was obtained from all participants prior to the study.
Identification of Genomic Predictors of Sarcopenia and Sarcopenic Obesity Using UK Biobank Data
In the first stage, we used publicly available summary statistics from five published genome-wide association studies (GWASes) on handgrip strength [17][18][19], appendicular lean mass [20] and walking speed [21], making an initial list of 1299 genome-wide significant (p < 5 × 10⁻⁸) SNPs (Table 1). To identify correspondences between phenotypes (for example, to find whether a specific SNP associated with handgrip strength is also associated with appendicular lean mass and walking pace), we used publicly available summary statistics from GWASes on appendicular lean mass [20], appendicular lean mass in older adults [25], handgrip strength (left) [26], handgrip strength (right) [26], handgrip strength in older adults (weakness) [19] and usual walking pace [26] with a less stringent p value threshold (p < 0.005) (Table 1). SNPs associated with all three phenotypes (i.e., with appendicular lean mass, handgrip strength and walking pace) with consistent effect directions were considered as potential genomic predictors of sarcopenia. In the second stage, to test the hypothesis that sarcopenia-related SNPs are also associated with other health-related traits, we used summary statistics from GWASes on body fat percentage [26], type 2 diabetes [26], heel bone mineral density [27], frequency of tiredness [26], self-reported tiredness [26], recent feelings of tiredness or low energy [26], and falls in the last year [26] (Table 1). Risk alleles for both sarcopenia and increased body fat percentage were considered as genomic predictors of sarcopenic obesity. Furthermore, risk alleles for both sarcopenic obesity and type 2 diabetes were considered as genomic predictors of sarcopenic diabesity. We also used other traits (biochemical, anthropometric, physiological, behavioral) to identify shared genetic architecture between sarcopenia-related traits and lifestyle exposures (Table 1).
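A sketch of this two-stage filter is given below, using toy pandas data frames in place of the published summary-statistics files; the column names are assumptions, and the subsequent LD pruning (exclusion of SNPs with r² > 0.2) is not shown.

```python
import pandas as pd

# Toy summary statistics; real inputs would be the published GWAS files.
grip = pd.DataFrame({"SNP": ["rs1", "rs2", "rs3"],
                     "beta": [0.04, -0.03, 0.05],
                     "p": [1e-9, 4e-8, 2e-10]})
lean = pd.DataFrame({"SNP": ["rs1", "rs2", "rs3"],
                     "beta": [0.02, 0.01, 0.03],
                     "p": [1e-3, 0.2, 4e-4]})
pace = pd.DataFrame({"SNP": ["rs1", "rs2", "rs3"],
                     "beta": [0.01, -0.02, 0.02],
                     "p": [2e-3, 1e-3, 1e-3]})

merged = (grip.merge(lean, on="SNP", suffixes=("_grip", "_lean"))
              .merge(pace.rename(columns={"beta": "beta_pace", "p": "p_pace"}),
                     on="SNP"))

# Consistent effect directions across all three traits.
same_dir = ((merged.beta_grip > 0) == (merged.beta_lean > 0)) & \
           ((merged.beta_grip > 0) == (merged.beta_pace > 0))

candidates = merged[(merged.p_grip < 5e-8)      # genome-wide significant
                    & (merged.p_lean < 0.005)   # nominal support elsewhere
                    & (merged.p_pace < 0.005)
                    & same_dir]
print(candidates.SNP.tolist())  # rs1 and rs3 pass; rs2 fails the p filter
```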
Analysis of Sarcopenia-Related Polygenic Profiles in European Populations
Raw genetic data of sarcopenia-related SNPs from 503 anonymized individuals of European origin from the 1000 Genomes project (Phase 3) [35] were used to calculate a genetic sum score of risk alleles for each individual. This cohort was composed of five subgroups: British from England and Scotland (n = 91), Finnish in Finland (n = 99), Toscani in Italia (n = 107), Iberian populations in Spain (n = 107), and Utah residents (CEPH) with Northern and Western European ancestry (n = 99). Unweighted polygenic risk scores (coded as 0, 1 and 2 for homozygous non-risk genotype, heterozygous genotype and homozygous risk genotype, respectively) were developed for the prediction of sarcopenia, sarcopenic obesity and sarcopenic diabesity in European populations. Individuals were evenly divided into 5 groups (20% each) with high (high number of risk alleles), above average, average, below average and low (low number of risk alleles) risks for sarcopenia, sarcopenic obesity and sarcopenic diabesity.
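The sketch below illustrates the unweighted sum score and the quintile grouping on a simulated genotype matrix; real input would be the per-SNP risk-allele counts called from the 1000 Genomes data, so all numbers here are placeholders.

```python
# Unweighted polygenic score: sum of risk-allele counts across 78 SNPs,
# then an even split of the 503 individuals into five risk groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_individuals, n_snps = 503, 78
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps))  # 0/1/2 per SNP

score = genotypes.sum(axis=1)  # unweighted sum score per individual
risk_group = pd.qcut(score, q=5, labels=["low", "below average", "average",
                                         "above average", "high"])
print(pd.Series(risk_group).value_counts().sort_index())  # ~100 per group
```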
Analysis of Association of Sarcopenia-Related SNPs with Gene Expression
The Genotype-Tissue Expression (GTEx) portal [36] was used to analyze the association between sarcopenia-related SNPs and the expression of genes in different tissues, with a focus on skeletal muscle tissue and the nervous system (p < 0.05). The GTEx project is an ongoing effort to build a comprehensive public resource to study tissue-specific gene expression and regulation. Samples were collected from 49 tissue sites across >800 individuals, primarily for molecular assays including whole genome sequencing (WGS), whole exome sequencing (WES), and RNA-Seq [37]. SNPs that were significantly (p < 0.05) correlated with the expression of genes (levels of mRNAs) were considered as expression quantitative trait loci (eQTLs).
Analysis of Effects of Knockouts of Implicated Genes on Sarcopenia-Related Traits in Mice
Data from the International Mouse Phenotyping Consortium (IMPC) database [38] were used to assess the effects (p < 0.05) of gene knockouts on lean mass, fat mass and grip strength in mice. The IMPC web portal makes available curated, integrated and analyzed knockout mouse phenotyping data from 9000 mouse lines [39].
Analysis of Effects of Strength Training on the Expression of Sarcopenia-Related Genes
A publicly available human skeletal muscle transcriptome dataset was used to assess the significant effects (p < 0.05) of a single bout of resistance exercise on the mRNA expression of the sarcopenia-related genes in m. vastus lateralis of seven young men (age 23.3 ± 0.6 years) at the 2.5 h and 5 h timepoints compared to baseline [40].
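The underlying timepoint comparison can be sketched as a paired test of post-exercise versus baseline expression across the seven subjects, as below; the expression values are simulated stand-ins for the published dataset, and the original analysis may have used a different test.

```python
# Paired comparison of one gene's expression: 2.5 h post-exercise vs baseline.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_subjects = 7
baseline = rng.normal(10.0, 1.0, size=n_subjects)             # log2 expression
post_2p5h = baseline + rng.normal(0.8, 0.5, size=n_subjects)  # simulated induction

t, p = ttest_rel(post_2p5h, baseline)
print(f"paired t = {t:.2f}, p = {p:.3f}")  # called significant if p < 0.05
```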
Potential Genomic Predictors of Sarcopenia and Sarcopenic Obesity
A flow diagram displaying the study design and the main findings is shown in Figure 1. First, by combining data from five published GWASes on handgrip strength [17][18][19], appendicular lean mass [20] and walking speed [21], we made a list of 1299 genome-wide significant SNPs. Of those, 78 SNPs (from 73 loci) were independently (i.e., all SNPs exceeding a linkage disequilibrium (LD) threshold of r² > 0.2 were excluded) associated (p < 0.005) with all three traits, namely appendicular lean mass, handgrip strength and walking pace, with consistent effect directions (Table 2). These 78 SNPs can be considered as potential genomic predictors of sarcopenia and can be used as instruments for Mendelian randomization analysis to study and uncover causal relationships between sarcopenia and other traits.
Next, by using summary statistics from GWASes on other health-related traits, we found that 55 of the 78 SNPs were associated with body fat percentage with consistent effect directions (i.e., the same allele is a risk variant for both sarcopenia and adiposity) and can be regarded as potential genomic predictors of sarcopenic obesity. Of these 55 SNPs, 21 were also associated with the risk of T2D with consistent effect directions (potential genomic predictors of sarcopenic diabesity) (Table 2).
Polygenic Analysis of Sarcopenia, Sarcopenic Obesity and Sarcopenic Diabesity
A genetic sum score of sarcopenia risk alleles composed of 78 SNPs was calculated for each of the 503 individuals of European origin from the 1000 Genomes project. Individuals were then evenly (by ~20%) divided into five groups. Carriers of 58-68 risk alleles had the lowest risk of sarcopenia, whereas carriers of 81-95 risk alleles had the highest risk. The distribution of risk alleles in each subgroup for sarcopenia, sarcopenic obesity and sarcopenic diabesity is shown in Table 3 and may be used to improve prediction of these disease states when incorporated into existing clinical risk tools in individuals of European origin.
By comparing human (GWAS, GTEx) and mice genes knockout data, we identified eight genes with the same direction of association. More specifically, while protective alleles in ADCY3 (rs10203386 T), BCKDHB (rs9350850 C), CEP192 (rs1786263 G), H1FX (rs4073154 G), and POLD3 (rs72977282 T) genes were associated with the increased expression of these genes in human tissues, the knockout of the corresponding genes in mice (Adcy3, Bckdhb, Cep192, H1fx, and Pold3) led to the decrease in lean mass and strength (with increase in fat mass).
On the other hand, while protective alleles in BTRC (rs10883618 A), LCORL (rs1472852 C), MTCH2 (rs11039324 G) genes were associated with a decreased expression of these genes in human tissues, the knockout of the corresponding genes in mice (Btrc, Lcorl, and Mtch2) led to the increase in lean mass and strength (with decrease in fat mass) (Supplementary Table S1).
Discussion
In this study, we identified and characterized 78 pleiotropic genomic predictors of sarcopenia based on previously discovered genome-wide significant SNPs associated with handgrip strength, appendicular lean mass and walking pace. Of the 78 SNPs, 55 polymorphisms were also associated with body fat percentage and 25 polymorphisms with type 2 diabetes (T2D), indicating that sarcopenia, obesity and T2D share many common risk alleles. It is, therefore, unsurprising that, according to data from the National Health and Nutrition Examination Survey (NHANES), 83.6% of women and 79.3% of men (aged 60 and older) with sarcopenia also have obesity (i.e., sarcopenic obesity) [2].
Interestingly, of the 73 implicated genes (Supplementary Table S1), knockouts of 27 genes in mouse models led to functional consequences such as changes in lean mass, fat mass and grip strength.
Of the 78 SNPs, 58 were identified as eQTL SNPs that correlated with expression of genes in various tissues, including skeletal muscle and the nervous system, indicating that they are likely to be functional and may influence multiple traits. Indeed, we found that risk alleles were also associated with other intermediate phenotypes of sarcopenia, namely tiredness, falls in the last year, low physical activity, and low bone mineral density. Furthermore, risk alleles were associated with neuroticism, time spent watching television, alcohol intake, smoking and poor diet (higher salt, white bread, and processed meat intake), whereas protective alleles were positively associated with serum testosterone, IGF1, and 25-hydroxyvitamin D levels, height, intelligence, cognitive performance, educational attainment, income, ground coffee drinking and healthier diet (muesli, cereal, wholemeal or wholegrain bread, potassium, magnesium, cheese, oily fish, protein, water, fruit, and vegetable intake). This is in line with previous studies showing that low educational attainment [41], neuroticism [42], low testosterone levels [43], short stature [44], high alcohol [45], processed meat [46] and salt [47] intake, sedentary behavior (such as watching television) [48], smoking and physical inactivity [49] are associated with an increased risk of sarcopenia or low muscle strength, whereas coffee, magnesium, potassium, protein, vitamin D, water, oily fish, fruit and vegetable intake [46,[50][51][52][53][54] have protective effects against sarcopenia.
Future research is needed to test interventional strategies focusing on all these factors to evaluate improvement in muscle quality and quantity. Given that resistance exercise induces significant changes in the expression of 26 genes (out of 73 implicated genes) in human skeletal muscle compared to the pre-training state, our findings also partly explain beneficial effects of strength training in the prevention and treatment of sarcopenia [55].
The link between the 78 SNPs and muscle strength and lean mass indicates that these markers may be important not only in the general population, but also in athletes. Indeed, within this panel of markers, some of the protective alleles have been reported to be over-represented in elite sprinters (E2F3 rs4134943 T, FHL2 rs55680124 C, GDF5 rs143384 G, SLC39A8 rs13107325 C, and ZNF568 rs1667369 A) [56] and elite strength athletes (ADCY3 rs10203386 T, ADPGK rs4776614 C, MMS22L rs9320823 T and ZKSCAN5 rs3843540 C) [57] compared to controls. Furthermore, the GBF1 rs2273555 G, MLN rs12055409 G, and MMS22L rs9320823 T alleles (all protective) were found to be positively associated with weightlifting performance [58,59]. One of the markers (rs3734254) is located in the PPARD gene and is in high linkage disequilibrium (LD) with the PPARD rs2016520 SNP, which has previously been associated with endurance athlete status [60].
Our study presents novel data on sarcopenia-related genetic markers. However, there are also limitations. First, our findings were based on summary statistics of three different phenotypes discovered in the whole UK Biobank sample. To confirm the association between the identified SNPs, sarcopenia and sarcopenic obesity, a case-control design (individuals with confirmed sarcopenia vs. individuals with normal muscle quality and quantity) is needed in independent studies. Second, our results were obtained using genomic data of European-ancestry individuals from the UK Biobank; therefore, the set of 78 SNPs should be analyzed for association with sarcopenia in other populations before implementation in practice. Third, we recognise the small sample size (n = 7) of the study of transcriptomic responses to a single bout of resistance exercise and encourage independent replication in larger cohorts.
Conclusions
In conclusion, we have identified and characterized 78 SNPs associated with sarcopenia and 55 SNPs with sarcopenic obesity that highlight shared genetic architecture between sarcopenia-related traits and lifestyle exposures.
We strongly suspect that many additional common polymorphisms, and probably rare mutations as well, will be shown to be associated with sarcopenia-related traits in due course; the 78 polymorphisms we have identified likely constitute only a small fraction of the genetic factors that influence muscle strength, muscle mass and walking pace. Looking to the future, however, when thousands of polymorphisms that contribute to the variability in sarcopenia-related traits have been discovered, the power of such information (in conjunction with standard measurement data) as a practical tool for clinicians will emerge.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15030758/s1, Table S1: Association of top sarcopenia-related SNPs and implicated genes with health-related traits, gene expression and responses to resistance training.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the UK Biobank study.
Data Availability Statement:
The data presented in this study are publicly available online at https://genetics.opentargets.org (accessed on 27 December 2022). | 2023-02-13T05:50:29.974Z | 0001-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "458b762fff7593dd6a48704eff871a2e24354885",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/nu15030758",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "458b762fff7593dd6a48704eff871a2e24354885",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12991801 | pes2o/s2orc | v3-fos-license | Does Subjective Rating Reflect Behavioural Coding? Personality in 2 Month-Old Dog Puppies: An Open-Field Test and Adjective-Based Questionnaire
A number of studies have recently investigated personality traits in non-human species, with the dog gaining popularity as a subject species for research in this area. Recent research has shown the consistency of personality traits across both context and time for adult dogs, both when using questionnaire-based methods of investigation and behavioural analyses of the dogs' behaviour. However, only a few studies have assessed the correspondence between these two methods, with results varying considerably across studies. Furthermore, most studies have focused on adult dogs, despite the fact that an understanding of personality traits in young puppies may be important for research focusing on the genetic basis of personality traits. In the current study, we sought to evaluate the correspondence between a questionnaire-based method and in-depth analyses of the behaviour of 2-month-old puppies in an open-field test in which a number of both social and non-social stimuli were presented to the subjects. We further evaluated consistency of traits over time by re-testing a subset of puppies. The correspondence between methods was high, and test-retest consistency (for the main trait) was also good using both evaluation methods. Results showed clear factors referring to the two main personality traits, 'extroversion' (i.e. the enthusiastic, exuberant approach to the stimuli) and 'neuroticism' (i.e. the more cautious and fearful approach to the stimuli), potentially similar to the shyness-boldness dimension found in previous studies. Furthermore, both methods identified an 'amicability' dimension, expressing the positive interactions the pups directed at the human stranger, and a 'reservedness' dimension, which identified pups who largely chose not to interact with the stimuli and were defined as quiet and not nosey in the questionnaire.
Introduction
In recent years there has been an increasing interest in the evolutionary significance of the concept of personality, making it an appealing research topic for comparative psychologists, biologists, and evolutionary scientists alike. As a consequence, many species have become the object of interest for personality-researchers [1], and one species that has received increasing attention is the domestic dog (Canis familiaris) [2][3][4]. The reasons for this interest are varied, but the potential applicability, for example in helping to find appropriate homes for shelter dogs [5][6][7], selecting the most appropriate puppies for working dog training [8][9][10][11][12][13][14], and the early detection of behavioural problems that may hence obtain rapid treatment [15,16], has had a strong impact on the field.
Various methods have been used to assess the personality of dogs. Behavioural testing has been adopted in a number of studies using a variety of tests [17][18][19]. This method involves presenting a selection of stimuli/situations to dogs in a standardized manner, to allow comparisons to be made. Behavioural coding can then be combined either with detailed analyses of behaviour (mostly done at a later stage from video) or with rater coding of behavioural categories (which can be done in situ, from video, or both). Detailed analyses of behaviour normally involve measuring the frequency, duration and latency of specific behaviours, for example coding the occurrence of bared teeth, raised hackles, and growling [20,21]. Rater coding is normally done on a predetermined scale [22], which can measure either the presence/absence of a particular behaviour or behaviours (e.g. whether the dogs do or do not exhibit stress signals, which would include lip-licking, yawning, etc.) or the intensity of its exhibition (e.g. how a dog rates on a scale of 1 to 5/7 for, for example, aggression; each score would then correspond to the presence/absence of a certain combination of behaviours relating to aggression, such as growling, baring teeth, snapping, etc.) [20].
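As a small illustration of what detailed behavioural coding produces, the sketch below computes the frequency, total duration, and latency of one behaviour from a coded event log; the behaviour labels and timings are invented examples, not data from any of the cited studies.

```python
# Frequency, duration and latency of a behaviour from coded (start, end) bouts.
events = [
    ("approach_stimulus", 12.0, 15.5),  # (behaviour, start_s, end_s)
    ("tail_wag", 20.0, 26.0),
    ("approach_stimulus", 48.0, 50.0),
]

def metrics(log, behaviour, session_length=300.0):
    bouts = [(s, e) for b, s, e in log if b == behaviour]
    frequency = len(bouts)
    duration = sum(e - s for s, e in bouts)
    # Latency = time to first occurrence; by convention the full session
    # length if the behaviour was never shown.
    latency = bouts[0][0] if bouts else session_length
    return frequency, duration, latency

print(metrics(events, "approach_stimulus"))  # (2, 5.5, 12.0)
```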
Finally, numerous studies have used a questionnaire-based approach, where typically either the owner or a person well acquainted with the dog rates the dog's behaviour in everyday situations [23]. In a number of cases, the questionnaire was validated using a behavioural test or other measure (vet visit in the case of behavioural problems for example) by comparing results obtained with the two different methods [21,[24][25][26]. However, only a few attempts have been made to evaluate the degree to which proposed canine personality dimensions predict observed behaviour of individuals in contexts different from those in which they were developed [25][26][27].
Each method used independently has its pros and cons. The problem with behavioural testing is that, aside from the time-consuming aspect of carrying out the test (which can last anything between 2 and 30 minutes), whether using rater coding or an in-depth analysis of specific behaviours, it requires evaluators to be trained and knowledgeable about dog behaviour; in the latter case, coding requires at least as much time as testing. Questionnaire-based evaluations are much faster; however, so far they have mostly been used with dogs being exposed to a number of different situations (repeatedly), or by asking owners to assess the dogs' behaviour across many contexts. Furthermore, questionnaire-based evaluations are considered to be more subjective, since the assessment of the dog's behaviour is filtered through the person's perception of the dog (including long-held, but perhaps no longer applicable, beliefs, breed prejudices, etc.), although according to some authors, given that they are normally based on a much wider perspective and information base, they may prove more accurate [28].
A recent meta-analysis of data from 31 studies determined that there is moderately high temporal consistency (R = 0.43) in dog personality scores, with no differences in consistency between personality scores based on behavioural ratings versus behavioural coding [3]. Given that the different methods are in theory measuring the same underlying trait, it is surprising how few studies have used different evaluation tools simultaneously. Perhaps even more worryingly, where questionnaires and behavioural tests have been used in the same study, correlation coefficients among similar personality traits are usually low, falling between 0.2 and 0.4 [26,29,30]. When the questionnaire and behaviour coding targeted very specific traits, higher correlations emerged. For example, Kubinyi et al. [31] found that dogs assessed by owners as more active-impulsive and inattentive also showed more activity in four behaviour test situations (r = 0.53 and r = 0.25, respectively).
Overall, there is a fair amount of agreement regarding the personality factors emerging from behavioural and questionnaire data. However, most factors have been extrapolated from studies assessing the behaviour of adult pet dogs. Hence, it is not clear whether an evaluation of puppy personality (behavioural or questionnaire-based) would result in the same factors as those emerging in adult dogs. Furthermore, whereas a few adult-based studies have sought to establish the correspondence between behavioural testing and questionnaire-based evaluations [26,29,32,33], this has not been the case for studies with young puppies, where, to our knowledge, only behavioural testing with coder rating has been used to assess personality [34].
The aim of the current study was therefore to use an open-field test, accompanied by a simple adjective-based questionnaire, to assess personality traits of 2-month-old dog puppies and to evaluate the correspondence between these two methods of evaluation. To achieve our aim, a sample of 2-month-old puppies was tested at their breeders'. Independent observers carried out behavioural analyses from videos of the pups in the open-field test, and a different set of independent observers (unaware of the behavioural coding done by the others) scored each pup on a previously selected adjective-based questionnaire. The correspondence between the two methods was assessed by comparing the personality factors emerging from the results of the behavioural analyses with those emerging from the questionnaire-based evaluation. Consensus (i.e. inter-observer reliability) for the behavioural analyses in the open-field test and for the adjective-based questionnaire was assessed as a prerequisite prior to all analyses. To further investigate the validity of the personality assessment using these different tools, we assessed the internal consistency of the personality dimensions in the adjective-based questionnaire (i.e. the degree to which judgments about an individual's personality are consistent across items (adjectives) thought to reflect the same behavioural dimension) [35,36]. Finally, trait consistency (test-retest reliability) was assessed by re-testing a sample of puppies two months later with the same test and comparing results both in terms of the behavioural analyses and the questionnaire-based evaluation.
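Internal consistency of an adjective-based dimension is commonly quantified with Cronbach's alpha; a minimal sketch of the computation is shown below on simulated item scores (154 pups, four adjectives loading on one dimension), where both the data and the group sizes are illustrative assumptions.

```python
# Cronbach's alpha for one questionnaire dimension (subjects x items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(154, 1))                     # one trait per pup
items = latent + rng.normal(scale=0.7, size=(154, 4))  # 4 adjective ratings
print(f"alpha = {cronbach_alpha(items):.2f}")          # high for coherent items
```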
At a theoretical level, identifying personality factors at an early age may increase the likelihood that these dimensions have a genetic component (i.e. endophenotyping), hence potentially providing a tool for gene-behaviour studies and adding to the growing interest on the evolutionary perspectives of personality [37]. Furthermore, a more general tool to assess personality in young puppies may have important outcomes also in a more applied setting, since personality has been shown to be an important variable in determining owner satisfaction [38].
Ethics statement
No special permission for the use of animals (dogs) in such behaviour studies is required in Italy; however, when first visiting the breeders, the researcher presented an in-depth description of the test, and verbal consent to video-record and use data in an anonymous form was sought prior to testing. Following agreement, breeders compiled a form with details regarding their dog breeding activity (e.g. type of breed, number of females, number of litters per year, etc.). Since breeders did not have an active role in the study and were never subjected to evaluation themselves, IRB approval or equivalent was not required. All procedures were performed in full accordance with Italian legal regulations and the guidelines for the treatment of animals in behavioural research and teaching of the Association for the Study of Animal Behaviour (ASAB).
Subjects
A total of 79 litters, representing 21 breeds (range: 1 to 10 litters per breed, median: 3 litters per breed; mean: 3.8 litters; Table 1), were included in the study. Litters came from a total of 55 registered dog breeders. Wherever possible, we chose more than one breeder for each breed to avoid the risk of testing specific bloodlines. All puppies were tested at the breeders before adoption.
An initial sample of 15 video-recorded tests, chosen to represent as much as possible the variability of the whole subject pool, was used only for the adjective-based questionnaire selection process (S1 Information). In this sample pups were balanced for sex (8 M; 7 F) and were representative of 15 different breeds (Table 1).
A sample of 154 puppies (79 males and 75 females) tested at 2 months (range 58-62 days) was selected from the above-mentioned 79 litters, taking a maximum of 2 puppies (1 male and 1 female wherever possible) from each litter (Table 1). The videos of the tests carried out with this sample were analysed using both the behavioural and the questionnaire-based method, to allow an analysis of the personality traits emerging from the two different methods and of the potential correspondence between them.
Finally, in order to assess the trait consistency (test-retest reliability), a sample of eighteen puppies was tested at both 2 and 4 months of age. This was possible because these pups were not sold but kept by the breeders. Therefore, the test, the environment and the owner were exactly the same as when they were first tested. The questionnaire evaluation was also carried out on these subjects tested at different ages (see Table 1, for subject sex and breed).
Procedures
Open field test. The open field test was carried out at the breeder's premises in a quiet, 5 x 5 m area, temporarily fenced off using a portable 'puppy pen' (1 m high) covered by a dimming net (to avoid distraction from the outside). Testing was normally carried out in the morning (9-11 h), but could vary according to breeder availability. Using powdered chalk, the area inside the pen was divided into 9 identical squares (Fig 1). Each square contained a stimulus: 1) a realistic-looking plastic dog (approx. 50 cm tall), displaying a rather assertive, erect posture, ears forward and docked tail; 2) a bowl of water; 3) a street cone; 4) a mirror propped up to be at puppy height; 5) a child-looking doll standing up (approx. 86 cm high), positioned with her arms reaching in front of her and her upper body slightly bent forward; 6) a squeaky dog toy; 7) a small nylon tunnel (53 cm long and 43 cm in diameter), similar to those used in agility, with a small piece of food placed inside it. One square was left empty. Finally, two people were also present in the pen: the breeder, seated on a chair in the centre square, and a female researcher, seated on the ground in a corner square. The breeder was asked not to interact with the puppy and to remain passive during the test. The experimenter seated on the ground behaved in a natural way with the pup, in that she did not call or invite interaction, but if the pup engaged with her she would briefly respond by petting it, then stop interacting and adopt a relaxed posture. The position of the stimuli was the same for all pups tested. The breeder was asked to carry the pup into the pen and, once seated, to place the pup on the ground in front of his/her feet. The pup was then free to move around in the pen for 5 minutes. A video camera was set up on a tripod outside the pen and manoeuvred by an assistant so as to ensure that the pup's behaviour was recorded during the whole test.
Table 1. Summary of the analyses carried out and details on sample size, breed, sex (M, F), and number of litters from which the sample was taken for each analysis. Assessment (Sample, M, F): selection of questionnaire (S1 Information); inter-observer agreement for behavioural coding (consensus analysis): 63, 32, 31; inter-observer agreement for questionnaire coding (consensus analysis): 60, 30, 30; trait consistency (test-retest reliability) between pups at 2 and 4 months old.
Selection of an Adjective-based Questionnaire. All questionnaires concerning the personality of dogs available in the literature that were specifically designed for use by owners, and that mostly refer to the everyday situations in which dogs may be observed, were excluded, since they are difficult to apply to our setting. A sub-sample of owner-directed questionnaires, however, concerned more with scoring the dog's personality on a number of characteristics best described by adjectives, was retained. Of course, the owners still refer to their knowledge of the dog in a daily context; however, adjective-based questionnaires, being less context-specific, may be more easily applied to our research. Initially, four potential adjective-based questionnaires were taken into consideration: the canine Big Five Inventory (BFI) [26]; the Monash Canine Personality Questionnaire (MCPQ/MCPQ-Refined) [39][40][41]; the demographic personality questionnaire [2]; and the Free-choice profiling method [42]. However, after a systematic selection process (see S1 Information), a modified version of the Ley et al. [40] questionnaire was used (Table 2).
Coding and Analyses
The authors of this paper carried out all video analyses, for both behavioural and questionnaire coding.
Behavioural Coding and Analyses. After viewing approximately 30% of randomly chosen tests, an ethogram of the puppies' behaviour during the open field test was established (Table 3).
Because the aim of the current study was to assess whether there are broad personality traits already visible in 2-month-old puppies, we chose a midlevel analysis: rather than focus on single units of behaviour, such as the frequency of tail wags or the number of startle responses, we opted for a more global assessment of the dogs' behaviour, especially those behaviours directed towards the stimuli presented. Although the approach to each stimulus was recorded as cautious, relaxed or exuberant, the assignment of a pup to a category was based on an objective observation of the pup's behaviour in terms of body posture, speed of approach and tail movement (see Table 3 for a detailed description). Deflection from the stimuli (including avoidance behaviours, moving back/away, or a startle response followed by a change of direction) was also coded, as well as social and play behaviours directed either at the people or at the objects. These midlevel interpretations of the dogs' behaviour were defined on the basis of specific behavioural patterns emerging from the literature (see Table 3) [43][44][45].
Table 2. List of adjectives in the five personality subscales derived from Ley et al. [40] and those used in the current study.
Table 3. Ethogram of the puppies' behaviour in the open field test (label: behavioural description).
Walk: to move along on foot, advancing step by step whilst looking around, or looking outside the enclosure but at no object/stimulus in particular.
Fast gait: to move either trotting, cantering or galloping/bounding whilst looking around, or looking outside the enclosure but at no object/stimulus in particular.
Cautious approach/interaction (object or people): Risk assessment: the dog starts off by keeping its body at a distance from the object and extending only its upper body towards it, as if stretching towards the object; this is often accompanied by hesitant and jerky back-and-forth movements. Olfactory inspection with lowered posture: sniffing of the object with slow movements, ears and tail held low and potentially the back legs bent.
Positive approach/interaction (object or people): the pup approaches the stimulus in a direct manner and sniffs it with the tail hanging, held parallel to or slightly above the bodyline. The tail may be still or slowly wagging. The mouth is relaxed and the ears are pricked forward. The pup may lick or touch the stimulus with its paws. If directed towards the 'tunnel', the pup also explores the interior, moving inside it with at least the front paws.
Exuberant approach/interaction (object or people): the pup approaches the stimulus at a fast walk, trot, run or bound, often knocking the object over with the impetus of its movements. It sniffs the object with the tail held higher than the line of the body, wagging it rapidly, never stopping in one place but sniffing the object all over whilst moving continuously. The mouth is relaxed, the ears are pricked forward, and the body posture is tall. It may lick and touch the object with its paws. If directed towards the 'tunnel', it may run through it.
Social interaction (only if referring to the experimenter or the breeder): Greeting: to interact in a friendly manner, holding the ears back, with a relaxed open mouth, the tail held low and wagging rapidly, especially with the end part of the tail, occasionally accompanied by whining; pups may also lick, sniff or gently prod the person's face or mouth. Hurtle: standing on the back legs, often associated with jumping up towards the person's face whilst exhibiting a fast and wide tail-wagging motion; with increasing excitement, the jumping up becomes more intense and may be accompanied by muzzle hits or biting of the person's clothes, hair, face and hands. Lap-sitting: the pup climbs into the experimenter's lap, sitting or lying on her knees. Belly up: the pup lies down next to the person, displaying its belly. Contact-seeking: extending a paw to touch the person, or touching the person with the nose/snout or placing it on the person's lap.
Carry toy: holding the toy (usually the squeaky toy) in the mouth whilst walking, trotting or running around the arena. If the exaggerated behaviours typical of play were exhibited whilst carrying the toy, the behaviour was coded within the 'playful interaction' category rather than 'carry toy'.
Deflection (object or people): Avoidance: after looking intently at the stimulus, the pup first looks away, then changes the orientation of its body in a direction opposite to the stimulus' location, or maintains a position so as not to shorten the distance between itself and the stimulus; this is usually accompanied by rapid glances at the 'offending' object. Startle response: a sudden movement in the direction opposite to the 'alarming' stimulus, whilst keeping the head oriented towards it. Walk backwards: the pup increases the distance between itself and the stimulus whilst keeping its body oriented towards it.
Look at stimulus (object or people): visual exploration of the stimulus; the dog is oriented and looking towards it from at least a few paces away. This behaviour often occurs just before an interaction with, or avoidance of, the object. If the pup is looking at the stimulus while walking parallel to it, 'look at stimulus' over-rides the walk/trot category outlined above.
Non-stimuli-related behaviour: the time pups spent not interacting/engaging with the stimuli. The pup is either in a static position (sitting, lying or standing), perhaps looking around (e.g. outside the fence but not towards the stimuli) and/or biting or chewing on elements of the surrounding environment such as grass, sticks, leaves, etc., or is moving within the field whilst sniffing at the ground or at the fence.
Maintenance behaviours: drinking, eating the biscuit (which was in the tunnel), elimination behaviours. doi:10.1371/journal.pone.0149831.t003
In total, 11 mutually exclusive behavioural categories were recorded continuously in terms of the duration of their occurrence. The occurrence of these behaviours was scored as directed towards each stimulus in the test enclosure. Video analyses of behaviours were carried out using behavioural event recording software (Observer XT 8.0, Noldus Information Technology, The Netherlands). Two observers (SB and VB) scored 154 videos, and 40% of these were coded by both. Consensus (inter-observer agreement) was evaluated using Cronbach's alpha. A preliminary Principal Component Analysis was carried out to identify main clusters of behaviours, but the KMO (Kaiser-Meyer-Olkin measure of sampling adequacy) was too low (0.539) to validate the analysis; hence we decided to group behaviours by means of a hierarchical cluster analysis (method: average linkage between groups; similarity measure: squared Euclidean distance; see e.g. [46,47]), using only behaviours shown by at least 30% of subjects (all behaviours reported in Table 3 except carry toy, deflection and other). The hierarchical cluster analysis creates subsets (or clusters) of objects (i.e. observations, individuals, items or variables) such that those within each cluster have a higher degree of similarity than objects assigned to different clusters. Similarities (or dissimilarities) are defined by an appropriate metric (a measure of distance between pairs of observations) and a linkage criterion.
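To make the clustering step concrete, here is a minimal Python sketch of an average-linkage hierarchical clustering on squared Euclidean distances, the procedure named above. It is not the authors' code: the behaviour list and the random duration matrix are hypothetical placeholders standing in for the real per-puppy durations.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Behaviour categories retained for clustering (shown by >= 30% of subjects);
# the exact list and all durations below are hypothetical placeholders.
behaviours = ["walk", "fast_gait", "cautious", "positive", "exuberant",
              "social", "look_at_stimulus", "playful", "non_stimuli"]
rng = np.random.default_rng(0)
# rows = behaviour categories, columns = 154 puppies, entries = duration (s)
durations = rng.uniform(0.0, 300.0, size=(len(behaviours), 154))

# squared Euclidean distances between behaviour profiles,
# then average linkage between groups, as in the study
dist = pdist(durations, metric="sqeuclidean")
tree = linkage(dist, method="average")

# cut the dendrogram into the 5-cluster solution retained in the paper
labels = fcluster(tree, t=5, criterion="maxclust")
for name, lab in zip(behaviours, labels):
    print(f"{name}: cluster {lab}")
```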
To evaluate trait consistency (test-retest reliability), the videos of the 18 pups tested at both 2 and 4 months of age were all coded by the same observer (SB), and Spearman's correlations on the main behavioural clusters were computed.
Questionnaire Coding and Analyses. All 154 puppies' tests were coded by SMP (unaware of the ethogram used in the behavioural analyses) using the adapted version of the Ley et al. [40] questionnaire.
A second coder (SN; also unaware of the ethogram used in the behavioural analyses) used the questionnaire to score a random selection of puppy tests (39% of the total). These data were used for consensus (inter-observer reliability) analyses using Cronbach's alpha. For inter-observer coding, each adjective was analysed independently, to evaluate which adjectives received greater or lesser consensus when evaluating puppy behaviour.
Maintaining the same behavioural dimensions identified by Ley et al. [40], the internal consistency of the test was calculated using Cronbach's alpha (the average value of the reliability coefficients one would obtain for all possible combinations of items when split into two half-tests) and the mean inter-item correlations (providing an assessment of item redundancy and of the representativeness of the content domain). The combination of both these scores is reported to be the most accurate measure of internal consistency [48].
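For reference, the two internal-consistency measures can be computed as in the following sketch (not the study's own script); the 5-point ratings are hypothetical placeholders.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a subjects x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def mean_inter_item_corr(items: np.ndarray) -> float:
    """Mean of the off-diagonal item-item Pearson correlations."""
    r = np.corrcoef(items, rowvar=False)
    upper = np.triu_indices_from(r, k=1)
    return r[upper].mean()

# hypothetical 5-point ratings: 154 puppies x 4 adjectives of one subscale
rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(154, 4)).astype(float)
print(cronbach_alpha(scores), mean_inter_item_corr(scores))
```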
Cronbach's alpha values expressing the internal consistency of the personality dimensions (see Results section) were comparable to those reported for adult dogs by Ley et al. (which varied from 0.74 to 0.87) [40]. However, inter-item correlations were in some cases substantially low. Hence, since no study had yet used a questionnaire-based methodology on puppies, we ran a Confirmatory Factor Analysis (CFA, using the Maximum Likelihood extraction method and setting the factor loading cut-off at 0.40 [49,50]) to assess whether the adjectives identified by Ley et al. [40] for adult dogs would group into similar factors in puppies. Finally, Pearson's correlation coefficients among the personality factors were evaluated.
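As an illustration of the factor-extraction step, the sketch below uses scikit-learn's maximum-likelihood FactorAnalysis with the 0.40 loading cut-off named above. Note that this is an exploratory stand-in: scikit-learn provides no confirmatory factor analysis, and the rating matrix is a hypothetical placeholder.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
ratings = rng.normal(size=(154, 20))   # 154 puppies x 20 adjectives (hypothetical)

# maximum-likelihood factor analysis; rotation is illustrative only
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
fa.fit(ratings)
loadings = fa.components_.T            # adjectives x factors

# retain only loadings at or above the 0.40 cut-off used in the paper
mask = np.abs(loadings) >= 0.40
for j in range(loadings.shape[1]):
    kept = np.where(mask[:, j])[0]
    print(f"factor {j + 1}: adjectives {kept.tolist()}")
```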
Finally, SMP also scored all the puppies tested at 2 and 4 months, and consistency (test-retest reliability) in scoring at these two time points was assessed using Spearman's correlation coefficient.
Correspondence between behavioural and questionnaire-based method. Correspondence refers to the extent to which judgments predict an external criterion for "reality" [51]; previous studies identified independent observations of behaviours as the most valuable external criterion [52,53]. Bivariate linear correlations were carried out using Pearson's r between each personality trait emerging from the questionnaire-based factorial analyses and behavioural clusters emerging from the cluster analysis.
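A minimal sketch of this correspondence computation is given below; the factor and cluster scores are random placeholders standing in for the real per-puppy scores.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
factor_scores = rng.normal(size=(154, 5))    # questionnaire factors (hypothetical)
cluster_scores = rng.normal(size=(154, 5))   # behavioural clusters (hypothetical)

# bivariate Pearson correlations between every cluster and every factor
for i in range(cluster_scores.shape[1]):
    for j in range(factor_scores.shape[1]):
        r, p = pearsonr(cluster_scores[:, i], factor_scores[:, j])
        if p < 0.01:
            print(f"cluster {i + 1} vs factor {j + 1}: r = {r:.3f} (p < 0.01)")
```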
Hierarchical cluster analysis. Visual inspection of the dendrogram, along with the agglomeration matrix of the cluster analysis (maximum-increment-between-stages criterion), suggested a 5-cluster solution (Fig 2). At the first stage, we found a cluster composed of two subsets of symmetrical behaviours (exuberant approach/interaction and fast gait, labelled 'exuberant attitude', vs. look at stimuli and cautious approach/interaction, labelled 'cautious attitude'). These subsets outline puppies behaving in one of two extreme manners: the first describes puppies hurtling towards all stimuli with boundless enthusiasm; the second describes puppies looking at the stimuli from afar and, when choosing to interact with them, doing so with a measure of anxiety and caution.
At the third stage, a second cluster containing walk and positive approach/interaction, named 'relaxed attitude', emerged. This cluster described those puppies that were relaxed in their interactions, investigating the objects in a positive way, yet without showing the extreme exuberance of the pups included in the 'exuberant attitude' group.
Social interaction converged with the first cluster at the fourth stage; finally, playful interaction and non-stimuli-related behaviour did not converge and remained as single items until the end of the agglomeration procedure.
Questionnaire Analyses
Consensus. Inter-observer reliability between the two observers based on 40% of the tests showed a Cronbach's alpha of 0.87. Cronbach's alpha expressing the consensus for single adjectives varied from 0.52 to 0.90 (Table 4).
Pearson's correlation matrix showed that the first factor, Extraversion, was strongly and positively related to Persistence (r = 0.606) and to Amicability (r = 0.739), and negatively associated with Neuroticism (r = -0.622) and with Reservedness (r = -0.639). A higher score on Neuroticism was related to a weaker value in Persistence (r = -0.512) and Amicability (r = -0.463) and to higher Reservedness (r = 0.650). Persistence was also related to Reservedness (r = -0.412) and Amicability (r = 0.391); the latter was negatively correlated with Reservedness (r = -0.554).
Hence, the general picture emerging suggests that subjective rating using an adjective-based questionnaire is capable of picking out five specific types of puppies. The two major traits (Extraversion and Neuroticism) described, respectively, either an energetic puppy, which bounded towards the stimuli and explored them whilst displaying a relaxed posture and much tail-wagging, or, conversely, a puppy that tended either to avoid the stimuli, looking at them from afar and spending time sniffing around and interacting with other elements of the environment, or to approach them with a slow gait, in a tentative manner, holding the tail and potentially the hindquarters low, hence exhibiting signals of mild fear or apprehension.
The Reservedness trait identified those pups who did not show particular fear or unease in relation to the stimuli but largely chose not to interact with them. The questionnaire also successfully identified the puppies that interacted in a positive manner with the stranger (Amicability). The Persistence trait was somewhat more problematic, correlating with too many behavioural factors to allow a clear characterization of exactly what type of puppy it described.
To summarise, the Extraversion trait from the questionnaire associated positively with the behavioural cluster subgroup expressing an energetic approach to the social and non-social stimuli in the test environment (e.g. fast gait, exuberant approach/interaction, playful interaction) and negatively with those expressing a more cautious approach (e.g. look at stimulus, cautious approach/interaction) or a lack of interest in the stimuli (i.e. non-stimuli-related behaviours). Interestingly, the analysis identified a very similar pattern of behaviours for the trait Persistence (Table 6), which supports the strong positive correlation between these two traits.
Conversely, the Neuroticism trait of our questionnaire was positively associated with the cautious attitude (i.e. look at stimulus, cautious approach/interaction) and with a lack of engagement with the stimuli; hence it describes puppies that either explored the stimuli in a cautious manner or largely did not interact with them, instead spending time sitting/lying or sniffing around and/or interacting with other elements of the environment (e.g. grass, leaves, etc.). Furthermore, this trait correlated negatively with both behavioural dimensions describing positive interactions with the stimuli (exuberant and relaxed attitude) and with playful interactions.
The Amicability trait correlated positively with the social interaction dimension, confirming its description, but it also correlated positively with the exuberant attitude, and negatively with the cautious attitude and with non-stimuli-related behaviour. Indeed, the exuberant and cautious attitudes included, respectively, a positive and friendly or a more cautious interaction with the person; hence, these correlations are to be expected. The negative correlation with non-stimuli-related behaviour is also to be expected, since this largely described a puppy that mostly did not interact with the stimuli, including the person present. Finally, Reservedness reflected a puppy that largely chose not to interact with the stimuli (animate and inanimate), since it correlated significantly with non-stimuli-related behaviours (e.g. sitting/lying, sniffing around), but negatively with both dimensions describing positive interactions with the stimuli (exuberant and relaxed attitude).
Discussion
The aim of this study was two-fold: first, to assess whether individual personality traits could be detected in puppies as early as 2 months of age, applying tools that are largely used to assess personality in dogs (i.e. an adjective-based questionnaire and a behavioural coding method); second, to investigate the correspondence between these two methods in defining behavioural patterns.
Table 6. Pearson's correlation coefficients and significance levels used to assess the correspondence between the behavioural factors identified by the hierarchical cluster analysis (rows) and the personality factors emerging from the questionnaire-based analyses (columns). Non-stimuli-related behaviour: -0.472**, 0.534**, -0.468**, -0.450**, 0.607**.
Overall, results from both the adjective-based questionnaire and the behavioural analysis suggest that at 2 months of age, when exposed to both social and non-social stimuli in an open-field test, puppies already show specific behavioural patterns which can be identified as relating to personality traits. Both methods proved to be reliable tools for the assessment of personality traits, inasmuch as the inter-observer reliability was high in both cases, confirming previous findings [32], and a good correspondence in personality traits emerged between the two independent measures used. A further test of the strength of specific personality traits is represented by the consistency found over time in the subset of puppies re-tested 2 months later. A number of personality traits emerged. Based on the behavioural analyses, a first clear cluster, which expressed the puppies' 'style' of interacting with both social and non-social stimuli, emerged, describing an Exuberant versus Cautious attitude. From the questionnaire, the first two factors (Extraversion and Neuroticism) largely identified the same pups falling in the Exuberant vs. Cautious factor dimensions, and indeed the high correspondence between these traits confirmed that there is a consistent behavioural pattern in the subjects tested. Overall, these traits describe how pups interact with the stimuli in the test, whether they keep a distance and look at them from afar or whether they choose to explore them in an exuberant manner. In many respects this can be equated to the shyness-boldness dimension identified by other studies on dogs [17,27]. The shyness-boldness trait is a dimension which has been described for many animal species [55], and from the evolutionary perspective it appears to be adaptive, since it allows animals to cope with fluctuating environmental conditions [56]. In dogs it is likely that this trait has been maintained from the wolf ancestor, although it is an open question to what extent wolves and dogs may differ in their representation along this continuum. Regardless, it is considered to be one of the more stable personality traits in dogs [27], and this seems to be confirmed in the current study by the fact that it was the only trait that also showed temporal consistency in the test-retest reliability analyses.
A separate cluster, although closely linked to the exuberant/cautious attitude one, emerged for sociability (termed 'Social interaction'), expressed by the time spent interacting with the people in the arena in a friendly manner. A very similar dimension emerged from the questionnaire (termed 'Amicability'). Looking at the correspondence between the two methods, it emerges that Amicability correlates positively with Social interaction and with the Exuberant attitude from the behavioural analysis, thereby confirming the link between these traits. The 'sociability' trait has been identified by a number of prior studies on puppies, suggesting it is one of the most easily identified [34]. However, in some studies the sociability trait emerged as embedded within the more general shy-boldness axis [27]. Results from our study are mixed, in that sociability is closely associated with the exuberant attitude although it emerged as a separate factor. Surprisingly, consistency was not confirmed by our test-retest. Given that the puppies were tested twice in the same environment, with no major changes (e.g. adoption, change of home) affecting their social life experiences, we would have expected higher correlation estimates for this trait. Whether this was due to our experimental constraints (i.e. small sample size) or to the particular sensitivity of this trait during development is unclear and needs to be investigated further. Contrasting evidence emerges from the literature. The sociability trait has been reported to be moderately stable over time in adult dogs [57]; however, studies on puppies are far fewer. Scott and Fuller [58] reported that social investigation and attraction toward humans remain fairly consistent after 7 weeks of age (p. 137). However, a recent review found little consistency over time for this trait in puppies [3]. This discrepancy may depend on several factors: the test-retest interval plays an important role in detecting consistency of personality traits (the larger the test interval, the smaller the strength of consistency) [3], and age at testing is also known to affect consistency, with several studies showing that testing puppies at less than 12 weeks of age is not predictive of future behaviour [10,18,59].
Playfulness remained as a distinct cluster, characterized by those pups spending their time in playful interactions, whether with the inanimate stimuli or with the experimenter. This factor correlated most strongly with the Persistence trait emerging from the questionnaire, probably because it described those pups that were persistent in their attempts to play with the experimenter (who was instructed largely not to respond to these attempts) and that persevered in playing with a specific stimulus (e.g. the toy), somewhat to the exclusion of all else. Interestingly, the playful trait showed a positive correlation with Extraversion and a negative one with Neuroticism. In a seminal work, Svartberg [60] showed how the selective pressure on dog breeds is still in progress, shaping and significantly affecting the personality of modern breeds and breed lines. Importantly, he reported that popular modern breeds have higher sociability and playfulness scores than both less popular breeds and breeds used in shows, highlighting that both these aspects of a dog's behaviour are potentially very salient for pet owners. Indeed, Svartberg's work suggests that playfulness is a stable trait and could be defined as a personality dimension in dogs, confirmed by its stability over time in adult dogs (correlation estimates 0.76-0.89) [25]. However, recent reviews on dog personality [3,34] have not identified playfulness as a trait and, in a recent study, no consistency over time from puppyhood to adulthood was found [59]. Nevertheless, studies on puppies are still few; hence, given the relevance of this behaviour for pet owners, future studies should aim at investigating this aspect of dog behaviour further.
The final behavioural cluster to emerge was labelled 'non-stimuli-related behaviour'. It identified puppies that spent most of their time not interacting with the stimuli presented, but rather displayed passive behaviours (sitting and lying down) or moved around sniffing the environment, without showing expressions of fear or anxiety. This cluster showed the highest correspondence with the Reservedness trait from our questionnaire, effectively describing a pup that was the opposite of nosey/curious and rather quiet. Results from both measures taken together, then, describe those pups that mostly 'do their own thing': they are not particularly interested in exploring the stimuli, but do not show great anxiety in relation to them.
Overall, the correspondence scores emerging from the current study are comparable to those of other studies with adult dogs in which correspondence between subjective and behavioural methods was found [29,30,32,61]. Indeed, the factors emerging from the personality questionnaire and the behaviours largely showed a coherent picture, with a good correspondence between the two methods of analysis. The adjective-based questionnaire, despite not being specifically designed for the current study, showed remarkably similar dimensions to its previous use with adult pet dogs. It easily identified the major dimensions of Extraversion and Neuroticism and, although with a curtailed set of adjectives, it also largely allowed the Amicability dimension to emerge unchanged compared to the adult study. The larger differences emerged in the Persistence dimension, which only partly reflected Ley et al.'s Self-assuredness/Motivation dimension, and in the Reservedness dimension, which was largely novel. The fact that the adult and puppy dimensions are not identical may be due to the different use of the questionnaire (i.e. in an open-field test for puppies versus everyday pet situations for adult dogs), and/or to the fact that the questionnaire used for this study was adapted, i.e. some terms were omitted because they were not suitable. This could have affected the reduction analysis and the factor loadings. However, it may also be that dimensions at this young age are somewhat different. Interestingly, this phenomenon has been reported in developmental psychology, where age-specific personality dimensions, independent of the Big Five in adults, were reported in adolescents [62]. Future research will be needed to assess whether a similar pattern occurs in dogs.
A number of limitations of the current study need to be kept in mind. Jones and Gosling [63] suggested that an important aspect of personality assessment is the consistent emergence of similar traits in different contexts. Indeed, in a recent study on adult dogs [61], the authors looked at the correspondence between dogs' personality traits as assessed by an owner questionnaire and the analyses carried out by researchers on the dogs' behaviour during a temperament test. In our own study, although, similarly to the study with adult dogs, we sought to assess the correspondence between a questionnaire-based and a behaviour-based analysis, the context remained the same: the open-field test. Future developments of this study would include testing the two different methods for assessing puppies' personality in various experimental contexts (e.g. a playful session with a stranger, and/or a potential conflict over a food source).
The assessment of trait consistency over time, which is necessary for a factor to be considered a stable personality trait [3], is another limitation of the present study. Considering that we were able to re-test only 18 puppies, conclusions as to the stability over time of the different personality factors identified in the current study are potentially premature and would need confirmation with a larger sample size. The subsample of puppies we were able to re-test did not undergo any specific selection: some were pups that the breeder had decided to keep as future breeding stock (which could have been chosen on morphological or behavioural traits), whilst others were pups that had not yet been sold or given away. Even though there was no systematic bias in the choice of pups, the sample size is very small. Nevertheless, the consistency over time emerging for the two main behavioural aspects (exuberant and cautious attitude) and for the questionnaire-based personality traits (Extraversion and Neuroticism), which were also highly correlated with each other, suggests that these factors may form the basis of a dog's developing personality. Thus, although further studies with puppies are needed to confirm results on the consistency of personality traits over time, the current results on the factors showing correlation between the two time periods are comparable to those reported in previous studies for adult dogs [18,23,31].
Finally, since the aim of the current study was an evaluation of two methods for assessing personality traits in puppies, we sought to maximize the variability of puppies represented in the sample, thereby including subjects from 21 different breeds differing in size and belonging to different breed groups. Nevertheless, some breeds were represented more than others, and breed (at least in adult dogs) has been shown to affect personality [60,64]. Although potential breed differences in the representation of personality traits do not alter the current results in terms of the evaluation of the two methods adopted, it is possible that with a different sample of puppies from a different set of breeds, other personality traits would emerge that were not observed in the current study. Considering that breed differences in personality traits have both theoretical (e.g. the effect of selection) and applied (e.g. selection for specific 'working' purposes) implications, future research on this aspect would be particularly welcome.
In summary, despite the use of two very different tools, an easier-to-apply adjective-based questionnaire (scored on a 5-point scale) and a more complex and demanding (in terms of time and experience) tool such as behavioural coding (recording the frequency and duration of behaviours), results were rather consistent and showed good correlations. The consistent identification of two main 'types' of puppies was easily achieved while scoring the puppies in the open-field test. Following the descriptions provided in this paper of Extravert (Exuberant) or Neurotic (Cautious) puppies, breeders could easily profile the puppies in their litters. As mentioned above, this does not ensure the stability of these traits into adulthood; nevertheless, it gives a good indication of the present attitude of a pup and could be of help when selecting the most appropriate future family.
Supporting Information S1 Information. Questionnaire selection method. Description of the questionnaire selection procedure. (DOCX) | 2018-04-03T04:41:36.463Z | 2016-03-15T00:00:00.000 | {
"year": 2016,
"sha1": "92d0c2802230d2d0a02cca83db0242792ca1c808",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0149831&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c338a0409db4e2419b7161483c111165569959f6",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
234282913 | pes2o/s2orc | v3-fos-license | Antibacterial effects of Pheretima javanica extract and bioactive chemical analysis using Gas Chromatography Mass Spectrum
Pheretima sp. is an earthworm of the Oligochaeta group found mostly in Java. Its body comprises 95-150 segments, and the clitellum is located at segments 14-16. The body fluids contain protein, amino acids and various enzymes. The purpose of this study was to determine the composition of bioactive compounds and to evaluate antibacterial activity. The methods used were maceration, an antibacterial test against Salmonella typhi, and GC-MS analysis to identify bioactive compounds. The antibacterial test showed inhibition zone diameters ranging from 15 to 20 mm. The identification of bioactive compounds was based on percentage area, percentage peak height, retention time, molecular weight and pharmacological action. GC-MS analysis showed the presence of 50 compound peaks. The bioactive compounds with antibacterial action are: 1) nitrogen oxide (N2O) (CAS) nitrous oxide, with an area of 2.03%, height 7.36%, retention time 1.361, molecular weight 44.013 g/mol; 2) acetic acid (CAS) ethylic acid, with an area of 17.02%, height 29.03%, retention time 1.789, molecular weight 60.05 g/mol; 3) butanoic acid, 3-methyl- (CAS) isovaleric acid, with an area of 3.27%, height 2.04%, retention time 3.456, molecular weight 102.13 g/mol; 4) 1,2-benzenedicarboxylic acid, diethyl ester (CAS), with an area of 0.95%, height 1.32%, retention time 36.306, molecular weight 222.24 g/mol.
Introduction
Typhoid fever remains a public health problem, with as many as 22 million cases per year worldwide causing 216,000-600,000 deaths [1]. In 2008, the number of typhoid fever sufferers in Indonesia was reported at 81.7 per 100,000 population, distributed by age group as 0.0/100,000 population (0-1 years), 148.7/100,000 (2-4 years), 180.3/100,000 (5-15 years), and 51.2/100,000 (≥16 years). These data show that most sufferers are in the 2-15 years age group. The results of case studies in major hospitals in Indonesia show a tendency for the number of typhoid cases to increase from year to year, with an average morbidity of 500/100,000 population and mortality estimated at around 0.6-5% [2]. Efforts to control transmission have been carried out by the government through prevention and treatment. Prevention in the form of vaccination is not very efficient and presents contraindications, while treatment with antibiotics still leads to relapse and resistance [3].
Microbial resistance to drugs occurs through genetic changes followed by a series of selection processes driven by antimicrobial drugs [4]. Pathogenic bacteria can be inhibited by cytotoxic and antibacterial compounds produced as extracellular products; such antibacterial compounds damage the bacterial cell wall and cause bacterial death [5]. The extract of the earthworm Pheretima javanica could therefore serve as an alternative treatment in the prevention and treatment of Salmonella typhi infection.
Pheretima javanica is an earthworm of the Oligochaeta group that is commonly found in Java. It has a mouth on the anterior part of the first segment and an anus on the posterior segment, with the body reaching 95-150 segments. The annular clitellum is located at segments 14-16 [6]. Earthworms respire through their skin, and their digestive system runs throughout the body. The transport system consists of coelomic fluid, which moves in the coelom, together with a simple closed circulatory system [7]. Several studies have also demonstrated the antibacterial power of the protein extract of the earthworm Pheretima sp.
The coelomic fluid of earthworms has antimicrobial activity: it contains active compounds, in the form of enzymes and proteins, with antibacterial biological activity, and it is able to inhibit the growth of several pathogenic bacteria. Earthworm extract can therefore be used to kill certain pathogenic bacteria, and its bioactive compounds can be used to control bacterial growth in order to prevent the spread of disease and infection. Antibacterial proteins act by creating pores, inhibiting cell wall synthesis, compromising the integrity and permeability of the bacterial cell wall, inhibiting enzyme action, and inhibiting the synthesis of nucleic acids and proteins, so that the bacterial cytoplasm is exposed to the external environment; this disrupts activity inside the bacterial cells and causes death [8].
Bioactive compounds are compounds that have various benefits for human life. They are found in both animal and plant bodies, and their benefits include antibacterial, antioxidant, anti-inflammatory and anti-cancer activity. An antibacterial is a drug or chemical compound used to kill bacteria, especially bacteria that are harmful to humans (pathogens) [9].
Preparation and extraction
The research began with the selection of Pheretima javanica material, identified at the Biology Laboratory of the Faculty of Teacher Training and Education, University of Jember. Identification was based on the characteristics of organs such as the number of segments, the location of the clitellum, body colour, the number and location of the setae, mouth shape and body shape, following Gates' identification reference (1947).
Extracts were prepared from healthy, mature Pheretima javanica earthworms. After cleaning with distilled water, the worms were weighed and then extracted with 70% ethanol as solvent. Before extraction, the earthworms were sun-dried and then oven-dried until they reached constant dryness. The dried worms were blended with solvent at a 1:3 ratio and macerated by soaking in the solvent for 24 hours on a shaker, in a place protected from light. The macerate was then filtered and the filtrate evaporated using a rotary evaporator to remove the remaining solvent, yielding a thick extract [10].
After the extraction process, the next step was an antibacterial test to determine the activity of bioactive compounds against the growth of Salmonella typhi. The bacterial activity test was carried out aseptically by preparing three test tubes, each containing 20 mL of liquid agar medium. To each tube, 100 µL of Salmonella typhi suspension was added; the tube was then vortexed, poured into a sterile Petri dish, and allowed to solidify [11].
Antibacterial activity test
The antibacterial test on the earthworm extract was carried out after the agar medium had solidified: three wells were made using a pipe mold, and each well was filled with earthworm extract, a positive control solution of chloramphenicol, or distilled water as the standard, each at 1000 ppm. The Petri dishes were kept in an incubator for 24 hours at 37 °C, after which the inhibition zones formed were observed and measured [12]. The earthworm extract was then analysed for bioactive compounds acting as antibacterials using Gas Chromatography Mass Spectrometry.
Gas Chromatography Mass Spectrum (GCMS) Analysis
This study used GC-MS chromatography. Gas Chromatography-Mass Spectrometry is a combined analytical method, coupling GC and MS to identify different compounds in a sample. There are two main blocks in the GC-MS instrument: the GC and the MS. The GC uses a capillary column whose performance depends on the column dimensions (length, diameter, film thickness) as well as on the nature of the stationary phase. Molecules with different chemical properties can be separated as the sample passes along the column. The 70% ethanol earthworm extract was injected into the injector, where it was vaporized, and scanning was carried out for 1 hour [13]. The vaporized sample is carried by the carrier gas at a constant flow rate towards the separation column, and its components separate as they pass through the column owing to differences in their absorption by the stationary phase [14]. While the instrument runs, the computer generates a graph of the signal called a chromatogram; each peak in the chromatogram represents the signal generated when a compound elutes from the gas chromatography column into the detector. Before analysing the extract by gas chromatography and mass spectrometry, the oven temperature, gas flow rate and electron gun were programmed.
Extraction
Earthworm identification was done by observing the morphological characteristics of the worms. After oven-drying 200 grams of earthworms at 50 °C for 90 minutes, a dry weight of 35.841 grams was obtained. The dried earthworms were then crushed, yielding 34 grams of crude powder (simplicia); the loss in weight can be attributed to powder being scattered and stuck in the blender. The earthworm powder was then macerated with 70% ethanol (1:3) for 24 hours using a shaker at 100 rpm. The 70% ethanol is a polar solvent, chosen because it can extract compounds over a wide polarity range, from nonpolar to polar. Extraction by maceration was chosen because it does not require heating, so the active compounds in the sample are not damaged. The macerate was filtered, and the filtrate was concentrated using a rotary vacuum evaporator with a water bath temperature of 50 °C, a rotation speed of 25 rpm and tube speed setting 3, until a thick extract was obtained. After final concentration in an oven at 50 °C, 0.536 grams of extract was obtained.
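The reported masses allow a quick consistency check of the drying loss and extraction yield. The sketch below uses only values quoted in the text; the yield definition (extract mass over powder mass) is our assumption.

```python
# Worked check of the reported extraction figures (values from the text).
fresh_mass = 200.0      # g of earthworms before drying
dry_mass = 35.841       # g after oven drying at 50 degC
powder_mass = 34.0      # g of simplicia powder after grinding
extract_mass = 0.536    # g of thick extract obtained

water_loss = 100 * (fresh_mass - dry_mass) / fresh_mass
yield_pct = 100 * extract_mass / powder_mass   # assumed yield definition
print(f"moisture loss: {water_loss:.1f}%")     # ~82.1%
print(f"extraction yield: {yield_pct:.2f}%")   # ~1.58%
```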
Antibacterial activity test
The antibacterial activity test was carried out against Salmonella typhi; the results showed an inhibition zone, as seen in Figure 1. The test was carried out with three repetitions to obtain valid results, and the activity data can be seen in Table 1. From the three repetitions, the average inhibition zone was 18.3 mm for the Pheretima javanica extract and 28.3 mm for chloramphenicol, while no inhibition zone formed for distilled water. The zone of inhibition for chloramphenicol is larger because chloramphenicol, used as the positive control, is an antibiotic employed in the treatment of bacterial infections. It can thus be concluded that Pheretima javanica extract inhibits the growth of Salmonella typhi, the cause of typhoid fever. Previous research by Waluyo examined the antibacterial activity and inhibition zones of Pheretima javanica against Salmonella sp. using different solvents, namely MOPS, phosphate and NaCl, with inhibition zones of 10 mm, 7 mm and 8 mm, respectively [15]. Mathur et al. also tested the antibacterial activity of a 95% ethanol extract of Eudrilus eugeniae against Streptococcus pyogenes, obtaining an inhibition zone of 19 mm [16].
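For completeness, a sketch of how the triplicate zone readings could be summarised is given below. The individual readings are hypothetical (chosen only to reproduce the reported means); the means themselves come from the text.

```python
import statistics

# hypothetical triplicates; only the means (18.3, 28.3, 0 mm) are from the text
zones_mm = {
    "P. javanica extract": [18.0, 18.5, 18.4],
    "chloramphenicol":     [28.0, 28.5, 28.4],
    "distilled water":     [0.0, 0.0, 0.0],
}
for treatment, vals in zones_mm.items():
    print(f"{treatment}: mean = {statistics.mean(vals):.1f} mm, "
          f"sd = {statistics.stdev(vals):.2f} mm")
```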
Analysis of Bioactive Compounds using Gas Chromatography Mass Spectrum
The GC-MS chromatogram, consisting of 50 detected compound peaks, is shown in Figure 2. The GC-MS chromatogram analysis of the Pheretima javanica extract showed fifty main peaks, with the components corresponding to these peaks shown in Figure 3. The analysis of the compounds in the Pheretima javanica extract is given in Table 2; compound identification was supported by the PubChem database (pubchem.ncbi.nlm.nih.gov) [17].
Figure 2. GC-MS Chromatogram of Pheretima javanica Earthworm Extract
The electron beam causes the sample molecules to split into fragments, which are charged ions of specific mass. The m/z (mass-to-charge) ratios are calibrated from the resulting graph, called the mass spectrum, which constitutes the fingerprint of a molecule. A similar GC-MS analysis of bioactive compounds has been carried out on the ethanol extract of Zingiber officinale, yielding forty-eight bioactive phytochemical compounds; in that work, identification of phytochemical compounds was based on peak area, retention time, molecular weight, MS fragment ions and pharmacological action [18]. The GC-MS analysis of the 70% ethanol extract of Pheretima javanica detected 50 bioactive compound peaks, shown in the chromatogram. The working principle of GC-MS is that the sample is injected into the injector, where it is vaporized. The gaseous sample is carried by the carrier gas to the separation column, where the components separate because of differences in their absorption by the stationary phase. Each component then exits the column along with the mobile phase, and its concentration is measured by the detector, which produces a signal sent to the recorder that draws the curves of the chromatogram. The quality of the separation is assessed on the basis of the retention time.
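The four antibacterial peaks reported above can be tabulated and ranked by area, as sketched below; the values are those quoted in the abstract, while the data layout is ours.

```python
# The four antibacterial peaks reported for the extract (values from the text).
peaks = [
    {"compound": "Nitrous oxide (N2O)", "area_pct": 2.03,
     "height_pct": 7.36, "rt_min": 1.361, "mw_g_mol": 44.013},
    {"compound": "Acetic acid (ethylic acid)", "area_pct": 17.02,
     "height_pct": 29.03, "rt_min": 1.789, "mw_g_mol": 60.05},
    {"compound": "Butanoic acid, 3-methyl- (isovaleric acid)", "area_pct": 3.27,
     "height_pct": 2.04, "rt_min": 3.456, "mw_g_mol": 102.13},
    {"compound": "1,2-Benzenedicarboxylic acid, diethyl ester", "area_pct": 0.95,
     "height_pct": 1.32, "rt_min": 36.306, "mw_g_mol": 222.24},
]
# rank the identified compounds by percentage area
for p in sorted(peaks, key=lambda p: p["area_pct"], reverse=True):
    print(f"{p['compound']}: area {p['area_pct']}%, RT {p['rt_min']} min")
```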
In accordance with the research objective of testing antibacterial activity, the earthworm extract showed antibacterial potential, indicated by the presence of an inhibition zone against the growth of Salmonella typhi, as shown in Figure 1. The GC-MS analysis of the 70% ethanol extract of Pheretima javanica (Table 2) revealed bioactive compounds that act as anti- | 2021-05-11T00:06:44.473Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "10b8de45ea363419f810d0f69b511174d235656e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1751/1/012055",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "090b09875e7253b7837b4acbfaf124fda63286d0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
} |
118951231 | pes2o/s2orc | v3-fos-license | Photonic Versus Electronic Quantum Anomalous Hall Effect
We derive the diagram of the topological phases accessible within a generic Hamiltonian describing quantum anomalous Hall effect for photons and electrons in honeycomb lattices in presence of a Zeeman field and Spin-Orbit Coupling (SOC). The two cases differ crucially by the winding number of their SOC, which is 1 for the Rashba SOC of electrons, and 2 for the photon SOC induced by the energy splitting between the TE and TM modes. As a consequence, the two models exhibit opposite Chern numbers $\pm 2$ at low field. Moreover, the photonic system shows a topological transition absent in the electronic case. If the photonic states are mixed with excitonic resonances to form interacting exciton-polaritons, the effective Zeeman field can be induced and controlled by a circularly polarized pump. This new feature allows an all-optical control of the topological phase transitions.
The discovery of the quantum Hall effect [1] and its explanation in terms of topology [2,3] have renewed interest in band theory in condensed matter physics, leading to the definition of a new class of insulators [4,5]. These include the quantum anomalous Hall (QAH) phase [6] with broken time reversal (TR) symmetry [7][8][9] (also called Chern or Z insulators) and Quantum Spin Hall (QSH or Z_2) topological insulators with conserved TR symmetry [10][11][12]. The QSH effect was initially predicted to occur in honeycomb lattices because of the intrinsic Spin-Orbit Coupling (SOC) of the atoms forming the lattice, whereas the extrinsic Rashba SOC is detrimental for QSH [11]. On the other hand, the classical anomalous Hall effect is now known to arise from a combination of extrinsic Rashba SOC and an effective Zeeman field [13]. In a 2D lattice with Dirac cones, this combination leads to the formation of a QAH phase, for which the intrinsic SOC is detrimental [14][15][16]. In the large Rashba SOC limit, this description was found to converge towards an extended Haldane model [14]. Another field, which has grown considerably in recent years, is the emulation of such topological insulators with different types of particles, such as fermions (either charged, as electrons in nanocrystals [17,18], or neutral, such as fermionic atoms in optical lattices [19,20]) and bosons (atoms, photons, or mixed light-matter quasiparticles) [21][22][23][24][25][26][27][28][29]. The main advantage of artificial analogs is the possibility to tune the parameters [30], to reach otherwise inaccessible regimes, and to measure quantities out of reach in the original systems. These analogs also call for their own applications, beyond those of the originals. Photonic systems have indeed allowed the first demonstration of the QAHE [31,32], later implemented in electronic [33] and atomic systems [34]. They have allowed the realization of topological bands with high Chern numbers (C_n) [35], making it possible to work with superpositions of chiral edge states. From an applied point of view, they open the way to non-reciprocal photonic transport, highly desirable for implementing logical photonic circuits. On the other hand, the study of interacting particles in artificial topologically non-trivial bands could allow direct measurements of Laughlin wavefunctions (WFs) [36] and give access to a wide variety of strongly interacting fermionic [37] and bosonic phases [38]. In that framework, the use of interacting photons, such as cavity polaritons, for which high-quality 2D lattices have been realized [39,40], showing collective properties such as macroscopic quantum coherence and superfluidity [41], could allow the study of the behaviour of bosonic spinor quantum fluids [42,43] in topologically non-trivial bands. In photonics, a Rashba-type SOC cannot be implemented for symmetry reasons, but another effective in-plane SOC is induced by the energy splitting between the TE and TM modes. In planar cavities, the related effective magnetic field has a winding number of 2 (instead of 1 for Rashba). It is at the origin of a very large variety of spin-related effects, such as the optical spin Hall effect [44,45], half-integer topological defects [46,47], the Berry phase for photons [48], and the generation of topologically protected spin currents in polaritonic molecules [49].
The combination of a TE-TM SOC and a Zeeman field in a honeycomb lattice has indeed been found to yield a QAH phase [29,[50][51][52][53][54][55], and the related model represents a generalization of the seminal Haldane-Raghu proposal [56] of a photonic topological insulator, which is recovered in the limit of large TE-TM SOC.
In this manuscript, we demonstrate the role played by the winding number of the SOC in the QAH phases. We establish the complete phase diagram for both photonic and electronic graphene. In addition to opposite C_n in the low-field limit, we find the photonic case to be more complex, showing a topological phase transition absent in the electronic system. We then propose a realistic experimental scheme to observe this transition based on spin-anisotropic interactions in a macro-occupied cavity polariton mode. We consider a driven-dissipative model and demonstrate all-optical control of these topological transitions and of the propagation direction of the edge modes. One of the striking features is that the topological inversion can be achieved at non-zero values of the TR-symmetry-breaking term, allowing chirality control by a weak modulation of the pump intensity.
Phase diagram of the photonic and electronic QAH. We recall the linear tight-binding Hamiltonian of a honeycomb lattice in the presence of a Zeeman splitting and a SOC of the Rashba [57] or photonic type [58]. It is a 4-by-4 matrix written in the basis (Ψ^+_A, Ψ^−_A, Ψ^+_B, Ψ^−_B)^T, where A and B stand for the lattice atom type and ± for the particle spin. J is the tunnelling coefficient between nearest-neighbour micropillars (A/B), Δ is the Zeeman splitting, and λ_i (i = e, p) are the magnitudes of the Rashba (electronic) and TE-TM (photonic) induced SOC, respectively [59]. The complex coefficients f_k and f^±_{k,i} are defined by

f_k = Σ_j exp(−i k·d_φj),   f^±_{k,e} = Σ_j exp(−i k·d_φj) exp(±iφ_j),   f^±_{k,p} = Σ_j exp(−i k·d_φj) exp(±2iφ_j),

where d_φj are the links between nearest-neighbour pillars (atoms) and φ_j = 2π(j − 1)/3 their angle with respect to the horizontal axis. Qualitatively, the crucially different φ dependencies of the spin-dependent tunnelling coefficients f^±_{k,i} are due to the different winding numbers of the Rashba and TE-TM effective fields in the bare 2D systems.
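As a numerical illustration of the two Hamiltonians, the sketch below builds a 4x4 Bloch matrix with the winding-1 and winding-2 coefficients defined above. The placement and signs of the SOC and hopping terms within the matrix are our assumptions (the original Eq. (1) fixes the convention); only the winding structure of f^±_k, which drives the physics discussed here, is taken from the text, and the parameter values are illustrative.

```python
import numpy as np

J, lam, Delta = 1.0, 0.3, 0.2           # hopping, SOC strength, Zeeman splitting
phi = 2.0 * np.pi * np.arange(3) / 3.0   # bond angles phi_j = 2*pi*(j-1)/3
d = np.stack([np.cos(phi), np.sin(phi)], axis=1)  # unit nearest-neighbour links

def H(k, winding):
    """Bloch Hamiltonian on (Psi+_A, Psi-_A, Psi+_B, Psi-_B); winding = 1 or 2."""
    ph = np.exp(-1j * d @ k)                        # exp(-i k . d_phi_j)
    fk = ph.sum()                                   # spin-conserving f_k
    fp = (ph * np.exp(1j * winding * phi)).sum()    # f+_k
    fm = (ph * np.exp(-1j * winding * phi)).sum()   # f-_k
    h = np.zeros((4, 4), dtype=complex)
    h[0, 0], h[1, 1], h[2, 2], h[3, 3] = Delta, -Delta, Delta, -Delta
    h[0, 2] = h[1, 3] = -J * fk                     # spin-conserving A-B hopping
    h[0, 3], h[1, 2] = lam * fp, lam * fm           # spin-flip (SOC) hopping
    return h + h.conj().T - np.diag(h.diagonal())   # Hermitian completion

K = np.array([0.0, 4.0 * np.pi / (3.0 * np.sqrt(3.0))])  # a Dirac point (a = 1)
for w, label in [(1, "electronic (Rashba)"), (2, "photonic (TE-TM)")]:
    print(label, np.round(np.linalg.eigvalsh(H(K, w)), 3))
```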
Without a Zeeman field (Δ = 0), the diagonalization of these two Hamiltonians gives 4 dispersion branches. Near the K and K' points, two branches split and the two others intersect, giving rise to a so-called trigonal warping effect, namely the appearance of three extra crossing points (see Fig. 1(c,d) and Fig. 3(a)). The differences between the two Hamiltonians are clearly visible in the panels of Fig. 1, which show a 2D view of the spin polarizations (a,b) and energies (c,d) of the 2nd branch. In panels (a,b), we see the different in-plane winding numbers around Γ (w_Γ,e = 1 for Rashba and w_Γ,p = 2 for TE-TM SOC). Around the K points, the TE-TM SOC texture becomes Dresselhaus-like with a winding w_K,p = −1, whereas Rashba remains Rashba-like with w_K,e = 1. In each case, the winding numbers around the K and K' points have the same sign and add up to give C_n = ±2 for the electronic and photonic cases, respectively, when TR symmetry is broken. In panels (c,d), one can clearly observe the formation of small triangles near the Dirac points, the vertices of these triangles corresponding to the crossing points with the third energy band. The vertices are oriented along the K − K' direction for the TE-TM SOC and rotated by 60° (the K − Γ direction) for the Rashba SOC, a small detail which has crucial consequences for the topological phase diagram. The topological character of these Hamiltonians, with the appearance of the QAH effect, has already been discussed by deriving an effective Hamiltonian close to the K point in different limits for both the electronic [9,14] and photonic cases [29,50]. However, the presence of other topological phase transitions, due to additional degeneracies appearing at other points of the first Brillouin zone, had not been checked. Figure 2 shows the diagram of topological phases of both models versus the SOC and Zeeman field strengths. The different phases are characterized by the band Chern numbers C_n, which we calculate using the standard gauge-independent and stable technique of [60]. We recall that a change of C_n is necessarily accompanied by a gap closing. Obviously, these phase diagrams are symmetric with respect to Δ = 0 (with inverted signs of C_n for the negative part). At low Δ, both models are characterized by C_n = ±2; however, their C_n signs are opposite because of the opposite winding of their SOC around K. Figure 3(b) shows the corresponding band structure for the photonic case, where the double-peak structure around K and K', arising from the trigonal warping effect and responsible for the C_n value, is clearly visible. Increasing either the SOC or the Zeeman field shifts these band extrema. In the photonic case, the band extrema finally meet at the M point, which makes the gap close, as shown in Figure 3(c). The critical Zeeman field value Δ_1 at which this transition takes place can be found analytically. Increasing the fields further leads to an immediate re-opening of the gap, with the C_n passing from +2 to −1 for the valence band. This case is shown in Figure 3(d), where the number of band extrema is half that of Figure 3(b). This phase transition is entirely absent in the electronic case because of the different orientation of the trigonal warping.
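The band Chern numbers can be computed with the gauge-independent lattice method of [60] (the Fukui-Hatsugai-Suzuki construction), as sketched below for the H(k, winding) of the previous snippet; the grid size and the reciprocal-lattice convention (bond length a = 1) are ours.

```python
import numpy as np

def chern_numbers(Hfunc, b1, b2, N=60):
    """Lattice Chern numbers (Fukui-Hatsugai-Suzuki, ref. [60]) per band."""
    dim = Hfunc(np.zeros(2)).shape[0]
    # eigenvectors on an (N+1) x (N+1) grid; boundary points computed directly
    states = np.empty((N + 1, N + 1, dim, dim), dtype=complex)
    for i in range(N + 1):
        for j in range(N + 1):
            _, states[i, j] = np.linalg.eigh(Hfunc((i / N) * b1 + (j / N) * b2))
    C = np.zeros(dim)
    for n in range(dim):
        flux = 0.0
        for i in range(N):
            for j in range(N):
                u1 = states[i, j][:, n]
                u2 = states[i + 1, j][:, n]
                u3 = states[i + 1, j + 1][:, n]
                u4 = states[i, j + 1][:, n]
                # gauge-invariant product of U(1) link variables on the plaquette
                flux += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                                 * np.vdot(u3, u4) * np.vdot(u4, u1))
        C[n] = flux / (2.0 * np.pi)
    return np.round(C).astype(int)

# reciprocal vectors of the honeycomb Bravais lattice (bond length a = 1)
b1 = np.array([2.0 * np.pi / 3.0, -2.0 * np.pi / np.sqrt(3.0)])
b2 = np.array([2.0 * np.pi / 3.0,  2.0 * np.pi / np.sqrt(3.0)])
print(chern_numbers(lambda k: H(k, 2), b1, b2))   # photonic case, H from above
```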
Increasing the field even further leads to a second topological transition, this time present in both models, associated with the opening of two additional gaps between the two lower and the two upper branches (in the middle of the "conduction" band and of the "valence" band, respectively), as shown in Fig. 3(d). This transition arises when the minimum energy of the second branch at the Γ point equals the maximal energy of the lowest band at the K point, so that the system of 2 bands (each containing 2 branches) splits into 4 bands (each containing a single branch). The corresponding transition in the photonic case occurs when the Zeeman splitting is Δ₂ = 3(J² − λ_p²)/(2J). The last topological phase transition occurs when the middle gap closes at the Γ point for Δ₃ = 3J and then reopens as a trivial gap, whereas the two other bandgaps remain topological.
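As a quick arithmetic check of these thresholds (with illustrative numbers, J taken as the energy unit):

```latex
\lambda_p = 0.3\,J \;\Rightarrow\; \Delta_2 = \frac{3\,(J^2 - 0.09\,J^2)}{2J} \approx 1.37\,J, \qquad \Delta_3 = 3\,J .
```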
All-optical control of topological phase transitions. In what follows, we propose a practical way to implement the photonic topological phases analyzed above. We concentrate on the experimentally realistic configuration of a resonantly driven photonic (polaritonic) lattice [39,40], including a finite particle lifetime and no applied magnetic field, and demonstrate the all-optical control of the band topology. We show that the topologically trivial band structure becomes non-trivial under resonant circularly polarized pumping at the Γ point of the dispersion. A self-induced topological gap opens in the dispersion of the elementary excitations. Tuning the pump intensity allows the system to pass through several topological transitions, demonstrating the chirality inversion.
A coherent macroscopically occupied state of exciton-polaritons is usually created by resonant optical excitation. This regime is well described in the mean-field approximation [41,61]. We can derive the driven tight-binding Gross-Pitaevskii equation in this honeycomb lattice for a homogeneous laser pump F (ℏ = 1).
where i, j = 1..4 correspond to the four WF components. H_ij are the matrix elements of the tight-binding Hamiltonian defined above (Eq. 1) without the Zeeman term on the diagonal (Δ = 0). α₁ and α₂ are the interaction constants between particles with the same and opposite spins, respectively. For polaritons, the latter is suppressed [62] because it involves intermediate dark (biexciton) states, which are energetically far from the polariton states; thus |α₂| ≪ α₁ [63,64] and we neglect it. F_i is the pump amplitude. In the following, we consider a homogeneous pump at k = 0 (pumping beam perpendicular to the cavity plane), which implies that its amplitude on the A and B pillars is the same. However, the spin projections F_s^σ and F_s^{−σ}, determining the spin polarization of the pump, can be different (s: sublattice, σ: spin). The quasi-stationary driven solution has the same frequency and wavevector as the pump, Ψ_s^σ = e^{i(k_p·r − ω_p t)} Ψ_{p,s}^σ, and satisfies the equations above, where ω_p is the frequency of the pump mode and γ_p is the linewidth related to the polariton lifetime τ_p, which allows the dissipation to be taken into account. The tight-binding terms (f_{k_p}, f_{k_p}^σ) of the polariton graphene induce a coupling between the sublattices and polarizations. Eq. (4) is written for an arbitrary pump wave vector k_p. In the following, we consider a pump resonant in energy with the pumped mode. A circular pump induces a circularly polarized macro-occupied state (n_− = 0), with n = n_+ = n_A^+ + n_B^+ = |Ψ_{p,A}^+|² + |Ψ_{p,B}^+|². Combined with spin-anisotropic interactions, it leads to a Self-Induced Zeeman (SIZ) splitting which breaks TR symmetry. A simple analytical formula for the k-dependent SIZ splitting between the two lower branches is obtained for λ_p = 0. One of the key differences with respect to the magnetic-field-induced Zeeman field is the SIZ dependence on the wavevectors and energies of the bare modes. This dependence has already been shown to lead to the inversion of the effective field sign (and thus the inversion of the topology) when both applied and SIZ fields are present in a Bose-Einstein condensate [51]. Figure 4(a) shows the diagram of topological phases under resonant pumping (versus the SIZ), which is quite similar to the one under magnetic field. A method to compute the C_n of the Bogoliubov modes has been developed in [65]; the procedure we use is detailed in the supplementary material [59]. The only difference with respect to the linear case concerns the opening of the two additional gaps, which does not take place at the same pumping values, because of the difference between the SIZ fields in the upper and lower bands. Figure 4(b) shows the magnitude of the different gaps multiplied by the sign of the Chern number of the valence band (C = Σ_{i=1}^{n} C_i) [66] for a given value of the SOC, a quantity highly relevant experimentally. In [39,40], J is of the order of 0.3 meV, whereas the mode linewidth is of the order of 0.05 meV. Band gaps of the order of 0.2 J should therefore be observable. The SIZ magnitude shown on the x-axis (below 1.5 meV) is compatible with experimentally accessible values. So, in practice, the topological transition is observable together with the specific dispersion of the edge states in the different phases, which are presented in [59].
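The quasi-stationary problem stated above can be solved numerically. Below is a minimal sketch in Python, assuming the driven equation takes the fixed-point form (ω_p + iγ_p)Ψ = HΨ + α₁|Ψ|²Ψ + F in the four-component basis; the sign conventions, function names, and the damped-iteration scheme are assumptions of this sketch, not taken from the paper.

```python
import numpy as np

def driven_steady_state(h, f, omega_p, gamma_p, alpha1, n_iter=2000, mix=0.05):
    """Quasi-stationary state of the driven-dissipative tight-binding GPE.

    Solves (omega_p + i*gamma_p - H - alpha1*diag(|psi|^2)) psi = F by a
    damped fixed-point iteration; h is the 4x4 Bloch Hamiltonian at the pump
    wavevector and f the 4-component pump amplitude.
    """
    dim = h.shape[0]
    psi = np.zeros(dim, dtype=complex)
    for _ in range(n_iter):
        nl = alpha1 * np.abs(psi) ** 2            # same-spin interaction only
        m = (omega_p + 1j * gamma_p) * np.eye(dim) - h - np.diag(nl)
        psi_new = np.linalg.solve(m, f)
        psi = (1.0 - mix) * psi + mix * psi_new   # damping stabilizes bistable regimes
    return psi
```

The resulting macro-occupied state then enters the Bogoliubov matrix whose Chern numbers are computed as in the supplement.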
We note that the emergence of topological effects driven by interactions in bosonic systems has already been reported, such as Berry curvature in a Lieb lattice for atomic condensates [67] and topological Bogoliubov edge modes in two different driven schemes based on Kagome lattices [23,68] with scalar particles.
To confirm our analytical predictions and support the observability in a realistic pump-probe experiment (see sketch in [59]), we perform a full numerical simulation beyond the tight-binding and Bogoliubov approximations. We solve the spinor Gross-Pitaevskii equation for polaritons with quasi-resonant pumping, where ψ_+(r, t) and ψ_−(r, t) are the two circular components of the WF, m = 5 × 10⁻⁵ m_el is the polariton mass, τ = 30 ps the lifetime, and U is the lattice potential. The main pumping term P_{0+} is circularly polarized (σ+) and spatially homogeneous, while the 3 pulsed probes are σ− and localized on 3 pillars (circles). The results (filtered by energy and polarization) are shown in Fig. 5. Compared with the previously analyzed [50,51] C = 2 case (a), the larger gap of the C = −1 phase (b) provides better edge protection, a longer propagation distance, and an inverted propagation direction, all achieved by modulating the pump intensity.
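A minimal split-step sketch of such a simulation is given below, in dimensionless units (ℏ = m = 1) and with placeholders for the honeycomb potential, the TE-TM coupling, and the pulsed σ− probes; all parameter values are illustrative and not those of the paper.

```python
import numpy as np

def run_gpe(n=256, dx=0.5, dt=0.01, nt=5000, alpha1=0.01, tau=30.0,
            omega_p=0.0, p0=1.0):
    """Minimal split-step sketch of the driven-dissipative spinor GPE (hbar = m = 1).

    The honeycomb potential, the TE-TM coupling between the two circular
    components, and the pulsed sigma- probes are left as placeholders.
    """
    u = np.zeros((n, n))                          # lattice potential placeholder
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    half_kin = np.exp(-0.25j * dt * (kx**2 + ky**2))   # half kinetic step
    psi_p = np.zeros((n, n), dtype=complex)       # sigma+ component
    psi_m = np.zeros((n, n), dtype=complex)       # sigma- component
    for it in range(nt):
        t = it * dt
        for psi in (psi_p, psi_m):
            psi[:] = np.fft.ifft2(half_kin * np.fft.fft2(psi))
        for psi, drive in ((psi_p, p0), (psi_m, 0.0)):
            # potential + same-spin interaction + decay (alpha2 neglected)
            phase = u + alpha1 * np.abs(psi)**2 - 1j / (2.0 * tau)
            psi[:] = psi * np.exp(-1j * dt * phase)
            psi += -1j * dt * drive * np.exp(-1j * omega_p * t)  # coherent pump
        for psi in (psi_p, psi_m):
            psi[:] = np.fft.ifft2(half_kin * np.fft.fft2(psi))
    return psi_p, psi_m
```

Fourier transforming the recorded fields in time and separating the circular components then mimics the energy- and polarization-filtered detection of Fig. 5.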
Conclusions. We bridge the gap between two classes of physical systems where the QAH effect takes place, showing the crucial role of the SOC winding. In the photonic case, we show that the phases achieved, their topological nature and topological transitions can be controlled by optically induced collective phenomena. Our results show that photonic implementations of topological systems are not only of practical interest, but also bring new physics directly observable in real-space optical emission.
In this supplemental material, we first reintroduce the TE-TM SOC. Then, we provide details concerning the second part of the main text on the all-optical control of topological phase transitions.
Optical spin-orbit coupling
In the main text, we introduced two kinds of SOC, λ_e and λ_p, for electrons and polaritons respectively. We chose this notation to make the comparison between the two cases clearer. Indeed, in our previous works on polariton honeycomb lattices, we used the notation λ_p = δJ [50,51,58]. Taking the TE-TM splitting into account, the tunneling coefficients are defined in the circular-polarization basis as ⟨A, ±|H|B, ∓⟩ = −λ_p e^{−2iφ_j}. In what follows, the index i labels the different u_i (v_i) components of an eigenstate, normalized so that Σ_i (|u_i|² − |v_i|²) = 1.
This condition physically signifies that the creation of one bogolon corresponds to the creation of one quantum of energy ℏω.
Chern numbers of Bogoliubov excitations
The standard formula for the computation of the Chern number can be applied, taking into account that bogolons are composed of two Bloch waves of opposite wave vectors, where d²k = dk_x dk_y and we drop the band index n for simplicity. We can see that the integration of the v part brings in a minus sign, because the integration takes place over an inverted Brillouin zone (BZ). This fact was noticed in Ref. [65] and is commonly used [23,67,71,72]. It is typically formulated by introducing a matrix τ_z = σ_z ⊗ 𝟙₄ directly in the definition of the Berry connection, A = ⟨Φ(k)|τ_z|∇_k Φ(k)⟩.
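The integral expression itself was lost in extraction. A hedged reconstruction, following the symplectic-metric convention just described (the prefactor and operator ordering are assumptions of this sketch):

```latex
C = \frac{1}{2\pi i} \int_{\mathrm{BZ}} d^2k \,
\Big[ \nabla_{\mathbf{k}} \times \big\langle \Phi(\mathbf{k}) \big| \tau_z \big| \nabla_{\mathbf{k}} \Phi(\mathbf{k}) \big\rangle \Big]_z ,
\qquad \tau_z = \sigma_z \otimes \mathbb{1}_4 .
```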
Bogoliubov edge states
To demonstrate one-way edge states in the tight-binding approach, we derive an 8N×8N Bogoliubov matrix for a polariton graphene stripe consisting of N coupled infinite zigzag chains, following the procedure of Ref. [51]. For this, we set a basis of Bogoliubov Bloch waves (u_{A/B,n}^±, v_{A/B,n}^±), where the index n enumerates the chains and k_y is the quasi-wavevector in the zigzag direction. The diagonal blocks describe the coupling within one chain and are derived in the same fashion as the M matrix of the previous section; the coupling between chains is accounted for in the subdiagonal blocks. Figures 1(a,b) show the results of the band structure calculation for two different values of α₁n. The degree of localization on the edges is calculated from the wave-function densities on the edge chains, |Ψ_R|² and |Ψ_L|² (left/right, see inset), and is shown with colour, so that the edge states appear blue and red. In Fig. 1(a), there is only one topological gap, characterized by a Chern number +2, and hence there are two edge modes on each side of the ribbon. In Fig. 1(b), we can observe three topological gaps, with the Chern numbers of the top and bottom bands being ±1 respectively. Each of them is characterized by the presence of only one edge mode on a given edge of the ribbon, and the group velocities of the modes are opposite to those of the previous phase: the chirality is controlled by the intensity of the pump. This inversion, associated with the change of the topological phase (|C| = 2 → 1), is fundamentally different from the one of Ref. [51], observed within the same phase (|C| = 2).
This optically controlled transition allows one to observe the inversion of chirality for weak modulations of a TR-symmetry-breaking pump around a non-zero constant value, which could possibly also be used for amplification. The inversion of chirality of the central-gap edge states (Fig. 1(a,b)) should be observable in a pump-probe experiment, as shown by the numerical simulation in the main text. A sketch of the experiment, using a σ+ and a σ− polarized laser (the homogeneous pump and the localized probe), is presented in Fig. 1(c). One should note that the inverted phases can also be obtained more conventionally, by inverting the direction of the self-induced Zeeman field, which is controlled by the circularity of the homogeneous pump. | 2017-01-13T14:19:26.000Z | 2017-01-13T00:00:00.000 | {
"year": 2017,
"sha1": "df371590c26bc7b3bcde246af6ed1e0ba47bb658",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.03680",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "df371590c26bc7b3bcde246af6ed1e0ba47bb658",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
21674416 | pes2o/s2orc | v3-fos-license | Identification of Serological Biomarkers for Early Diagnosis of Lung Cancer Using a Protein Array-Based Approach
Lung cancer (LC) remains the leading cause of mortality from malignant tumors worldwide. Currently, the lack of serological biomarkers for early LC diagnosis is a major roadblock for early intervention and prevention of LC. To address this challenge, we employed a two-phase strategy to discover and validate a biomarker panel using a protein array-based approach. In Phase I, we obtained serological autoimmune profiles of 80 LC patients and 20 healthy subjects on HuProt arrays, and identified 170 candidate proteins significantly associated with LC. In Phase II, we constructed a LC focused array with the 170 proteins, and profiled a large cohort comprised of 352 LC patients, 93 healthy individuals, and 101 patients with lung benign lesions (LBL). The comparison of autoimmune profiles between the early stage LC group and the combined group of healthy and LBL subjects allowed us to identify and validate a biomarker panel of p53, HRas, and ETHE1 for diagnosis of early stage LC with 50% sensitivity at >90% specificity. Finally, the performance of this biomarker panel was confirmed in ELISA tests. In summary, this study represents one of the most comprehensive proteome-wide surveys, with one of the largest (i.e. 1,101 unique samples) and most diverse (i.e. nine disease groups) cohorts, resulting in a biomarker panel with good performance.
Lung cancer (LC) remains the leading cause of mortality from malignant tumors worldwide (1,2). According to the World Health Organization (WHO), among the 8.8 million cancer-related deaths in 2015, LC caused 1.69 million deaths worldwide (3). In China, the most populated country, LC alone is responsible for the mortality of 42.05 per 100,000 persons (4). LC can be histologically categorized into two main classes: small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC). Approximately 79% of diagnosed LC is NSCLC, comprised of adenocarcinoma, squamous cell carcinoma, and large cell carcinoma (5).
Regardless of the great advancements in targeted therapy and immunotherapy against LC in recent years, surgical resection followed by adjunctive radiation and/or chemotherapy is still the preferred treatment for NSCLC patients in early stages (e.g. stage I-II LC), and when surgery is performed there is a 70% one-year survival rate if the diagnosis is made at the earliest stage (6). Unfortunately, most LC patients are in late stages at the time of diagnosis; for example, more than 75% of LC patients are diagnosed at more advanced stages (7). Currently, high-resolution (or low-dose) computed tomography (CT) of the chest is the only screening test shown to be efficacious at reducing mortality from early stages of lung cancer (8-10). Indeed, as reported by the National Lung Screening Trial (NLST) of 53,454 randomized high-risk, asymptomatic adults, three rounds of annual screening with low-dose CT decreased LC mortality by 20% (8). In fact, LC was only diagnosed in <2% of the participants in the low-dose CT group (11); lesions thought to be malignant often require additional invasive procedures and increased radiation exposure to confirm the diagnosis. Indeed, the cumulative risk of a false positive finding across 3 rounds of screening was 37% in the low-dose CT group, at an 18% estimated overdiagnosis rate (10). Therefore, the discovery of noninvasive serological biomarkers for early stage LC diagnosis that yield high sensitivity and specificity will greatly benefit intervention and prevention of LC.
In this study, we employed a protein array-based approach to comprehensively survey autoantibodies against the human proteome for identification of novel serological biomarkers for early diagnosis of LC. Based on a screening of a large cohort of 1,101 samples, we discovered and validated a panel of three proteins, namely p53, HRas, and ETHE1, that provided 50% sensitivity and >90% specificity. ELISA tests further demonstrated the potential of this biomarker panel in future clinical diagnostic test formats.
MATERIALS AND METHODS
Cohort Description-All serum samples involved in this study were collected at Fujian Provincial Hospital, Fujian Province, China, between 2014 and 2016. This cohort was comprised of 1101 serum samples collected from 162 healthy persons, 560 resident patients with LC, 153 resident patients with lung benign lesions (LBL), and 226 resident patients with other cancers. The 162 healthy persons were recruited during annual health examinations, including chest X-ray, abdominal ultrasonography, routine urinalysis, stool occult blood test, complete blood count, blood chemistries, and tumor antigen tests, such as carcinoembryonic antigen (CEA), CA199, and alpha-fetoprotein (AFP), to name a few. None of them showed any evidence of malignancy in the above tests. The 560 LC patients were recruited after histopathological confirmation of LC tumors. The TNM classification was used for evaluation of NSCLC staging and the VA scheme was used to classify SCLC into limited and extensive stages. The 153 LBL patients, including 83 pneumonia, 39 chronic obstructive pulmonary disease (COPD), and 31 pulmonary tuberculosis (TB) patients, were recruited after accurate clinical assessment. The 226 patients with other cancers were recruited after histopathological confirmation of tumors. These patients included 34, 66, 27, 48, and 51 patients with rectal cancer (RC), liver cancer (LiC), cervical cancer (CC), esophagus cancer (EC), and gastric cancer (GC), respectively. Detailed information on each subject of this cohort is listed in supplemental Table 1. This study was approved by the Ethics Committee (i.e. IRB) of Fujian Provincial Hospital. The sera were prepared according to a standard protocol. Five milliliters of venous blood from each subject was collected into a 12.5 × 100 mm vacuum blood tube with diatomite coagulant, and centrifuged at 4000 rpm for 10 min at room temperature within 4 h after collection. Subsequently, sera were collected into 1.5 ml EP tubes and stored at −80 °C until use.
HuProt Arrays and Serum Profiling Assays-HuProt arrays were provided by CDI Laboratories, Inc. Each HuProt v3.0 array is comprised of 20,240 unique human full-length proteins, covering approximately 75% of the human proteome. Each serum sample was diluted 1000-fold in PBS and profiled on HuProt arrays using a standard protocol as described previously (12-15).
Construction of LC Focused Arrays and Serum Profiling Assays-Candidate proteins identified in the HuProt array experiments were cherry-picked to fabricate the LC focused arrays in a 2 × 7 subarray format per slide. A 14-chamber rubber gasket (GraceBio Corp, Bend, OR) was mounted onto each slide to create individual chambers for the 14 identical subarrays on each slide. The subsequent assay process was identical to that described for the HuProt array assay, with the exception that the volume of buffers or serum samples was reduced to 50 µl per subarray (12).
Data Analysis for Assays Performed on HuProt and LC Focused Arrays-First, the median values of the foreground (F_ij) and background (B_ij) intensity at a given protein spot (i, j) on the protein arrays (i.e. HuProt and focused arrays) were extracted. The signal intensity (R_ij) of each protein spot was defined as F_ij/B_ij. Because each protein is printed in duplicate on an array, R_ij was averaged for each protein as R_p.
Z-scores for each protein on the arrays were calculated using a method similar to the one described in our previous studies (12). A stringent cutoff (Z ≥ 7) was used to determine the positives in this study. The sensitivity and specificity were calculated for each protein. For each comparison (LC versus negative controls), the biomarker candidates were selected with the highest discriminant ability (16), which is defined as

Discriminant ability = (Sensitivity + Specificity) / 2

For the focused arrays fabricated with the candidate biomarkers, the signal value for each protein was normalized by dividing by the median value of the negative controls for each sample. p values obtained from the t test were calculated and adjusted as false discovery rates (17). The optimal cutoff value for each candidate was evaluated with two criteria: 1) at least 90% specificity and 2) the highest discriminant ability.
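A compact Python sketch of this pipeline is given below; the per-sample standardization used for the Z-scores and all variable names are assumptions about the cited method, not a reproduction of it.

```python
import numpy as np

def call_positives(f, b, z_cut=7.0):
    """Spot ratios, duplicate averaging, and Z-score thresholding.

    f, b: foreground/background medians of shape (n_samples, n_proteins, 2),
    with duplicate spots on the last axis. Returns a boolean positives matrix.
    """
    r = (f / b).mean(axis=2)  # R_p: duplicate-averaged signal intensity
    z = (r - r.mean(axis=1, keepdims=True)) / r.std(axis=1, keepdims=True)
    return z >= z_cut

def discriminant_ability(pos, is_case):
    """Per-protein (sensitivity + specificity) / 2 from the positives matrix."""
    sens = pos[is_case].mean(axis=0)          # fraction of cases called positive
    spec = 1.0 - pos[~is_case].mean(axis=0)   # fraction of controls called negative
    return 0.5 * (sens + spec)
```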
ELISA Assay-To develop ELISA-based assays, p53, HRas, and ETHE1 proteins were purified from yeast as described previously (18). After 50 ng of each purified protein was coated onto individual wells of an ELISA plate, each serum sample, diluted 1:500, was added to carry out the standard ELISA tests (18). The immunoreactivity signals were measured by reading the absorbance at 450 nm (A450).
RESULTS
Overall Study Design-We employed the two-phase strategy reported in our previous studies (13,14) to identify novel biomarkers for early LC diagnosis (Fig. 1). Briefly, in Phase I,
100 serum samples collected from 80 LC patients and 20 healthy individuals were individually profiled on HuProt arrays. After data analysis, a total of 170 candidate proteins were identified and used to construct the LC focused arrays for Phase II validation. In Phase II, we assembled a new cohort with serum samples collected from 131 patients with early stage LC and 93 healthy subjects. Because lung benign lesions (LBL) often resemble early stage LC in imaging studies, we also included 101 LBL samples as additional negative controls. We randomly split the LC samples and negative controls (healthy + LBL) in a 2:1 ratio: two-thirds were used for modeling and one-third for independent validation of biomarker candidates. Eight biomarkers were validated with >13% sensitivity at >90% specificity. Further analysis resulted in a three-protein biomarker panel with improved sensitivity, and its performance was further tested in late stage LC and other types of cancer. Finally, this panel was converted into an ELISA-based test that yielded a performance similar to that observed in the array-based assays.
Identification of Candidate Serological Biomarkers in LC Using HuProt Arrays-In Phase I, we employed HuProt arrays to profile 100 serum samples collected from 80 LC patients, including 20 SCLC, 24 adenocarcinoma, 23 squamous-cell carcinoma, and 13 large-cell carcinoma, as well as 20 healthy subjects, for candidate biomarker identification (Table I; supplemental Table S1). Statistical analyses did not show any significant differences between the LC and healthy groups in terms of age, gender, or smoking history composition (Table I).
Each serum sample was diluted and individually incubated on the HuProt arrays, followed by multiplexed detection of autoantigens recognized by human autoantibodies of the IgG and IgM isotypes. Binding signals of both the anti-IgG and anti-IgM channels were acquired, normalized, and quantified for each assay, based on which the standard deviation (S.D.) was calculated (12). Using a stringent cutoff (Z score ≥ 7), positives were determined for each serum sample. For example, p53 and YARS showed strong anti-human IgG and IgM signals, respectively, mostly in LC patients but much less so in healthy subjects (Fig. 2A). Sensitivity and specificity values were calculated for each protein. We chose a generous criterion (i.e. discriminant ability ≥ 60%), which resulted in the identification of 170 candidate proteins, 105 and 77 of which were chosen from the anti-IgG and anti-IgM profiles, respectively (Fig. 2B; supplemental Table S2). Functional enrichment analysis identified many cancer-relevant terms, such as regulation of apoptosis and small GTPase mediated signal transduction, as well as signaling pathways relevant to cancers such as colorectal cancer, pancreatic cancer, and thyroid cancer (FDR < 0.5) (19, 20) (supplemental Table S3).
Identification and Validation of Biomarkers for Early Stage LC Diagnosis with LC Focused Arrays-In Phase II, we fabricated a LC focused array with the 170 candidate biomarker proteins to enable validation with a much larger cohort. We assembled a new LC cohort with serum samples collected from 131 patients with early stage LC, including 30 limited stage SCLC, 55 stage I/II adenocarcinoma, and 46 stage I/II squamous-cell carcinoma. Negative controls included 93 healthy subjects and 101 serum samples from 55 pneumonia, 26 COPD, and 20 pulmonary TB patients. Statistical analysis did not find any significant differences in age, gender, or smoking history between the LC groups and negative controls (Table II; supplemental Table S1). To enable modeling and validation for biomarker identification, we randomly split each LC subgroup and the negative controls in a 2:1 ratio: two-thirds were used for modeling and one-third for subsequent independent validation of biomarker candidates.
Each serum sample was profiled individually on the LC focused arrays using a protocol similar to that described above. Again, both anti-IgG and anti-IgM profiles were obtained simultaneously. In the modeling stage, we compared the serum profiles between the LC and negative controls to identify significant biomarker candidates; eight proteins were identified from the anti-IgG signals (Table III). However, the same analysis did not reveal any significant biomarkers using the anti-IgM signals. The IgG signal distributions of p53, ETHE1, and HRas in the LC and negative controls are shown as examples in Fig. 3A. Areas under the receiver operating characteristic (ROC) curves (AUCs) were calculated to assess the performance of each candidate biomarker. The AUC values of the eight proteins ranged from 0.68 to 0.81 (Table III). We next calculated the maximum discriminant ability values for each protein with a requirement of a minimum specificity of 90% (see Methods). This approach allowed us to determine the optimal cutoff values of signal intensity for each protein with the corresponding sensitivity and specificity values (Table III).
To validate these potential LC biomarkers, we compared the signal intensity of each protein between the LC and negative controls in the validation cohort. As visualized in the box plot analysis, all of them showed significantly higher signal intensities in the LC group than in the negative controls (supplemental Fig. S1). Three proteins, p53, ETHE1, and HRas, are shown as examples in Fig. 3A. We next applied the optimal cutoff values obtained in the modeling stage to determine the sensitivity and specificity for each protein in the validation cohort. All eight proteins yielded similar or better sensitivity and specificity values in the validation cohort (Fig. 3B; supplemental Fig. S1), confirming that the identified biomarkers have robust classification power for early stage LC diagnosis.
Identification of Combinatorial Biomarker Panels with Improved Performance for Early Stage LC Diagnosis-We noticed that the sensitivity values of the individual biomarkers ranged from 13.8% to 32.2%. Therefore, we attempted to identify combinatorial biomarker panels with better performance. We exhaustively evaluated the performance of all possible combinations of between two and eight proteins (253 combinations). First, we employed a binary scoring system to convert the actual signal intensity of each protein to either 1 or 0, such that 1 represented signal intensity greater than the optimal cutoff value, and 0 otherwise. Next, we evaluated the performance of every possible combination in the discovery cohort. For a given combination of n proteins, the sum of the binary scores of the n proteins was assigned to each serum sample as a summary score. If the summary score of a sample was at least k (1 ≤ k ≤ n), the sample was called positive. The sensitivity and specificity at the best discriminant ability value were recorded for each combination. Finally, we identified the combination and its k value with the best discriminant ability, requiring a minimum specificity of 90%.
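The search described here is small enough to run exhaustively. A hedged Python sketch follows, interpreting the scoring rule as "score at least k", which the k = 1 example below implies; all names are illustrative.

```python
from itertools import combinations
import numpy as np

def best_panel(binary, is_case, min_spec=0.90):
    """Exhaustive panel search over all 2- to 8-protein combinations.

    binary: (n_samples, 8) 0/1 matrix, 1 = signal above the per-protein cutoff.
    A sample is called positive when at least k member proteins score 1.
    Returns the panel, k, and (sensitivity, specificity) that maximize the
    discriminant ability subject to the specificity floor.
    """
    n_prot = binary.shape[1]
    best, best_da = (None, None, (0.0, 0.0)), -1.0
    for size in range(2, n_prot + 1):
        for panel in combinations(range(n_prot), size):
            score = binary[:, list(panel)].sum(axis=1)
            for k in range(1, size + 1):
                called = score >= k
                sens = called[is_case].mean()
                spec = 1.0 - called[~is_case].mean()
                da = 0.5 * (sens + spec)
                if spec >= min_spec and da > best_da:
                    best, best_da = (panel, k, (sens, spec)), da
    return best
```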
As a result, the best combination, comprised of p53, ETHE1, and HRas, achieved 50.7% sensitivity at 90.7% specificity with a k value of 1. In other words, a serum sample was scored positive when at least one (i.e. k = 1) of the three proteins showed signal intensity greater than the corresponding optimal cutoff value. When this panel was applied to the validation cohort, we obtained similar values of sensitivity and specificity (Fig. 3B), demonstrating the robustness of this panel in diagnosis of early LC. Moreover, after combining the results of the discovery and validation stages, the overall sensitivity for diagnosis of limited stage SCLC, stage I/II adenocarcinoma, and stage I/II squamous cell carcinoma was 53.3%, 45.5%, and 54.3%, respectively. When only high-risk smokers (i.e. ≥20 pack-years and age >55 years) were compared between early LC and negative controls, the performance of this biomarker panel remained almost the same, at 50.0% sensitivity and 84.8% specificity.
Performance of the Biomarker Panel in Late Stage LC and Other Types of Cancer-To evaluate potential value of this biomarker panel in late stage LC diagnosis, we recruited a new LC cohort of 221 serum samples, collected from 43 patients with extensive stage SCLC, 99 patients with stage III/IV adenocarcinoma, and 79 patients with stage III/IV squamous-cell carcinoma, and profiled them on the LC focused arrays. By applying this biomarker panel to analyze the obtained data set, we observed a sensitivity of 49.8%, suggesting that this biomarker panel was also useful for late stage LC diagnosis.
It is known that many of the same tumor antigens can be found in patients with a wide variety of cancers, diminishing their value for accurate diagnosis of a specific cancer type. To evaluate the performance of this biomarker panel in other types of cancer, we profiled a cohort of 226 serum samples, collected from 34 rectal cancer (RC), 66 liver cancer (LiC), 27 cervical cancer (CC), 48 esophagus cancer (EC), and 51 gastric cancer (GC) patients.

ELISA Validation of the Biomarker Panel-To transform the array-validated biomarker panel into a more clinically friendly platform, we developed an enzyme-linked immunosorbent assay (ELISA) for the three proteins. Two cohorts were assembled: one contained 226 samples randomly selected from those used in Phase II, and the other contained 229 newly collected samples (see Fig. 1; supplemental Table S1). As expected, analysis of the ELISA data obtained with the samples used in the array-based assays demonstrated that all three proteins showed significantly higher signals in both the early and late LC groups as compared with the healthy and LBL groups. To ensure more rigorous testing, the 229 newly collected samples were tested in a single-blind fashion. A similar result was obtained (Fig. 4A).
We next evaluated the performance of this biomarker panel with the combined ELISA data sets. The ELISA data were converted to a binary scoring system by using a cutoff value of 2 S.D. above the mean of the signal intensity of the combined healthy group, following the standard ELISA protocol. Using the same criteria as described above, 49.6% and 58.8% of samples in the early and late stages of LC, respectively, were scored as positives (Fig. 4B). In contrast, only 10.3% and 13.7% of healthy and LBL samples, respectively, were scored as false positives. Therefore, this biomarker panel showed 49.6% sensitivity at 87.9% specificity for early LC diagnosis in the ELISA tests. Moreover, the overall sensitivity obtained in the ELISA tests for diagnosis of limited stage SCLC, stage I/II adenocarcinoma, and stage I/II squamous cell carcinoma was 55.9%, 44.4%, and 48.9%, respectively.
DISCUSSION
Our study design had several strengths. First, we employed the most comprehensive human proteome (HuProt) arrays, with >75% coverage of the human proteome, to improve the likelihood of finding potential biomarkers. Second, we recruited 560 LC patients with SCLC and NSCLC presenting with all three forms at different disease stages, aiming at finding robust LC biomarkers. Third, we combined the LBL samples with healthy subjects as negative control groups to enable better discrimination of malignant from benign lesions. Finally, ELISA was used as an independent platform to evaluate the performance of the newly discovered biomarker panel. A limitation of this study is that only Chinese serum samples were employed, raising a possibility, though remote, that there could exist some ethnic bias. Therefore, further validation studies with serum samples collected from other ethnic groups are necessary to confirm the performance of this biomarker panel.
This design allowed us to rapidly discover and validate eight proteins, namely p53, ETHE1, CTAG1A, C1QTNF1, TEX264, CLDN2, NSG1, and HRas, as biomarkers for early LC diagnosis. Many of them are highly relevant in tumorigenesis. For example, p53 is a very well studied tumor suppressor involved in a plethora of cellular functions, such as inducing cell cycle arrest, apoptosis, senescence, DNA repair, or changes in metabolism (21,22). Many mutations in p53 are found in various types of tumors, including LC (23,24). HRas is a member of the Ras oncogene family. Somatic mutations in HRAS have been found to be associated with bladder cancer, thyroid carcinoma, salivary duct carcinoma, epithelial-myoepithelial carcinoma, and kidney cancers (25,26). ETHE1 is a member of the metallo-beta-lactamase family that catalyzes the oxidation of a persulfide substrate to sulfite (27). To the best of our knowledge, this protein has not previously been reported as a biomarker for any disease. Interestingly, ETHE1 has been shown to suppress TP53 expression via formation of a protein complex with HDAC1 and p53 (28). This observation might provide novel insights into the etiology of LC development. CTAG1A is a known tumor cell antigen found in various types of cancers (29). Furthermore, seven of the eight biomarkers (except CTAG1A) showed positive immunohistochemistry staining in LC tissue sections (30). In addition, to our disappointment, none of the candidate biomarkers identified in Phase I could be validated in the anti-IgM profiles in Phase II. One possible explanation for this inconsistency is that more generous criteria (e.g. lower required specificity and sensitivity) were used for selecting candidate biomarkers in Phase I, because we intended to be inclusive so as not to miss any potential candidates. The fact that none of the anti-IgM candidates could be validated in Phase II emphasizes the importance of implementing an independent validation step in biomarker discovery.
The biomarker panel identified in this study outperformed previously reported LC biomarkers. For example, the sensitivity of detecting circulating tumor antigens, such as CA125, CA199, neuron specific enolase (NSE), carcinoembryonic antigen (CEA), and cytokeratin 19 fragment (CYFRA 21-1), is only 5.0%, 4.9%, 19.7%, 17.2%, and 26.5%, respectively, in patients with stage I NSCLC (31). In addition, the fact that some of these tumor antigens, such as CYFRA 21-1, are found elevated in serum samples of patients with radiation pneumonitis has limited their use in distinguishing LC from pneumonitis (32,33). Finally, the concentration of many circulating antigens tends to be very low, because only a fraction of these proteins is distributed to the plasma from a few cancer cells in the preclinical stage, making them extremely challenging to detect (34,35). The adaptive immune system is able to effectively amplify and memorize immune responses to tumor antigens, thereby enabling the exploitation of autoantibodies as cancer biomarkers (36-40). Several autoantibodies against tumor antigens, such as p53, ubiquilin 1, cyclin Y, livin, and survivin, have been found to be readily detectable in serum samples collected from LC patients (41-45). However, previous reports of LC biomarker identification suffered from small sample sizes, a lack of a proper disease control group, and/or limited subtypes of LC (45-47). As a result, to date, these reported autoantibody-based serological biomarkers do not provide sufficient sensitivity or specificity for LC diagnosis, let alone early LC diagnosis (45,48).
In summary, we performed a comprehensive autoantibody-based survey for the discovery and validation of serum biomarkers for early LC diagnosis. It is important to note that, because the serum samples were collected from patients at diagnosis, the biomarkers identified in this study were not identified in a LC screening cohort. Therefore, it will be important in the future to examine the performance of these biomarkers with serum samples collected before a person shows any LC-relevant pulmonary symptoms. Furthermore, because some genes are known to be mutated in LC, we believe that inclusion of mutated proteins on the protein arrays may further improve the accuracy of LC diagnosis and reduce false positive rates. As compared with protein-based biomarkers for cancer diagnosis, we believe that the HuProt array-based approach offers a unique advantage because the identified biomarkers are autoantibody-based. Because most proteins are not stable, especially when secreted into the peripheral blood, the concentrations of these proteins can fluctuate tremendously from individual to individual, making them unreliable to detect. On the other hand, autoantibodies are extremely stable in the blood and can be amplified by the immune system. Indeed, autoantibodies of the IgG/A/E isotypes can have long-lasting memory in a patient, rendering them ideal biomarkers for diagnosis and prognosis. Therefore, we believe that the HuProt array-based approach is playing, and will continue to play, a dominant role in cancer biomarker identification.
Acknowledgments-Materials and/or funding for the study described in this [article/presentation] are provided by CDI Laboratories. Dr. Zhu is a founder, consultant to, and Scientific Advisory Board member for CDI Laboratories. Under a licensing agreement between CDI Laboratories and the Johns Hopkins University, Dr. Zhu is entitled to royalties on an invention described in this [article/presentation]. The terms of this arrangement are being managed by the Johns Hopkins University in accordance with its conflict of interest policies. The other authors have declared that no competing interests exist. | 2018-04-03T04:30:18.984Z | 2017-10-11T00:00:00.000 | {
"year": 2017,
"sha1": "8509dbeda3386cf01f5764ce6711f78feb96dec8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1074/mcp.ra117.000212",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9462a5b4d13e4909718466d34a1649327239cb6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213916001 | pes2o/s2orc | v3-fos-license | Power-Law Return-Volatility Cross Correlations of Bitcoin
This paper investigates the return-volatility asymmetry of Bitcoin. We find that the cross correlations between return and volatility (squared return) are mostly insignificant at the daily level. In the high-frequency region, we find that a power law appears in the negative cross correlation between returns and future volatilities, which suggests that the cross correlation is long ranged. We also calculate a cross correlation between returns and a power of absolute returns, and we find that the strength of the cross correlations depends on the value of the power.
Introduction
It has long been known that return and volatility are negatively correlated, and early studies [1,2] attempt to explain the return-volatility asymmetry as a leverage effect: a drop in the value of a stock increases financial leverage, or the debt-to-equity ratio, which makes the stock riskier and increases the volatility. The other promising explanation for the return-volatility asymmetry is the volatility feedback effect discussed in [3,4]: if volatility is priced, an anticipated increase in volatility raises the required return, leading to an immediate stock price decline. Although the two effects suggest the same negative correlations, the causality is different [5].
Comparing the two effects empirically, Baekaert et al. [5] and Wu [6] argue that the dominant determinant is the volatility feedback effect. However, the studies using GARCH-type models [7,8,9] suggest that volatility increases more after negative returns than positive ones, which favors the leverage effect.
To discuss the full temporal structure of the return-volatility asymmetry, using squared returns as a proxy of volatility, Bouchaud et al. [10] calculate the return-volatility correlation function and find that returns and future volatilities are negatively correlated. On the other hand, reverse correlations, i.e., correlations between future returns and volatilities, are found to be negligible. The results are fitted to an exponential function, and it is concluded that the correlations are short ranged. In addition, the decay times are estimated to be about 10 (50) days for stock indices (individual stocks).
While for most developed markets, negative correlations between returns and future volatilities are found, an interesting phenomenon is observed in Chinese markets. Qiu et al. [11] calculate the return-volatility correlation function for equities in the Chinese market and find that returns and future volatilities are "positively" correlated, which is called the anti-leverage effect. Further studies [12,13] also support the anti-leverage effect in the Chinese market.
Although the return-volatility asymmetry of Bitcoin has been investigated using various models, such as asymmetric GARCH-type and stochastic volatility models, it seems that a consistent picture of the return-volatility asymmetry of Bitcoin has not yet been obtained. For instance, while Bouoiyour et al. [27] observe a volatility asymmetry that reacts to negative news rather than positive, Katsiampa [28] and Baur et al. [29] find an inverted volatility asymmetry that reacts to positive news rather than negative. Moreover, several studies [30,31,20] find no evidence of a leverage effect in Bitcoin prices.
Bouri et al. [32] investigate return-volatility asymmetry in two periods separated at the price crash of 2013. They find that, while before the crash Bitcoin shows inverted volatility asymmetry, after the crash, and for the whole period, no significant volatility asymmetry is observed. Using the stochastic volatility model, Philip et al. [33] find that one day ahead volatility and returns are negatively correlated.
Here, we approach the return-volatility asymmetry of Bitcoin through return-volatility cross correlations. We calculate a cross correlation between returns and a power of absolute returns. This is in part motivated by the existence of the Taylor effect [34,35], which suggests that the strength of autocorrelations of a power of absolute returns, |r|^d, is dependent on the value of the power d; typically, the maximum autocorrelations are obtained at d ≈ 1 for stocks [35] and at d ≈ 0.5 for exchange rates [36]. The Taylor effect is also present for Bitcoin [37]. Thus, we investigate how the cross correlation of Bitcoin depends on the value of the power.
This paper is organized as follows. Section 2 describes the data and methodology. Section 3 presents the empirical results. Finally, we conclude in Section 4.
Data and Methodology
We use Bitcoin tick data (in dollars) traded on Bitstamp from January 10, 2015 to January 23, 2019, downloaded from Bitcoincharts. Let p_{t_i}, t_i = iΔt, i = 1, 2, ..., N, be the time series of Bitcoin prices with sampling period Δt. We define the return R_i by the logarithmic price difference, namely R_i = ln p_{t_i} − ln p_{t_{i−1}}. In this study, we consider high-frequency returns with Δt = 2 min, and we also consider daily returns. We further calculate the normalized returns r_i = (R_i − R̄)/σ_R, where R̄ and σ_R are the average and standard deviation of R_i, respectively. We calculate the cross correlation CC_d(j) between returns and the d-th power of absolute returns at lag j as

CC_d(j) = ⟨(r_i − µ_r)(|r_{i+j}|^d − µ_{|r|^d})⟩ / (σ_r σ_{|r|^d}),

where µ_r and µ_{|r|^d} are the averages of r_i and |r_i|^d, and σ_r and σ_{|r|^d} are the standard deviations of r_i and |r_i|^d, respectively.
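These definitions translate directly into code; a minimal sketch follows (the array layout and function name are assumptions of this sketch).

```python
import numpy as np

def cross_corr(prices, d, max_lag):
    """CC_d(j) between normalized returns r_i and |r_{i+j}|^d.

    prices: 1D array of prices sampled at a fixed period (e.g. 2 minutes).
    Returns a dict mapping lag j in [-max_lag, max_lag] to CC_d(j).
    """
    big_r = np.diff(np.log(prices))            # log returns R_i
    r = (big_r - big_r.mean()) / big_r.std()   # normalized returns r_i
    a = r - r.mean()
    b = np.abs(r) ** d
    b = b - b.mean()
    denom = r.std() * (np.abs(r) ** d).std()
    cc = {}
    for j in range(-max_lag, max_lag + 1):
        if j >= 0:
            cc[j] = np.mean(a[: len(a) - j] * b[j:]) / denom
        else:
            cc[j] = np.mean(a[-j:] * b[: len(b) + j]) / denom
    return cc
```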
Empirical Results
First, in Figure 1, we show the cross correlation CC_d(j) of the daily returns for d = 2.0. The cross correlations are mostly consistent with zero for both positive and negative lags j, except for j = 0 and 1, at which negative correlations are observed. Similar results are obtained for other values of d. Thus, at the daily level, the cross correlations are mostly insignificant, except at contemporaneous and small positive lags.
Next, in Figure 2, we show the cross correlation CC_d(j) calculated with 2-min high-frequency returns for d = 2.0. For positive j, we find negative cross correlations lasting from small to large lags, which is consistent with the results observed for developed markets [10,38]. For negative j, we observe positive, but smaller, cross correlations at several small lags; for larger (negative) lags, the cross correlations are consistent with zero. For the contemporaneous case, i.e., j = 0, we observe negative cross correlations.
To examine the scaling properties of the cross correlations at positive lags, we plot the negative values of the results, i.e., −CC_d(j), in Figure 3 on a log-log scale.
We fit the cross correlations with the power-law function κj^{−γ} and the exponential function α exp(−j/τ) in the range j ∈ [1, 200], where κ, γ, α, and τ are fitting parameters. The fitting results of the power-law (exponential) function are depicted by the red (green) curve in Figure 3. We find that the cross correlations are better described by the power-law function than by the exponential function. In particular, we observe that the exponential function does not adequately describe the cross correlations at small lags. This finding differs from the results of previous studies that observe exponential behavior in the cross correlation [10,11,13]. Exponential behavior in the cross correlation indicates that the cross correlation quickly disappears as the lag increases, i.e., the correlation is short ranged. On the other hand, the power-law behavior that we observe indicates that the cross correlation decreases slowly with the lag, i.e., the correlation is long ranged.
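The two candidate forms can be compared with a standard least-squares fit; a brief sketch (initial guesses are arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_tail(lags, neg_cc):
    """Fit -CC_d(j), j in [1, 200], with power-law and exponential forms."""
    j = np.asarray(lags, dtype=float)
    y = np.asarray(neg_cc, dtype=float)
    power = lambda j, kappa, gamma: kappa * j ** (-gamma)
    expo = lambda j, alpha, tau: alpha * np.exp(-j / tau)
    (kappa, gamma), _ = curve_fit(power, j, y, p0=(y[0], 0.5))
    (alpha, tau), _ = curve_fit(expo, j, y, p0=(y[0], 50.0))
    return {"kappa": kappa, "gamma": gamma, "alpha": alpha, "tau": tau}
```

Comparing the residuals of the two fits on a log-log scale then reproduces the visual comparison of Figure 3.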
In Figure 4, we plot γ as a function of d and find that γ increases with d. We fit the results to a quadratic function, γ(d) = αd² + βd + ρ, where α, β, and ρ are fitting parameters; the fitting results are listed in Table 1. From the fit, we see that for d → 0 the power γ approaches a value around 0.56. To investigate the strength of the cross correlations, we plot κ as a function of d in Figure 5; more precisely, κ represents the strength of the cross correlation at lag j = 1. We find that κ is a convex function of d and that the maximum strength is obtained around d ≈ 1.4. Thus, the correlation CC_d(1) at d ≈ 1.4 is stronger than the traditional cross correlation defined at d = 2.
Conclusion
At the daily level, cross correlations are mostly insignificant for Bitcoin. By examining high-frequency Bitcoin returns, we find that returns and future volatilities are negatively correlated and that the cross correlations between returns and future volatilities show power-law behavior. We calculate cross correlations between returns and the d-th power of absolute returns and find that the maximum cross correlation is obtained at d ≈ 1.4. Thus, we were able to obtain clearer evidence of the cross correlation by choosing values of d other than the traditional d = 2.
Our findings on cross correlations suggest that, in modeling asset time series, we should more seriously consider models that produce power law behavior in the cross correlations.
For example, Ref. [40] proposes a fractional random walk model combined with a simple auto-regressive conditional heteroskedastic model, denoted FRWARCH, and finds that the FRWARCH model exhibits a power law in the cross correlations.
There exist universal properties, such as volatility clustering and the absence of autocorrelations in returns, that appear across various assets. These properties are called stylized facts (e.g., [41]). The existence of stylized facts suggests that price formation is governed by certain common dynamics. If Bitcoin has a property of the cross correlation different from other assets, there could exist a different type of dynamics in Bitcoin. To come to a definite conclusion about whether the power law behavior appears only in Bitcoin, it would be desirable to examine other assets in detail. | 2020-02-13T09:21:57.326Z | 2020-02-11T00:00:00.000 | {
"year": 2021,
"sha1": "f2066c9a015f1fd22ce9be0e38411283193d37dc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2102.08187",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0bca1d3aefcaf90068454d19d7fdaa3271f25bd8",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Physics",
"Economics"
]
} |
218979295 | pes2o/s2orc | v3-fos-license | Association of Growth Differentiation Factor 5 (GDF5) Gene Polymorphisms with Susceptibility to Knee Osteoarthritis in Saudi Population
Background: Osteoarthritis (OA) is among the most prevalent joint diseases. Reduced patient life quality and productivity represent major personal and community strains. The GDF5 gene is involved in the development of bone and cartilage. Studies have reported a clear and highly reproducible association between susceptibility to knee osteoarthritis (KOA) and GDF5. This study aims to detect GDF5 gene polymorphism and to evaluate its association with susceptibility to KOA. Methods: This is a case-control study carried out at the female section of the College of Applied Medical Sciences, Taif University. Fifty samples were collected from KOA patients at King Faisal Medical Complex (Rheumatology Clinic), Taif, KSA. The body mass index (BMI) and the functional disability were estimated in all patients. CRP and RF were estimated in the serum of all included subjects by ELISA. ESR was determined by the Westergren method. The GDF5 T > C (rs143383) promoter polymorphism was assessed by PCR-RFLP. Results: We observed a statistically significant elevation of CRP, RF, and ESR in the patients with KOA relative to the controls. Seventy-two percent of our patients had obesity grade I or II and 56% had severe functional disability on the WOMAC index. The TT and TC genotypes of the GDF5 gene polymorphism were statistically more frequent in KOA patients than in the controls; the TT genotype may be a risk factor for KOA. The T allele was more frequent in KOA and the C allele was more frequent in the controls. GDF5 polymorphism was not related to BMI or functional disability of KOA, but was found to be significantly related to high ESR and CRP. Conclusions: The present study revealed a possible genetic link between KOA and GDF5 polymorphism and suggests that the TT genotype may increase the risk of development of KOA. However, a future study with a large number of patients is needed to confirm our results.
In European and Asian populations, the rs143383 polymorphism in the 5'-untranslated region of GDF5 is associated with knee OA [23]. The OA-associated single nucleotide polymorphism (SNP) rs143383 is a C-to-T transition located within the 5′ untranslated region (5′UTR) of GDF5, which codes for the extracellular signaling molecule GDF5. Regarding the association between rs143383 and OA, investigators demonstrated that the OA-risk T allele of the SNP mediated reduced mRNA expression relative to the C allele in a luciferase assay conducted in a chondrocyte cell line; chondrocytes are the only cell type present in cartilage [22]. This study stressed rs143383 as the vital functional SNP responsible for OA. It was also shown that the T allele was associated with diminished production of GDF5 in cartilage [17].
Genome-wide association studies (GWAS) have reported a clear and highly reproducible association between knee OA susceptibility and GDF5. In particular, single nucleotide polymorphisms (SNPs) spanning a 130 kb region comprising GDF5 and the downstream UQCC1 (ubiquinol-cytochrome C reductase complex assembly factor 1) were associated with a 1.2- to 1.8-fold increase in knee OA risk [24].
Genetic factors are known to be important in the pathogenesis of osteoarthritis. The etiology of osteoarthritis remains complex, and the latest genetic information indicates that susceptibility to osteoarthritis arises from unique combinations of multiple gene-gene and gene-environment interactions. This study was carried out to detect GDF5 gene polymorphism and to evaluate its association with susceptibility to knee osteoarthritis.
Methods
This study is a case-control conducted in the female section of the College of Applied Medical Sciences, Taif University.
Introduction
Osteoarthritis (OA) is a widespread polygenic multifactorial disorder accompanied by joint cartilage degeneration and synovial inflammatory changes [1]. OA is the primary cause of physical disability in the elderly, and is characterized by pain in the joints, inflammation, and rigidity [2,3]. It is among the most prevalent joint diseases [4]; approximately 242 million people worldwide have symptomatic hip and/or knee OA [5], and among disorders it is listed as the eleventh highest cause of disability internationally [4]. Reduced patient life quality and productivity represent a major personal and community strain, with a universal occurrence of 3.8% [6,7].
Knee OA has a higher prevalence than OA of other joints, with an incidence rate of approximately 45 percent, rising to 60.5 percent among obese individuals [8]. Incidence rises with each decade of age, with the highest annual occurrence between ages 55-65 [9]. Knee OA is considered to be a widespread degenerative joint disorder primarily affecting the elderly [10]. It is characterized by destruction of the joint cartilage [11]. A precise etiology of OA evolution remains lacking; however, age, genetic factors, environmental conditions, and inappropriate lifestyles are known risk factors for OA [12]. The effect of certain genes is being examined, as the genetic influence on primary OA is high, i.e. 40% for the knee, 60% for the hip, and 65% for the hand joints [13]. The growth differentiation factor 5 (GDF5) gene is located on chromosome 20q11.2 and regulates the expression of the GDF5 protein. It is classified within the bone morphogenetic protein (BMP) group [14]. It is involved in the development of bone and cartilage, particularly in the endochondral ossification process [15]. Mutation of the GDF5 gene is associated with generalized osteoarthritis and congenital skeletal diseases [16].
GDF5, also recognized as the cartilage-derived morphogenetic protein of the transforming growth factor-β (TGF-β) superfamily, has been found to play a key role in the growth, repair, and reconstruction of cartilage and bone [17]. GDF-5 binds to the transmembrane serine/threonine kinase type I and II receptors to activate its signaling pathway [18]. BMPR-IB (BMP receptor IB), BMPR-II, and activin receptor (ActR) type IIB have higher affinities for the GDF-5 ligand [19]. Upon GDF-5 binding, the receptors are phosphorylated to activate the downstream Smad pathway. The Smad proteins then translocate into the nucleus to regulate the transcription of various genes [20].
Several studies have revealed that GDF5 performs a major role in musculoskeletal processes, influencing endochondral calcification, joint development, tendon repair, and bone formation [17]. Defects of this gene have been shown to be correlated with abnormal joint development or skeletal disorders in humans and mice [21]. Moreover, polymorphism in the GDF5 gene is related to low expression of the GDF5 protein in the knee joint [22].
Statistical Analysis
Using the SPSS computer software version 22.0, the data were recorded, tabulated, and analyzed. Quantitative data are presented as mean and standard deviation, and qualitative data as number and percentage. The chi-square (χ²) test was used for comparing qualitative variables. P < 0.05 was considered significant, and odds ratios (OR) with 95% confidence intervals (CI) were calculated.
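For a 2×2 genotype-by-status table, the OR and its 95% CI follow from the standard error of the log-odds; a minimal sketch (Woolf/log method, variable names assumed; a continuity correction would be needed for zero cells):

```python
import numpy as np
from scipy.stats import chi2_contingency

def or_ci(a, b, c, d):
    """Odds ratio with 95% CI and chi-square P value for a 2x2 table.

    a, b: exposed cases / exposed controls; c, d: unexposed cases / controls.
    """
    or_hat = (a * d) / (b * c)
    se = np.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)   # SE of log(OR)
    lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se)
    chi2, p, _, _ = chi2_contingency([[a, b], [c, d]])
    return or_hat, (lo, hi), p
```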
Results
The present study included 50 patients with knee osteoarthritis (KOA) and 50 healthy individuals. The age of the patients ranged between 25 and 66 years, while in the controls it ranged between 28 and 68 years. The patient group included 13 males and 37 females, and the control group included 14 males and 36 females (Table 1). The C-reactive protein (CRP) level in KOA patients ranged between 1.01 and 8.12 mg/L with a mean ± SD of 4.324 ± 2.045 mg/L, while in the controls it ranged between 0.004 and 2.03 mg/L with a mean ± SD of 0.91 ± 0.5 mg/L. The difference in CRP levels between patients and controls was statistically significant (t = 11.48, P < 0.0001) (Figure 1A). In patients with KOA, the ESR ranged between 3 and 20 mm/h with a mean ± SD of 10.48 ± 3.52 mm/h. In control individuals, the ESR ranged between 1 and 13 mm/h with a mean ± SD of 7.68 ± 2.97 mm/h. The difference between KOA patients and the controls was statistically significant (t = 4.299, P < 0.0001) (Figure 1B). The RF was assessed in the serum of patients with KOA and the controls; in patients the serum level ranged between 3.45 and 12.24 IU/ml with a mean ± SD of 6.5 ± 2.7 IU/ml, while in the controls it ranged between 0.023 and 5.16 IU/ml with a mean ± SD of 2.78 ± 1.4 IU/ml. The difference between both groups was statistically significant (t = 8.73, P < 0.0001) (Figure 1C). Assessment of BMI in the included KOA patients showed that most of them were suffering from obesity (Figure 2).
Functional disability in patients with KOA
We analyzed functional disability in patients with KOA using the most common available indices. According to the Kellgren score, 44% of our patients
Blood samples preparation
A 10 ml peripheral venous blood sample was collected from all included subjects. Five milliliters of each sample was placed in EDTA tubes for DNA extraction and ESR determination. The remaining sample was left for 1 hour in a serum separator collection tube to clot at room temperature and then centrifuged at 3000 rpm for 5 minutes. The separated serum was stored at −20 °C until analysis. CRP and RF levels were estimated in serum samples from KOA patients and control subjects using the Human C-Reactive Protein (CRP) ELISA Kit (Abcam, Cambridge, MA, USA; Cat No. ab99995) and the Rheumatoid Factor (RF) ELISA Kit (MyBioSource, Inc., San Diego, CA 92195-3308, USA; Cat No. MBS262327), respectively. ESR was determined using the Westergren method.
Genomic DNA extraction
Genomic DNA was extracted and purified from the EDTA-anticoagulated peripheral blood of knee osteoarthritis patients and controls using the QIAamp DNA mini kit (Qiagen, CA, USA). Purified DNA was kept at −80 °C until used for genotyping.
GDF5 T > C (rs143383) promoter polymorphism detection by PCR-RFLP
Genotyping of GDF5 T > C was performed using polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) as described by Tulyapruek et al. [29]. PCR amplification was carried out using a recombinant Taq polymerase master mix (DreamTaq Green, code number K1081, LOT: 00643300; Thermo Fisher Scientific Baltics UAB, Vilnius, Lithuania) in a 25 µl total volume. The primer sequences used to amplify the promoter (rs143383) of GDF5 were GATTTTTTCTGAGCACCTGCAGG (forward) and GTGTGTGTTTGTATCCAG (reverse). Cycling in a thermocycler (PCR Sprint, Thermo Fisher, Waltham, MA) comprised an initial denaturation for 5 minutes at 95 °C, followed by 35 cycles of denaturation for 1 minute at 94 °C, annealing for 1 minute at 58 °C, and extension for 1 minute at 72 °C, with a final extension for 10 minutes at 72 °C. Following the manufacturer's protocol, 10 µL of PCR product was incubated with 3 units of BsiEI restriction enzyme for 4 hours at 37 °C. The digested product was electrophoresed on a 2% agarose gel with ethidium bromide staining and visualized on a UV transilluminator. The fragment lengths were 104 and 230 bp for the CC genotype; 104, 230, and 344 bp for TC; and 344 bp for TT.
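The genotype call follows mechanically from which fragments appear on the gel. A minimal sketch of that mapping, assuming the band sizes reported above as read from the 2% agarose gel, is:

```python
# Call the rs143383 genotype from the BsiEI digestion pattern on the gel.
EXPECTED = {
    frozenset({104, 230}): "CC",       # both alleles cut
    frozenset({104, 230, 344}): "TC",  # heterozygote: cut and uncut fragments
    frozenset({344}): "TT",            # neither allele cut
}

def call_genotype(bands: set[int]) -> str:
    """Map the set of observed fragment lengths (bp) to a genotype call."""
    genotype = EXPECTED.get(frozenset(bands))
    if genotype is None:
        raise ValueError(f"Unexpected band pattern: {sorted(bands)}")
    return genotype

print(call_genotype({104, 230, 344}))  # -> TC
```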
GDF5 gene polymorphism
We analyzed the frequency of the GDF5 gene polymorphism in KOA patients and observed that the TT and TC genotypes were more frequent in KOA patients than in the controls, and that the TT genotype may be a risk factor for OA. The difference in the frequency of GDF5 polymorphism genotypes between KOA patients and the controls was statistically significant (P = 0.019), Table 3.
Relation between genotypes frequency of GDF5 gene polymorphism and the studied parameters
There was no statistically significant association between BMI and the GDF5 polymorphism in our included patients (χ² = 1.24, P = 0.99), Table 5.
Analysis of the allele frequencies of the GDF5 gene polymorphism revealed that the T allele was more frequent in KOA patients (56%), whereas the C allele was more frequent in the controls (62%). This difference was statistically significant (P = 0.006), Table 4.
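Because each subject carries two alleles, the allele-level comparison expands the genotype counts into a 2 × 2 allele table. A sketch of that expansion, using the same hypothetical counts as in the earlier example (chosen to reproduce 56% T in patients and 62% C in controls), is:

```python
# Expand genotype counts into allele counts and test the allele-level
# difference between patients and controls. Counts are hypothetical.
from scipy.stats import chi2_contingency

def allele_counts(tt: int, tc: int, cc: int) -> tuple[int, int]:
    """Return (T, C) allele counts from genotype counts."""
    return 2 * tt + tc, 2 * cc + tc

patients = allele_counts(tt=18, tc=20, cc=12)  # -> (56, 44): 56% T
controls = allele_counts(tt=10, tc=18, cc=22)  # -> (38, 62): 62% C
chi2, p, _, _ = chi2_contingency([patients, controls])

print(f"T allele in patients: {patients[0] / sum(patients):.0%}")
print(f"C allele in controls: {controls[1] / sum(controls):.0%}")
print(f"chi-square = {chi2:.2f}, P = {p:.4f}")
```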
In the present study, we also examined the relationship between the GDF5 polymorphism and different laboratory parameters in KOA. We found a statistically significant association between the GDF5 polymorphism, especially the TC genotype, and high ESR and high CRP levels. Zhang et al. also previously reported a significant correlation between the TC genotype and high ESR and high CRP [33]. In addition, in the present study there was no significant correlation between the GDF5 polymorphism and body mass index (BMI) in patients with KOA, supporting the results of Mohasseb et al. [34]. In contrast, Zhang et al. reported a significant correlation between the GDF5 polymorphism and increased body mass index [33]. We also analyzed the GDF5 polymorphism in relation to functional disability in patients with KOA and noted no significant correlation, in agreement with the results previously reported by Zhang et al. [33]. In contrast, Mohasseb et al. reported a significant correlation between functional disability in KOA and the GDF5 polymorphism [34]. Collectively, the current study revealed a possible genetic link between KOA and the GDF5 gene polymorphism (rs143383), with the TT genotype possibly increasing the risk of developing KOA. Our results need to be confirmed by a study with a larger number of patients. Assessment of the frequency of the different GDF5 genotypes in relation to the presence of anemia in KOA patients revealed no significant relation, while there was a significant relation between high ESR, high CRP, and GDF5 genotypes (P = 0.045 and 0.033, respectively), Table 6.
Assessment of functional disability in patients with OA according to the different GDF5 genotypes revealed no significant relation between genotypes and the Kellgren score (P = 0.715), Lequesne index (P = 0.79), or WOMAC OA index (P = 0.65), Table 7.
Discussion
Osteoarthritis (OA) is the most common chronic, degenerative, and disabling joint disease worldwide [30]. Primary knee OA is the most prevalent type of OA and usually affects people over age 45. Knee OA contributes to functional, psychological, and social dysfunction and to impaired quality of life [4]. The molecular background of primary knee OA includes several genes encoding proteins with important roles in the mechanism underlying the disease. Prior research has examined many OA-associated target genes in various populations [7]. However, most reports have not reached a consensus on the identified OA susceptibility genes, and the genes involved in the pathogenesis of OA remain unclear [15]. Many studies have suggested that mutations of the GDF5 gene can lead to this disorder, but the findings of genetic correlation research have been conflicting owing to the difficulty of reproducing significant associations [17].
GDF5 is a signaling molecule involved in the development of bone and cartilage, as well as in joint formation [15]. Genetic abnormalities in the GDF5 gene give rise to a wide spectrum of musculoskeletal diseases [31]. In the current study, we observed that the GDF5 gene polymorphism was significantly related to OA of the knee. The current work revealed a significant difference in the frequency distribution of the GDF5 gene polymorphism between KOA patients and controls: the TT genotype was more frequent in KOA patients relative to controls, and the T allele was also more common in KOA. Our results support those of several previous studies that noted the same significant association [22,28,29,32]. In contrast, several other studies reported no relation between the GDF5 polymorphism and KOA [17,30,31].
Several reasons may explain the variation in results between the studies published worldwide [32,33]. First, it is widely recognized that genes, the environment, and their interactions influence the development of OA, and the previous studies were carried out in different populations with different genetic profiles. Second, variation in patient recruitment criteria and genotyping methods may lead to differences in results between studies. Third, it has
"year": 2020,
"sha1": "2c3b9115c481ad10c498ff8da817cec006a0a061",
"oa_license": "CCBY",
"oa_url": "https://www.clinmedjournals.org/articles/ijii/international-journal-of-immunology-and-immunotherapy-ijii-7-048.pdf?jid=ijii",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2c3b9115c481ad10c498ff8da817cec006a0a061",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Optimization of reaction condition of recombinase polymerase amplification to detect SARS-CoV-2 DNA and RNA using a statistical method
Recombinase polymerase amplification (RPA) is an isothermal reaction that amplifies a target DNA sequence with a recombinase, a single-stranded DNA-binding protein (SSB), and a strand-displacing DNA polymerase. In this study, we optimized the reaction conditions of RPA to detect SARS-CoV-2 DNA and RNA using a statistical method to enhance the sensitivity. In vitro synthesized SARS-CoV-2 DNA and RNA were used as targets. After evaluating the concentration of each component, the uvsY, gp32, and ATP concentrations appeared to be rate-determining factors. In particular, the balance between the binding and dissociation of uvsX and DNA primer was precisely adjusted. Under the optimized condition, 60 copies of the target DNA were specifically detected. Detection of 60 copies of RNA was also achieved. Our results demonstrate the flexibility with which RPA reagents can be fabricated, which should expand the use of RPA in various fields.
Introduction
Recombinase polymerase amplification (RPA) exponentially amplifies a target nucleic acid sequence using two opposing primers at a constant temperature near 40 °C [1-3]. In RPA, recombinase binds to the primers, and the primers within the resulting complex bind to the homologous sequences of the DNA template. Strand-displacing DNA polymerase then extends the primer, and single-stranded DNA-binding protein (SSB) binds to the unwound strand. RPA is more suitable than PCR for use in the field because it does not require a thermal cycler. Indeed, most papers on RPA published to date have focused on detecting pathogenic organisms with amplicon-detecting technologies designed with field use in mind, such as lateral flow assays [4], enzyme-linked oligonucleotide assays [5], and electrochemical assays [6-8].
Unlike PCR, a major limitation of RPA is that RPA kits are sold by only two companies: Twist Dx, now owned by Abbott (San Diego, USA), and Jiangsu Qitian Gene Biotechnology (Ningbo, China). As a result, these kits have been used in almost all RPA-related studies. This limitation gives researchers little flexibility in exploring the effects of the concentration of each component on reaction efficiency. To circumvent it, we previously prepared recombinant recombinase and SSB and used them to examine the effects of pH, temperature, and various additives on the efficiency of RPA [9]. In this study, we established a detection system for SARS-CoV-2 DNA and RNA and attempted to optimize the reaction condition using the Taguchi method, a well-known statistical method [10,11]. The results show that the sensitivity of RPA increased markedly when the concentration of each component was optimized.
Preparation of standard DNA and RNA
Standard DNA and RNA corresponding to sequences 28,571-28,970 (System 1 in Fig. 1A) and 28,171-28,470 (System 2 in Fig. 1B) of the SARS-CoV-2 genome deposited in GenBank (NC_045512.2) were prepared as follows. PCR was carried out using the oligonucleotides listed in Tables S1 and S2 and Taq polymerase (Toyobo, Osaka, Japan) for 35 cycles of 30 s at 95 °C, 30 s at 55 °C, and 30 s at 72 °C. The amplified DNA was purified using a MagExtractor (Toyobo). In vitro transcription was carried out using the RiboMAX Large Scale RNA Production System (Promega, Madison, WI) at 37 °C for 3 h. The synthesized RNA was purified on NICK Columns (GE Healthcare, Buckinghamshire, UK). The purified DNA and RNA concentrations were determined spectrophotometrically at A260. The DNA and RNA were stored at −30 °C and −80 °C, respectively.
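As an illustration of how such spectrophotometric readings translate into the copy numbers used later, the sketch below converts an A260 reading into copies per microliter for the 400-bp System 1 standard DNA. The A260 value is a placeholder, and the conversion factors (50 ng/µL per A260 unit for double-stranded DNA, roughly 650 g/mol per base pair) are the standard ones, not values taken from this paper.

```python
# Convert an A260 reading of a dsDNA standard into copies per microliter.
AVOGADRO = 6.022e23  # molecules per mole

def copies_per_ul(a260: float, length_bp: int) -> float:
    """Copy number per uL for a double-stranded DNA of the given length."""
    ng_per_ul = a260 * 50.0            # dsDNA: 1 A260 unit ~ 50 ng/uL
    grams_per_ul = ng_per_ul * 1e-9
    grams_per_mol = length_bp * 650.0  # approximate molar mass of dsDNA
    return grams_per_ul / grams_per_mol * AVOGADRO

# Hypothetical reading for the 400-bp System 1 standard.
print(f"{copies_per_ul(a260=0.5, length_bp=400):.2e} copies/uL")
```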
In System 1, the standard DNA was amplified by PCR using 400 nt_F and 400 nt_B as a pair of primers and a mixture of oligonucleotides 400nt-1, 400nt-2, and 400nt-3 as a template. The T7 promoter-bearing standard DNA was amplified by PCR using T7-400 nt_F and 400 nt_B as a pair of primers and the standard DNA as a template (Table S1). Standard RNA was synthesized by in vitro transcription using the T7 promoter-bearing standard DNA as a template. In System 2, the standard DNA was amplified by PCR using 300 nt_F and 300 nt_B as a pair of primers and a mixture of oligonucleotides 300nt-1 and 300nt-2 as a template. The T7 promoter-bearing standard DNA was amplified by PCR using T7-300 nt_F and 300 nt_B as a pair of primers and the standard DNA as a template (Table S2). Standard RNA was synthesized by in vitro transcription using the T7 promoter-bearing standard DNA as a template.
Materials
Recombinant uvsX, uvsY, and gp32 were expressed in Escherichia coli and purified from the cells as described previously [9]. The purified uvsX, uvsY, and gp32 preparations each yielded a single band, with molecular masses of 43, 22, and 34 kDa, respectively (Fig. S1). Bst DNA polymerase (large fragment) was purchased from New England BioLabs (Ipswich, MA), and creatine kinase from Roche (Mannheim, Germany). A recombinant thermostable quadruple variant (E286R/E302K/L345R/D524A) of Moloney murine leukemia virus (MMLV) reverse transcriptase (RT) was expressed in Escherichia coli and purified from the cells as described previously [12].
RPA reaction and statistical analysis
The reaction mixture (30 µL) for RPA was designed and prepared according to Taguchi's L27 orthogonal array (Table S3) consisting of 13 factors. The reaction was performed in a 0.2 ml PCR tube at 41 °C in a PCR Thermal Cycler Dice (Takarabio, Otsu, Japan). The amplified products were separated on 2.0% (w/v) agarose gels and stained with ethidium bromide (1 µg/ml). Each reaction condition was scored as 1, 2, or 3 according to the intensity (no, faint, or clear, respectively) of the amplified products. A signal-to-noise ratio S/N_m was then computed from the score s_m of each reaction condition (m = 1, 2, …, 27). The S/N_x,i of level i (= 1, 2, or 3) of factor x (= 1 to 13) was the total of the nine out of 27 S/N_m values for which the level of factor x is i; for example, S/N_1,1 and S/N_2,1 were calculated by summing the S/N_m values of the nine conditions in which factor 1 and factor 2, respectively, were set to level 1. Accordingly, the lowest of the three S/N_x,i values (S/N_x,1, S/N_x,2, S/N_x,3) indicates that level i is the most appropriate. A variation (V_x; x = 1 to 13) and percentage contribution (P_x) were then calculated for each factor; high V_x and P_x values indicate that the difference among the three levels of factor x has a large effect on reaction efficiency.
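The equations in the paragraph above did not survive extraction, so the sketch below should be read as an assumption: it follows the verbal description (per-condition scores, nine-run level totals, and variance-based variation and contribution) using standard Taguchi-style definitions, namely a per-condition S/N of -10·log10(s_m^2), V_x as the variance of the three level totals, and P_x as each factor's share of the total variation. The orthogonal array and scores are random stand-ins, not those of Table S3.

```python
# Taguchi-style scoring of 27 RPA runs across 13 three-level factors.
import numpy as np

rng = np.random.default_rng(0)
levels = rng.integers(1, 4, size=(27, 13))  # stand-in for the L27 array
scores = rng.integers(1, 4, size=27)        # 1 = no band, 2 = faint, 3 = clear

# Assumed per-condition S/N: a clearer band (higher score) gives a lower
# (more negative) value, matching "lowest S/N = most appropriate level".
sn = -10.0 * np.log10(scores.astype(float) ** 2)

v = np.empty(13)
best_level = np.empty(13, dtype=int)
for x in range(13):
    # Level total S/N_{x,i}: sum of S/N_m over the runs where factor x is at
    # level i (exactly nine runs per level in the true orthogonal array).
    sn_xi = np.array([sn[levels[:, x] == i].sum() for i in (1, 2, 3)])
    best_level[x] = np.argmin(sn_xi) + 1  # lowest total wins
    v[x] = sn_xi.var()                    # assumed "variation" V_x

p = 100.0 * v / v.sum()                   # assumed percentage contribution P_x
for x in range(13):
    print(f"factor {x + 1:2d}: best level {best_level[x]}, "
          f"V = {v[x]:7.1f}, P = {p[x]:4.1f}%")
```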
Establishment of the RPA detection systems of SARS-CoV-2 DNA
For use as the assay in the optimization of the RPA reaction condition, we established a detection system (System 1) for SARS-CoV-2 DNA (Fig. 1A). In System 1, the nucleocapsid phosphoprotein gene was selected as the target according to a previous report [13]. We designed three forward and three reverse primers (Table S1) and selected the combination (1F+4 and 1R+8) that exhibited the best sensitivity. The size of the product amplified by this primer combination was 128 bp.
Round 1 of optimization of RPA reaction condition
We designed 13 factors and three concentrations (levels 1-3) for each factor (Table S4). Level 2 was set to the concentration used in the standard condition with which we previously examined the effects of pH, CH3COOK concentration, and temperature on RPA reaction efficiency [9]. Levels 1 and 3 were set to 25-50% and 200-400%, respectively, of level 2. According to Taguchi's L27 orthogonal array (Table S3), the RPA reaction was carried out with 6 × 10^8 copies (2 × 10^7 copies/µL) of standard DNA and the primers of System 1. The reaction products at 30, 45, and 60 min were analyzed by agarose gel electrophoresis; one of the results is shown as an example in Fig. S2. Of the 27 reaction conditions, six (2, 3, 7, 17, 24, and 25) exhibited a clear, four (5, 12, 13, and 18) a faint, and the other 17 no 128-bp band corresponding to the amplified product. The signal-to-noise ratios for each reaction condition (S/N_m; m = 1 to 27), those for each level of each factor (S/N_x,i; x = 1 to 13; i = 1, 2, or 3), and the variations (V) and percentage contributions (P) for each factor were calculated (Table S4). The results indicated that the Mg(OCOCH3)2 concentration exhibited the highest V (142) and P (39.7%) values, with an optimal concentration of 7 mM (level 1).
It is known that the optimal concentration of Mg(OCOCH3)2 depends on the primer and target sequences. Indeed, in the RPA kit sold by Twist Dx, Mg(OCOCH3)2 is not premixed but is added by the user. The high V and P values in our results suggest that the optimal range of the Mg(OCOCH3)2 concentration is relatively narrow.
Rounds 2 and 3 of optimization of RPA reaction condition
The 13 factors and their three levels in Round 2 are shown in Table S5. Based on the results of Round 1, where the Mg(OCOCH3)2 concentrations were set to 7, 14, and 28 mM for levels 1, 2, and 3, respectively, they were set to 5, 8, and 11 mM in Round 2. The concentrations of PEG35000, dNTPs, ATP, and primers were also altered, and Tris-HCl (pH 8.2) was replaced with phosphocreatine. Twenty-seven RPA reactions were carried out with 6 × 10^8 copies (2 × 10^7 copies/µL) of standard DNA and the primers of System 1. The results (Table S5) indicated that the optimal uvsY, gp32, and ATP concentrations were not level 2 but rather level 1 (35 ng/µL) for uvsY, level 3 (400 ng/µL) for gp32, and level 1 (3 mM) for ATP. In addition, the uvsX, uvsY, gp32, and ATP concentrations exhibited relatively high V (21.1, 35.3, 130.4, and 28.6, respectively) and P values (4.2%, 6.9%, 25.7%, and 5.6%, respectively). These results suggested that these concentrations were rate-determining factors.
In the RPA process, the balance between the binding and dissociation of uvsX and the DNA primer is important. In the presence of ATP, uvsX binds to the DNA primer to form a nucleoprotein with the aid of uvsY. Upon hydrolysis of ATP, uvsX dissociates from the DNA primer and is replaced by gp32. Thus, uvsX, uvsY, and ATP shift the balance toward binding, while gp32 shifts it toward dissociation. If the binding affinity is not high enough, the nucleoprotein cannot invade double-stranded DNA, preventing the DNA primer from binding to the target sequence. On the other hand, if the binding affinity is too high, uvsX remains bound even after elongation starts, preventing another nucleoprotein from binding to the target sequence and initiating elongation. Therefore, the binding affinity of the reaction condition consisting of level 2 for all 13 factors was thought to be too high.
Based on the results of Round 2, we attempted to lower the binding affinity by increasing the concentration of gp32 and decreasing the concentrations of uvsY and ATP. The 13 factors and their three levels in Round 3 were determined accordingly (Table S6). Twenty-seven RPA reactions were carried out with 6 × 10^4 copies (2 × 10^3 copies/µL) of standard DNA and the primers of System 1. The results are shown in Table S6. Level 2 was optimal for the uvsX and gp32 concentrations, and the uvsY concentration exhibited low V (12.6) and P (3.9%) values. These results suggested that the balance between the binding and dissociation of uvsX and the DNA primer was adequately adjusted.
Performance of the optimized reaction condition
To assess the performance of the optimized reaction condition, we used two detection systems (Systems 1 and 2) for SARS-CoV-2 DNA and RNA (Fig. 1B). In System 2, the ORF8 protein gene was used as a target because the Centers for Disease Control and Prevention (CDC), USA, reported PCR primers targeting this region, and these primers are widely used in approved diagnostics of SARS-CoV-2 RNA. We designed nine forward and eight reverse primers (Table S2) and selected the combination (2F-15 and 2R-11) that exhibited the best sensitivity. The size of the product amplified by this primer combination was 99 bp.
RPA was carried out with 60-6 × 10^7 copies of standard DNA, and RT-RPA was carried out with 60-6 × 10^7 copies of standard RNA, both at 41 °C for 1 h. In the subsequent electrophoretic analysis of the RPA and RT-RPA products, the optimized conditions detected 60 copies of standard DNA (Fig. 2A) and 60 copies of standard RNA (Fig. 2B).
Finally, we compared the sensitivities of RPA before and after optimization. Using System 2, RPA was carried out with 60-6 × 10^7 copies of standard DNA (Fig. S3). The condition after optimization detected 60 copies of standard DNA, while the condition before optimization did not detect even 600 copies. These results indicated that, by optimizing the reaction conditions for the three enzymes, 100- to 1000-fold higher sensitivity was achieved.
Generally, the performance of a nucleic acid amplification test depends largely on the performance of the enzymes involved. Indeed, DNA polymerases and RTs whose activity and/or stability have been improved by genetic engineering are currently used in PCR and RT-PCR [14,15]. However, no such improvement has been made for recombinase and SSB. It is of note that the optimal concentrations of uvsX, uvsY, and gp32 in the RPA reaction solution are in the range of 1-10 µM, roughly 1000-fold higher than those of reverse transcriptase and thermostable DNA polymerase in cDNA synthesis and PCR. Such high protein concentrations make RPA reagents less flexible in fabrication. To solve this problem, increases in the activity and/or binding ability of recombinase and SSB are required. On the other hand, considering the field use of RPA, storage of the reagents at room temperature is desirable. To address this issue, the use of thermostable recombinase and SSB from thermophilic organisms might be useful.
In PCR, various additives that increase reaction efficiency have been reported: bovine serum albumin, trehalose, sorbitol, glycerol, Triton X-100, and Tween 20 stabilize the enzyme, while dimethyl sulfoxide, formamide, and ammonium sulfate increase specificity [16,17]. Helicase increases specificity by decreasing nonspecific binding [18], and spermidine suppresses the reaction inhibition encountered when analyzing clinical stool samples [19,20]. In RPA, little is known about such additives apart from a recent report that betaine increases specificity [21]. Our results in this study may make it easier to evaluate the effects of various additives on RPA reaction efficiency.
Insights into the effect of the balance between the binding and dissociation of uvsX and DNA primer on the RPA reaction efficiency
As described above, Round 2 revealed that the reaction efficiency of RPA depends on the balance between the binding and dissociation of uvsX and the DNA primer. To further explore this issue, we examined the effects of the concentrations of uvsX, uvsY, gp32, and ATP on RPA efficiency, using the optimized condition obtained in Round 4 as the standard condition. Fig. 3 shows the agarose gel electrophoresis analysis of the RPA products at 30 min. An amplified DNA band was observed at 400-4000 ng/µL uvsX, 40-400 ng/µL uvsY, 400 ng/µL gp32, and 0.35-3.5 mM ATP, whereas no band was observed at 40 and 120 ng/µL uvsX; 4 and 12 ng/µL uvsY; 40, 120, 1200, and 4000 ng/µL gp32; or 10 and 35 mM ATP. These results indicated that the optimal concentration range of gp32 is narrower than those of uvsX, uvsY, and ATP, suggesting that the gp32 concentration is critical for the balance between the binding and dissociation of uvsX and the DNA primer.
It is known that uvsX, uvsY, and gp32 form a ternary complex with single-stranded DNA (ssDNA) [22]. Gajewski et al. performed a crystal structure analysis of the uvsY-ssDNA complex and showed that uvsY exists as a heptamer [23]. They also proposed a model in which uvsY promotes a helical ssDNA conformation that disfavors the binding of gp32 and initiates the assembly of the ssDNA-uvsX filament [23]. We presume that this model may also apply to the mechanism of the RPA reaction.
In conclusion, the sensitivity of RPA and RT-RPA for SARS-CoV-2 DNA and RNA was increased by optimizing the concentration of each component using a statistical method. Our results pave the way for the flexible fabrication of RPA reagents for use in various fields.
Notes
The authors declare no competing financial interest.
"year": 2021,
"sha1": "7549de92435c26a3d1cbe1a3e5e7c01daa732203",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.bbrc.2021.06.023",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "bdb580200ea5ecdf159f7967676a2740c05eb214",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Consensus from an expert panel on how to identify and support food insecurity during pregnancy: A modified Delphi study
Background Food insecurity and hunger during pregnancy have significant implications for the health of the mother and baby. Assisting clinicians when they encounter women who are experiencing hunger or food insecurity during their pregnancy will increase the opportunity for better birth and pregnancy outcomes. At present there are no guidelines for Australian clinicians on how to do this. Methods This study uses a modified Delphi technique, allowing diverse participation in the process, to create consensus on the ways to address and respond to food insecurity during pregnancy. This modified Delphi collected data via two rounds of consensus. The opinions collected from the first round were thematically categorised and grouped. The topics were integrated into the survey for the second round and circulated to participants. During the second round, priorities were scored by giving five points to the topic considered most important, and one point to the least important. Results Through two rounds of consultation, the panel achieved consensus on how to identify food insecurity during pregnancy, with some clear items of consensus related to interventions that could be implemented to address food insecurity during pregnancy. Experts achieved consensus on items that have importance at the institution and policy level, as well as services that exist in the community. The consensus across the spectrum of opportunities for assistance, from the clinical, to community-provided assistance, and on to government policy and practice demonstrate the complexity of this issue, and the multipronged approach that will be required to address it. Conclusion This is the first time such a consultation with experts on hunger and food insecurity during pregnancy has been conducted in Australia. Items that achieved consensus and the importance of the issue suggest several ways forward when working with pregnant women who are hungry and/or food insecure. Supplementary information The online version contains supplementary material available at 10.1186/s12913-022-08587-x.
Introduction
Food insecurity, defined as inadequate access to healthy, affordable, and culturally appropriate food, impacts more women than men, particularly those of reproductive age [1,2]. Food insecurity and hunger during pregnancy have significant implications for the health of the mother and baby. Pregnant women who are food insecure frequently have poor diet quality and sub-optimal nutritional intake during pregnancy, leading to negative maternal and child health outcomes [3,4]. Compared with women who are food secure, women who are food insecure during pregnancy are at higher risk of gestational diabetes [5], low birth weight [6,7], maternal stress [4,8], excess maternal weight gain [5], birth defects [9], premature birth, and breastfeeding difficulties [10,11]. The impacts of food insecurity, and the resulting inadequate nutrition, during pregnancy can be both significant and long term for mother and child, leading to challenges with child growth and development [4,5,12,13].
Approximately 1 in 10 pregnant women are food insecure in Australia [14]. A recent systematic review of interventions specifically focused on addressing food insecurity during pregnancy found that the main interventions are nutritional supplementation and/or nutrition education [15]; however, the limited number of robust evaluations or long-term interventions means that evidence for any one intervention type is limited. Recent research has found that while health care providers in one Australian antenatal setting were aware of the importance of maternal nutrition for the short- and long-term health of both the mother and baby, they were uncertain how to broach issues surrounding food insecurity, and when they did, had few strategies to assist the hungry or food insecure parent [16]. Assisting clinicians when they encounter women who are experiencing hunger or food insecurity during their pregnancy will increase the opportunity for better birth and pregnancy outcomes [17]. Clinical practice guidelines are often used in these situations to provide guidance for clinicians when dealing with patient concerns; however, as yet there are no Australian guidelines or advice to assist the antenatal management of women who are food insecure during and following pregnancy.
The development of clinical practice guidelines and other forms of clinical advice traditionally consists of gathering scientific evidence and applying formal and explicit consensus judgement methods [18]. This study uses a modified Delphi technique, allowing diverse participation in the process, to create consensus on ways to address and respond to food insecurity during pregnancy; the resulting information can inform the development of clinical practice guidelines that antenatal clinicians can use to assess and respond to food insecurity and hunger during pregnancy [19][20][21].
Study design
This study employed a modified Delphi approach to gain consensus and seek expert opinion in an iterative, structured manner [19][20][21]. The Delphi approach involves key stakeholders and experts, and often uses focus groups or individual interviews, workshops, meetings, or seminars [22,23]. However, these methods typically require face-to-face interaction, a challenge during COVID-19 related travel limitations and a format that does not allow for the engagement of people from disparate geographical regions. To overcome these challenges, this study employed a modified Delphi approach in which consensus was achieved via online methods. Such an approach has been said to be characterized by greater openness, attributed to anonymous participation [24], and an increased diversity of participants thanks to the accessibility of the online format [25]. Due to the exploratory nature of this work, this modified Delphi also included open-ended questions to gain further insight from experts. The use of open-ended questions allows items to be generated by the expert panel organically, in addition to the items that are generated and included from a review of the literature [26].
There are a number of guidelines for using the Delphi approach to achieve consensus [22]. This study defines consensus as 75% agreement combined with the results of ranking, a suggestion made by both Diamond, Grant [22] and Foth, Efstathiou [27], who highlight the importance of defining consensus prior to beginning the first round. The combination of these two techniques, percentage agreement and ranking, allowed for consensus to be achieved in two rounds. The Delphi procedure allows for flexibility in delivery and number of rounds, with typical modifications restricting the number of rounds to two or three due to the difficulty of sustaining a high response rate over subsequent rounds [28][29][30]. Including two rounds is supported by a large body of research that employs the Delphi procedure [31][32][33]. Kaynak and Macaulay [34] suggest that rather than being employed as a decision-making tool, the Delphi technique should be considered a tool of analysis. This means that the aim is not to achieve a definitive answer but, instead, to aid in the development of possible solutions. As a result, it is not necessary to continue rounds until all items reach consensus, but only until a clear pattern is discerned. In the current study, round one consisted of suggestions for practice based on current evidence [15] and asked participants to make suggestions on actions they considered useful in responding to food insecurity and hunger during pregnancy. The inclusion of open-ended questions in the first round is consistent with previous research suggesting that the first round be as exploratory as possible [35]. A subsequent round followed in which these suggestions were ranked and refined, a common feature of Delphi approaches as described in the literature [23,36,37].
Participants and recruitment
There is little agreement on the number of participants required to achieve consensus [23]. Linstone and Turoff [19] recommend a minimum of ten participants, acknowledging that when increased beyond this number, the Delphi can become labour intensive, with a large amount of data being gathered, while Okoli and Pawlowski [38] suggest that a sample of between 10 and 18 is sufficient to achieve consensus. Furthermore, it has been reported that improvements in reliability once the number of experts in the panel rises above 15 are negligible [39]. As a result of the level of time commitment required from panel members, Hanafin and Brooks [40] suggest attrition rates of 16-28% should be expected per round. Allowing for this level of attrition and aiming to have at least ten contributors in the second round, we aimed to recruit a sample of 15-20 experts. While there is little agreement on the size of the sample [39], what is critical is that the panel is balanced in terms of the composition of members from different areas of expertise and experience [41,42].
Participants were recruited through professional networks of the researchers via email, direct contact through publicly available contact information, and via social media (Twitter and LinkedIn) to reach a broad audience. Participants were also invited to share the invitation to the first round with people in their network who might also have expertise in nutrition, pregnancy, and food insecurity. The recruitment material included a link where potential participants could read the plain language and informed consent statement and provide their contact details.
Data collection
Those who expressed their interest in being involved in the study through the recruitment procedure described above were provided with a link to an online survey via email. This modified Delphi collected data via two rounds of consensus.
Round 1: In round one, participants were asked to identify possible interventions which might be effective for targeting food insecurity among pregnant women, and considerations when dealing with food insecurity among pregnant women. They were asked to rank how important food insecurity is during pregnancy, whose responsibility it is to manage and respond to food insecurity, and if consideration of food insecurity should be included in standard clinical practice. In addition, experts were asked to suggest at least five potential aspects of care that need to be considered when supporting a pregnant woman who was food insecure, at least five possible ways to address food insecurity, and at least five possible barriers to addressing food insecurity among pregnant women. These suggestions were thematically collated for rating and ordering in the subsequent round. Participants were also asked to identify their main areas of professional practice, where they are allocated, and to self-identify their level of expertise from 1 (novice or in training) to 10 (expert). The round one survey was open for four weeks to include as many experts as possible.
Round 2: Participants who completed round one were sent a summary via email of the current research that seeks to address food insecurity in pregnancy. This summary was based on a systematic review [15] completed by the authors and describes the current situation of food insecure pregnant women and current evidence-based interventions. The summary of previous evidence was provided in round two, rather than earlier, to discourage the bandwagon effect [43], a common limitation of Delphi, allowing for free flowing ideas to be generated in round one. This material was designed to orientate the expert to the focus of the study, a process that has been found to be a useful way to build the research relationship and provide the experts with an easy summary of the current evidence [44].
A list of all suggested priorities and ways to respond to food insecurity during pregnancy, based on the outcomes of round one and the current literature, were compiled into a new survey and emailed to each expert who was involved in round one. Experts were asked to rate each item in terms of importance on a 5-point Likert scale (1 = not important at all, 2 = not very important, 3 = moderately important, 4 = very important, 5 = extremely important). A free-text space was provided for feedback on the items and to comment on their decision-making process. The experts were given two weeks to complete their responses and were sent a reminder via email after the two weeks had lapsed.
Data analysis
The opinions collected from the first round were thematically categorised and grouped, and the resulting topics were integrated into the survey for the second round and circulated to participants. During the second round, priorities were scored by giving five points to the topic considered most important and one point to the least important, and the mean score for each item was calculated. Responses of extremely important and very important were grouped together to determine consensus. Participants were asked to choose five of the same items to be ranked from highest to lowest priority, and were given the opportunity to provide qualitative responses related to their selection; these were thematically analysed and are presented here verbatim [45].
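To make the scoring rule concrete, the sketch below applies the pre-specified consensus definition (at least 75% of experts rating an item 4 or 5 on the Likert scale) together with the mean score. The ratings shown are hypothetical, not the panel's data.

```python
# Round-2 consensus check: proportion of ratings >= 4 against the 75% cut-off.
ratings = {
    "Link women with emergency community food assistance":
        [5, 5, 4, 5, 4, 5, 5, 4, 5, 4, 5],
    "Provide women with food literacy or nutrition education":
        [3, 4, 2, 3, 5, 4, 3, 3, 4, 2, 3],
}

for item, scores in ratings.items():
    mean = sum(scores) / len(scores)
    agreement = sum(s >= 4 for s in scores) / len(scores)
    verdict = "consensus" if agreement >= 0.75 else "no consensus"
    print(f"{item}: mean {mean:.2f}, agreement {agreement:.0%} -> {verdict}")
```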
Results
In total, 12 experts completed round one of the Delphi and 11 completed round two. The one participant who completed round one but not round two was followed up twice via email but did not reply. Participants were located in various Australian states and territories: four in Queensland, three in Tasmania, two in Victoria, and one each in New South Wales and the Australian Capital Territory. Participants had expertise in academia and/or research (n = 6) and in clinical practice as a midwife or dietitian (n = 9); most (n = 9) identified themselves as having a high level of expertise with food insecurity, hunger, and pregnancy. Demographic characteristics of the sample are presented in Table 1.
Most participants (n = 10, 83%) considered food insecurity to be a serious concern for mother and baby during pregnancy. These concerns related to the consequences of food insecurity during pregnancy for both mother and baby.
If a woman is food insecure it will compromise short-and long-term outcomes for her and her offspring and intergenerationally (Research/academia, Dietitian/nutritionist, expert).
Those participants who were also clinicians were concerned about both the physical impacts of food insecurity and hunger and the mental health implications, in both the short and the long term.
Immediate risk for nutritional deficiency for key nutrients e.g., iron, folate, iodine. There is also the stress of being able to feed oneself and support a healthy pregnancy. Long term affects relationship with food and eating behaviours which can influence both their own health and their feeding of their child. Can set up disordered eating (Dietitian/nutritionist, expert).
Concerns related to the multiple factors that can impact food insecurity were acknowledged by other participants. These factors were said to have been exacerbated by the COVID-19 pandemic, as restrictions affected people's employment and, in turn, their financial and food security. Most participants (n = 11) suggested that asking women about their food security status during pregnancy should be a standard part of clinical practice, and that this should be included in general practice and pregnancy care.
Should be more included in pregnancy care guidelines to raise the importance of this area, care should be provided in a team approach with food and eating advice, access to food relief and other social supports. There needs to be more investment and critical looking into the system. Access to income support payments in pregnancy do not reflect the increased need at this time putting the health of pregnancy at risk (Dietitian/nutritionist, high level of experience).
Others considered this to be a 'system' problem, with the real solution lying in system change spearheaded by governments and, in the absence of government action, in an approach that brings in other actors within the healthcare system to provide comprehensive care to people who are food insecure.
The government -policies should exist that eliminate food insecurity. Until they do so, a multidisciplinary approach is most appropriate -primary care, social work, mental health, dietitians (Midwife, expert).
Participants were asked to identify barriers that prevent them from addressing food insecurity among their clients. These barriers can be grouped into three broad categories. The first are those barriers that mean a clinician cannot personally assist a patient or client; for example, some suggested that there was insufficient education about food insecurity in the midwifery curriculum, while others highlighted the time barriers they face when providing care. The second barrier relates to a misunderstanding or uncertainty about whose responsibility addressing food insecurity among patients or clients is, or about how important addressing food insecurity is when there may be other concerns, including those specifically related to pregnancy, or others such as domestic violence, mental illness, or homelessness. The final barrier relates to the systemic level challenges that prevent participants from providing assistance related to food insecurity to clients or patients. System level barriers range from a lack of government financial support to those at the health care level, including the timing of care provision and a lack of routine food insecurity screening.
Identification of a food insecure pregnant woman
Participants were asked to rate the importance of a range of considerations when supporting or identifying a pregnant woman who is (or who they suspect to be) food insecure or hungry (Table 2). Of the nine statements posed, eight achieved consensus, with five achieving 100% agreement. Mean scores for eight of the nine statements were over 4 out of a possible 5, and agreement and rank prioritisation were consistent. Participants identified linking women with appropriate social care services, such as emergency community food assistance, as both the most important consideration and the highest priority. The item ranked as the lowest priority, and which only 64% of participants rated extremely or very important, was providing women with food literacy or nutrition education.
Addressing food insecurity during pregnancy
Participants were asked to rate the importance of a range of actions when addressing food insecurity during pregnancy (Table 3). Of the 14 statements posed, ten achieved consensus, with one achieving 100% agreement and seven achieving 91% agreement. All the actions that achieved consensus were system level actions that could be achieved through policy or institution level cooperation. Agreement and rank prioritisation were consistent, and mean scores for 10 of the 14 statements were above 4 out of a possible 5. Participants identified creating a social care arrangement specifically for food insecure pregnant women, one that might include access to nutrition supplements, and care and support for pregnant women that focuses on reducing stigma and blame, as the key priority and the most important action. Items ranked as the lowest priorities included providing women with food literacy or nutrition education, linking women with other health care services, and directly providing food (either through food parcels or via commercial meal kits).
Interventions to address food insecurity during pregnancy
Participants were asked to rate the importance of a range of activities that could be implemented to address food insecurity during pregnancy (Table 4). Of the nine statements posed, four achieved consensus, two achieved 100% agreement, and one was determined to be approaching consensus (73% agreement). Mean scores for five of the nine statements were above 4 out of a possible 5. Actions that were highly ranked were those that can be influenced at the clinic level, including routine food security screening, the introduction of clinical practice guidelines, and referral to emergency and community food assistance; these items are consistent with responses to other questions in round two and with the results of round one. Items ranked as the lowest priorities included providing women with food literacy or nutrition education, and directly providing food (either through food parcels or via commercial meal kits).
Discussion
This study adopted a modified Delphi to facilitate a systematic and rigorous consultation exercise on the issue of food insecurity and hunger during pregnancy, drawing on the clinical and research experience of an expert panel on food insecurity, hunger, and pregnancy in Australia. Through two rounds of consultation, the panel achieved consensus on how to identify food insecurity during pregnancy, with some clear items of consensus related to interventions that could be implemented to address food insecurity during pregnancy. This is the first time such a consultation with experts on hunger and food insecurity during pregnancy has been conducted in Australia; the items that achieved consensus, and the importance of the issue, suggest several ways forward when working with pregnant women who are hungry and/or food insecure. Experts achieved consensus on items that have importance at the institution and policy level, as well as on services that exist in the community. The consensus across the spectrum of opportunities for assistance, from the clinical or institutional level, to community-provided assistance, and on to government policy and practice, demonstrates the complexity of this issue and the multipronged approach that will be required to address it. Of importance when considering these responses is that the experts in this study were not looking for a band-aid or temporary solution to food insecurity and hunger during pregnancy but, rather, were seeking a solution that had been tested and found effective and that addressed some of the structural reasons that people are food insecure.
Institutional level solutions
Items that achieved consensus that could be implemented at the institution level, for example at individual clinics or hospitals or indeed in all hospitals, include routine screening and linking pregnant women with a range of services, both internal and external to the clinical setting. Including routine screening and then providing programs that link food insecure households or individuals to services via health care settings is a solution that has grown out of the suggestion that supporting food security can lead to improvements in population health [46,47]. There is increasing interest in routine screening and the role of healthcare systems in addressing food insecurity in non-pregnancy healthcare settings [48,49]. The International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) Z codes (Z55-Z65) allow for the classification and documentation of the social determinants of health in electronic medical records. The official guidelines for coding and reporting of the ICD-10-CM suggest that all clinicians, not just the physicians involved in the care of a patient, document the Z codes to report on the patient's social determinants of health [50]. Screening tools that are based on or match these codes could be incorporated into existing screening mechanisms or could be used as the basis of a more comprehensive referral system.
While screening for food insecurity does not exist in the Australian health care setting, screening exists in health care settings in other countries, where it has been found to be part of a successful approach when seeking to address food insecurity [51][52][53]. Both the American Academy of Family Physicians and the American Academy of Pediatrics recommend routine screening for food insecurity [54]. A systematic review of routine screening in the health care setting found that screening is generally conducted via a brief screening tool comprising one or two items [54]. As the results of the current study demonstrate, there is an appetite among clinicians for routine screening to identify and assist food insecure pregnant women. In Australia, there is work underway that seeks to highlight the link between health care settings and actions that address food insecurity. For example, Kerz, Bell [48] have validated a brief tool for use in an Australian paediatric health care setting, with findings suggesting that food insecurity is prevalent among families of children attending paediatric outpatient hospital appointments, while McKay and colleagues [14] have demonstrated that a brief tool measuring food insecurity could be used in a clinical setting as a component of a referral pathway for pregnant women who are identified as food insecure. It is possible that this two-item tool could be included in the electronic medical record and incorporated into standard practice (a scoring sketch is given at the end of this subsection). While there is evidence for the importance and acceptability of screening for food insecurity, practical considerations need to be made; for this reason, researchers suggest that short screening measures be employed [14,48]. Short screening measures are limited in that they do not allow for the assessment of the severity of food insecurity; however, they are more appropriate for busy clinical settings and take into consideration barriers including time constraints and increased workloads experienced by clinicians, while still being able to determine food security status [55]. While it is clear that clinicians consider food insecurity screening an important component of antenatal healthcare [16,56], there are a variety of barriers that may prevent them from asking their patients about their household food security, including a lack of guidelines, uncertainty surrounding responsibility for screening, inadequate clinical knowledge and training, and time constraints [16,56]. While approaches to screening are gaining traction, there remain gaps in how to best link food insecure pregnant women to the services they need. A recent systematic review exploring interventions that have sought to address social needs during pregnancy care after standard or routine screening found that while there are evidence-based interventions for family and domestic violence, there are few interventions for other social needs, including for people who have been identified as food insecure [57]. A different review found that despite an increase in the number of care settings that screen for food insecurity, those who refer food insecure or hungry women and families largely refer them to external services (for example, in the USA the Supplemental Nutrition Assistance Program, SNAP, and the Special Supplemental Nutrition Program for Women, Infants, and Children, WIC, are common places of referral).
However, referral and assistance can be less formal, and can include providing information about emergency and community food assistance, as well as providing assistance via community health workers or social workers [58]. Research suggests that these linkage programs have positive health outcomes [59].
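As a concrete illustration of how the brief two-item screen discussed above could be scored, the sketch below uses item wording modeled on the widely used two-item Hunger Vital Sign; it is an assumption that the validated two-item tool cited earlier takes this exact form, and the affirmative-response rule likewise follows the Hunger Vital Sign convention.

```python
# Score a brief two-item food-insecurity screen (Hunger Vital Sign style).
QUESTIONS = [
    "Within the past 12 months we worried whether our food would run out "
    "before we got money to buy more.",
    "Within the past 12 months the food we bought just didn't last and we "
    "didn't have money to get more.",
]
AFFIRMATIVE = {"often true", "sometimes true"}  # "never true" is negative

def screen_positive(responses: list[str]) -> bool:
    """Flag as at risk of food insecurity if either item is affirmed."""
    return any(r.strip().lower() in AFFIRMATIVE for r in responses)

print(screen_positive(["sometimes true", "never true"]))  # -> True
```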
Community level solutions
Experts rated community level solutions as important, such as linking women to appropriate income support and social care services and referring them to intimate partner violence services. The consideration of these community level services reflects an acknowledgement that while food insecurity has clinical implications, the solutions may lie outside the hospital setting; structural and systemic level changes are known to take considerable time, highlighting the need for immediate solutions to meet short-term need. While the definition of food insecurity is a lack of appropriate food, there are many reasons that individuals and households are unable to access food. Estimates suggest that 13% of Australians live below the poverty line [60] due to rising living costs, stagnant wage growth, and unemployment and underemployment. Poverty predisposes low-income individuals towards a suboptimal diet [61]. Many people who live below the poverty line are also in receipt of government welfare payments; however, these payments may be insufficient to cover the basic costs of living, increasing stress and pressure on individuals and households. There is a body of research highlighting the role of poverty and income in chronic food insecurity [62], with many people who live on low incomes forgoing food for other basic living expenses [63]. Low-income neighbourhoods are more likely to have limited options for fresh produce and whole grains [64], and are more likely to have access to fast-food outlets and convenience stores [65]. Low-income households are often forced to make decisions about the food they can purchase, sometimes in the absence of health considerations [66]. For many individuals and families, the experience of food insecurity and its impact on diet goes beyond limited funds and lack of access to healthy food, and while many households experience short-term or acute food insecurity, for other families the experience of food insecurity can be long term, often intergenerational [67,68]. While there is evidence to suggest that, at current levels, government-provided income support is below the poverty line for most families, it can provide some mitigation of the more serious impacts of food insecurity [69,70], and as highlighted by the experts in the current study, referral to appropriate income support should be included in any approach to addressing household food insecurity in a health care setting.
In addition to the physical experience of hunger and the physiological impacts of poor nutrition, there are also psychological implications of food insecurity. Work from the USA has identified a relationship between receipt of welfare and intimate partner violence, finding that intimate partner violence was associated with negative health outcomes and greater material hardship [71]. There is a significant amount of evidence highlighting the risk of domestic violence during pregnancy [72], and there is emerging evidence suggesting a relationship between food insecurity during pregnancy and intimate partner violence. According to Ricks, Cochran [73], food insecurity can be linked to violence in three main ways. First, economic abuse can produce food insecurity, as one partner in the relationship controls or restricts the other partner's access to finances [74]. Second, many individuals who escape an abusive relationship rely on financial assistance and low-wage jobs for survival, and therefore lack the financial ability to secure food. Third, there is some evidence to suggest that a food insecure environment may increase the rate of violence [75,76]. The relationship between food insecurity, pregnancy, and intimate partner violence appears to be bidirectional: pregnant women who are food insecure are more likely to experience violence from an intimate partner [76,77], and there is a predictive effect of intimate partner violence on food insecurity in longitudinal studies [78]. There are a range of risk factors for intimate partner violence, including drug and alcohol use, prior violence, traditional attitudes to gender, and a range of socio-demographic characteristics [79]. As highlighted by the experts in this study, there is a need to consider intimate partner violence when considering food insecurity. Screening for intimate partner violence already exists in most antenatal care in Australia, and as such most practitioners working in this space will already be able to screen for intimate partner violence, presenting an opportunity to include food insecurity screening at the same time [80]. Promisingly, a recent systematic review suggests that pregnant women who are experiencing intimate partner violence, with or without other mitigating factors, are likely to benefit from screening, referral, and supportive counselling [81].
The main community-level responses to food insecurity and hunger in Australia are through emergency and community food assistance: foodbanks and pantries, soup kitchens, and school lunch and breakfast programs [68]. While these community solutions play an important role in the charitable response to food insecurity and hunger, many people who use these services experience shame and stigma [82], and various restrictions on how and when they can be used mean they are generally unable to meet all the needs of those experiencing hunger and food insecurity [83]. While many Australian emergency and community food assistance providers refer clients on to other services, including family violence and income support services [84], to date there has been limited evidence that clinicians refer patients to emergency and community food assistance providers, or that this is something they would consider doing [16]. Unlike the more formal system in the USA, these services in Australia are typically informal and are not part of a systemic approach to food insecurity built on partnership between government, health care, and charity. Encouraging research from the USA highlights a growing body of evidence demonstrating positive partnerships between healthcare systems and local food assistance programs as a way to reduce food insecurity and hunger and help people access the services they need [58,85]. This research suggests that such partnerships may result in increased food intake, including increased fruit and vegetable intake [86], and better health outcomes [87,88].
Government level solutions
While most high-income countries have some form of government assistance for those experiencing hunger or food insecurity, these programs vary widely. As described above, the main response of governments to food insecurity and hunger in Australia is through emergency and community food assistance. This sector has grown since the 1990s, most rapidly in the past decade. The sector in Australia operates through both formal and informal networks and is comprised of food banks, food pantries, soup kitchens, and meals programs operated by charities [89]. Foodbank Australia, Australia's largest re-distributor of community food, has reported an increase in the number of people accessing its services, with almost one million people receiving food from Foodbank each month in 2021 [90]. This system is complemented by a range of government-provided or -run welfare programs that provide income support for the aged, people with disability, parents of young children, and people seeking employment, as well as family tax benefits that help low-income families with the cost of raising children. Unlike the system in the USA, there is no large government program specifically designed to provide food assistance. The USA, in comparison, has multiple federal programs specific to food aid. The main federal food assistance programs include the Supplemental Nutrition Assistance Program (SNAP); the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); and the National School Lunch Program (breakfast, lunch, and summer meals) [91].
Experts identified structural solutions to food security as possible ways to address food insecurity during pregnancy and as interventions that could be trialled. While Australia does not currently have a comprehensive food security program like that of the USA, we may have reached a time when key stakeholders need to advocate for one. Enrolment in federal food assistance programs in the USA is associated with improved outcomes across multiple dimensions, including food security, nutrition, health, development, and health care costs [92,93]. Importantly, the WIC program has been found to significantly reduce food and nutrition insecurity among both pregnant women and children [94,95]. Such a program allows for a clear referral pathway for women who are identified as food insecure when pregnant and provides a systematic approach to securing adequate food and nutrition [52,96].
Limitations
While it is clear that solutions to targeting food insecurity in pregnancy have been acknowledged by experts in this area, a number of limitations need to be taken into consideration. The aim of the study was to include 15-20 experts in round one in order to maintain the 10 needed for round two. Despite a very broad recruitment campaign, this was not achieved, and more participants may have provided a greater diversity of opinions. However, most participants (11 of 12) who completed round one responded to round two, thereby avoiding the attrition problem common in other Delphi studies. Secondly, COVID-19, together with time and financial constraints, meant that a face-to-face Delphi was unachievable. We chose a modified Delphi to attract a broad range of experts from all over Australia; an in-person meeting may have altered the results or produced more solutions. This study does not include the voices of consumers, and as such, solutions that might have been identified by those experiencing food insecurity may have been missed. However, a recent Australian study of pregnant women and their experience of hunger and food insecurity suggested that while over half (57.1%) were comfortable with their health care provider asking about their household food security, most (61.6%) did not expect to be asked [97]. Finally, as described, all participants were clinicians and were not experienced in policy or in working at a system level; this could be a limitation of both the suggestions provided here and the possible outcomes of this work if taken up by clinicians. Despite these limitations, the consensus achieved in this study is a strength and provides support for the results presented here.
Conclusion
Through a rigorous and systematic modified Delphi, we have been able to provide a number of suggestions, supported by a panel of experts, on how to identify and support food-insecure pregnant women in a clinical setting. The logical next step from this study is the creation and acceptability testing of clinical practice guidelines for the assessment and support of food insecurity during pregnancy, which remains a significant gap in clinical care in the Australian health care setting.
Supplementary Material 1 | 2022-10-05T14:09:54.276Z | 2022-10-05T00:00:00.000 | {
"year": 2022,
"sha1": "c605d3341d99f1ede4d6b81a155b8d2c38331d1e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "c605d3341d99f1ede4d6b81a155b8d2c38331d1e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
111390753 | pes2o/s2orc | v3-fos-license | Primary Step Towards In Situ Detection of Chemical Biomarkers in the UNIVERSE via Liquid-Based Analytical System: Development of an Automated Online Trapping/Liquid Chromatography System
The search for biomarkers in our solar system is a fundamental challenge for the space research community. It encompasses major difficulties linked to their very low concentration levels, their ambiguous origins (biotic or abiotic), as well as their diversity and complexity. Even though great improvements in sample pre-treatment, chromatographic separation and mass spectrometry detection have been achieved over the past 40 years, there is still a need for new in situ scientific instrumentation. This work presents an original liquid chromatographic system with a trapping unit dedicated to the one-pot detection of a large set of non-volatile extra-terrestrial compounds. It is composed of two units operated by a single pump. The first unit is an online trapping unit able to trap polar, apolar, monomeric and polymeric organics. The second unit is an online analytical unit coupled to a high-resolution Q-Orbitrap mass spectrometer. The designed single-pump system was as efficient as a laboratory dual-trap LC system for the analysis of amino acids, nucleobases and oligopeptides. The overall setup significantly improves sensitivity, providing limits of detection ranging from ppb to ppt levels, thus meeting in situ requirements.
Introduction
The search for traces of past or present life in the solar system arouses the curiosity of many scientists. Detection of organic biomarkers has become a key challenge in planetary exploration, in order to understand whether they played a role in the origin of life on Earth [1,2]. Adapting Earth-based techniques into a spatial instrument suite with multiple capabilities is, however, a real challenge. In space, even simple chemical analyses involve complex sample handling inside a probe's instrument systems and require a sophisticated design, with mass and power constraints as major factors. Within this framework, the technique of choice which was, and still is, employed for landed missions dedicated to the quest for life traces is pyrolysis gas chromatography mass spectrometry (Py-GC-MS) [3]. Based on a compound's volatility, it was designed to detect low- to intermediate-molecular-weight organic biomarkers with improved sensitivity, leading to the in situ detection of organic molecules, as demonstrated by the Rosetta mission [4][5][6][7]. From Viking to the next ExoMars 2020 mission, this approach has been greatly improved with the use of online chemical derivatization agents, multi-column chromatographs and integrated traps (Tenax, Carbosieve, glass beads) [3,4,[8][9][10][11][12]. These onboard instruments enable the detection of organic molecules such as polycyclic aromatic hydrocarbons (PAHs) [13], amino acids [14,15] and sugars [16,17]. While such molecules have already been found in the interstellar medium and in many meteorites, their detection on extra-terrestrial planets remains a difficult task, and their assignment to a definite biological origin is still questioned, as they can be produced abiotically. To find additional data leading to unambiguous features of extant or extinct life and/or of prebiotic chemistry beyond Earth, researchers now pay attention to molecular biological polymers [18]. As a consequence, future onboard instruments must be able to detect and quantify all these compounds in one pot. As solid, liquid and aerosol samples are anticipated, the future instrument platform should allow versatile sample analysis. Gas and/or vapor samples can be analyzed directly by mass spectrometers or gas chromatograph-mass spectrometers [19,20], which have already been successfully used in previous planetary [11,21] and cometary missions [10,22,23]. Liquid (lake, icy regolith/cryovolcanic meltwater) and solid samples, undergoing melting, extraction or solubilization into a liquid mobile phase before analysis, could be analyzed either by gas chromatography (GC) or liquid chromatography (LC).
In recent years, micro-fluidic systems involving sandwich and/or competitive immunoassays [24,25], microchip capillary electrophoresis [26][27][28] or nanopore-based analysis [29] have been designed [30]. A wide range of compounds of different molecular sizes, from amino acids and nucleobases to oligopeptides and oligonucleotides, would then likely be detected. Although these methods have already demonstrated real benefits in terms of sensitivity (1 µM to 0.1 nM) [31,32], some problems remain unresolved, even on Earth. Firstly, only a few molecules can be analyzed simultaneously, compared to the multidimensional methods used in laboratories, e.g., two-dimensional gel electrophoresis (2D-PAGE) or two-dimensional liquid chromatography (2D-LC) [28]. Another major hurdle is the very small sample volume that can be analyzed in one run (from 1 nL to 10 µL [31]), which reduces the method's sensitivity and sample representativity. In addition, only a pure extract can be injected into these micro-fluidic systems; as a result, they require a prior, complex, multi-step off-line sample preparation [25,27,[33][34][35]. Thanks to remarkable improvements in multidimensional systems and UPLC stationary phases, an online analytical platform allowing both the purification and analysis of a varied set of compounds can now be considered for exobiological studies.
The aim of the present study is to develop a liquid setup able to concentrate and separate a wide range of potential extra-terrestrial peptide-like molecules. In addition to its intrinsic qualities, such as analyte concentration and great versatility under drastic operating conditions, the developed configuration would have to provide a simple and fast separation suitable for in situ analysis. Placed online with a spatialized detector [36,37], the generic unit should enable the one-pot detection of diverse molecules of increasing complexity present at nanomolar or picomolar levels.
MS detectors coupled to liquid chromatography have already enabled major advances in the characterization of organics in meteorite, tholin and comet analogues [38][39][40]. Thanks to a Q-Orbitrap High Resolution Mass Spectrometer (HRMS) allowing a comprehensive assessment of the data obtained, several analytical features were assessed to define the best trapping and separation conditions. Finally, the approach, combined with a simplified sample preparation protocol and a spatialized detector, could potentially be validated for space life-search experiments.
Chemicals and Solutions
Several monomers and polymers considered to be strong biosignatures of life (amino acids, nucleic acids and oligopeptides) were used at different stages of optimization, as they are distinct in terms of polarities, chemical structures and molecular masses, and such biomarkers were used to select generic trapping parameters.
Preparation of Standard Solutions
Two stock solutions were prepared: oligopeptides were solubilized in high-purity water, amino acids in 1 M HCl, and nucleobases in 0.1 M NaOH. The first solution contained the peptides (0.5 µg/mL) in high-purity water; the second was composed of the amino acids and nucleobases (1.5 × 10⁻⁶ M). The standard solution containing the oligopeptides was used to prepare a working solution at 0.01 µg·mL⁻¹, which was used to select the optimal trapping parameters of the trapping-LC setup in comparison with the 1D-LC method. This solution was further mixed and diluted with the standard solutions containing amino acids and bases to prepare the different series of calibration solutions used to validate the optimized system.
1D-LC Setup
The analysis was performed with a Wadose LC isocratic pump, interfaced with a Q-Exactive Hybrid Quadrupole-Orbitrap mass spectrometer equipped with an ESI source (Thermo Fisher Scientific, Waltham, MA, USA). The MS functions were controlled by the Xcalibur data system (Thermo Fisher Scientific), whereas injection and HPLC solvent elution were monitored and controlled by our software developed in LabVIEW. The analytical column was a semi-polar Hypersil Gold aQ (50 × 1 mm, 1.9 µm, 175 Å, Thermo Fisher Scientific). The mobile phase consisted of acetonitrile (ACN)-0.1% formic acid and water-0.1% formic acid. Elution was performed with 10% and 20% ACN at a constant flow rate of 110 µL·min⁻¹. Experiments were conducted at 40 °C.
Trapping-LC Setup
The analysis was performed with the same instrumentation. The pump enabled the loading of the sample on the trapping setup, followed by the backflush and the analytical separation of analytes. Two different trapping columns, a semi-polar Hypersil Gold aQ (20 × 2.1 mm, 12 µm, 175 Å; Thermo Fisher Scientific) and a polar Hypercarb (20 × 2.1 mm, 7 µm, 175 Å; Thermo Fisher Scientific), were used.
One thousand microliters of the sample was injected into the preparative loop. The compounds were transferred to the trapping columns at 500 µL·min⁻¹ for 180 s. Non-retained compounds (e.g., matrix interferences, salts) were flushed to waste. Once the loading was completed, trapped analytes were backflushed at 110 µL·min⁻¹ until all targets were eluted onto the analytical column. During this time, the trapping and analytical columns were connected in series and eluted by means of the 1D-LC mobile phase. The valve scheme is described in Section 3.3. The automation was performed by LabVIEW® software (version 2016, National Instruments Corporation, Austin, TX, USA), which controlled the pump, valves and column oven.
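To make the valve-switching logic above easier to follow, the sketch below expresses the load/backflush sequence in plain Python. It is a minimal sketch only: the class and method names (`Pump`, `Valve`, `set_flow_rate`, `set_position`) and the elution duration are hypothetical placeholders, since the actual automation runs in LabVIEW.

```python
import time

# Minimal stubs so the sketch is self-contained; the real hardware is
# driven from LabVIEW, and these names are hypothetical.
class Pump:
    def set_flow_rate(self, ul_per_min):
        print(f"pump flow set to {ul_per_min} uL/min")

class Valve:
    def set_position(self, position):
        print(f"valve switched to '{position}'")

def run_trapping_sequence(pump, valve,
                          load_flow=500, load_time_s=180,
                          elution_flow=110, elution_time_s=1020):
    # 1. Loading: the sample in the preparative loop is pushed onto the
    #    trapping columns; non-retained species (salts, matrix) go to waste.
    valve.set_position("load")
    pump.set_flow_rate(load_flow)
    time.sleep(load_time_s)

    # 2. Backflush/elution: trapping and analytical columns are placed in
    #    series and eluted with the single 1D-LC mobile phase (the elution
    #    duration here is an assumed placeholder).
    valve.set_position("backflush")
    pump.set_flow_rate(elution_flow)
    time.sleep(elution_time_s)

if __name__ == "__main__":
    # Demo run with shortened timings so the sketch executes quickly.
    run_trapping_sequence(Pump(), Valve(), load_time_s=1, elution_time_s=1)
```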
The system was compared to a laboratory dual-trap system with two quaternary Accela LC pumps (600 and 1250) working together and interfaced with the same Q-Exactive Hybrid Quadrupole-Orbitrap Mass Spectrometer. The Accela 600 pump provided the loading of the sample on a trapping-LC unit, while the 1250 pump controlled the backflush and the analytical separation of analytes. Before injection, samples were stored at 4 °C using a Stack cooler CW (CTC Analytics AG, Zwingen, Switzerland). MS functions and HPLC solvent gradients were controlled by the Xcalibur data system (Thermo Fisher Scientific).
Mass Spectrometry
The analysis was carried out on a Q-Exactive mass spectrometer. Mass detection was performed in positive ion mode. The electrospray voltage was set at 4.0 kV. The capillary and heater temperatures were 275 °C and 300 °C, respectively. The sheath, sweep and auxiliary gas (nitrogen) flow rates were set at 35, 10 and 20, respectively (arbitrary units).
MS analyses were performed in either full scan or targeted selected ion monitoring (tSIM) mode. The full scan mode was employed when standard solutions were analyzed. Mass spectra were acquired at 70,000 resolution, with an AGC target of 10⁶ and a maximum injection time of 200 ms. Compounds were analyzed in the range of 300-2000 m/z when solutions of oligopeptides were analyzed (i.e., solutions used for the selection of optimal multidimensional parameters), and in the range of 75-1100 m/z when solutions contained amino acids and nucleobases (i.e., calibration solutions).
tSIM MS offered superior sensitivity when complex samples were analyzed. It was then used to determine the recovery of compounds, with the resolution set at 17,500 and the AGC target at 10⁵.
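The acquisition settings above can be summarized as a simple configuration structure. The following sketch is illustrative only; the dictionary layout is not an instrument API, just a compact restatement of the parameters quoted in the text.

```python
# Acquisition parameters transcribed from the text; this layout is
# illustrative, not an actual instrument interface.
ACQUISITION_MODES = {
    "full_scan": {
        "resolution": 70_000,
        "agc_target": 1e6,
        "max_injection_time_ms": 200,
        # mass range depends on the analyte set being screened
        "scan_range_mz": {
            "oligopeptides": (300, 2000),
            "amino_acids_and_nucleobases": (75, 1100),
        },
    },
    "tSIM": {
        "resolution": 17_500,
        "agc_target": 1e5,
        # used for recovery determination in complex samples
    },
}

def pick_mode(sample_is_complex: bool) -> str:
    # tSIM offers superior sensitivity for complex samples; full scan is
    # used for standard solutions.
    return "tSIM" if sample_is_complex else "full_scan"

print(pick_mode(True))   # -> tSIM
```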
UPLC General Features
Within this rationale, peptides were used as molecular targets. To overcome the lack of extra-terrestrial peptide standards, amino acid polymers differing in molecular weight and polarity were considered [33]. Oligopeptides containing alanine, glycine and leucine were considered valuable targets, since their building blocks are the main amino acids in the acid hydrolysates of meteorites, tholins and comets [22,41,42]. The sensitivity of the system was thus investigated based on the concentrations of meteoritic compounds. Amino acid and nucleobase concentrations in carbonaceous meteorites range from ppb to ppm levels (ng·g⁻¹ to µg·g⁻¹ of meteorite) [15,43]. Assuming a similar range elsewhere in the universe, the limit of detection of any technique used in situ has to be at least at the ppb level. For a 1 g sample that is liquid, melted or extracted in 1 mL of solvent, a detection limit at the ng·mL⁻¹ level is mandatory.
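A quick back-of-the-envelope calculation, a minimal sketch of the reasoning above, confirms why a ng·mL⁻¹ detection limit follows from meteoritic abundances:

```python
# 1 g of sample melted/extracted into 1 mL of solvent, with meteoritic
# abundances spanning ng/g (ppb) to ug/g (ppm).
sample_mass_g = 1.0
extract_volume_ml = 1.0

abundance_ng_per_g = 1.0            # lower bound of the reported ppb range
extracted_ng = abundance_ng_per_g * sample_mass_g
required_lod_ng_per_ml = extracted_ng / extract_volume_ml
print(f"required LOD ~ {required_lod_ng_per_ml} ng/mL")  # -> 1.0 ng/mL (ppb)
```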
To develop a simple but efficient liquid chromatographic system for space experimentation, a single isocratic pump was used to perform both the trapping and the separation of the compounds. For that purpose, a minimum length of flexible stainless-steel capillaries, together with the Viper Fingertight Fitting system, was selected to provide virtually dead-volume-free plumbing, minimizing extra-column dispersion.
Stationary phases were chosen according to their relevance for peptide-like compound analysis. Alone and in series, short columns (20-150 mm) with stationary phases exhibiting different polarities had previously been evaluated in gradient mode for the analysis of laboratory cometary analogues [33]. Briefly, in isocratic mode, elution on reverse-phase C18 Hypersil Gold aQ, with acetonitrile as the organic solvent, allowed the study of complex mixtures with high peptide retention. To decrease mobile phase consumption and increase sensitivity, a low-diameter (1 mm) Hypersil Gold aQ column was selected as the analytical column. To ensure high solubility of the oligopeptides in the mobile phase with low energy consumption by the column oven, the temperature was set at 40 °C.
Automation and control (oven, pump and valves) were performed by an interface programmed on LabVIEW.
1D-LC Configuration
For direct injection, the best separation was achieved with a 10/90 ACN/H₂O mobile phase, regarding intensities and retention times of the different oligopeptides (Figure 1, Table 1). Increasing the volume of injection would be a way to improve sensitivity, as only a small amount of complex and highly diluted sample is expected to be available [44]. An online liquid-trapping system would then be necessary to enable a large-volume injection, to clean up samples (highly aqueous, salt-containing, etc.) and to selectively trap molecules of interest.
Trapping-LC Configuration
Regarding space constraints, trapping must be performed under an unusual configuration with a single pump for trapping and elution.
Various trapping factors, such as the column stationary phases and the loading and backflush parameters, strongly influence compound recovery and cleanup efficiency [45][46][47]. The stationary phases of the trapping columns had previously been selected to characterize high-molecular-weight compounds in a cometary ice analogue. Briefly, Hypersil Gold aQ allowed the retention of semi-polar and apolar peptides, while more polar and low-mass compounds were refocused at the head of a Hypercarb column. By serially coupling both columns and setting a loading flow rate of 500 µL·min⁻¹ for 120 s and a backflush of 240 s, this dual-trap setup led to the best retention of all standards [33].
To adapt this system to in situ analysis, the laboratory loading pump was suppressed and a switching valve was added (Figure 2). The designed system was thus composed of two trapping columns coupled to the analytical dimension. In that single-pump configuration, the backflush step corresponded to elution on the analytical column. The only parameter left to optimize was then the nature of the mobile phase. To evaluate the system, trimethionine was chosen as an internal standard.
The peak tailing and broadening observed for the highest-molecular-weight oligopeptides with 10% ACN were not suitable for the elution of non-targeted oligopeptides. Backflush and elution with 20% ACN gave, on the contrary, a real benefit in terms of separation, as less coelution occurred for the studied peptides compared to the direct-injection 1D-LC configuration (Figures 1 and 3). On the whole, peaks were well defined. The 180 s delay in elution was particularly interesting for polar and/or very-low-molecular-weight compounds, which were no longer eluted at the dead retention time. Backflush with 20% ACN also gave the best recoveries, except for the phenylalanine tripeptide (loss of 22%, Figure 4).
Interest of the Trapping-LC Setup for In Situ Experiments
Under space constraints, time and solvent consumption have to be considered. In our configuration, if LC was chosen to be part of the on-board instrumentation, it would analyze samples in less than 20 min with 2 mL of a single mobile phase. These features comply with in situ conditions and constitute a good basis for future improvements.
The retention capability of our designed system was compared to direct injection without trapping. The performance of the system was evaluated by injecting the same amount of oligopeptides in the direct (20 µL, 0.5 µg/mL) and trapping configurations (1000 µL, 10 ng/mL). As illustrated by Figure 5, there was no major difference in peptide retention and detection, and both distributions of oligopeptides were similar. The trapping was, however, not efficient for all the peptides, as the alanine one was not retained.
The trapping-LC system led, however, to significantly higher signal intensities, as similar responses were obtained with a 50-fold lower concentration in the trapping configuration.
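As a quick consistency check of the comparison above, the two injection schemes deliver the same absolute amount of peptide while the trapping configuration uses a 50-fold lower concentration:

```python
# Direct injection: 20 uL at 0.5 ug/mL (= 0.5 ng/uL); trapping: 1000 uL
# at 10 ng/mL (= 0.010 ng/uL). Same injected mass, 50-fold dilution.
direct_ng = 20 * 0.5            # 10.0 ng
trapping_ng = 1000 * (10 / 1000)  # 10.0 ng
print(direct_ng, trapping_ng)     # -> 10.0 10.0
print((0.5 * 1000) / 10)          # concentration ratio -> 50.0
```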
Regarding targets for future space exploration missions, amino acid and nucleobase trapping was then evaluated. Contrary to nucleobases, in the optimized peptide trapping conditions, no amino acid was retained, except for phenylalanine and tyrosine (Figure 6).
Analyses were then performed with a laboratory dual-trap system. Figure 7 shows the responses of the two trapping systems for retained amino acids, bases and oligopeptides. For all the targets, similar retention was obtained but with a higher standard deviation for the single-pump system (up to 26% for Phe-Phe).
Regarding targets for future space exploration missions, amino acid and nucleobase trapping was then evaluated. Contrary to nucleobases, in the optimized peptide trapping conditions, no amino acid was retained, except for phenylalanine and tyrosine ( Figure 6). Analyses were then performed with a laboratory dual-trap system. Figure 7 shows the responses of the two trapping systems for retained amino acids, bases and oligopeptides. For all the targets, similar retention was obtained but with a higher standard deviation for the single-pump system (up to 26% for Phe-Phe). Analyses were then performed with a laboratory dual-trap system. Figure 7 shows the responses of the two trapping systems for retained amino acids, bases and oligopeptides. For all the targets, similar retention was obtained but with a higher standard deviation for the single-pump system (up to 26% for Phe-Phe). To further exemplify the sensitivity of the system when coupled to a mass spectrometer, the recovery of some targeted compounds was calculated using calibration curves (Table 2). Linearity ranged from 0.25 to 10 ng·mL −1 . To further exemplify the sensitivity of the system when coupled to a mass spectrometer, the recovery of some targeted compounds was calculated using calibration curves (Table 2). Linearity ranged from 0.25 to 10 ng·mL −1 . Retained peptides, nucleobases and amino acids were detected at the ng·mL −1 level. This clearly demonstrates the effectiveness of this online trapping approach when highly diluted and complex samples are analyzed. This trapping unit, coupled to a liquid chromatography system, would then enlarge the set of data about potential exobiological molecules without denaturing them. Despite its ability to retain different organics in terms of polarity, chemical structure and molecular weight in a single run, this broad approach should also be able to raise the signal of highly diluted compounds. This is fundamental for liquid in situ experiments, since compound extraction would previously have to be reduced to an extreme simplicity with large volumes of final extracts (in the order of milliliters), and thus with a low recovery achievement.
Conclusions
In situ detection of biomarkers in the solar system has become an appealing project, partly guiding past, present and future space missions. Up to now, in situ instrumentation has mainly been designed to detect and quantify volatile organic compounds or their derivatives. In this work, we present the first trapping unit for extra-terrestrial peptide-like compounds. The screening of several parameters showed that a trapping unit placed in series with an analytical column significantly enhanced the range of potential compounds that can be analyzed. Through this LC setup, we do not claim to separate all individual compounds in the sample as MD-LC systems do. Nevertheless, by avoiding mobile phase changes and reducing the system's dead volume through short tubing and Viper connections, the chromatographic dilution of a compound's band, as well as peak tailing and broadening, are limited. As a result, the detection of very low concentrations of analytes is facilitated. Under space conditions, this system could present several advantages, since it would (1) avoid the chemical derivatization of non-volatile and polar compounds that is necessary for current on-board GC-MS instruments, (2) limit the misinterpretation of chromatograms, (3) enlarge the range of potential biomarkers targeted and (4) reduce the complexity of the offline sample preparation protocols used with microfluidic systems, without decreasing a compound's signal intensity. It could then represent a powerful tool for exobiological studies. | 2019-04-14T13:02:46.197Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "8ead37888f7ef5e861b1932dd93cf0f011e79ea3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/molecules24071429",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ead37888f7ef5e861b1932dd93cf0f011e79ea3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
238990644 | pes2o/s2orc | v3-fos-license | Comprehensive speed breeding: a high‐throughput and rapid generation system for long‐day crops
Breeding cycle time largely determines the efficiency of crop genetic improvement. To accelerate generation progress, Watson et al. (2018) proposed the concept of 'speed breeding (SB)' by extending the photoperiod and increasing the light intensity during plant growth. However, there have been few reports on the successful application of SB in semi-winter and winter long-day crops. Here, we propose a comprehensive speed breeding (CSB) system for the rapid generation and high-throughput culture of long-day crops, which can accomplish 4.5 generations per year for both semi-winter canola and winter wheat. With the aid of extra far-red light, 3 and 5.5 generations can be accomplished for typical winter and semi-winter type canola, respectively. CSB also exhibits high efficiency in spring type canola, achieving up to 6.5 and 5.5 generations with or without extra far-red light, respectively. This strategy is expected to greatly accelerate gene pyramiding of superior alleles, screening of recombinants for QTL mapping, and functional genomics research through the rapid purification of multiple mutated alleles.
Breeding cycle time largely determines the efficiency of crop genetic improvement. To shorten the generation time, Watson et al. (2018) proposed a concept of 'speed breeding (SB)' by extending the photoperiod and increasing the light intensity during plant growth, and achieved up to 6 and 4 generations per year for spring wheat (Triticum aestivum) and spring canola (Brassica napus), respectively. To realize the high-throughput application of SB, in combination with molecular breeding, to long-day winter crops, we proposed an updated SB system designated comprehensive SB (CSB), including vernalization of germinated seeds (VGS), high-density seedling culture, and accelerated flowering and maturation under an optimized light regime. First, germinated seeds placed on wet tissue paper with visible radicles are vernalized at 4.5 °C during a 22-h light period and at 9 °C during the 2-h dark period (Figure 1a; left), with the vernalization time varying among genotypes. Subsequently, a 96-well tray is used for seedling culture in a hydroponic scheme (Figure 1c), which is convenient for sample collection and genotyping in 96-well plates. Finally, the target plants are transplanted into pots and cultivated in a growth chamber, in which cold air is directed to circulate from the roof to the air flue and pass the plants, finally flowing back to the roof (Figure 1b). The light is supplied by a light-emitting diode (LED) board with an optimized spectrum (Figure 1a; upper right) (Bantis et al., 2018), with an intensity of around 300 µmol/m²/s at bench height and over 900 µmol/m²/s at 10 cm below the LED bars. The chamber is programmed to run a 22 h/2 h (light/dark) photoperiod at 22 °C with a humidity of 70% (Figure 1a; lower right). In the total growth area of 10.8 m², 2025-5400 adult wheat plants or 675-1350 adult canola plants can be accommodated, at densities of 187.5-500 or 62.5-125 plants/m², respectively.
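A minimal sketch reproducing the throughput arithmetic above (note that 10.8 m² × 187.5 plants/m² gives exactly 2025, which is why the lower wheat bound is read as 2025 rather than a typeset 2035):

```python
# Plant throughput from growth area and planting density.
area_m2 = 10.8

densities = {
    "wheat":  (187.5, 500.0),   # plants per m^2
    "canola": (62.5, 125.0),
}

for crop, (low, high) in densities.items():
    print(f"{crop}: {area_m2 * low:.0f}-{area_m2 * high:.0f} adult plants")
# wheat:  2025-5400 adult plants
# canola: 675-1350 adult plants
```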
Germinated seeds of semi-winter and winter canola varieties (ZS11 and Darmor-bzh) were used to test the effect of CSB on long-day crops. After 17 days of VGS, ZS11 plants exhibited visible flower buds at 42 days after germination (DAG) and flowered uniformly at around 48 DAG. All the siliques turned deep yellow, with hard black seeds inside, at 87 DAG (Figure 1d). Despite the rapid growth, seeds harvested from both 80 and 87 DAG plants were viable, with a germination rate of more than 99% (Figure 1e). Generally, each ZS11 plant could produce more than 800 mature seeds from 30 siliques, with a thousand-seed weight (5.24 ± 0.30 g, n = 10) greater than that obtained under field conditions in Wuhan (4.57 ± 0.19 g, n = 10). In contrast, without vernalization, only approximately half of the plants could generate tiny flower buds at 145 DAG (Figure 1f), indicating that a period of cold treatment is necessary for reducing the cycle time of semi-winter canola under CSB conditions. CSB also functioned well for a typical spring-type variety, Westar, which generated the first flower at 32 DAG, finished anthesis at around 37 DAG, and produced mature seeds at 67 DAG (Figure 1g). Relative to the previous SB procedure (Hickey et al. 2019), CSB can reduce the generation time of spring-type canola by 40.7% (from 113 to 67 days). Unexpectedly, despite 60 days of VGS, Darmor-bzh plants remained in the vegetative stage at 148 DAG (Figure 1k; upper left). We further evaluated the application potential of CSB in wheat by culturing the winter variety Yannong19 (YN19) and the spring variety Chinese Spring. After removal of all tillers, YN19 took approximately 58 days from germination to anthesis after 30 days of VGS and produced about 20 mature seeds at around 83 DAG (Figure 1h). Chinese Spring plants progressed to anthesis at about 50 DAG (without vernalization) and produced approximately 45 mature seeds at around 75 DAG, 16 days earlier than previously reported (Watson et al., 2018) (Figure 1i). Both harvests displayed normal germination rates ranging from 90% to 100%.
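A rough generations-per-year estimate can be derived from the reported seed-to-seed times, assuming back-to-back cycles; treating the VGS period as contained within the seed-to-seed window is an assumption of this sketch.

```python
# Rough generations-per-year check from the harvest dates quoted above.
def generations_per_year(cycle_days: float) -> float:
    return 365.0 / cycle_days

cycles = {
    "ZS11 (semi-winter canola, viable seed at 80 DAG)": 80,
    "Westar (spring canola, mature seeds at 67 DAG)": 67,
    "Chinese Spring (spring wheat, ~75 DAG)": 75,
}

for label, days in cycles.items():
    print(f"{label}: ~{generations_per_year(days):.1f} generations/year")
# ZS11: ~4.6 (reported as 4.5); Westar: ~5.4 (reported as 5.5);
# Chinese Spring: ~4.9 (consistent with the 4.5-5 quoted for wheat)
```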
Since the CSB procedure failed to induce flowering of the typical winter-type canola variety Darmor-bzh, we further improved the light regime by adding 500 µmol/m²/s of far-red light (Figure 1j). Under the new light condition, with a 55-day VGS treatment, Darmor-bzh could generate visible flower buds at 92 DAG and mature seeds at around 125 DAG (Figure 1k). Aided by this improved CSB protocol, the life cycles of Westar and ZS11 could be further shortened by 12 and 21 days, respectively (Figure 1l,m). RNA sequencing of the penultimate leaf in ZS11 showed that the additional far-red light in CSB significantly increased the expression of multiple activators of flowering, including the B. napus orthologues of PHYTOCHROME A, CONSTANS, and PHYTOCHROME-INTERACTING FACTOR 4 (Figure 1n). Thus, extra far-red light can collectively enhance the transcriptional levels of FLOWERING LOCUS T (FT) homologues at both stages, leading to earlier flowering than under solo CSB.
We attempted to apply CSB in marker-assisted backcross breeding programs by introgressing a favorable haplotype of BnaA9.CYP78A9a, which can significantly increase seed weight and silique length, from ZS11 into an elite restorer, 621R. As shown in Figure 1o, eight generations were accomplished within 23 months, and multiple improvement lines in the BC₅F₃ families were obtained. Among them, 621R-A9 exhibited a 97.7% background recovery rate of the recipient genome, as revealed by whole-genome re-sequencing (Figure 1p). Field evaluation showed that the thousand-seed weight and silique length of 621R-A9 were significantly higher than those of 621R (Figure 1q). Interestingly, some other yield- and quality-related traits were also significantly optimized relative to the recipient (Figure 1q). [Figure 1q caption: Agronomic traits of 621R-A9 under field conditions; **significantly different at P < 0.01; bars indicate the standard error of the mean; all pots measured 10 × 10 × 10 cm; all scale bars, 10 mm.] These results indicate that CSB is effective and time-saving for molecular breeding in semi-winter canola.
In summary, we propose a CSB system for the high-throughput culture and rapid generation of long-day crops. Application of CSB can cycle 4.5 and 5.5 generations per year for semi-winter and spring canola, respectively. Complementing CSB with extra far-red light not only enables winter canola to be reproduced for 3 generations per year but also accelerates the other canola types by one further generation. Moreover, about 4.5 to 5 generations per year can be accomplished for both spring and winter wheat under CSB. This strategy is expected to greatly accelerate gene pyramiding of superior alleles, screening of recombinants for QTL mapping, and functional genomics research through the rapid purification of multiple mutated alleles. | 2021-10-16T06:16:36.710Z | 2021-10-15T00:00:00.000 | {
"year": 2021,
"sha1": "21a93a345de3b86a4b4e28fc4c1f0815f7e0e4d1",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/pbi.13726",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f27127c638f9ce92edf944d49496f55a424f1233",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257018464 | pes2o/s2orc | v3-fos-license | A comparative analysis of carbon reduction potential for directly driven permanent magnet and doubly fed asynchronous wind turbines
Wind power generation does not emit greenhouse gases or pollutants, but there are some carbon emissions from the manufacturing, transportation, operation, and waste disposal of wind turbines. Directly driven permanent magnet and doubly fed asynchronous wind turbines currently have the largest market share in China, but few Chinese studies have compared their differences in carbon reduction potential. This paper uses life cycle assessment (LCA) to quantitatively analyze the full life cycle carbon emissions of the two wind turbines to determine which type of wind turbine has greater carbon reduction potential, obtaining the following results. (1) The full life cycle greenhouse gas emissions of 2.5 MW directly driven permanent magnet and doubly fed asynchronous wind turbines are 8.48 and 10.43 g CO₂/kWh, respectively. The directly driven permanent magnet wind turbine is superior in terms of carbon reduction. (2) The stage with the greatest impact and the greatest difference between the two wind turbines in the full life cycle is the production stage, during which the carbon emissions of the directly driven permanent magnet and doubly fed asynchronous wind turbines are 1.045 × 10⁶ and 1.210 × 10⁶ kg, respectively. (3) According to sensitivity analysis, proper waste disposal and transportation can reduce carbon emissions from wind turbines. These research findings can be used to help achieve carbon peaking and neutrality goals, as well as the technological development of wind power enterprises.
| INTRODUCTION
The consumption of fossil fuels is increasing as industrialization progresses, resulting in more waste and pollutants in the natural environment, which significantly impact the environment and human health. Wind power, which is green and low carbon, is an important way to address environmental and energy issues. 1 The cumulative installed capacity of wind power in China was 328 million kW at the end of 2021. The national wind power generation capacity was 652.6 billion kWh, a 40.5% increase year on year. Wind and photovoltaic power generation together represented approximately 11% of China's total electricity consumption. When carbon peaks in 2030, wind power installed capacity is expected to reach 800 million kW, accounting for 15% of total power generation. When carbon neutrality is reached, as expected, in 2060, wind power installed capacity may exceed 2 billion kW, and the share of wind power generation will be greater than 30%. 2 Therefore, the growth of the wind power industry is critical to achieving the goals of carbon peaking and neutrality.
The most common wind turbine technologies in China are the doubly fed asynchronous and directly driven permanent magnet types. As can be seen from Figures 1 and 2, the difference between the two is that the wind wheel shaft of the doubly fed wind turbine is connected to the generator rotor through a gearbox, whereas the rotor shaft of a direct-drive wind turbine is directly connected to the generator rotor, omitting the gearbox. In 2020, the doubly fed asynchronous type accounted for 60.9% of newly installed offshore wind turbine capacity, while the directly driven permanent magnet type accounted for 30.5%. 5 However, it is unknown which of the two types has the greater potential for carbon reduction and whether a greater carbon reduction effect could be achieved by increasing the share of a specific type. Therefore, analyzing and comparing the carbon reduction potentials of the two types of wind turbines is critical.
Although wind energy does not emit greenhouse gases during generation, materials and energy are required throughout the life cycle of wind turbines, so some carbon emissions are unavoidable. 6 These full life cycle carbon emissions can be quantified using life cycle assessment (LCA). LCA is a valuable environmental management tool that can be used to assess the carbon emissions and cumulative energy demands of products or services over their entire life cycle. 7,8 Most previous research on wind power using LCA focused on a single wind farm. For example, Al-Behadili and El-Osta 9 investigated the full life cycle and payback time of energy investment in Libyan wind farms; Ardente et al. 10 investigated the energy performance and conducted an LCA of an Italian wind farm over a 20-year time scale; and, targeting a wind farm in Shenyang City, China, Gao et al. 11 used LCA to calculate the carbon emissions of wind turbine production, transportation, operation, and waste disposal and compared them to those of coal power generation, concluding that wind power generation had significant energy-saving and environmental benefits. Other studies covered both offshore and onshore systems. Bonou et al. 12 conducted a life cycle analysis of wind farms on land and at sea. Xiang et al. used LCA to compare offshore and onshore wind power systems and analyzed the carbon emissions of wind farms equipped with turbines of various power ratings, finding that offshore wind farms with higher-power turbines provided greater carbon reduction benefits.
The difference in carbon reduction potential between directly driven and doubly fed wind turbines is rarely investigated in China. Most research is based on life cycle background databases from other countries, which do not accurately reflect Chinese reality.
| Research object and method
Until 2021, the wind turbine with the most installed capacity among onshore wind farms in China was the model with a single unit capacity of 2.5 MW, accounting for 40% of the onshore wind farm installed capacity. Therefore, the 2.5 MW directly driven permanent magnet wind turbine and 2.5 MW doubly fed asynchronous wind turbine are chosen as research objects in this paper. The full life cycle carbon footprints and cumulative energy demands of two types of wind turbines are compared using eFootprint, China's first LCA software with independent intellectual property rights, and the domestic life cycle background database to determine the type with greater carbon reduction potential.
| System boundary
A wind turbine's system boundary includes four stages: production, transportation, operation, and waste disposal (see Figure 3 for details). The main purpose of this study is to analyze the differences in the carbon footprints of the two types of wind turbines. During installation and commissioning, the carbon footprints of the two types are almost identical; according to calculations and a review of relevant studies, although this process also emits greenhouse gases, its carbon footprint accounts for less than 1% of the total, so it is not listed separately in Section 2.4 (Inventory analysis).
| Model assumptions
Three assumptions are made in this paper. (1) The tower is 80 m tall, and the ground is level. (2) Wind turbines have a service life of 20 years. 13,14 (3) The wind turbine is located in a wind power plant in the Chinese province of Guangdong, and its annual power generation time is 2630 h. 15
| Inventory analysis
The wind power foundation, tower, blade, hub, nacelle cover, nacelle chassis, transmission mechanism, generator, anemometry system, and electronic control system are the main components of the doubly fed asynchronous wind turbine. In contrast to the doubly fed type, the low-speed wind wheel of a directly driven wind turbine is directly connected to the generator, removing the need for a complex transmission mechanism. The anemometry and electronic control systems are essentially electronic equipment, with volumes and masses that are less than 5% of the total unit. They have many parts, and their data are difficult to obtain, so they are not discussed in this paper. 16,17 The materials and energy consumption of the components in the 2.5 MW doubly fed asynchronous and directly driven permanent magnet wind turbines are listed in Table 1. The majority of the data comes from wind turbine manufacturers' product manuals.
Table 1. Material consumption of wind turbine components at the production stage.
| Stage of transportation
Because wind energy resources are often found in remote areas, the carbon footprint in the transportation stage is primarily due to the consumption of fossil fuels by transportation vehicles. This paper assumes that transportation is provided by gasoline-powered trucks with a load capacity of 1.000 × 10⁴ kg. The truck transportation life cycle data are derived from the database of the LCA software eFootprint. It is assumed that the truck transportation distance to deliver the wind turbine components to the wind farm is 1000 km, and that for the concrete required by the wind power foundation is 50 km. 18

For the directly driven permanent magnet wind turbine, carbon emissions are highest in the manufacturing stage; because metal recycling compensates for some materials' carbon emissions, the total carbon emission in the waste disposal stage (−1.742 × 10⁵ kg) is negative, accounting for −15.6% of the total. The carbon emission of the doubly fed asynchronous wind turbine is 1.371 × 10⁶ kg, and its carbon emission in the manufacturing stage is also the highest (1.298 × 10⁶ kg), accounting for 94.7% of the total. The carbon emissions from the operation, transportation, and waste disposal stages are 2.334 × 10⁵, 7.290 × 10⁴, and −2.336 × 10⁵ kg, respectively, accounting for 17.0%, 5.3%, and −17.0%. The carbon footprint of the directly driven permanent magnet wind turbine is 81.4% of that of the doubly fed asynchronous wind turbine.
Therefore, the carbon emissions of the directly driven permanent magnet and doubly fed asynchronous wind turbines are 8.48 and 10.43 g CO₂/kWh, respectively, according to Equation (1):

b = B / (20 × Q)    (1)

where b is the carbon emission per kWh (g CO₂/kWh), B is the total carbon emission over the entire life cycle (g CO₂), Q is the annual average power generation capacity (kWh/year), and 20 is the assumed service life in years.

The production stage is when the carbon emissions of the two wind turbines are highest, and where their difference is greatest. Figure 5 depicts the carbon emission ratio of each component. The main structural difference between the two lies in the generator and the transmission mechanism. Because of its low rotational speed, the directly driven permanent magnet generator requires more magnetic poles (typically above level 90), resulting in a larger volume and weight than the doubly fed asynchronous generator. The generator mass in the 2.5 MW directly driven permanent magnet wind turbine is approximately 6.500 × 10⁴ kg, whereas the doubly fed asynchronous generator mass is only approximately 1.200 × 10⁴ kg. 28 The transmission mechanism of a directly driven wind turbine is simplified because its wind wheel axle is directly connected to the generator rotor, eliminating the speed-up gearbox and greatly reducing the transmission mechanism mass.
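A short numerical check of Equation (1), using the totals reported above; the direct-drive figure is inferred from the 81.4% ratio rather than stated directly, which is an assumption of this sketch:

```python
# Sanity check of Equation (1) against the reported per-kWh figures.
Q_kwh_per_year = 6.575e6      # annual generation (2630 h at 2.5 MW)
lifetime_years = 20

B_dfig_g = 1.371e6 * 1000     # doubly fed total life cycle emissions, g CO2
b_dfig = B_dfig_g / (lifetime_years * Q_kwh_per_year)
print(f"doubly fed: {b_dfig:.2f} g CO2/kWh")      # -> 10.43

# Direct-drive total is quoted as 81.4% of the doubly fed figure.
b_ddpm = b_dfig * 0.814
print(f"direct drive: {b_ddpm:.2f} g CO2/kWh")    # -> ~8.49 (reported 8.48)
```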
The nacelle mass of doubly fed asynchronous wind turbines is generally greater than that of directly driven permanent magnet wind turbines. The nacelle mass of the 2.5-MW doubly fed asynchronous wind turbine (including the impeller and generator) is approximately 1.530 × 10⁵ kg, whereas it is approximately 1.320 × 10⁵ kg for the directly driven permanent magnet type. A larger nacelle mass necessitates increasing the tower and wind power foundation mass (assuming flat terrain). Figures 5 and 6 show that the tower is responsible for most of the carbon footprint at this stage. Furthermore, the carbon footprint of the tower in Figures 5 and 6 stems solely from its manufacturing process and does not include any other carbon footprint components.
To summarize, the materials and energy consumption of the doubly fed asynchronous wind turbine are greater than those of the directly driven permanent magnet type during the manufacturing stage, and the carbon emission of the former is 124.3% of that of the latter.
The amount of carbon emitted during transportation is determined by the mode of transportation, the distance traveled, and the mass of the goods. A truck with a load capacity of 1.000 × 10⁴ kg transporting 1.000 × 10³ kg of goods over 1 km emits 0.140 kg of CO₂. Because the two wind turbines' transportation modes and distances are assumed to be the same, carbon emissions are determined solely by the mass of goods. At this stage, the carbon emissions of the doubly fed wind turbine are 7.290 × 10⁴ kg, which is 1.390 × 10⁴ kg higher than those of the directly driven wind turbine.
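Taking the quoted emission factor and distance at face value, the transport arithmetic can be inverted to estimate the implied mass of transported goods. This is a consistency sketch only: component masses are not restated here, and the foundation concrete actually travels just 50 km, so the figure is indicative.

```python
# The quoted factor is 0.140 kg CO2 per 1000 kg of goods per km.
EF = 0.140 / 1000.0   # kg CO2 per kg of goods per km

def transport_emissions_kg(goods_mass_kg, distance_km):
    return goods_mass_kg * distance_km * EF

print(transport_emissions_kg(1000, 1000))   # 1 t over 1000 km -> 140.0 kg CO2

# Inverting the reported 7.290e4 kg for the doubly fed turbine at 1000 km
# implies roughly 5.2e5 kg of transported goods.
implied_mass = 7.290e4 / (EF * 1000.0)
print(f"implied transported mass: {implied_mass:.3g} kg")  # ~5.21e5 kg
```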
Carbon emissions in the operating stage are caused by daily maintenance and component replacement. Routine inspection and lubricating oil replacement are part of daily maintenance; because the doubly fed type has a gearbox, its carbon emissions in this process are slightly higher than those of the directly driven type. Component replacement emissions are related to the failure and replacement rates of each wind turbine component. The speed-up gearbox is eliminated in the directly driven wind turbine, lowering its overall failure rate, and its overall mass is lower than that of the doubly fed type. Therefore, the carbon emission from component replacement in the directly driven wind turbine is only 79.7% of that in the doubly fed asynchronous type.
According to the assumptions of this paper, metals are recycled in the waste disposal stage, while other materials are discharged as municipal solid waste. Because the doubly fed asynchronous wind turbine consumes more metals, it can offset 17% of its carbon emissions, whereas the directly driven type can only offset 15.6%.
| Payback period for energy
The energy payback time is the number of operating years required for a wind power system to recover the primary energy consumed over its life cycle, typically taken as the sum of the energy required in the wind turbine production, transportation, operation, and waste disposal stages. The relationship between total energy consumption and annual system power generation intuitively reflects the return on investment of unit energy. 29 The annual operation time of the wind turbines studied in this paper is 2630 h, resulting in an annual average power generation capacity of 6.575 × 10⁶ kWh, equivalent to 23,670 GJ. Throughout the life cycle, the cumulative energy demands of the 2.5-MW doubly fed asynchronous and directly driven permanent magnet wind turbines are 1.890 × 10⁷ and 1.560 × 10⁷ MJ, respectively. Therefore, the energy payback times for the 2.5 MW doubly fed asynchronous and directly driven permanent magnet wind turbines are 0.80 and 0.66 years, respectively, according to Equation (2):

EPT = CED / Q    (2)

where CED is the cumulative energy demand (MJ), Q is the annual average power generation capacity (MJ/year), and EPT is the energy payback time (years).
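Equation (2) can be verified numerically from the figures above:

```python
# Energy payback time check, Equation (2): EPT = CED / Q.
Q_mj_per_year = 23_670 * 1000        # 23,670 GJ expressed in MJ

for name, ced_mj in [("doubly fed", 1.890e7), ("direct drive", 1.560e7)]:
    print(f"{name}: EPT = {ced_mj / Q_mj_per_year:.2f} years")
# doubly fed: 0.80 years; direct drive: 0.66 years
```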
| Sensitivity analysis
The wind turbine's waste disposal stage is critical in determining its carbon footprint throughout its life cycle. Therefore, the treatment of waste materials directly impacts the environmental effects produced at this stage. Sensitivity analysis is performed for the wind turbine transport process, waste disposal treatment methods, and metal recovery rate. 30
| Waste disposal method
Wind turbine waste materials are treated in three ways: recycling, landfill, and incineration. 31,32 Metals (steel, copper, iron) are recycled, while nonmetallic materials (glass fiber, epoxy resin, polyester resin, acetone) are landfilled or incinerated. Keeping all other variables constant and assuming that each method's utilization rate is 100%, the carbon emissions generated by the full landfill of nonmetallic materials from the doubly fed asynchronous wind turbine would be 4.430 × 10³ kg, and those generated by full incineration would be 7.415 × 10⁴ kg. The carbon emissions generated by the full landfill of nonmetallic materials from the directly driven permanent magnet wind turbine would be 4.230 × 10³ kg, and those generated by full incineration would be 7.056 × 10⁴ kg, as shown in Figure 7. Currently, landfill is the primary method of disposing of nonmetallic materials in China.
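Expressed as shares of total life cycle emissions, these scenario figures reproduce the percentages quoted in the conclusions; the direct-drive total used below is inferred from the 81.4% ratio, which is an assumption of this sketch.

```python
# End-of-life scenarios as shares of total life cycle emissions (kg CO2).
totals_kg = {"doubly fed": 1.371e6, "direct drive": 1.371e6 * 0.814}
scenarios_kg = {
    "doubly fed":   {"landfill": 4.430e3, "incineration": 7.415e4},
    "direct drive": {"landfill": 4.230e3, "incineration": 7.056e4},
}

for turbine, total in totals_kg.items():
    for method, extra in scenarios_kg[turbine].items():
        print(f"{turbine}, {method}: +{100 * extra / total:.2f}% of life cycle")
# doubly fed:   +0.32% (landfill), +5.41% (incineration)
# direct drive: +0.38% (landfill), +6.32% (incineration)
```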
| Recovery rate of metals
Metal recovery can significantly reduce wind turbine carbon emissions over their entire life cycle. When all other variables remain constant, carbon emissions from the doubly fed asynchronous and directly driven permanent magnet wind turbines can be reduced by 3.950 × 10⁴ and 2.790 × 10⁴ kg, respectively, for every 10% increase in recovery rate, as shown in Figure 8.
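Because the stated sensitivity is linear in the recovery rate, arbitrary recovery scenarios can be sketched directly; the snippet below combines the per-10% reductions above with the life-cycle totals quoted in the Outlook section:

```python
# Linear sensitivity of life-cycle CO2 to metal recovery rate, using the
# per-10%-increase reductions stated in the text. Baseline totals are the
# life-cycle emissions quoted in the Outlook section.

baseline_kg = {"doubly fed": 1.371e6, "direct drive": 1.116e6}
reduction_per_10pct = {"doubly fed": 3.950e4, "direct drive": 2.790e4}

for turbine in baseline_kg:
    for extra_recovery_pct in (10, 20, 30):
        saved = reduction_per_10pct[turbine] * extra_recovery_pct / 10
        total = baseline_kg[turbine] - saved
        print(f"{turbine}: +{extra_recovery_pct}% recovery -> "
              f"{total:.3e} kg CO2 ({saved / baseline_kg[turbine]:.1%} saved)")
```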
| Transportation
Highway transportation uses either gasoline or diesel vehicles. When all other factors are held constant, the carbon emissions of the doubly fed asynchronous and directly driven permanent magnet wind turbines under heavy-duty gasoline transportation are 7.290 × 10⁴ and 5.900 × 10⁴ kg, respectively, compared with 9.214 × 10⁴ and 7.457 × 10⁴ kg under heavy-duty diesel transportation, as shown in Figure 9. Switching from diesel to gasoline therefore reduces transportation carbon emissions by 1.924 × 10⁴ and 1.557 × 10⁴ kg for the two turbines, respectively.
Figure 7: Sensitivity analysis on disposal methods of nonmetallic materials.
| Conclusion
(1) The carbon emission of the 2.5-MW doubly fed asynchronous wind turbine is 10.43 g/kWh, while that of the directly driven permanent magnet wind turbine of the same power is 8.48 g/kWh, only 81.4% of the former. Their total energy demands are 1.890 × 10⁷ and 1.560 × 10⁷ MJ, respectively, and their energy payback times are 0.80 and 0.66 years, as shown in Figure 10. The directly driven permanent magnet wind turbine is therefore more promising in terms of carbon reduction, and both values are far lower than the 1050 g/kWh of traditional thermal power generation. 33
(2) The production process contributes 94.7% of total life-cycle carbon emissions for the 2.5-MW doubly fed asynchronous wind turbine and 93.7% for the 2.5-MW directly driven permanent magnet wind turbine. The two turbines contribute 5.30% and 5.65% of carbon emissions during the transportation stage, and 17% and 16.7% in the operation stage, respectively.
Metal recycling in the waste disposal stage significantly affects the full life-cycle results: it offsets 17.0% and 15.6% of the life-cycle carbon emissions of the 2.5-MW doubly fed asynchronous and directly driven permanent magnet wind turbines, respectively.
Over the entire life cycle, the carbon emissions of both wind turbines are caused primarily by the manufacturing stage, followed by the operation and transportation stages, while the waste disposal stage provides an offset.
(3) According to the findings of the sensitivity analysis, the method used to dispose of nonmetallic materials affects the carbon footprint of wind turbines throughout their life cycle. For the 2.5-MW doubly fed asynchronous wind turbine, landfill increases life-cycle carbon emissions by 0.32%, whereas incineration increases them by 5.41%. For the 2.5-MW directly driven permanent magnet wind turbine, landfill increases life-cycle carbon emissions by 0.38%, whereas incineration increases them by 6.32%. The carbon footprint of landfill is therefore significantly lower than that of incineration.
(4) Metal recovery has a significant impact on the carbon footprint of wind turbines over their lifetime. Carbon emissions fall by 2.88% of the life-cycle total for every 10% increase in recovery rate for the 2.5-MW doubly fed asynchronous wind turbine, and by about 2.5% for the directly driven permanent magnet type. Increasing metal recovery rates can therefore effectively reduce carbon emissions.
(5) The carbon footprint of heavy-duty gasoline transportation of the two wind turbines is 20.9% lower than that of heavy-duty diesel transportation; heavy-duty gasoline transportation thus emits less carbon dioxide than heavy-duty diesel transportation.
| Outlook
This paper makes four recommendations based on a comparison of the carbon emissions of directly driven and doubly fed wind turbines over their entire life cycle, as well as investigations of energy and power enterprises.
(1) The 2.5-MW directly driven permanent magnet wind turbine emits 1.116 × 10⁶ kg of CO₂ over its entire life cycle, with a cumulative energy demand of 1.560 × 10⁷ MJ; the corresponding values for the 2.5-MW doubly fed asynchronous wind turbine are 1.371 × 10⁶ kg and 1.890 × 10⁷ MJ. The carbon emissions and cumulative energy demand of the directly driven turbine are thus 81.4% and 82.5% of those of the doubly fed turbine, respectively. The direct-drive permanent magnet wind turbine is therefore preferred in terms of energy conservation and emission reduction, and increasing its market share can help save energy and reduce emissions.
Figure 10: Comparison of carbon emission and energy payback time of the two types of wind turbine.
(2) Wind turbine production, operation, and transportation all contribute to carbon emissions and energy consumption over the entire life cycle, whereas recycling wind turbine metals during the waste disposal stage reduces both. The waste treatment methods chosen therefore significantly affect carbon emissions and energy demands throughout the life cycle of wind turbines. According to the sensitivity analysis, the carbon emissions produced by landfill are much lower than those produced by incineration, and no other harmful gases are produced. Wind turbine carbon emissions can also be reduced by increasing the metal recovery rate. It is therefore suggested that metal materials with higher recycling value be used in the manufacturing process, that the recovery rate of wind turbine metals be improved during the waste disposal stage, and that the remaining waste materials be landfilled.
(3) Manufacturers can optimize the wind turbine's design and manufacturing process. The total carbon emissions of the directly driven permanent magnet and doubly fed asynchronous wind turbines come primarily from the manufacturing stage, which accounts for 93.7% and 94.7% of the totals, respectively. Increasing energy efficiency and material utilization during manufacturing, while designing more lightweight and ecological wind turbines, is therefore effective in reducing carbon emissions and cumulative energy demands.
(4) Figure 5 shows that the tower accounts for the largest share of carbon emissions in the production stage, and the tower is composed mainly of steel. Adopting cleaner steel production methods, accelerating the transformation, energy conservation, and efficiency improvement of the steel industry, and reducing the carbon emissions and energy consumption of steel production can therefore indirectly promote energy conservation and emission reduction in the wind power industry. Furthermore, wind power manufacturers should actively seek more environmentally friendly materials to replace steel, as material selection plays a significant role in reducing carbon emissions from wind turbines.
AUTHOR CONTRIBUTIONS
All authors contributed to the study's conception and design. Material preparation, data collection, and analysis were performed by Zhi-Yu Zhuo, Meng-Jie Chen, and Xiu-Yu Li. The first draft of the manuscript was written by Zhi-Yu Zhuo and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. | 2023-02-19T16:13:25.156Z | 2023-02-17T00:00:00.000 | {
"year": 2023,
"sha1": "a9130c2389dbfbcb85d34e2706aee2190c3c65d4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/ese3.1425",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "de92680fdde84325485ae135a647e0789d0e9943",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
12442748 | pes2o/s2orc | v3-fos-license | Measuring Physician Quality and Efficiency in an Era of Practice Transformation: PCMH as a Case Study
Practicing physicians face myriad challenges as health care undergoes considerable transformation, including advancing efforts to measure and report on physician quality and efficiency, as well as the growth of new care models such as Accountable Care Organizations and patient-centered medical homes (PCMHs). How do these transformational forces relate to one another? How should practicing physicians focus and prioritize their improvement efforts? This Special Report examines how physicians’ performance on quality and efficiency measures may interact with delivery reforms, focusing on the PCMH. We note that although the PCMH is a promising model, published evidence is mixed. Using data and experience from a large commercial insurer’s performance transparency and PCMH programs, we further report that longitudinal analysis of UnitedHealthcare’s PCMH program experience has shown favorable changes; however, cross-sectional analysis indicates that National Committee for Quality Assurance’s PCMH designation is positively associated with achieving program Quality benchmarks, but negatively associated with program Efficiency benchmarks. This example illustrates some key issues for physicians in the current environment, and we provide suggestions for physicians and other stakeholders on understanding and acting on information from physician performance measurement programs.
INTRODUCTION
Practicing physicians face myriad challenges as health care undergoes considerable transformations. Among these transformations, efforts to analyze and report on physician-level measures of quality and efficiency are growing rapidly. Simultaneously, there is growth of new payment and delivery models such as Accountable Care Organizations and patient-centered medical homes (PCMHs). How do these forces of change relate to one another? Does the measurement and reporting of quality and efficiency facilitate practice transformation or impede it? Do practice transformations lead to better or worse scores on quality and efficiency of care, and if so, how? And what are the implications of these 2 related trends on physicians and their patients?
OVERVIEW OF PHYSICIAN QUALITY AND EFFICIENCY MEASUREMENT AND REPORTING
Following the mantra "you can't improve what you don't measure," both the public and the private sectors have accelerated programs to measure and publicly report on the quality and efficiency of physician services. Although a full description of such efforts is beyond the scope of this article, some of the more prominent initiatives include programs from the Centers for Medicare and Medicaid Services such as the Physician Quality Reporting System and the forthcoming Value-Based Payment Modifier program; local and regional measurement and reporting collaboratives such as Minnesota Community Measurement, the Aligning Forces for Quality supported by the Robert Wood Johnson Foundation, and the Network for Regional Health Improvement; and programs from private payers such as the Blue Cross and Blue Shield of Massachusetts Alternative Quality Contract and the UnitedHealth Premium Designation Program. 1 These measurement and reporting programs, while varying in their focus and approach, generally rely on claims-based measures of quality and efficiency because of the wide availability of claims data for analysis, the ability to achieve larger sample sizes, advances in analytic methods such as episode groupers, and lower administrative costs compared with measures requiring chart abstraction. And although the federal Meaningful Use program has provided incentives for physicians to adopt electronic health record (EHR) systems that can report on quality measures, issues of reliability, validity, and feasibility of EHR-based reporting continue to be substantial. 2 The stakes thus are high and growing greater for practicing physicians as they consider how they are being measured, as well as how to approach the emerging new models for care delivery and payment. Primary care physicians in particular need to focus on how the PCMH might relate to physician-level quality and efficiency measurement. The PCMH model includes primary care transformation using teams and proactive care plans; enhanced access and care coordination; and a systems-based approach to whole-person care. It also includes a new payment model: typically a blended payment program that includes fee-for-service payments, a care management fee that supports the enhanced services in the model, and performance-based bonuses or other enhanced reimbursement. 3 Many demonstration projects rely heavily on criteria developed by the National Committee for Quality Assurance (NCQA), which has developed a PCMH Recognition Program. As of May 1, 2013, more than 26,634 clinicians had achieved this recognition, which is based on specific standards in 6 areas: enhanced access and continuity; identification and management of patient populations; planning and management of care; provision of self-support and community resources; tracking and coordination of care; and measurement and improvement of performance. Practices meeting criteria can achieve 1 of 3 levels of recognition. 4 The NCQA criteria have evolved over time, with initial standards released in 2008 and updated in 2011 and 2014.
Evidence is mixed on how well the PCMH model works, however. In a small randomized controlled trial, the performance of 18 intervention practices, based on the NCQA Physician Practice Connections PCMH model, was compared with that of a control group of 14 practices, measured over a 2-year time period. 5 Practice performance was evaluated on 11 quality measures based on the Healthcare Effectiveness Data and Information Set, 10 efficiency indicators, and a panel of measures assessing cost of care. Relative to the control group, the intervention group showed modest improvement on a minority of quality and efficiency indicators and reduced emergency department visits, but no cost savings. A larger, 3-year multipayer medical home pilot project that provided financial incentives for achieving NCQA PCMH recognition reported improvement on only 1 of 11 quality measures, and no changes in utilization or costs when measured against those of comparison practices. 6 And a study that compared costs and utilization among Medicare beneficiaries in practices with and without PCMH recognition found lower total annual Medicare payments, levels of emergency department visits, and acute care hospital payments among PCMH practices, but no differences in hospital admissions or readmissions. 7 A systematic review by the Agency for Healthcare Research and Quality concluded that the PCMH is a promising, rapidly evolving innovation, but more well-designed studies that evaluate the full PCMH model are needed before drawing firmer conclusions as to the model's effectiveness. 8 Physicians therefore are faced with the challenge of dealing with multiple change initiatives occurring during a time of considerable uncertainty. To make this dilemma concrete: should physicians spend time and energy focusing on structural measures and process changes such as those embodied in NCQA PCMH recognition? Or should they focus on improving quality and efficiency measures from private payers that could affect their fee schedule, degree of participation in narrow networks, or patient volumes? Or should they focus on making sure they report and improve on measures from the Centers for Medicare and Medicaid Services to avoid reductions in fees from the Medicare fee-for-service program?
UnitedHealthcare Premium Designation
UnitedHealthcare, a large national health insurer, has been operating a large-scale program to measure and report on physician quality and efficiency performance since 2005. This program, UnitedHealth Premium Designation Program (hereafter, "Premium program"), uses claims and administrative data to assess quality and efficiency performance of physicians across multiple specialties. The program involves nearly 250,000 US physicians, operates in 41 states, and covers 21 medical specialties. The program's approach to measuring quality and efficiency illustrates how many claims-based measurement programs work.
The Premium program assesses the quality of care physicians provide using more than 300 measures across all specialties (172 for primary care), including those endorsed by the National Quality Forum, NCQA, and others that were developed by medical specialty societies or expert panels and reviewed by committees of practicing physicians. Physicians' performance on quality of care is assessed by identifying specific opportunities to provide evidence-based care, determining whether that care was provided during a given time period (1 to 3 years, depending on the measure), aggregating the successes and opportunities attributed to these successes across all eligible rules, and then comparing a physician's success rate with a benchmark.
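As a rough illustration of this kind of claims-based scoring, the sketch below pools invented success and opportunity counts across hypothetical measures and compares the pooled rate with a benchmark; the Premium program's actual attribution rules and statistical testing are more involved:

```python
# Hypothetical illustration of claims-based quality scoring: aggregate
# evidence-based care "opportunities" and "successes" across measures for
# one physician, then compare the pooled success rate with a benchmark.
# All numbers here are invented for illustration.

measures = {
    # measure id: (successes, opportunities)
    "diabetes_hba1c_testing": (42, 50),
    "statin_for_cad": (18, 25),
    "asthma_controller_med": (9, 12),
}
benchmark = 0.80  # hypothetical program benchmark

successes = sum(s for s, _ in measures.values())
opportunities = sum(o for _, o in measures.values())
rate = successes / opportunities

print(f"pooled success rate: {rate:.2%} "
      f"({'meets' if rate >= benchmark else 'below'} the {benchmark:.0%} benchmark)")
```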
The Premium program assesses efficiency by measuring case-mix and risk-adjusted costs using a data set that includes fee-for-service claims from the more than 20 million enrollees in UnitedHealthcare's commercial plans each year whose care is not paid via capitation. The data set contains actual costs incurred-that is, allowed costs rather than billed charges-for physician, hospital, pharmacy, and other services. Physicians are compared using the cost of patients (population-based measurement) and episodes of care (episode-based measurement) attributed to them and are measured against benchmarks after adjustment for risk and episode (disease) class, patient severity, physician specialty, geographic area, and patient pharmacy benefit status. More detailed information about the quality measures, rules used to attribute patients and episodes of care to physicians, and other methodologic details is available online. 9
UnitedHealthcare and PCMH: Longitudinal and Cross-Sectional Experience
UnitedHealthcare was an early proponent of the PCMH model and developed a number of medical home programs (single-payer and multipayer) across markets in the United States. UnitedHealthcare's experience has been largely favorable in these programs (which have relied on NCQA PCMH certification as a qualification) when measured longitudinally, demonstrating improvement in quality measures for preventive and chronic care, care coordination, access, and patient satisfaction, and savings of approximately 6.2% of medical costs on average. 10 We were, however, also interested in looking at PCMH performance on a cross-sectional basis to assess the model outside of a demonstration environment and to see if PCMH-recognized physicians have different levels of quality and efficiency performance as compared with other physicians. By leveraging the national scope of our data, we were able to perform one of the largest descriptive analyses to date of the PCMH model.
Using physician name and National Provider Identifier, we were able to match the Premium program physicians to NCQA PCMH-recognized physicians, resulting in a match of 17,343 unique physicians in primary care (internal medicine, family practice, or pediatrics). We compared this group with 17,323 primary care physicians in the Premium program data set who were not recognized as a PCMH by NCQA.
Looking at quality in this cross-sectional analysis, we found a positive association between achieving Quality in the Premium program and NCQA recognition status, with significantly higher odds of having PCMH recognition and of passing the Premium program Quality Designation compared with not passing this designation while having PCMH recognition. We also found, however, significantly lower odds that a physician who met both our Quality and Efficiency criteria would have PCMH recognition.
As it appeared that PCMH recognition was positively correlated with better Quality performance, but negatively correlated with combined Quality and Efficiency performance, we further analyzed the association between PCMH recognition and Efficiency using both χ² and logistic regression analyses. In these analyses, we found a negative association between meeting the Premium program Efficiency designation criteria and having PCMH recognition, with significantly lower odds that a Premium program Efficiency-designated physician had PCMH recognition than a physician not meeting the Efficiency criteria.
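For readers unfamiliar with this type of analysis, the sketch below runs an analogous logistic regression on simulated data (not the UnitedHealthcare data) with a built-in negative association; statsmodels is used here simply as a stand-in analysis tool, and no covariates such as specialty or region are modeled:

```python
# Sketch of a cross-sectional association test: logistic regression of PCMH
# recognition on Efficiency designation. The data are simulated with a
# negative association (efficiency lowers the odds of recognition).

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
efficiency = rng.integers(0, 2, n).astype(float)   # 1 = met Efficiency criteria
logit = -0.2 - 0.5 * efficiency                    # built-in negative effect
pcmh = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = sm.add_constant(efficiency)
fit = sm.Logit(pcmh, X).fit(disp=False)
print(fit.params)                        # negative coefficient on efficiency
print("odds ratio:", np.exp(fit.params[1]))  # < 1, i.e. lower odds of PCMH
```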
Implications for Physicians
The differences between these longitudinal and crosssectional views of physician performance and their relationship to PCMH recognition illustrate a number of issues facing physicians today. First, there is unlikely to be a single view of physician performance; rather, there are likely to be multiple views, with different levels of analysis, time frames, and methodologies. These views may or may not point in the same direction and yet may still be "correct." Second, it is important for physicians to understand and develop the appropriate actions based on the data, and endeavor to avoid both overreaction and underreaction. For example, in our analysis of the association between NCQA PCMH recognition and empirically measured quality and efficiency performance in the Premium program, we found that this recognition was positively associated with achieving Quality benchmarks, but negatively associated with achieving Efficiency benchmarks. Rather than just inferring that the PCMH model is "less efficient" or being overly alarmed that achieving PCMH recognition would negatively affect one's practice, physicians should keep several points in mind. It is possible that practices that have achieved NCQA PCMH recognition increase the use of underused services that, although helping them achieve better Quality results, increase comparative episode costs that make them appear to be less efficient. Additionally, it is possible that some PCMH practices are in fact less efficient, perhaps because of practice redesign or other factors that negatively affect workflows, as has been reported with other practice changes such as adoption of EHRs. 11,12 Third, as the Premium program uses allowed charges, it is possible that groups with higher fee schedules differentially seek or achieve NCQA recognition, with price variation being the major contributor to lower efficiency.
Lastly, it is possible that PCMH recognition is associated with other factors (such as more comprehensive data capture) that systematically enhance measured quality while reducing comparative episode efficiency. It is also important to keep in mind the emerging results from longitudinal analyses, which tend to consistently show improvement in quality performance, with more mixed results on efficiency.
In addition, any measurement endeavor will have intrinsic limitations. For example, the Premium program is based on data from a commercially insured population and may not be representative of other groups such as those with Medicare or Medicaid, or the uninsured. Also, the quality and efficiency measures in the Premium program do not capture all areas of clinical medicine and were not designed specifically to measure performance of the PCMH model. In particular, measures of expanded access, comprehensiveness of care, and care coordination, all of which are fundamental to high-quality primary care, may not be well captured by currently available quality measures. Finally, episode-based efficiency measurement has limitations, and although the Premium program measures both total population and attributed episode costs, other methods for measuring cost or efficiency could have different patterns of association with PCMH recognition.
KEY QUESTIONS AND ISSUES FOR PHYSICIANS BEING MEASURED
Just as the experienced physician in clinical practice incorporates multiple perspectives and applies triangulation to reach the best course of action in diagnosis or treatment under conditions of uncertainty, the same approach can be used in understanding and acting on the data from measurement programs.
Key issues physicians should focus on include the following:
CONCLUDING THOUGHTS
Even as EHRs, clinical registries, and other more fine-grained sources of data evolve and mature, claims-based measurement is here to stay for the foreseeable future. It is critical that practicing physicians develop a deeper understanding of how they are being measured. Perhaps the most noteworthy first step: be sure to look at your data and the methodology of the measurement program. If there are errors in the measures, reach out to correct them and help improve the measurement program. Then, use the data to work on meaningful improvement. Those who are doing the measurement, such as payers, need to continue to work to better align measurement approaches to achieve greater consistency, and work to develop common approaches across payers to achieve larger sample sizes. At all levels (the physician, the practice, and the system), major opportunities exist to improve quality and efficiency of care through measurement and improvement.
"year": 2015,
"sha1": "124711692dcf73a922afdbe895389eb6c0c69435",
"oa_license": null,
"oa_url": "http://www.annfammed.org/content/13/3/264.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "124711692dcf73a922afdbe895389eb6c0c69435",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
163475 | pes2o/s2orc | v3-fos-license | A switchable light-input, light-output system modelled and constructed in yeast
Background Advances in synthetic biology will require spatio-temporal regulation of biological processes in heterologous host cells. We develop a light-switchable, two-hybrid interaction in yeast, based upon the Arabidopsis proteins PHYTOCHROME A and FAR-RED ELONGATED HYPOCOTYL 1-LIKE. Light input to this regulatory module allows dynamic control of a light-emitting LUCIFERASE reporter gene, which we detect by real-time imaging of yeast colonies on solid media. Results The reversible activation of the phytochrome by red light, and its inactivation by far-red light, is retained. We use this quantitative readout to construct a mathematical model that matches the system's behaviour and predicts the molecular targets for future manipulation. Conclusion Our model, methods and materials together constitute a novel system for a eukaryotic host with the potential to convert a dynamic pattern of light input into a predictable gene expression response. This system could be applied for the regulation of genetic networks - both known and synthetic.
Background
Gene expression systems with both spatial and temporal regulation are key components of engineered and synthetic biological networks. Engineered systems generally use a controlled external stimulus to signal to a specific promoter element, producing a rapid and dose-dependent response [1]. The external stimulus, used at the level of both the whole organism and cell culture, has often been a small, cell permeable molecule, which functions as an activator for the corresponding promoters [2][3][4]. Heat shock gene promoter systems can also be utilised for conditional gene expression using heat or irradiation as the stimulus [5].
The yeast artificial light-switchable promoter system proposed by Shimizu-Sato et al. demonstrates many of the advantages of inducible systems, including low background expression, high inducibility, reversibility and dose-dependence [6]. It combines these desirable features with non-toxicity and a lack of the pleiotropic and unanticipated effects that are inherent to chemically inducible systems. This system is based on the properties of the plant phytochrome B photoreceptor (PhyB), which reversibly changes its conformation in response to red (λmax = 660 nm) or far-red light (λmax = 730 nm). The far-red light absorbing conformer (PhyB Pfr) binds to the phytochrome interacting factor 3 (PIF3) protein, whereas interaction between the red light absorbing conformer (PhyB Pr) and PIF3 is much less efficient [7]. In the proposed system, PhyB and PIF3 are expressed as chimeric proteins, fused to the DNA-binding (GBD) or the transcriptional activator (GAD) domain of the GAL4 transcription factor, respectively, giving a typical two-hybrid interaction assay. The cis component of the system is the lacZ reporter gene controlled by a GAL4-responsive artificial promoter. In darkness, PhyB-GBD binds the promoter, but does not induce transcription. Red light illumination converts PhyB into the Pfr form, thereby facilitating the PhyB-PIF3 interaction, which recruits PIF3-GAD to the GAL4-dependent promoter, resulting in the activation of transcription. Subsequent far-red light illumination converts PhyB Pfr to Pr, which is followed by the dissociation of the PhyB-GBD/PIF3-GAD complex and abrogation of transcription. The authors demonstrated the dose-dependent response of the system and the dynamics of photoreversible activation of the lacZ reporter gene, derived from quantitative liquid culture assays.
Recently, another genetically encoded signalling system based on PhyB -PIF3 interaction, with different chimeric proteins, has been successfully used for photoswitching of actin assembly through the Cdc42-WASP-Arp2/3 pathway in E.coli [8].
All phytochromes (PhyA-E) in the model plant Arabidopsis thaliana are capable of light-dependent conformational changes, but interacting proteins have only been investigated for the two most abundant phytochromes (PhyA and PhyB) [7,9,10]. FAR-RED ELONGATED HYPOCOTYL 1 (FHY1) and FHY1 LIKE (FHL) proteins control the nuclear import of PhyA via specific interactions with the Pfr conformer [11,12]. It follows that, besides the PhyB-PIF3 pair, other phytochrome-interacting protein combinations could be employed as the "light sensing" module of the expression system.
Functional phytochrome receptors consist of the apoprotein and the covalently linked chromophore called phytochromobilin. Since the chromophore is not synthesised in yeast, an analogous compound, phycocyanobilin (PCB), purified from cyanobacteria, is added to the media. PCB is taken up readily by yeast cells and is autoligated by phytochrome apoproteins resulting in photochemically functional phytochrome photoreceptors [13][14][15]. When expressed in yeast with PCB, PhyA behaves like other phytochrome receptors: the Pr ↔ Pfr conversion is controlled by red and far-red light [15][16][17].
The light switch described by Shimizu-Sato et al. translates light-dependent protein interactions into transcriptional regulation of a selected gene [6]. Beta-galactosidase is the most widely used reporter gene in yeast; however, the protein has a half-life of more than 20 hours in this system, and it can be detected in vitro only [18]. By comparison, the firefly luciferase has a 1.5 hour half-life in yeast, and luciferase activity (luminescence) can be monitored in real-time and in vivo, which makes this reporter a better tool for monitoring dynamic changes in transcription, as has been elegantly demonstrated recently through the monitoring of cell-cycle and respiratory oscillations in agitated liquid yeast culture [19,20].
Our aim was to create and mathematically model an inducible gene expression system, based on the principles described above, but containing novel components that provide more stringent regulation and in vivo real-time detection of transcription in yeast colonies on solid media.
Selection and testing of components for the light inducible expression system
Detection of promoter induction via beta-galactosidase activity is a well-characterised method in S. cerevisiae; however, it requires time-consuming sampling and in vitro analysis. In order to provide a real-time, in vivo detectable reporter in our system, the GAL4-responsive GAL1 promoter was fused to the firefly luciferase gene (GAL1:LUC) (Fig 1). Figure 1B shows the resulting gene circuit in the community standard Systems Biology Graphical Notation (SBGN) [21]. As a constitutive control, the ADH1:LUC (ALCOHOL DEHYDROGENASE I) construct was prepared and stably integrated into the genome. Yeast colonies prepared as described in Materials and Methods reached a steady state of luminescence 16-18 hr after luciferin was applied (Additional file 1). As expected, ADH1:LUC produced much higher light emission than GAL1:LUC independent of the GBD/GAD fusion proteins expressed (Additional files 1 and 2). Separate sets of patches were irradiated with red light (R) or far-red light (FR), or R immediately followed by FR (R/FR), or were kept in darkness. R light induced a rapid increase of luminescence in the case of yeast patches expressing GAL1:LUC, but not in the ADH1:LUC-expressing patches or in patches expressing GAL1:LUC without GBD/GAD fusion proteins (Fig 2 and Additional files 1 and 2). Luminescence reached a maximum 14-16 hr after the R light, followed by a slow decrease. In contrast, FR light alone induced very low levels of luciferase activity, which was essentially the same when R light treatments were followed immediately by FR light (Fig 2). Since the relative (fold) induction was highest in yeast cells having the GAL1:LUC reporter and expressing PHYA-GBD and FHY1-GAD (Fig 2A), this set of interacting proteins was used in further experiments. These results demonstrate that (i) appropriate LUC markers can be used to report phytochrome photoconversion and light-induced protein-protein interactions in our system; (ii) LUC enzyme activity is unaffected by light in yeast; and (iii) yeast patches grown on solid media and treated with luciferin represent stable and reliable experimental material for luminescence imaging.
Luciferase as a reporter for gene expression in yeast
In order to calculate changes in the rate of transcription from real-time luminescence data, it was necessary to determine the relationship between transcription and enzyme activity.
The copper-inducible CUP1 promoter was fused to the luciferase gene, expression was induced, and CUP1:LUC RNA and luciferase activity were assayed over a 7-hour time course. Figure 3 shows a 3-4 h delay in the induction of LUC activity relative to LUC mRNA expression. These data helped determine the kinetic parameters (mRNA half-life, translation rate) for the Luc reporter model.
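The observed lag is what a simple transcription-translation cascade predicts: a toy two-ODE model (illustrative rate constants, not the fitted values from this paper) shows the protein signal trailing the mRNA by a couple of hours:

```python
# Toy model of the mRNA -> protein delay seen in Fig 3: induced transcription
# feeds translation, and luciferase activity (protein level) lags the mRNA.
# Rate constants are illustrative only.

import numpy as np
from scipy.integrate import solve_ivp

k_tx, k_mdeg = 1.0, 1.5      # transcription on induction; mRNA decay (1/h)
k_tl, k_pdeg = 1.0, 0.35     # translation; protein decay (1/h, ~2 h half-life)

def rhs(t, y):
    m, prot = y
    return [k_tx - k_mdeg * m, k_tl * m - k_pdeg * prot]

sol = solve_ivp(rhs, (0.0, 7.0), [0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 7.0, 71)
m, prot = sol.sol(t)

# protein (activity) rises well after mRNA: compare times to half-maximum
t_half_m = t[np.argmax(m >= 0.5 * m.max())]
t_half_p = t[np.argmax(prot >= 0.5 * prot.max())]
print(f"mRNA half-rise at ~{t_half_m:.1f} h, protein at ~{t_half_p:.1f} h")
```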
An unidentified compound functions as a chromophore
Phytochromes are chromoproteins consisting of the apoprotein and a covalently linked, linear tetrapyrrole chromophore, phytochromobilin (PΦB) [22]. In the absence of chromophore, phytochromes cannot absorb light, do not show light dependent conformation changes and, therefore, do not function as photoreceptors. Phytochrome apoproteins are synthesised in the Pr form in plants and after autoligation of PΦB are capable of light absorption and photoconversion into the Pfr conformer.
Figure 2: Light-responsive gene promoter system. Yeast cells expressing the indicated GBD/GAD fusion protein-pairs, including PHYBNT-GBD/GAD-PIF3 (C), were grown in darkness to form patches (merged colonies) for two days at 30°C, treated with 2.5 mM luciferin and transferred to 22°C for 17.5 h. Separate yeast patches were irradiated with single red (R) or far-red (FR) light pulses, or with red pulses immediately followed by far-red pulses (R/FR), or were kept in darkness (Dark). Luminescence values normalised to the pre-pulse levels are shown; time 0 h is the start of the light treatment. The luciferin pretreatment is shown in Additional file 1. E: Selected luminescent images of yeast patches used to obtain data in panel A. I: red light-induced, NI: non-induced dark control, T0: last images before the light pulse. Consecutive luminescent images taken every two hours are shown. Pictures are displayed in pseudo-colors: red-white or blue-black colors indicate high or low expression levels, respectively.
Cyanobacteria (Synechococcus and Synechocystis sp.) harbour phytochrome-like photoreceptors, which use a chromophore (PCB) with similar structure to that of PΦB [23]. Plant phytochromes binding PCB are fully functional photoreceptors and, because of the relative ease of PCB purification it is generally used as an exogenously added chromophore [24]. However, it was unclear whether yeast cultures contained chromophore-like compounds that could serve as the chromophore for plant phytochromes. To test this, yeast cells with the GAL1: LUC reporter and expressing PHYA-GBD and FHY1-GAD were grown on media lacking PCB and the same light treatments were administered as in Fig 2A. To our surprise, significant R induction was detected in the absence of PCB (Fig 4). The fold-induction was reduced to 30% compared to the results with added PCB (Fig 4 vs. Fig 2A). Moreover, Figure 4 shows that FR light alone, or R followed by FR light also gave qualitatively very similar results to the photoreceptor with PCB. The basal expression level of GAL1:LUC was not affected significantly by the presence or absence of PCB (Additional file 2). These results demonstrate that an unidentified compound naturally present in yeast, can serve as a chromophore for phytochromes expressed in this heterologous system.
Model assumptions and structure
To use our regulatory system for synthetic biology we developed an ordinary differential equation (ODE) model of its function based on kinetic data from the literature and experimentally determined parameter values ( Fig 1A, B and Additional file 3). The model describes all the known phytochrome properties (e.g. photoconversion, dark reversion, sequestration, etc), using yeast phytochrome data to provide a realistic description of the light-switch function (for the detailed model description and structure, see Methods).
In summary, the model assumptions are: 1) Overall concentrations of Phys and PIF3/FHY1/FHL are constant. 2) Before the light impulse all the phytochromes are in the inactive (Pr) form and sequestered in a slow-acting pool; this might be related to their inclusion in SAP (sequestered areas of phytochrome)-like structures similar to those observed by microscopy in the cytosol [15] (see Methods for more details).
3) The dark reversion rate is the same for the free Phy and the Phy-FHY1/FHL complex. 5) The luciferase-luciferin subsystem is approximated as a steady state before light treatments. 6) The initial sharp decrease in luminescence following the application of luciferin is due to the diffusion of luciferin from the point of application on the yeast patch into the agar medium.
These assumptions enable the model to provide a good fit to conceptually similar systems with different interaction partners. For example, the adapted model fits Shimizu-Sato's data with good accuracy using parameter values derived from the literature [6,7,16,25]. That model is simpler, mainly because the slow LacZ degradation obscures the long-term kinetics (see the model equations in Methods and simulation results in Additional file 4).
Figure 3: Kinetics of induction of luciferase mRNA and luciferase activity from the CUP1:LUC reporter gene. Yeast cells harboring the CUP1:LUC construct were grown in liquid rich media overnight at 30°C. CUP1:LUC expression was induced by 1.5 mM copper-sulphate (final concentration) at time = 0 and aliquots were harvested hourly. The samples were used to prepare crude protein extracts and to isolate total RNA. Luciferase activity was measured by in vitro assays and luciferase mRNA was determined by qRT-PCR reactions.
Figure 4: Switching without PCB. Yeast cells harboring the GAL1:LUC reporter and expressing PHYA-GBD and GAD-FHY1 fusion proteins were grown in darkness at 30°C for two days on media without PCB. Luciferin and light treatments and imaging were performed as in Fig. 2A.
To account for the initial difference in cell density for each yeast patch that affects luminescence intensity ( Fig 5) and for the non-uniformity of the solid media, the initial conditions were set for each patch (each experiment) individually, so that each experimental curve is considered for two regimes, "diffusion" and "phytochrome". The former starts from the application of luciferin with initially decreasing luminescence, which approaches an approximate steady state after 17-18 hours. The latter begins from the light treatment and continues to the end ( Fig 6A). The first regime fits separately to the diffusion part of the model and provides the initial luciferin level at the time of light application. Such decomposition of the initial conditions is introduced to describe the temporally changing substrate availability that emerges from the solid culture conditions. Modelling the solid media allows a much wider variety of experimental applications in conditions that are most common in yeast and synthetic biology. The diffusion coefficient in our experiments corresponds to diffusion rate of 3 to 10 mm/h, which is in good agreement with literature data for agar gel [26].
Parameter values for the model equations were obtained by fitting the model to all the time-series data from luciferase imaging (Fig 6A, B), within the parameter ranges derived from the literature (Table 1). The parameter data for the luciferase protein degradation rate was supported by additional experiments using cycloheximide treatment of yeast cultures constitutively expressing the luciferase gene (data not shown). The degradation rate constant is estimated to be 0.2-0.8 h⁻¹, which corresponds to a 0.8-3 hour half-life. This is similar to the value measured in yeast and mammalian cell cultures and plants [27][28][29].
Model predictions
The refined model both captures the qualitative dynamics and enables a quantitative description of the light-switching behaviour. Moreover, it allows us to deduce which parameters are critical for particular behaviours of the system. By varying these parameters we showed that the predicted intermediate state during the photoconversion of Pfr_FHL is crucial to match the slow switching off of the observed LUC expression (Fig 7A). The biochemical nature of this state, as well as its experimental measurement, is the subject of further experiments. According to the model simulation (Fig 7B, C), shortening the reporter protein half-life does not affect the longevity of the reversal of transcription activation but significantly reduces the intensity of the luminescence. The light-switch model also gives several predictions about long-term system behaviour (Fig 8A). In particular, based on 50 h of experimental data, we predicted that under the experimental conditions considered, complete removal of the transcriptional activation effect should take a relatively long time (100 hours), and this has been confirmed by experiments (data not shown). Furthermore, with the given dynamics, we can manipulate subsequent applications of R and FR to achieve a wide range of desirable profiles of transcription activation (Fig 8B, C, and D). Fig 8B illustrates the different types of light-input behaviour, which depend on the interval between R and FR treatments. The simulations show that a small interval (2 min) between R and FR causes the transcription rate, and accordingly the luminescence intensity, to increase with time, whereas a longer interval (5 hours) produces a stable baseline of input oscillations. On the basis of these simulations one can create a specific protocol of light input, combining the given modes as required, to obtain a "square" shape (Fig 8C) with two modes of light regime, or a "sigmoid" shape (Fig 8D) with three modes. The overall system can thus be used as a tool for designing experiments with flexible perturbations, for example by changing the time intervals between light treatments.
Discussion
We developed a photo-regulatory genetic switch for yeast cells that combines several desirable properties. In addition to the widely recognised interacting pair PhyB-PIF3 we have tested other possible protein combinations. We found that the PhyA-FHY1 (and PhyA-FHL) pair provides higher induction level with lower background than that of the PhyB-PIF3 pair in our experimental conditions.
Previous experiments were carried out using agitated liquid yeast cultures at 30°C [6]. We used yeast colonies grown on solid media, because this setup facilitates light treatments and continuous monitoring of luminescence, and potentially allows spatial patterning of light input and biological response. Our experimental system represents a reliable, reproducible and simple setup for the investigation of dynamic transcription.
Phytochrome photoperception in yeast has previously been reported with the addition of an exogenous chromophore (PCB) [6,11]. Our system also showed light responsiveness without exogenous chromophore, albeit at a lower level (Fig 4). We propose that phytochromes can employ an unidentified compound from yeast as a chromophore, but the light-absorbing efficiency of the constituted receptor is less than that of the holoprotein binding PCB. As a result, R treatment induces a reduced amount of phytochrome Pfr, which results in less efficient induction of transcription. The heterologous chromophore may be specific for some yeast strains, or its weak activating effect could have been difficult to detect using previously employed reporter genes.
We developed a mathematical model that describes the system and fits the experimental data with great accuracy. The model incorporated experimental variability arising from the cultures on solid media via substrate diffusion, which corresponds to observations and sets the initial substrate levels. The model fits the longer intervals (at least 1 hour between R and FR treatments) better than the shorter treatment intervals, where FR is given immediately or 30 min after R (Fig 6B). It can be seen from the time series that there is only a small quantitative difference between immediate and 30-min-delayed FR, while after a 1 hour delay the shape of the response resembles that of a single R treatment, differing only in amplitude. This qualitative shift in the system behaviour requires further analysis, which may shed light on the mechanism of transcription activation by R light and inactivation by FR light in the system. We found that the kinetics of induction were slower in our conditions compared to previously reported experiments [6]. For our tests, yeast patches were grown at 30°C for two days, and then, due to technical issues, the plates were moved to 22°C in the imaging chamber, so the effect of light treatments was investigated at 22°C. We found evidence that the system responds more quickly at 30°C (data not shown), but a complete explanation requires further investigation.
Figure 6: Time course of luciferase luminescence intensity in different light conditions.
Our system does not display an instantaneous shutting off of target gene expression. It takes a substantial period of time to completely remove the Luc signal after FR treatment. Modelling suggests that this is not simply due to stability of the Luc reporter (see Fig 7B, C), but rather reflects persistent PhyA activity. It should be noted that far-red exposure does not convert all the active PhyA into Pr form, but by itself produces about 3% of Pfr form [30]. Additionally, we propose a residual physical interaction between the Pr form of PhyA and FHY1/FHL as a possible explanation for the slow kinetics. This was supported with model simulations that correspond to the experimental kinetics. However, the properties of the intermediate state remain to be determined.
Conclusion
The current work initially aimed to create a system providing well-defined, light-induced perturbations in transcription to a genetic oscillatory circuit, to effect entrainment of the oscillations to a rhythmic light regime. The light-switchable system presented here meets the requirements for an entrainment tool; moreover, the mathematical model will facilitate the design of any desired entrainment mode. Hence, the light switch together with the corresponding model provides a powerful tool for regularly perturbing any gene system of interest with a predictable amplitude and period. Moreover, with spatially-patterned light inputs, such as images, the system would allow spatio-temporal regulation, which could facilitate a greater understanding of biological processes in which inter-cellular communication is involved.
Constructs, yeast strains and growth conditions
Plasmids expressing PHYA-GBD, PHYBNT-GBD, GAD-PIF3, GAD-FHY1 and GAD-FHL fusion proteins have been described [6,11,12]. PHYBNT corresponds to an N-terminal fragment of PHYB containing residues 1-621. GAL1, CUP1 and ADH1 promoters containing full 5' untranslated regions and the 3' untranslated region (terminator) of the GAL2 and ADH1 gene were amplified from S. cerevisiae PJ69-4A genomic DNA using the following primers: GAL1 Fwd: 5'-AAAGTCGACATTACCACCATATACATATCC-3'. The promoter:luciferase-terminator constructs were assembled in pBluescript SK plasmid using the restriction sites designed for the PCR primers (sites are underlined in the sequences of the primers above). All plasmids were transformed into E. coli by the SEM method and cultured under standard conditions [31]. The GAL2 or the ADH1 terminator was used for the GAL1, CUP1:LUC or the ADH1:LUC construct, respectively. The constructs were verified by sequencing and re-cloned in the integrating plasmid pδ-UB [32]. The final clones were linearized with XhoI and transformed into yeast strain PJ69-4A by the standard LiAc/carrier DNA/PEG protocol. Transformants were plated on synthetic dropout media (SD) without uracil, SD(-U). Selected strains carrying the GAL1:LUC construct were co-transformed with plasmids pD153 or pGADT7 (Clontech) expressing GBD- or GAD-fusion proteins, respectively [6]. Transformants were selected and maintained on SD(-LW) plates. Preparation of media and transformation of yeast cells was done according to the Clontech Yeast Protocols Handbook (Clontech).
Figure 8: Model simulation and predictions.
In vivo luminescence imaging and light treatments
2 ml of selective SD media was inoculated with yeast cells and incubated for 16 hr at 30°C with agitation. 20 μl drops of the cultures were transferred to SD agar plates containing 10 μM PCB, irradiated with far-red light at 70 μmol m⁻² s⁻¹ fluence rate for 10 min and incubated for 48 hr at 30°C in darkness. All further manipulations were conducted under green safety light. Yeast cells formed merged colonies (or patches) of 5-8 mm diameter. 20 μl of 2.5 mM luciferin solution was pipetted at the center of each patch and the plates were transferred to the imaging chamber at 22°C. Images were taken every 15 minutes using a liquid nitrogen-cooled CCD camera (Visitron Systems GmbH, Munich, Germany). Luminescence was quantified using the Metamorph software (Molecular Devices, Downingtown, PA). Unless stated otherwise, light treatments were administered 17-18 hr after the application of luciferin. The duration of each light treatment was 10 min and the fluence rate of light (independent of wavelength) was 70 μmol m⁻² s⁻¹. Red and far-red light was provided by Snap-Lite LED modules (Quantum Devices, Barneveld, WI).
Induction of CUP1:LUC expression, qRT-PCR and in vitro luciferase assays
Yeast cells carrying the CUP1:LUC construct were inoculated in 10 ml of SD(-U) media and were grown for 16 hr at 30°C with agitation. The starter cultures were diluted to a final volume of 100 ml with fresh SD(-U) media. CUP1:LUC expression was induced by adding CuSO₄ solution to a final concentration of 1.5 mM. Samples were harvested hourly from induced and non-induced cultures. 2 ml or 100 μl of the cultures were pelleted and frozen for RNA quantification or for luciferase assays, respectively. Total RNA was isolated by using the RNeasy Plant Mini Kit (QIAGEN) according to the manufacturer's instructions. cDNA synthesis and qRT-PCR was performed as described [33]. Primers for qRT-PCR were: Luciferase-specific signals were normalised to ACTIN 1 (ACT1) levels for each sample. For in vitro luciferase activity measurements, frozen cell pellets were re-suspended in 100 μl of Cell Culture Lysis Buffer (Promega), and vigorously vortexed. After incubation on ice for 5 min, cell debris was pelleted by centrifugation and the supernatant was used as crude protein extract. 20 μl of protein extracts was mixed with 30 μl of the Steady-Glo Luciferase Assay Reagent (Promega) in the wells of a microtiter plate and luminescence was measured in the TopCount NXT luminometer (Perkin-Elmer) for an hour after the addition of the reagent. Counts during monitoring were averaged and normalized to total protein content of the extracts. Protein concentrations were determined by the Bradford assay [34].
Model description and structure
Our principal model system (Fig 1A, B) includes two chimeric proteins: phytochrome fused to the GAL4 DNAbinding domain (Phy_GBD), in the active (Pfr) and inactive (Pr) forms, and binding protein PIF3 (or FHL/FHY1) fused to the GAL4 activation domain (FHL_GAD). According to existing experimental data, the recombinant phytochromes are quite stable in yeast. Although the light lability of plant PhyA Pfr is well-described, no detectable difference was observed between the stability of the Pfr and Pr forms of oat PhyA over an 80 hour time period in yeast [24]; moreover, no significant decay in the total PhyA and PhyB amounts over 120 hours was reported [35,15]. This provides the basis for assuming that our model proteins are present constitutively, so neither production nor degradation occurs in the model system. Two pools, Pool_Phy and Pool_PIF3, fulfil the mass conservation laws for Phy and PIF3.
In plants, the Pr forms of phytochromes are localized in the cytoplasm in the dark and are translocated to the nucleus in their Pfr form after light absorption [36]. In the yeast system, however, all fusion proteins are constitutively nuclear-localized due to the natural Nuclear Localisation Sequence (NLS) present in the GBD tag or the presence of the SV40 NLS motif fused to the GAD fusion partner. Therefore, in this system the only light-dependent event is the interaction of phytochromes with their corresponding protein partners. Taken together, these details give us reason to locate the interacting proteins and the processes of association and dissociation in the nucleus.
The non-instantaneous kinetics of induction (Fig 2) prompted us to propose the existence of two phytochrome pools: slow and fast. It has been reported that the sequestration of recombinant PhyA into cytosolic SAPs (sequestered areas of phytochrome) in yeast is independent of light [15]. We therefore propose the presence of sequestered and free Phy pools (less and more easily accessible, respectively) in the nucleus, with a reversible interchange between them. We assume that only the free pool is available for binding to its interaction partner, and thus the transition between the slow (sequestered) and fast (free) pools is responsible for the shape of the initial light response.
It is well known that the phytochrome photoconversion cross-section (σ) for the Pr and Pfr forms depends on the wavelength of light. Red (approximately 660 nm) and far-red (approximately 730 nm) light are the most effective for the Pr → Pfr and Pfr → Pr photoconversions, respectively. Nevertheless, it is evident from the cross-section data that the absorption spectra of the Pr and Pfr forms of Phy overlap significantly [30]. This means that monochromatic light of a biologically relevant wavelength (i.e. red) does not convert all the Phy to the Pr or Pfr form, but rather determines a specific distribution ratio of the forms in the total Phy pool. We thus have to account for the activation and inactivation of phytochrome by both red and far-red light, so that:

Ka = σ_Pr(R)·N_R + σ_Pr(FR)·N_FR

Ki = σ_Pfr(R)·N_R + σ_Pfr(FR)·N_FR

where σ_Pr and σ_Pfr are the photoconversion cross-sections of the two forms at the given wavelength and N is the photon fluence rate. Exact values of Ka and Ki for the different wavelengths were adopted from [30]. In the model, Pr ↔ Pfr transitions are applied to both the associated and free forms of the phytochromes.
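One consequence of these rate definitions is that the photoequilibrium Pfr fraction, Ka/(Ka + Ki), is independent of the fluence rate; the snippet below illustrates this with stand-in relative cross-sections (the tabulated values are in ref. 30):

```python
# Photoequilibrium under monochromatic light: with Ka = sigma_Pr * N and
# Ki = sigma_Pfr * N, the steady-state Pfr fraction is Ka / (Ka + Ki),
# and the fluence rate N cancels. The relative cross-sections below are
# illustrative stand-ins for the tabulated values in ref. 30.

cross_sections = {
    # wavelength: (sigma_Pr, sigma_Pfr), arbitrary relative units
    "red (660 nm)": (1.000, 0.150),
    "far-red (730 nm)": (0.003, 0.100),
}

for light, (s_pr, s_pfr) in cross_sections.items():
    pfr_fraction = s_pr / (s_pr + s_pfr)
    print(f"{light}: Pfr/Ptot at photoequilibrium = {pfr_fraction:.2f}")
# red gives a high Pfr fraction (~0.87 in the literature), far-red a low
# one (~0.03, consistent with the Discussion above).
```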
Dark reversion has been reported for PhyA and PhyB in yeast cultures [15,35,13]. According to these data, only a fraction of the total Pfr pool is subject to dark reversion (20-40% of the total amount) with a half-life of 20-40 min. For simplicity, in the current model we assume a single Pfr pool that is dark reversible and has a longer half-life than the range suggested by Hennig et al. [13]; however, the model is still in good agreement with the overall kinetics described in the literature [35,15].
We assume that dark reversion of the complexes Pfr_PIF3 (and Pfr_FHY1, Pfr_FHL) occurs at the same rate. Therefore, both the photoconversion and dark reversion processes contribute to dissociation of the transcriptional activation complex. Finally, for the PhyA_FHY1/FHL complexes, we assume the existence of an additional state, Pr_FHY1/FHL, which can activate transcription to some extent, as previously demonstrated [11]. Although that reference concerns PhyA, in our experimental conditions PhyB demonstrated the same kinetics (Fig 2C), so we assume the intermediate state for PhyB-PIF3 as well. According to our hypothesis, this complex is produced as an intermediate product of photoconversion of the Pfr_FHY1/FHL complex after FR exposure. Thus, we propose that Pr proteins that have previously been Pfr can interact with FHY1/FHL and activate transcription.
Mass Action kinetics were used to describe complex formation and dissociation, translocation, translation, and degradation. Transcription was described with a Hill function, and the reporter enzymatic reaction follows Michaelis-Menten kinetics (see Fig 1B).
The equations (1)-(5) describe changes in concentrations of all the phytochrome components, while (6) and (7) correspond to changes in concentrations of luciferase mRNA and protein, respectively.
Luminescence level is calculated according to the Michaelis-Menten equation:

$$\mathrm{Lum} = RLU \cdot \frac{k_{cat}\,[\mathrm{Luc}]\,S}{K_M + S}$$

Light emission is measured in terms of Relative Light Units (RLU) per second, and this corresponds to the rate of the light-emission reaction for the colony [28]. The parameter RLU is a conversion factor that translates the number of moles of luciferin reacted into the RLU measurement of the instrument. It also accounts for discrepancies in colony size (Fig 5), growth rate, and instrument characteristics.
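As a minimal illustration of the reporter equation, the sketch below computes the RLU signal; the kinetic constants are illustrative placeholders, not values from the paper.

```python
# Sketch of the reporter output: Michaelis-Menten turnover of luciferin
# scaled by the instrument/colony conversion factor RLU.
def luminescence(luc, s, rlu=1.0e9, kcat=1.6, km=2.0e-6):
    """Signal in relative light units per second.

    luc  -- luciferase concentration (M)
    s    -- cytosolic luciferin concentration (M)
    rlu  -- conversion factor, moles of luciferin reacted -> instrument RLU
    kcat -- luciferase turnover number (1/s), assumed value
    km   -- Michaelis constant for luciferin (M), assumed value
    """
    return rlu * kcat * luc * s / (km + s)

print(luminescence(luc=1e-8, s=5e-6))
```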
'Diffusion' part of the model
Our experimental setup involves the application of a relatively small amount of luciferin substrate (20 μl) to a yeast patch growing in a 100-mm diameter plate on an agar gel of 5-7 mm thickness. We assumed that the initial decrease in luminescence level just after luciferin application predominantly resulted from the diffusion of substrate through the gel. This was confirmed by an additional experiment (Additional file 5). Taking into account that the thickness of the gel is much smaller than the diameter of the plate, we assumed that the diffusion of luciferin could be described with the diffusion equation in polar cylindrical coordinates:

$$\frac{\partial S}{\partial t} = D\left(\frac{\partial^2 S}{\partial r^2} + \frac{1}{r}\frac{\partial S}{\partial r}\right)$$

The particular solution of the form

$$S(r_0, t) = A + \frac{B}{t}\exp\!\left(-\frac{r_0^2}{4Dt}\right)$$

was found to fit the experimental data with the best accuracy.
Here, S is the cytosolic luciferin concentration, D is the diffusion coefficient, r0 is the effective colony radius, A and B are constants of integration.
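Under the reconstructed solution form above, the 'diffusion' parameters can be estimated by ordinary least squares. The sketch below assumes that form and uses synthetic stand-in data; D, r0, A, and B are the free parameters.

```python
# Fitting the early-time decay with the particular solution of the radial
# diffusion equation; the data arrays are placeholders for measurements.
import numpy as np
from scipy.optimize import curve_fit

def substrate(t, A, B, D, r0):
    # S(r0, t): fundamental solution of 2-D diffusion plus a constant offset
    return A + (B / t) * np.exp(-r0**2 / (4.0 * D * t))

t_data = np.linspace(1.0, 60.0, 60)                 # minutes after luciferin drop
lum_data = substrate(t_data, 0.1, 5.0, 0.02, 0.5)   # stand-in for measurements

popt, _ = curve_fit(substrate, t_data, lum_data, p0=[0.1, 1.0, 0.01, 0.5])
print(dict(zip("A B D r0".split(), popt)))
```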
PHYB-PIF3 Model Reactions (via Shimizu-Sato's system)
The model for the Shimizu-Sato system has a similar structure but differs in the reporter (LacZ). This model lacks a description of reporter-protein kinetics because the LacZ protein is stable and the timescale investigated in the paper (2 h) is relatively short (see Additional file 4 for the data and model simulation).
Estimation of photoconversion rates
To estimate the photoconversion rates, we used the data for the photoconversion cross-sections of Pr and Pfr and the Pfr/P ratios at photoequilibrium of type-I phytochrome [30].
Fitting to experimental results
The model was developed in SBTOOLBOX2 for MATLAB and fitted with a particle-swarm optimisation algorithm from the SBPD package in SBTOOLBOX2 [37,38]. Experiments were designed to cover all possible states of the system that have to be addressed in the model. We started by fitting the model to simple experimental protocols, comprising dark conditions and red-light application with or without immediate subsequent far-red application (Fig 6A). Dark experiments taken separately provided us with parameter values for the luciferase system (see Table 1), namely the degradation and translation rates, which were fixed during the subsequent optimization procedure. Light-response parameter values were estimated from the R and R-FR experiments. For that, the model was simultaneously fitted to five sets of ON-OFF experiments, each containing seven experiments: R, dark, and five combinations of R followed by FR with intervals of 0 h, 0.5 h, 1 h, 3 h, and 9 h (Fig 6B). Thus, a total of 35 timeseries (each of 210-360 timepoints) were fitted simultaneously. The fitting results demonstrate good accuracy (see Fig 6A, B), with a root mean square deviation of 1.9×10^-3.
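The fitting procedure can be illustrated with a bare-bones particle-swarm optimizer. This is a generic sketch, not the SBPD implementation actually used; the cost function here is a toy stand-in for the combined RMSD over the fitted timeseries, and all hyperparameters (swarm size, inertia, acceleration constants) are assumed.

```python
# Minimal particle-swarm optimisation over box-constrained parameters.
import numpy as np

def pso(cost, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy example: recover parameters of an exponential-decay "timeseries".
t = np.linspace(0, 10, 50)
data = 2.0 * np.exp(-0.3 * t)
rmsd = lambda p: np.sqrt(np.mean((p[0] * np.exp(-p[1] * t) - data) ** 2))
best, err = pso(rmsd, lo=np.array([0.0, 0.0]), hi=np.array([5.0, 2.0]))
print(best, err)
```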
As we aimed to account for the increased variability arising from solid-culture conditions, our model parameters include members that are specific to each experiment. First of all, this relates to the parameters of the 'diffusion' section (D, r0, A and B), as they establish the initial conditions at the time of light treatment. Secondly, the parameter RLU, which accounts for variability in colony size and growth rate, also has to be fitted locally | 2017-06-29T17:39:08.501Z | 2009-09-17T00:00:00.000 | {
"year": 2009,
"sha1": "618d1e4c45ce78a052f5d057171ed391ea1f37cd",
"oa_license": "CCBY",
"oa_url": "https://jbioleng.biomedcentral.com/track/pdf/10.1186/1754-1611-3-15",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83e7f9ab86e441606f64d6f3acc97ee315565c96",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
259885566 | pes2o/s2orc | v3-fos-license | The Oxygen Evolution Reaction Drives Passivity Breakdown for Ni–Cr–Mo Alloys
Corrosion is the main factor limiting the lifetime of metallic materials, and a fundamental understanding of the governing mechanism and surface processes is difficult to achieve since the thin oxide films at the metal–liquid interface governing passivity are notoriously challenging to study. In this work, a combination of synchrotron‐based techniques and electrochemical methods is used to investigate the passive film breakdown of a Ni–Cr–Mo alloy, which is used in many industrial applications. This alloy is found to be active toward the oxygen evolution reaction (OER), and the OER onset coincides with the loss of passivity and severe metal dissolution. The OER mechanism involves the oxidation of Mo4+ sites in the oxide film to Mo6+ that can be dissolved, which results in passivity breakdown. This is fundamentally different from typical transpassive breakdown of Cr‐containing alloys, where Cr6+ is postulated to be dissolved at high anodic potentials, which is not observed here. At high current densities, OER also leads to acidification of the solution near the surface, further triggering metal dissolution. The OER plays an important role in the mechanism of passivity breakdown of Ni–Cr–Mo alloys due to their catalytic activity, and this effect needs to be considered when studying the corrosion of catalytically active alloys.
Introduction
Metallic materials are essential to our modern society as structural materials in most industries, infrastructure, energy, and transportation sectors, and they are desired for their high strength, ductility, thermal and electrical conductivity, and amenability to machining and forming. The main limiting factor of metallic materials is material degradation due to corrosion in aqueous media, [1] resulting in global annual costs of over 3% of industrialized nations' gross domestic product, [2,3] causing a significant environmental footprint and even disasters. [4,5] However, many metals and alloys exhibit a phenomenon called passivity, due to the spontaneous formation of a few-nanometer-thin and continuous passive oxide film on the surface, which greatly reduces the corrosion rate so that the material can be used in corrosive environments for extended periods of time. [1] Passivity has been described by the point defect model (PDM), which considers the electrochemical and chemical reactions at the metal/oxide and oxide/electrolyte interfaces, the generation and annihilation of point defects, and the ionic transport across the oxide film. [20] Ni is commonly alloyed with Cr and Mo, where Cr is known to increase the general corrosion resistance due to the formation of a Cr2O3 oxide film on the surface, [20][21][22][23][24] while the role of Mo is debated. Reports suggest that Mo suppresses the dissolution of Cr, [20,23,24] stabilizes the passive film in the case of the local corrosion attack known as pitting corrosion, [19,25,26] favors Cr2O3 formation, [27][28][29] and has beneficial properties when it comes to re-passivation of the surface. [26,30,31] Ni alloys are less extensively studied than stainless steel, and there is still a lack of understanding of their corrosion mechanisms. Commonly used electrochemical techniques for studying passivity and corrosion of stainless steel may not be directly applicable to Ni-based alloys because the measured electrochemical current is not only due to corrosion reactions. [32] Surface-analytical and electrochemical techniques have been applied, providing valuable information on the electrochemical behavior, the passive film, and metal dissolution. However, results obtained from separate experiments do not always correlate well with each other, particularly due to very different experimental conditions. For example, in commonly used XPS measurements, the sample is placed under ultrahigh vacuum (UHV) conditions. Hence, the correlation of XPS results with experiments performed in electrolyte or ambient conditions is not straightforward. A fundamental and complete understanding is also missing regarding how the chemistry and structure of the passive film evolve in realistic aqueous conditions and how that correlates with the onset of dissolution, which determines the breakdown of passivity.
In this work, we have carried out a comprehensive in situ study where we combine several synchrotron techniques to characterize the surface region of a Ni-Cr-Mo alloy immersed in NaCl electrolyte during electrochemical polarization at stepwise increased anodic potentials, as shown in Figure 1. X-ray reflectivity (XRR) and ambient-pressure XPS (AP-XPS) were used to investigate the thickness and chemistry of the passive film. Grazing-incidence X-ray diffraction (GI-XRD) was used to determine the change in the metal lattice underneath the oxide film. X-ray fluorescence (XRF) was used to quantify the concentration of dissolved elements in the electrolyte. X-ray absorption near-edge structure (XANES) was used to study the chemical state of the species dissolved into the electrolyte and the chemical state of corrosion products formed on the surface. The XRR, XRF, and GI-XRD were integrated into one setup, while XANES and AP-XPS were measured in separate experiments. Combining these techniques allowed us to study the corrosion process, detect the passivity breakdown in situ, and correlate it with the onset of the OER from data measured in NaCl electrolytes of both pH 7 and pH 2.
Results
Figure 2 shows the electrochemical behavior of the Ni-Cr-Mo alloy in a 1 m NaCl solution at pH 7 and pH 2 to highlight the presence of OER. As seen in Figure 2a, the alloy exhibits a stable passive range with low current densities in the range of μA cm−2, up to ≈600 mV vs Ag/AgCl at pH 7 and ≈900 mV at pH 2, respectively. At higher potentials, the current density increases, appearing like passivity breakdown. The thermodynamic onset for OER at pH 7 and pH 2, calculated with the Nernst equation E_red = 1023 − 59·pH (mV vs Ag/AgCl), is also marked in Figure 2a. The values, 610 mV at pH 7 and 905 mV at pH 2, agree well with the potentials at which the current increases, suggesting that the OER takes place. Even though Cl− ions are present in high concentration, the chlorine evolution reaction (CER) can be neglected since its standard potential is 1161 mV vs Ag/AgCl, which is outside the potential range used in Figure 2. Also, the onset potential of CER is not pH dependent, since protons do not take part in the reaction as they do in OER (this is further discussed in the Supporting Information). [58] The difference in the electrochemical behavior can also be seen in Figure 2b, where the sample was increasingly polarized with different scan rates. As can be seen for the faster scan rates, the difference between pH 2 and pH 7 is more pronounced, and the curves do not converge at higher potentials, as is seen for the slow scan rates. Metal dissolution, which requires the transport of cations through the oxide film on the surface, is suppressed at higher scan rates due to its slow kinetics compared to OER. At higher scan rates, there is also less time to alter the local pH during OER, which will be discussed in more detail later. This is why a separation between the polarization curves measured at pH 7 and pH 2 can be observed for the faster scan rate.
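As a quick check of the quoted onset values, the Nernst relation from the text can be evaluated directly:

```python
# Thermodynamic OER onset from the Nernst relation quoted above,
# E_red = 1023 - 59*pH (mV vs Ag/AgCl).
def oer_onset_mv(ph):
    return 1023.0 - 59.0 * ph

print(oer_onset_mv(7), oer_onset_mv(2))   # -> 610.0, 905.0
```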
The semiconducting property of the passive film was investigated through Mott-Schottky analysis, as shown in Figure 2c. The Mott-Schottky plots show a positive slope in the lower potential region and a negative slope in the higher potential region, typically observed for passive films on Cr and Ni-Cr alloys consisting of Cr2O3. [59,60] The negative slope at higher potentials is characteristic of a p-type semiconductor, which is the potential region relevant to this study. For a p-type semiconductor, the point defects are cation vacancies, and the defect density can be calculated from the slope of the linear region of the Mott-Schottky plot. The defect density is slightly higher for the film formed at pH 2 than for that formed at pH 7.
The electrochemical impedance spectroscopy (EIS) Nyquist plot in Figure 2d shows part of a semicircle modeled with an equivalent circuit (see Figure S10f, Supporting Information). At 800 mV, the charge transfer resistance was significantly lower at pH 7 than at pH 2, with values of 3.5 kΩ and 21.8 kΩ, respectively. This large discrepancy diminishes at higher applied anodic potentials, as shown in Figure 2e,f, where the EIS response becomes close to equivalent at 1000 mV. The charge transfer resistance drops significantly at these high anodic potentials, where the current density due to OER is larger. At potentials of 900 mV and above, an inductive loop is observed in the Nyquist plot, implying that metal dissolution takes place. [61] These EIS spectra were modeled with the equivalent circuit shown in Figure S10g (Supporting Information). The convergence of the EIS data goes in line with the observed convergence of the polarization curves at high anodic potentials, as shown in Figure 2a. As mentioned above, this is due to the local change of pH near the surface, which will be discussed in more detail below. The fitting parameters for the EIS data can be found in the Supporting Information.
Figure 3 shows the composition and thickness of the surface oxide in the passive range. The oxide thickness and composition were calculated from the AP-XPS data and are presented versus the potential in Figure 3a,b. At OCP and in the passive range, the oxide film is thicker at pH 7 than at pH 2. The oxide film is also richer in Cr3+ oxide at pH 7 at OCP. As the sample is anodically polarized, the oxide film grows thicker and becomes enriched in Mo6+ oxide. Also, the hydroxide layer grows thicker, which can be due to further hydroxylation of the oxide surface or hydroxylation of dissolved ions according to the PDM. [6] In addition, the fact that the steady-state oxide thickness is smaller at pH 2 indicates higher dissolution rates of the oxide at pH 2 compared to pH 7 in the passive range. From the oxide composition, it can be deduced that the thinner oxide at pH 2 at OCP and in the passive range is due to enhanced dissolution of Cr3+, since the oxide at pH 2 contains less Cr3+. XRR was also used as a complementary method to quantify the oxide thickness under real operando conditions, and the data are shown in Figure 3c. The difference in oxide thickness for the different pH values is also seen in the results extracted from modeling the XRR data, shown in Figure 3d. The oxide is thinner at OCP and in the passive range for low pH, and as the anodic potential increases, the oxide film grows thicker. A schematic model of the surface region used for the interpretation of the XPS and XRR results is shown in Figure 3e.
Figure 4 shows the passive film breakdown seen from the in situ surface analysis. A dramatic change in the surface chemistry of the alloy is observed at 800 mV for pH 7 and 900 mV for pH 2, corresponding approximately to the respective onsets of passivity breakdown. This is evident from the AP-XPS spectra in Figure 4a,b. At these potentials, a significant increase in the Cr(OH)3 and Mo6+ oxide peaks is seen, with a corresponding decrease in all metallic and other oxide components. This means that the Mo6+/Cr(OH)3 layer became so thick (≈5 nm, which is the AP-XPS probing depth under these conditions, described in the Supporting Information) that no photoelectrons from the underlying material can escape. This change in surface chemistry correlates with an increase of one order of magnitude in the current density, as shown in Figure 4c,d. The correlation between the drastic increase in current and the formation of a thick Mo6+/Cr(OH)3 layer suggests that this is coupled to the OER, which occurs in this potential range. The difference in potential dependence for the Mo6+ oxide formation is shown in Figure 4e, which shows that the Mo6+ formation occurs first at pH 7 and then at pH 2, indicating that it is coupled to a pH-dependent process such as the OER.
So far, it has been shown that at potentials of ≈700-900 mV, the current density is higher at pH 7 due to the OER, which also causes a dramatic change of the surface chemistry evidenced by the growth of a Mo6+/Cr(OH)3 layer. However, from the Mott-Schottky plots in Figure 2c, it is seen that the oxide film is a p-type semiconductor typical of Cr2O3. In contrast, Mo6+ oxide should show an n-type semiconducting behavior. [6] This suggests that the Mo6+/Cr(OH)3 layer is not part of the solid passive film but a precipitated layer of corrosion products formed on top of the thin oxide layer.
Figure 5 shows the quantification of the metal dissolution as well as the chemical state of the dissolved products. At potentials above which the dramatic surface chemistry change occurred, pronounced metal dissolution is observed in the electrolyte by in situ XRF measurements, as shown in the top panel of Figure 5a. The data analysis is described in the Supporting Information.
Detectable dissolution of Ni is seen at 900 mV, while Cr and Mo dissolution is observed at potentials of 1000 mV and above. This demonstrates that passive film breakdown occurs at 900 mV and above for both pH 7 and pH 2. The dissolution rate increases at higher anodic potentials, and Ni shows the highest dissolution rates because it is the base metal in the alloy. The chemical state of the dissolved species was investigated with in situ XANES, as shown in Figure 5b, where the experimental data from the electrolyte are shown as a thick black line and measured references are shown as thin colored lines for comparison. The experimental spectra of the Ni K edge correspond to Ni(OH)2, as seen by comparison to the reference spectra, consistent with data for hydrated Ni2+ ions in aqueous solution. [62] NiO can be ruled out due to the absence of the peak at ≈8450 eV. Cr is dissolved as Cr(OH)3, consistent with data for hydrated Cr3+ ions in aqueous solution. [62] Cr6+ species in the solution can be ruled out due to the absence of the characteristic pre-edge peak of Cr6+. Mo dissolves as MoO3, as confirmed by the characteristic pre-edge peak of Mo6+ compounds and ions. This is also consistent with previously reported XANES data of dissolved MoO3 and with data of dissolved Na2MoO4 in acidic solutions. [63] The XANES data overlaid with the references are shown in Figure S20 (Supporting Information). The potential for the onset of dissolution is close to the transpassive potential for Cr3+-containing passive films, [64][65][66] where stable Cr3+ oxide species can be further oxidized to Cr6+ species that are soluble. However, the observed onset of dissolution occurs at much lower potentials than the experimentally observed breakdown of highly alloyed steels [67] and other Ni alloys with lower Mo content, as shown in Figure S2 (Supporting Information). Since no change in the oxidation state was observed, as indicated by the absence of Cr6+ in the XANES spectra, the classical transpassive breakdown mechanism for Cr-containing alloys can be ruled out. This suggests that another mechanism, coupled to the OER, contributes to the observed low-potential metal dissolution, as will be discussed in more detail below.
Further proof that the OER contributes significantly to the electrochemical current can be found when comparing the dissolution current density to the total measured current density, presented in the bottom panel of Figure 5a. The discrepancy between the total current and the dissolution current is due to the current associated with OER, and the OER current density was calculated as the difference between the total measured current and the dissolution current extracted from the in situ XRF data (described in more detail in the Supporting Information).
The OER current is higher than the dissolution current at the potentials investigated here for the Ni-Cr-Mo alloy. In Figure S2 (Supporting Information), data for an Fe-containing Ni alloy with lower Mo content are shown, which exhibits much lower OER current densities and an onset of dissolution at higher potentials, further illustrating and highlighting the effect and presence of OER for the studied Ni-Cr-Mo alloy. Another key finding is that at potentials above 1000 mV, the dissolution behavior at the two pH values starts to converge, in the same way as was observed in the polarization curves and EIS data in Figure 2; this is explained by the local acidification of the solution near the electrode surface during OER, as will be further discussed below.
Figure 6 shows the post-experiment characterization of the sample surface. When taking the samples out of the electrolyte solution after the experiments had been terminated, scales of dried and cracked corrosion products were observed on the surface, as can be seen in the SEM images in Figure 6a,b. The drying and cracking of the corrosion product film are likely a consequence of exposure to UHV. The elemental composition (considering only the metallic components and ignoring oxygen) reveals that the scales are rich in Mo and Cr and depleted in Ni relative to the bulk substrate composition, as seen in Figure 6c. This aligns with the AP-XPS results, which also showed a large contribution from Mo6+ oxide and Cr(OH)3 at potentials above 800 mV. The chemical state of the species in the scales was determined using grazing-incidence XANES (GI-XANES), and the results are presented in Figure 6e. Only the metallic component is seen at high incidence angles, but the chemistry of the scales can be observed at low incidence angles due to the increased surface sensitivity, as shown in Figure S21 (Supporting Information). The chemical states of the scales were determined qualitatively by comparing the experimental GI-XANES spectra to the reference spectra of compounds. The scales consisted of Ni(OH)2, Cr(OH)3, and a mix of MoO3 and molybdate ions, as shown in Figure S22 (Supporting Information). The chemical states determined with GI-XANES align with the AP-XPS and EDS results. The fact that GI-XANES detected Ni(OH)2 but AP-XPS did not can be due to the large difference in penetration depth, AP-XPS being orders of magnitude more surface sensitive. The corrosion process also changes the sub-surface metal lattice underneath the oxide film, as seen in Figure 6d. From the GI-XRD data, the lattice parameter was calculated, which increases at potentials of ≈900 mV, where the pronounced dissolution starts. It is noted that the lattice parameter values start converging for pH 7 and pH 2 at higher potentials. The change in the lattice parameter during the dissolution is explained by the preferential dissolution of Ni, as seen in Figure 5. This results in an enrichment of Mo and Cr in the lattice, and since Mo has a much greater atomic radius, this results in an increase in the lattice parameter, according to Vegard's law. [68]
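The Vegard's-law argument can be illustrated numerically. The elemental lattice parameters below are assumed effective fcc values chosen for illustration only, as are the before/after compositions; the point is simply that Ni depletion shifts the composition-weighted mean upward.

```python
# Vegard's-law estimate of how preferential Ni dissolution shifts the
# fcc lattice parameter of the sub-surface alloy.
def vegard(fractions, a_elements):
    assert abs(sum(fractions) - 1.0) < 1e-6
    return sum(f * a for f, a in zip(fractions, a_elements))

A = {"Ni": 3.524, "Cr": 3.68, "Mo": 4.05}   # effective fcc parameters, Å (assumed)
before = vegard([0.63, 0.27, 0.10], [A["Ni"], A["Cr"], A["Mo"]])
after = vegard([0.55, 0.32, 0.13], [A["Ni"], A["Cr"], A["Mo"]])  # Ni-depleted
print(f"{before:.3f} Å -> {after:.3f} Å")
```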
Discussion
The comprehensive data presented above demonstrate that the Ni-Cr-Mo alloy is active toward the OER and that the onset of OER correlates with a dramatic change in surface chemistry associated with the formation of Mo6+ oxide, breakdown of passivity, metal dissolution, and the build-up of corrosion products. Since Cr3+ was the only oxidation state of the dissolved Cr ions, as seen from the XANES data in Figure 5, metal dissolution does not occur by the so-called transpassive mechanism, where Cr3+ is oxidized to soluble Cr6+, [6,64,69] leading to a breakdown of the passive oxide film and severe metal dissolution. A significant difference in material behavior at the two bulk electrolyte pH values is only observed at the onset of OER. Subsequently, the behavior at bulk electrolyte pH values of 7 and 2 converges at higher potentials. This unusual behavior can be explained by considering the side products of the OER, which are protons, as shown in Reaction (1):

2H2O → O2 + 4H+ + 4e−  (1)
The proton (H+) concentration determines the pH value and affects the stability of oxides in a solution through a chemical dissolution mechanism, written here generically for a divalent metal oxide as Reaction (2):

MO + 2H+ → M2+ + H2O  (2)

At the large current densities generated by the OER, H+ will accumulate locally in the electrolyte close to the surface, resulting in a lower pH and, thus, a more corrosive solution. The charge of the generated H+ near the surface must be balanced by counterions, in this case the Cl− ions present in the solution. This results in acidification and enrichment of Cl− ions near the metal surface during OER. In other words, it is the catalytic activity of the material that locally changes the solution chemistry by generating H+ that is charge-balanced by Cl−, which causes the behavior at the two different bulk pH values to converge at high potentials due to the converging low local pH.
Figure 7 shows simulations of the H+ concentration above the electrode surface and the pH decrease at the onset of OER. As the potential increases and the OER current density becomes sufficiently high, the pH drops further near the electrode surface and the local pH becomes independent of the bulk electrolyte pH, which explains the convergence between the results obtained at bulk pH values of 7 and 2 at high potentials (the surface pH when considering full mixing of the H+ concentration in the cell volume is shown in Figure S25c, Supporting Information, but there is no significant difference compared to the situation without mixing). The fact that the near-surface pH drops below one can partly explain the pronounced metal (Ni, Cr, and Mo) dissolution that occurs below the transpassive potential for highly alloyed stainless steel (containing Cr). [67] It is known that the rate of metal dissolution increases at low pH, which can lead to the breakdown of passivity. [69] However, the change in the pH near the surface cannot explain the drastic change in surface chemistry seen in the AP-XPS data in Figure 4, where a large increase in the Mo6+ intensity was observed at the onset of OER. The near-surface pH simulations show that the pH is ≈3 (for a bulk pH value of 7) at 800 mV, where the drastic increase in Mo6+ was observed. At the same potential in bulk pH 2, the system was still stable, which indicates that the drastic change in surface chemistry, revealed by the detection of a thick layer of Mo6+ oxide, cannot be explained by local pH effects alone. This suggests that the OER reaction mechanism influences the oxide layer's stability and degradation.
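The order of magnitude of this acidification can be sketched without a full COMSOL model: for a constant OER current density, pure semi-infinite diffusion of protons (neglecting migration, buffering, and convection) gives a closed-form surface concentration, c_s(t) = c_bulk + 2J·sqrt(t/(πD)). The current density and diffusivity below are assumed values, so this is only a qualitative illustration.

```python
# Order-of-magnitude sketch of near-surface acidification during OER.
import numpy as np

F = 96485.0          # C/mol, Faraday constant
D_H = 9.3e-9         # m^2/s, proton diffusivity in water
i_oer = 10.0         # A/m^2, assumed OER current density (1 mA/cm^2)

J = i_oer / F                      # mol m^-2 s^-1, one H+ per electron in OER
c_bulk = 1e-7 * 1e3                # mol/m^3, bulk pH 7
t = 30 * 60                        # 30 min of polarization, in seconds
c_surf = c_bulk + 2 * J * np.sqrt(t / (np.pi * D_H))
print("local surface pH ≈", -np.log10(c_surf / 1e3))   # well below the bulk pH
```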
It is well established in the literature that an inverse relationship exists between the stability of metal oxide catalysts and their activity toward OER, suggesting that the reaction mechanism of OER is coupled with metal dissolution. [41,43,70,71] Theoretical predictions also suggest a universal correlation between OER activity and dissolution. [40][74][75][76][77] During OER, the surface atoms of Ir or Ru oxide that take part in the reaction are further oxidized from the 4+ oxidation state to a 6+ oxidation state complex in the OER catalytic cycle. This higher-oxidation-state complex can then be dissolved through a coordination inversion process. [76] This can explain the drastic change in surface chemistry observed for the Ni-Cr-Mo alloy at the onset of OER, where Mo4+ oxide, which is catalytically active [36] and present in the oxide film as shown in Figure 3, takes part in the OER mechanism in a manner similar to IrO2 and RuO2 and is dissolved and redeposited as MoO3 on the surface, as observed in the AP-XPS data shown in Figure 4. This explanation is also in line with the chemical state of the dissolved Mo species, which was found to be MoO3, as shown in Figure 5, as well as with the detected chemical state in the layer of corrosion products in Figure 6, where MoO3 was detected. The fact that Mo exists partly as MoO2 in the passive oxide film, as revealed by the AP-XPS data, but only in the Mo6+ oxidation state in both the solution and on the surface after the onset of OER, strongly suggests that further oxidation of Mo4+ to Mo6+ occurs during the catalytic cycle of OER and that Mo6+ can be dissolved, similar to the coordination inversion mechanism proposed for RuO2. At the onset of OER, before severe acidification near the surface has occurred, this proposed degradation mechanism is mainly responsible for the loss of passivity and the observed re-deposition of Mo6+ as a corrosion product on the surface.
Another mechanism of degradation during OER arises from the participation of lattice oxygen in the reaction, shown through isotopic labeling studies. [41,42,44,78] Some reports claim that oxygen vacancies are left at the oxide surface when lattice oxygen is involved in the OER, which means breaking metal-oxygen bonds. [38] These vacancies can be filled by oxygen atoms from water, essentially healing the oxide, or react with other anions present, such as Cl−, further weakening the oxide structure. If the rate of vacancy generation during OER is higher than the rate of replenishment of the vacancies by oxygen from water, it will eventually lead to the dissolution of the cation without a change in its oxidation state, [38] leaving behind a cation vacancy. This aligns with the observed chemical states of the dissolved Ni and Cr metal ions, as shown in Figure 5, which have the same oxidation states as the Ni and Cr cations in the oxide film determined with AP-XPS. Furthermore, the generation of cation vacancies through dissolution during OER could lead to passivity breakdown, as described by the PDM, where passivity breakdown occurs due to the condensation of cation vacancies at the metal-oxide interface. [6] If lattice oxygen participation during OER results in cation dissolution and the generation of cation vacancies that diffuse to the metal-oxide interface, this could drive the breakdown of the passive oxide film.
All the mechanisms of OER-induced passivity breakdown mentioned above are related to the dissolution of cations from the oxide surface. If this rate is slow, as in the passive state, it can be balanced by continuous re-formation of the thin surface oxide. OER always takes place on the oxide surface and leads to cation dissolution through degradation of the oxide surface; this is in turn how OER leads to severe dissolution of the metal substrate.
As seen from the Mott-Schottky analysis in Figure 2c, the negative slope at higher potentials suggests that the passive film is still present after polarization to 900 mV, where OER is taking place. The negative slope indicates p-type semiconducting properties characteristic of Cr3+ oxide, not MoO3. This suggests that the Mo6+ species observed with AP-XPS on the surface after the onset of OER are redeposited corrosion products, as discussed above, and are not part of the solid oxide film. The observed layer of corrosion products at the end of the experiment is simply the continuation of the redeposition of species dissolved under OER conditions. The presence of MoO3, Cr(OH)3, Ni(OH)2, and water suggests an amorphous hydrous hydrogen-bonded network in the precipitated corrosion product film. A picture of the sample can be found in Figure S26 (Supporting Information). Precipitation of Mo6+-rich compounds on the surface during corrosion has been discussed as a potential passivation property of Mo, [20,30,79] which explains why even small additions of Mo can substantially increase the corrosion resistance of stainless steel and Ni-based alloys. [26,80] The dissolved molybdate ions can also counterbalance the positive charge of the protons generated during local corrosion, which would otherwise attract Cl− ions and make the local solution even more aggressive. This could explain why we observe no local corrosion of the Ni-Cr-Mo alloy even at such high potentials.
However, at these high potentials, Mo also plays another role in this material system, as it is active toward OER. The observed OER-coupled passivity breakdown of the Ni-Cr-Mo alloy in this study is very different from the passivity breakdown of duplex stainless steel [67] or even of other Ni-based alloys with a low Mo content, as discussed in Figure S2 (Supporting Information). For those alloys with low Mo content, passivity breakdown, i.e., enhanced metal dissolution, is observed at higher anodic potentials, and the observed current increase is mainly due to metal dissolution and not OER. However, for Ni alloys with high Mo content, the OER plays an important role in the metal dissolution and thus in the passivity breakdown. The effect of OER therefore cannot be ignored in the study of passivity breakdown if the material is catalytically active toward OER, since it is not only a by-standing side reaction but is closely coupled to the dissolution of the material. Industrial electrochemical tests using current density as a criterion for judging the corrosion rate are not applicable for this class of Ni-Cr-Mo alloys, which show an apparent catalytic activity toward OER. A more direct measure of the true material degradation is needed to judge the behavior of these alloys properly, and the OER reaction and the subsequent degradation should be considered in the interpretation of electrochemical measurement results.
The solid-liquid interface is not easy to access, and few surface science techniques are suitable for this purpose. Here we combine several state-of-the-art in situ techniques that give unique and comprehensive insights into the chemistry and structure of the surface region immersed in an electrolyte and the chemistry of dissolved species during passive film growth, corrosion initiation, and progression. This powerful combination of techniques provides a detailed understanding of the passivity breakdown of Ni alloys and opens the possibility of shining new light on complex processes in the field of corrosion as well as in other fields of electrochemistry, such as batteries, fuel cells, and electrocatalysis.
Conclusion
The solid-liquid interface is notoriously difficult to study in situ. Combining synchrotron-based techniques and electrochemical methods, we demonstrate that the Ni-Cr-Mo alloy is active toward OER, where a current increase and associated bubble formation were observed at relatively low overpotentials compared to other highly corrosion-resistant alloys. The studied Ni-Cr-Mo alloy exhibits a stable passive film in the NaCl solution until the onset of OER. At the onset of OER, the passive film starts to degrade, which is associated with the OER-induced mechanism whereby catalytically active Mo4+ oxide sites in the oxide film are further oxidized into Mo6+ complexes that are dissolved and partly redeposited on the surface during the catalytic OER cycle. This results in the breakdown of passivity and the dissolution of Ni and Cr ions without a change in their oxidation state compared to that in the oxide film. This observed mechanism is different from the traditional transpassive corrosion mechanism of Cr-containing alloys, where Cr3+ is oxidized to soluble Cr6+ at sufficiently anodic potentials. Our comprehensive experimental results provide a detailed understanding of the passivity breakdown of Ni-Cr-Mo alloys, which is associated with the onset of OER. The OER results in acidification of the solution near the surface, which further facilitates the dissolution of the protective oxide. The concentration of protons near the surface also has to be counterbalanced by Cl− ions or dissolved molybdate ions. This interplay between OER and material degradation makes simple electrochemical assessments and accelerated industrial tests of Ni alloys problematic, and the role of OER must be taken into account when considering the degradation of catalytically active alloys.
Experimental Section
The material used in the present study was an industrial-grade Ni-Cr-Mo alloy (Ni alloy 59, UNS no. N06059) containing 62.3 at% Ni, 26.3 at% Cr, and 9.7 at% Mo (minor alloying elements are shown in Table S1, Supporting Information), provided by Alleima (formerly Sandvik Materials Technology). The typical microstructure of the material is shown in Figure S1 (Supporting Information). After polishing, the samples were stored in air for several weeks, allowing the native oxide layer to form. Before the experiments, the samples were cleaned by sonication in acetone for 5 min and then in ethanol for 5 min, and were subsequently rinsed in ethanol and dried using N2 gas before being mounted in the electrochemical cell.
In situ GI-XRD, XRF, and XRR measurements were performed at the Swedish Materials Science beamline P21.2 at DESY, Hamburg, Germany. An X-ray energy of 38 keV and a beam size of 50 × 500 (V × H) μm2 were used. The sample was mounted in a dedicated in situ electrochemical flow cell (described below) on the surface diffractometer (see photos in Figure S7, Supporting Information). The sample surface was aligned parallel to the X-ray beam, with the surface normal in the vertical direction. GI-XRD was measured with an incidence angle of 0.3° and recorded with a VAREX flat panel detector. The detector distance and position were calibrated using a CeO2 sample. The GI-XRD data were integrated using pyFAI, [81] and Le Bail refinements were performed using the GSAS-II software [82] to extract the lattice parameter of the surface region at each polarization step. XRR was measured using a Cyberstar X2000 scintillator on a motorized stage, which was scanned at angles of 0.02° to 3°. The measured XRR data were modeled using GenX. [83] XRF was measured using an Amptek FAST SDD ultra-high-performance silicon drift detector. The XRF detector was mounted perpendicular to the incoming X-ray beam and positioned 10 cm away from the beam. An acquisition time of 60 s was used to detect the fluorescence from the dissolved metal ions in the electrolyte. The energy scale was calibrated using fluorescence of Cu, Rb, Mo, and Ag excited by a radioactive Am source. The XRF intensities were calibrated using a series of reference solutions of known concentrations (0.1 m, 0.01 m, and 0.001 m) of Ni, Cr, and Mo salts. [86][87] The synchrotron measurements were performed in a sequential manner at stepwise increased potentials while the current was recorded and electrochemical impedance spectroscopy was measured. The electrolyte was not flowing during the in situ measurements while the potential was applied. Instead, the cell was used in batch mode. Between each potential step, the cell was flushed with fresh electrolyte solution. The treatment of the GI-XRD, XRR, and XRF data and the whole measurement procedure are further described in the Supporting Information.
The in situ XANES measurements of the chemical state of the dissolved metal ions and the ex situ XANES measurements of the chemical state of the corrosion products left on the surface were performed at the advanced XAFS beamline P64 at DESY, Hamburg, Germany. [88] The in situ determination of the chemical state of the dissolved metal ions in the solution was performed using the electrochemical cell described below. The Ni, Cr, and Mo K edges were measured in fluorescence mode using a passivated implanted planar silicon (PIPS) detector after polarization at 1200 mV versus Ag/AgCl for 30 min. The ex situ determination of the chemical states of the corrosion products was performed in grazing-incidence geometry using the PIPS detector in fluorescence mode, varying the incidence angle between 0.2° and 5° while recording the Ni, Cr, and Mo K edges separately. Self-absorption corrections were necessary for the GI-XANES as well as for the in situ Ni K edge data measured in the electrolyte, as described in more detail in the Supporting Information. The solid references of compounds were made as pellets of powders with cellulose as a binder, and the metallic components were measured from Ni alloy foils of similar composition (Ni62Cr22Mo9Fe5 and Ni62Mo28Fe5Cr5). The solution references were made as 2 wt% dissolved in 1 m NaCl to mimic the solution used for the in situ measurements. All references were measured in transmission mode using ion chambers, while the XANES from the dissolved species in the electrolyte and the scales was measured in fluorescence mode. For the Cr and Ni K edges, 100% N2 gas was used in the ion chambers. For the Mo K edge, 10% Kr and 90% N2 were used in the ion chambers. All XANES spectra were background subtracted and normalized using the ATHENA XAS data processing software. [89] The data treatment and further experimental details are provided in the Supporting Information.
For the in situ measurements at both P21.2 and P64, a custom-made electrochemical flow cell dedicated to in situ synchrotron studies was used (shown in Figure S8, Supporting Information). [67,84,85,90] A Pt rod was used as the counter electrode, a mini leakless Ag/AgCl reference electrode from eDAQ (calibrated against a large Ag/AgCl reference electrode from GAMRY) was used, and the sample acted as the working electrode. The cell, tubing, and counter electrode were cleaned by flowing 25% nitric acid for 30 min and flushed with Milli-Q water. The samples for the P21.2 and P64 experiments were polished by SPL (Surface Preparation Laboratory, the Netherlands) to a mirror finish. For the in situ experiments, a 1 L solution of 1 m NaCl prepared using Milli-Q water was used, and for the pH 2 solution HCl was used to adjust the pH. The electrolyte solution was degassed using N2 before and during the experiments. An Autolab PGSTAT204 potentiostat was used for the electrochemical experiments.
The in situ AP-XPS measurements were performed at the HIPPIE beamline at MAX IV, Lund, Sweden, using its electrochemical end-station. [91] The experimental details are described in refs. [92,93]. The sample, reference electrode (Ag/AgCl eDAQ leakless mini electrode), and counter electrode (Pt foil) were mounted in a special holder allowing electrical connection and the possibility to ground the sample, which acts as the working electrode (Figure S3, Supporting Information). The sample and electrodes, mounted on a manipulator, could then be submerged into and retracted from a glass beaker placed on a water-cooled copper plate at the bottom of the vacuum chamber. The experiment was performed with a background pressure of 17 mbar in the entire chamber, equal to the electrolyte's vapor pressure. To avoid rapid evaporation of the electrolyte due to continuous pumping of the chamber during the experiment, the copper plate supporting the electrolyte beaker was cooled to 10 °C. AP-XPS was measured in normal emission; an excitation energy of 1600 eV was used to measure the Ni 2p, Cr 2p, and O 1s core levels, and an excitation energy of 1400 eV was used to measure the Mo 3d core level. A slit of 30 μm and a pass energy of 200 eV were used for all core levels measured at ambient pressure. Measuring all core levels took ≈25 min. The beam size was 25 × 60 μm2 (V × H). The sample was translated between the measurements at each potential step to avoid beam-induced damage or effects on the sample surface. Peak fitting and quantitative analysis were performed using Python and the LMFIT package. [94] An asymmetric Voigt line shape [95] was used to fit the metal components of the Ni 2p, Cr 2p, and Mo 3d core levels, which all display asymmetry toward higher binding energy. All other peaks were fitted using Voigt line profiles. All spectra were background-subtracted using a Shirley background. [96] The AP-XPS spectra of Ni 2p were background subtracted with a linear model plus a Shirley background to compensate for the non-flat background before and after the peak (the survey shown in Figure S6, Supporting Information, illustrates the large non-flat background caused by inelastic scattering in the water vapor environment between the sample and the analyzer). The chemical shifts of the metal oxide components were calibrated relative to the well-known binding energies of the metal components. The fitted AP-XPS spectra, peak fitting parameters, and details of the quantitative analysis are given in the Supporting Information.
In-house electrochemical measurements were performed with a GAMRY EuroCell cleaned with 20% nitric acid, an Ag/AgCl reference electrode from GAMRY mounted in a Luggin capillary, and a coiled Pt wire as a counter electrode. The samples were polished to 600 grit before each measurement and allowed to reach a stable OCP value for 30 min. The electrolyte volume was 200 mL of 1 m NaCl, which was prepared using Milli-Q water, and for the pH 2 solution HCl was used to adjust the pH. The electrolyte solution was degassed using N2 before and during the experiments. An Autolab PGSTAT204 potentiostat was used for the electrochemical experiments. Polarization curves were measured from −400 to 1100 mV with scan rates of 0.5 mV s−1 and 100 mV s−1. Mott-Schottky analysis was performed after polarization at 900 mV (formation potential) for 30 min. The potential was then reduced in steps of 50 mV from 900 to −300 mV, the impedance was measured using 10 and 100 Hz, and the capacitance was calculated using an Rs value of 10 Ω. A 10 mV amplitude was used for the EIS and Mott-Schottky measurements. The linear part of the negative slope of the Mott-Schottky plot was fitted. From the slope, the cation vacancy density N was calculated using the Mott-Schottky relation for a p-type semiconductor [97]:

$$\frac{1}{C^2} = -\frac{2}{q\varepsilon\varepsilon_0 N}\left(E - E_{fb} - \frac{kT}{q}\right)$$

where C is the capacitance, q is the charge of the electron, ε0 is the vacuum permittivity, ε is the dielectric constant of the oxide, E is the applied potential, E_fb is the flat-band potential, k is the Boltzmann constant, and T is the temperature. A value of 15.8 was used for the dielectric constant of the oxide layer, taken from refs. [98,99]. Simulations of the local pH near the electrode surface during OER were performed using COMSOL Multiphysics with the transport of dilute species module. [100] A time-dependent solution (with a time frame of 30 min) was used. A rectangular model with sides of 30 mm was used to mimic the volume of the in situ electrochemical cell. No flow was used, to reproduce the static conditions of the study, as described in the Supporting Information. The initial concentration of the dilute species (H+) was defined by the bulk pH of either 2 or 7. The flux of species from the bottom surface of the cell volume, representing the sample surface, was defined based on the electrochemical current at each potential step. Further details are given in the Supporting Information.
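A minimal sketch of the defect-density evaluation from the fitted slope, following the relation above; the slope value is an assumed placeholder.

```python
# Cation-vacancy density from the linear Mott-Schottky slope of 1/C^2 vs E,
# using eps = 15.8 as in the paper; the slope value is illustrative.
Q = 1.602e-19        # C, elementary charge
EPS0 = 8.854e-12     # F/m, vacuum permittivity
EPS = 15.8           # oxide dielectric constant

def vacancy_density(slope):  # slope in m^4 F^-2 V^-1 (capacitance per area)
    return 2.0 / (Q * EPS * EPS0 * abs(slope))

print(f"{vacancy_density(-500.0):.2e} per m^3")   # ~1.8e26 m^-3 for this slope
```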
Ex situ electron microscopy and EDS were performed at the DESY NanoLab [101] using a high-resolution field emission SEM (Nova NanoSEM 450, FEI Thermo Fisher) equipped with an X-Max 150 EDS silicon drift detector (Oxford) for elemental analysis. For imaging, an acceleration voltage of 5 keV was used. An acceleration voltage of 10 keV was used for the EDS analysis.
Figure 1. Experimental techniques. A schematic representation of the combination of experimental techniques used in this work. The orange atoms represent the metal, blue the metal cations, and red the oxygen anions in the oxide layer. XRR, XRF, and GI-XRD were integrated into one experimental setup. XANES and AP-XPS were measured in separate experiments.
Figure 2. Electrochemical behavior. a) Polarization curves of the Ni-Cr-Mo alloy in 1 m NaCl at pH 7 and pH 2 measured with a sweep rate of 0.5 mV s−1. Thermodynamic potentials for OER are indicated. b) Polarization curves measured at 0.5 mV s−1 and 100 mV s−1, respectively, in 1 m NaCl at pH 7 and pH 2, plotted on a linear current scale. c) Mott-Schottky plots of the Ni alloy measured after polarization at 900 mV for 30 min. d-f) Nyquist plots of EIS data for the Ni alloy in 1 m NaCl at pH 7 and pH 2 at 800 mV, 900 mV, and 1000 mV vs Ag/AgCl, respectively.
Figure 3. Passive film growth. a) Oxide and hydroxide thickness in 0.1 m NaCl at pH 7 and pH 2, calculated from the AP-XPS data. b) Oxide composition extracted from the AP-XPS data, with an uncertainty of ≈2%. c) Fitted in situ XRR data obtained at OCP and under polarization at 400 and 600 mV vs Ag/AgCl in 1 m NaCl at pH 7 and pH 2. d) Oxide thickness extracted from the XRR data. e) Schematic model of the surface region. The fitting procedure and data analysis are described in the Supporting Information.
Figure 4. Passive film breakdown. a) In situ AP-XPS spectra of Ni 2p3/2, Cr 2p3/2, and Mo 3d measured at pH 7 for the potential range 700-900 mV. b) In situ AP-XPS spectra of Ni 2p3/2, Cr 2p3/2, and Mo 3d measured at pH 2 for the potential range 700-900 mV. c) Mo6+ oxide content and current density after 10 min vs potential at pH 7. d) Mo6+ oxide content and current density after 10 min vs potential at pH 2. Arrows indicate the axis corresponding to the data. e) Comparison of Mo6+ oxide content at pH 7 and pH 2 vs potential.
Figure 5. Metal dissolution. a) (Top) Metal dissolution rate calculated from in situ XRF data. (Bottom) Dissolution current density and OER current density, compared to the total current density. b) Chemical state determination of the dissolved species using in situ XANES (only for pH 7). Spectra of references are also shown for qualitative comparison.
Figure 6. Corrosion products. a) SEM image of the sample surface after polarization up to 1200 mV vs Ag/AgCl at pH 7. b) SEM image of the sample surface after polarization up to 1200 mV vs Ag/AgCl at pH 2. c) EDS analysis of the substrate and the scales seen in (a) and (b); the measurement positions are shown in the Supporting Information. d) Lattice parameter of the sub-surface alloy extracted from GI-XRD. e) GI-XANES from the corroded sample measured ex situ at large and small incidence angles to be bulk or surface sensitive, respectively. Spectra of references are also shown for qualitative comparison.
Figure 7. Simulation of local pH. a) Local pH near the electrode surface as a function of potential, extracted from COMSOL simulations. b) Simulated pH profiles as a function of height and potential for a bulk electrolyte of pH 7. c) Schematic of OER at the electrode-electrolyte interface giving rise to the production of H+, which causes a decrease of the local pH. | 2023-07-14T06:17:21.343Z | 2023-07-12T00:00:00.000 | {
"year": 2023,
"sha1": "1dee148d94abedf274376bcb05a7e4b8b2e022e3",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adma.202304621",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "78e4c8f7dc01fc9fe62685ea185f2a2d33910b90",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9777903 | pes2o/s2orc | v3-fos-license | Price promotions on healthier compared with less healthy foods: a hierarchical regression analysis of the impact on sales and social patterning of responses to promotions in Great Britain
Background: There is a growing concern, but limited evidence, that price promotions contribute to a poor diet and the social patterning of diet-related disease. Objective: We examined the following questions: 1) Are less-healthy foods more likely to be promoted than healthier foods? 2) Are consumers more responsive to promotions on less-healthy products? 3) Are there socioeconomic differences in food purchases in response to price promotions? Design: With the use of hierarchical regression, we analyzed data on purchases of 11,323 products within 135 food and beverage categories from 26,986 households in Great Britain during 2010. Major supermarkets operated the same price promotions in all branches. The number of stores that offered price promotions on each product for each week was used to measure the frequency of price promotions. We assessed the healthiness of each product by using a nutrient profiling (NP) model. Results: A total of 6788 products (60%) were in healthier categories and 4535 products (40%) were in less-healthy categories. There was no significant gap in the frequency of promotion by the healthiness of products, either within or between categories. However, after we controlled for the reference price, price discount rate, and brand-specific effects, the sales uplift arising from price promotions was larger in less-healthy than in healthier categories; a 1-SD increase in the category mean NP score, implying that the category becomes less healthy, was associated with an additional 7.7-percentage-point increase in sales (from 27.3% to 35.0%; P < 0.01). The magnitude of the sales uplift from promotions was larger for higher-socioeconomic-status (SES) groups than for lower ones (34.6% for the high-SES group, 28.1% for the middle-SES group, and 23.1% for the low-SES group). Finally, there was no significant SES gap in the absolute volume of purchases of less-healthy foods made on promotion. Conclusion: Attempts to limit promotions on less-healthy foods could improve the population diet but would be unlikely to reduce health inequalities arising from poorer diets in low-socioeconomic groups.
INTRODUCTION
Price promotions are commonly used in store with the aims of boosting purchasing by reducing the price of products as well as possibly stimulating impulsive purchases by increasing the prominence of items in store (e.g., via tags and placement). There is a growing concern that such promotional activities by the food industry may contribute to poor dietary intake particularly in individuals who are more socially deprived (1)(2)(3). It was also suggested that price promotions on less-healthy products might lure consumers away from healthier, higher-priced options and that the industry has disproportionately promoted less-healthy but more-profitable options (4). If so, there might be a case for public policy to regulate the promotional activities of industries to help achieve, or at least not hamper, public health nutrition goals.
However, there is a paucity of empirical evidence available in the public domain, and the existing claims about a bias in the use of price promotions toward less-healthy items largely rest on anecdotal reports. Although the general responsiveness of consumers to price promotions received substantial attention in the marketing literature (5,6), and there is a fast-growing body of research on the effect of price per se on healthier compared with less-healthy purchasing or consumption (7)(8)(9)(10)(11)(12)(13), there has been relatively little research to consider whether consumer uptake of promoted products differs for healthier or less-healthy products. Moreover, it is unclear whether the impact of promotions varies by the socioeconomic characteristics of consumers. In this study, we sought to fill this evidence gap. We examined whether promotions on less-healthy products increased sales more than promotions on healthier products by using data from supermarkets across the United Kingdom.
We also sought to explore whether social disparities in the healthiness of food purchased were attributable to differences in responses to retail price promotions. In an earlier study, by using the same survey data, we showed significant socioeconomic patterning in the healthiness of food purchases (14). However, the mechanisms that could account for such patterns remain underexplored. In the current study, we tested one potential mechanism, i.e., price promotions; we investigated the differential use of price promotions across socioeconomic groups to explore whether this may be one contributor to diet-related health inequalities (15).
We designed our study to address the following 3 questions: 1) Are less-healthy foods more likely to be promoted than healthier foods? 2) Are consumers more responsive to promotions on less-healthy products than promotions on healthier ones? 3) Are there socioeconomic differences in food purchases in response to price promotions?
Data
We used a secondary data source, the Kantar WorldPanel survey (14,16), which includes purchase records of 26,986 households in the United Kingdom throughout 2010. Households were recruited by a data company (Kantar WorldPanel), and the authors were not involved in the data collection. The data-collection procedure was as follows: the United Kingdom Office of National Statistics census information and the United Kingdom Broadcasters' Audience Research Board Establishment survey were used to determine quotas for recruitment. The data company purchased potential participant lists from another company and recruited participants by sending postal mail and e-mail. Recruited participants received vouchers for high-street retailers and/or vouchers for leisure activities [in total for an average monetary equivalent of £100 (approximately $160) per household per year]. Recruited households were nationally representative in terms of region, age group, and household size.
The survey includes purchase records of all foods and beverages that were taken home from supermarkets and similar stores in 2010. Sampled households were asked by the data company to record all purchases using barcode scanners and to send digital images of cash-register receipts to the company. The data contain rich information on purchases, including the price at which they were bought, whether they were bought on promotion, the number of packets purchased, and the retail chain from which the product (Stock Keeping Unit) was purchased. The data also include detailed information on product characteristics, including information on the brand, manufacturing company, and nutritional content.
We constructed a cross-sectional data set of 11,323 individual products in 135 food and drink categories, which were purchased by panel households in leading United Kingdom supermarket chains [i.e., the "main parties" as defined by the United Kingdom Competition Commission (17)]. The food and drink categories reflected those used in the retailing sector (Kantar WorldPanel; see Supplemental Table 1 for more information). With the use of transaction records in the data, we calculated the total number of units of each product sold to panel households across the country over 52 wk. In keeping with common practice in the related literature (13,(18)(19)(20), we restricted the set of products to the more-popular items; in particular, we included only products that were purchased at least once in each of the 52 wk by any of the panel households, irrespective of whether the products were on promotion or not.
Frequency of promotions
To assess consumer responses to price promotions, it is essential to measure the number of price promotions available for each product. However, in commonly available data sets (including ours), a price promotion is recorded for a given item in a given store only if that item has been purchased on promotion from that store by panel households. This purchase-based nature of existing data sets has thus far prevented researchers from measuring the frequency of price promotions at population level.
In the absence of a directly observable measure in the data, we estimated the frequency of promotions for each product by exploiting a particular feature of the retail policy in major United Kingdom grocery retailers. We focused on the following 11 "main parties" of United Kingdom multiple grocers: Tesco (sales market share in 2010 was 24%), Asda (sales market share in 2010 was 12.8%), Sainsbury (sales market share in 2010 was 12.5%), Morrisons (sales market share in 2010 was 9.8%), Waitrose (sales market share in 2010 was 3.1%), Iceland (sales market share in 2010 was 1.7%), Lidl (sales market share in 2010 was 1.6%), Aldi (sales market share in 2010 was 1.5%), M&S (sales market share in 2010 was 1.3%), Netto (sales market share in 2010 was <1%), and Budgens (market share information not available), as defined by the United Kingdom Competition Commission (17). The total market share of these grocers in 2010 was ~70% (21). The Commission confirmed that the stores followed a national pricing policy according to which stores operated the same pricing (and, thus, the same price promotions) in all branches. This institutional feature provided us with the opportunity to estimate the number of promotions run in the country in a given time period; if we observed any transaction involving a product on promotion in a given store, we could assume that the product was also on price promotion in the other branches of the same supermarket chain.
The frequency of promotions for each product was defined by the number of branches (22) that ran a promotion on the product in a given week aggregated across the 11 supermarket chains and 52 wk. Each branch could run a promotion on a given product (Stock Keeping Unit) only once at a given point in time, and hence, the number of branches that ran a promotion on a product gave the number of promotions on the same product. See Supplemental Data section 2 for additional details.
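As a concrete illustration of this counting logic, the following Python sketch applies the national-pricing assumption to a purchase table; the DataFrame, its column names (product, chain, week, on_promotion), and the branch-count lookup are all hypothetical, not part of the original data schema.

    import pandas as pd

    def promotion_frequency(purchases: pd.DataFrame,
                            branches_per_chain: dict) -> pd.Series:
        """Weekly promotion counts per product, summed over chains and weeks.

        purchases: one row per transaction, with columns 'product',
        'chain', 'week', and a boolean 'on_promotion'. Under a national
        pricing policy, one promoted purchase of a product in a chain in
        a given week implies the promotion ran in every branch of that
        chain that week.
        """
        promoted = (purchases.loc[purchases["on_promotion"],
                                  ["product", "chain", "week"]]
                    .drop_duplicates())
        promoted["n_branches"] = promoted["chain"].map(branches_per_chain)
        return promoted.groupby("product")["n_branches"].sum()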
Product and category healthiness
The nutrient profiling (NP) model developed by the United Kingdom Food Standards Agency was used to capture the healthiness of products (23). This method assigns a score for each food calculated from the energy density, saturated fat, sugar, sodium, fiber, and protein contents together with an estimate of the fruit, vegetable, and nut contents, thereby providing a unified measure of healthiness across all available food and drink products. The NP model applies to all food and drink products equally without exemptions or category-specific criteria (23). However, the definition of healthier and less-healthy products typically uses different cut points for foods and beverages, reflecting the very different energy densities of the 2 groups, and we adopted this convention. Note that, as the NP score increases, the healthiness of the product declines. Compared with other NP models, this NP score was shown to perform well when matched to a standard ranking of foods by >700 nutrition professionals (24). Intake of high-scoring foods was shown to act as a risk factor for obesity (25). Category-level healthiness was calculated by taking the mean NP score for products within the category.
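For illustration only, here is a minimal Python sketch of an FSA-style NP scorer together with the cut points used in this study; the threshold tables are paraphrased from the published 2004-05 model as best understood here and should be checked against reference 23 before any substantive use.

    def _pts(value, thresholds):
        # Points = number of thresholds strictly exceeded.
        return sum(value > t for t in thresholds)

    def np_score(energy_kj, sat_fat_g, sugars_g, sodium_mg,
                 fvn_pct, fibre_g, protein_g):
        """Illustrative FSA nutrient profiling score per 100 g."""
        a_points = (_pts(energy_kj, [335, 670, 1005, 1340, 1675,
                                     2010, 2345, 2680, 3015, 3350])
                    + _pts(sat_fat_g, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
                    + _pts(sugars_g, [4.5, 9, 13.5, 18, 22.5,
                                      27, 31, 36, 40, 45])
                    + _pts(sodium_mg, [90, 180, 270, 360, 450,
                                       540, 630, 720, 810, 900]))
        fvn = 5 if fvn_pct > 80 else 2 if fvn_pct > 60 else 1 if fvn_pct > 40 else 0
        fibre = _pts(fibre_g, [0.7, 1.4, 2.1, 2.8, 3.5])
        protein = _pts(protein_g, [1.6, 3.2, 4.8, 6.4, 8.0])
        # Rule in the published model: with >= 11 "A" points and < 5
        # fruit/veg/nut points, protein points are not counted.
        if a_points >= 11 and fvn < 5:
            protein = 0
        return a_points - (fvn + fibre + protein)

    def less_healthy(score, beverage=False):
        # Cut points as used in this study: foods >= 4, beverages >= 1.
        return score >= (1 if beverage else 4)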
Analytic framework
Are less-healthy foods more likely to be promoted than healthier foods?
To address this question, we set up a product-level regression model of promotion frequency and assessed the relation between the frequency of promotions and NP score. In supermarkets, each product was nested within a product category (135 categories in our data), and hence, our product-level data set had a 2-level structure (i.e., between-category and within-category variations in healthiness). First, we estimated the association between the frequency of promotions and the NP score of various food categories (i.e., between-category differences). Next, we estimated the relation between promotions and the NP score at the product level separately by food category (i.e., within-category differences). These 2 estimation steps were conducted simultaneously via a hierarchical regression approach (26). For item j in category c, the following base model was specified:

log(FoP_jc) = b_0c + b_1c × NP_jc + e_jc

where FoP_jc refers to the frequency of promotion of item j in category c and NP_jc represents the nutrient profile score. The term e_jc is the idiosyncratic error. This basic estimation was used to tell whether less-healthy items were more frequently promoted than healthier ones. We further specified that the baseline frequency of promotions (intercept: b_0c) and the association with the NP score (slope: b_1c) varied by dietary category, and that this variation was a function of the genuine healthiness of each dietary category:

b_0c = g_00 + g_01 × NPbar_c + u_0c
b_1c = g_10 + g_11 × NPbar_c + u_1c

where NPbar_c is the mean NP score of products in category c and u_0c and u_1c are category-level random effects. The model could thus distinguish between product-level (i.e., within-category) effects and category-level (i.e., between-category) effects of healthiness, which were estimated separately in the regression analysis. See Supplemental Data section 3 for full technical details.
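Although the authors report using Stata, a random-intercept, random-slope model of this form could be fit with REML in Python roughly as follows; the file name and the column names (log_fop, np_score, category) are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical product-level file: one row per product.
    df = pd.read_csv("products.csv")
    df["np_cat_mean"] = df.groupby("category")["np_score"].transform("mean")
    df["np_within"] = df["np_score"] - df["np_cat_mean"]

    # Reduced form of the two-level model: the category mean NP score
    # shifts both the intercept and the NP slope; category random
    # effects carry the remaining between-category variation. REML
    # estimation matches the paper's stated technique.
    model = smf.mixedlm("log_fop ~ np_within * np_cat_mean", data=df,
                        groups=df["category"], re_formula="~np_within")
    result = model.fit(reml=True)
    print(result.summary())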
Are consumers more responsive to promotions on less-healthy products?
To address this question, we investigated differential effects of the frequency of promotions on product sales by the NP score of products. The analysis assessed whether price promotions increased sales of less-healthy compared with healthier foods (between-category effect). The analysis also addressed whether sales of less-healthy versions within a given food category increased more in response to promotions than did healthier versions in the same food category (within-category effect). Again, a similar hierarchical regression approach was used. The baseline product-level purchases equation is given by

log(Q_jc) = b_0c + b_1c × log(FoP_jc) + b_2c × [log(FoP_jc) × NP_jc] + g' × Z_jc + e_jc

The outcome variable log(Q_jc) was the log of the total number of units of product j in category c that were purchased by the panel households over 52 wk. The interaction term [log(FoP_jc) × NP_jc] was used to measure whether and, if so, to what extent the effect of promotions varied by the healthiness of the product. The vector Z_jc included a set of product-level covariates known to affect sales, including the reference price, average rate of price discount when promoted, and a set of indicators of brands (which captured the brand-specific features of each product). Similar to the previous analysis, category-specific coefficients were modeled as

b_kc = g_k0 + g_k1 × NPbar_c + u_kc (k = 0, 1, 2)

The model nested the within-category and between-category sales effects of promotion by healthiness, which, again, were estimated separately in the regression analysis. All models were estimated via a restricted maximum likelihood technique (27). See Supplemental Data section 3 for full technical details.
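A corresponding sketch for the sales equation, continuing the example above under the same assumptions (log_sales, ref_price, discount_rate, and brand are hypothetical column names):

    # The log(FoP) x NP interactions capture how the promotion effect
    # varies with product-level (within) and category-level (between)
    # healthiness; brand fixed effects enter via C(brand).
    sales = smf.mixedlm(
        "log_sales ~ log_fop * (np_within + np_cat_mean)"
        " + ref_price + discount_rate + C(brand)",
        data=df, groups=df["category"], re_formula="~log_fop")
    print(sales.fit(reml=True).summary())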
Are there socioeconomic differences in food purchases in response to price promotions?
To address this question, we constructed 3 subsamples that focused on purchases made by 1) high-socioeconomic status (SES) households, 2) middle-SES households, and 3) low-SES households and repeated the previous analysis (on the basis of NP scores) for each group. The SES of the household was defined by the occupation of the household head using the United Kingdom Registrar General's classification [high: higher managerial and professional; middle: white collar and skilled manual; and low: semiskilled and unskilled manual (28,29)]. Other socioeconomic indicators such as household income and education were not used because of a substantial number of nonresponses. Observations with missing information (such as the NP score) were excluded from the analysis (6 cases); no imputation was made for missing variables. Stata MP Version 12 software (StataCorp) was used for all analyses. Table 1 shows characteristics of participating households (main shoppers) by socioeconomic group. The total number of households was 26,986, with 5667 households in the high-SES group, 14,870 households in the middle-SES group, and 6449 households in the low-SES group. There were gradients in household income, education level of the main shopper, and BMI. Although all households were included to calculate product sales, note that there were substantial item nonresponses in the information on the above characteristics (household income, education, and BMI).
RESULTS
The mean (±SD) NP score for food products at a product-category level was 4.54 ± 6.96 and ranged from the healthiest at approximately -10 (fruit and vegetables) to the least healthy at ~22 (butter, margarine, and chocolate confectionery). At the individual product level, the mean was 3.72 ± 9.17. The mean frequency of promotions (i.e., number of branches that ran a promotion on a product in a given week) for each product was 481.4 ± 735.8 branches/wk. Table 2 presents descriptive statistics for the number of packs of each product purchased per 1000 households in 2010 separately by NP score and sales made on and off promotion. In this table, food categories that scored ≥4 in the category mean NP score and beverages that scored ≥1 were classified as less-healthy categories and, otherwise, as healthier ones. Products that scored above the median NP score within each category were classified as less-healthy versions and, otherwise, as healthier ones. In total, 6788 products were in healthier categories, and 4535 products were in less-healthy categories. Results were qualitatively similar when a different cutoff of healthiness was used (see Supplemental Table 2 for sensitivity checks).
As for healthier food categories, Table 2 suggests that higher-SES groups bought more products from healthier versions of healthier food categories than did lower-SES groups for purchases made both off and on promotion. In terms of less-healthy food categories, socioeconomic differences were predominantly shown in off-promotion sales; the sales of less-healthy foods off promotion were significantly greater for the lower-SES group than for the highest-SES group.
Frequency of promotions by NP score
Figure 1 summarizes the estimation results of the hierarchical model (see Supplemental Table 3 for complete results and Supplemental Table 4 for sensitivity checks). Figure 1A illustrates the estimated frequency of promotions by food category and shows that the frequency of promotions varied substantially across categories. The estimated mean of the log frequency was 7.25 (i.e., 1405.3 branches running promotions per product per week), with an SD across categories of 1.40 (Supplemental Table 3). The straight line in the graph shows the overall relation between promotions and the category-level NP score (i.e., the between-category relation). The slope coefficient was -0.022 (P = 0.272), which was small and statistically indistinguishable from zero, implying that promotions were equally likely in healthier and less-healthy food categories. Figure 1B shows the relation between NP score and promotions within each category (i.e., the within-category relation). Gradients representing the association between frequency of promotions and NP scores within each category were plotted against the mean NP score of the category. A positive gradient implied that promotions were more frequent in less-healthy than in healthier versions within a given category. The horizontal line and associated dotted lines show the overall (average) gradient, which was 0.0165 (P = 0.462) and insignificant. Therefore, by looking at the within-food category variation, promotions were overall equally likely on healthier and less-healthy versions of the foods. At the individual category level, gradients were generally small and insignificant. However, there were a few cases in which price promotions were skewed toward less-healthy versions (e.g., cakes, cheese, and sauces; Supplemental Table 5).
[Table 1 footnotes: Data are from the Kantar WorldPanel Survey 2010; households: n = 26,986. There were substantial numbers of item nonresponses in the following variables: ethnicity (1513 cases), household income (6512 cases), education (1617 cases), and BMI (14,978 cases); the information for these variables is for reference only. SES, socioeconomic status. Values are mean ± SD. Household income was adjusted for household size and composition.]
The overall result was also replicated when applied separately to the following 2 specific types of promotion: simple price reductions and multibuys (e.g., "buy-one-get-one-free" and "X for $Y"; Supplemental Tables 6 and 7). However, promotions on less-healthy versions of foods were characterized by a larger discount rate than were those for healthier foods (gradient: 0.00163; P = 0.058; Supplemental Table 8).
Differential consumer responses to promotions by NP score
Figure 2 summarizes key results of the regression analysis regarding the association between unit sales and the frequency of promotions by NP score (see Supplemental Table 9 and Supplemental Figure 1 for complete results and additional technical details).
A 10% increase in the frequency of promotions was associated with an increase in sales of 27.3% (95% CI: 20.6%, 33.9%; P < 0.01) for the whole population (average effect). The sales uplift from price promotions was significantly larger for less-healthy than for healthier food categories. An SD point increase (6.96 points) in the category mean NP score (implying that the food category became less healthy) was associated with, all else being equal, an additional 7.7-percentage point increase in sales (P < 0.01; Supplemental Table 9) (i.e., the overall effect increased from 27.3% to 35.0%).
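As a worked check on these figures, the reported between-category effect can be read back as a linear function of the category mean NP score:

    # Reported between-category effect of a 10% rise in promotion
    # frequency: 27.3% average sales uplift, plus 7.7 percentage
    # points per SD (6.96 points) of the category mean NP score.
    avg_uplift, per_sd = 27.3, 7.7
    for sds in (-1, 0, 1):
        print(f"category NP at {sds:+d} SD: {avg_uplift + per_sd * sds:.1f}% uplift")
    # +1 SD (less-healthy categories) gives 35.0%, as reported.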
The sales uplift was also shown within each SES group. However, the magnitude of the sales uplift was greater in higher- than in lower-SES groups for both healthier and less-healthy food categories (Supplemental Table 9 and Supplemental Table 10). Moreover, SES differences in the sales uplift were more marked in healthier than in less-healthy food categories; for less-healthy food categories, the sales uplift for the high-, middle-, and low-SES groups was 39.5%, 35.1%, and 31.5%, respectively, whereas in healthier food categories, it was 29.7%, 21.1%, and 14.7%, respectively.
By contrast, within a given category, the NP score of the product did not uniformly or significantly moderate the effect of promotions, although for some categories, a moderation effect did exist (see Supplemental Table 9 and Supplemental Table 11 for separate regressions by product category).
Price elasticity
Effects of the reference price (or nonpromotional price) and the price discount associated with a price promotion were also estimated as control variables (Supplemental Table 9). The elasticity of the reference price within category was -0.64 (95% CI: -0.67, -0.61; P < 0.01), which implied that a 1% increase in the reference price led to a decrease in sales of 0.64% within a given category. The elasticity was larger for lower- than for higher-SES groups; the elasticity equaled -0.47 (95% CI: -0.51, -0.43; P < 0.01) for the high-SES group, -0.63 (95% CI: -0.66, -0.60; P < 0.01) for the middle-SES group, and -0.82 (95% CI: -0.86, -0.78; P < 0.01) for the low-SES group. The within-category elasticity of the price discount was 1.44 (95% CI: 1.32, 1.55; P < 0.01); a 1% increase in the depth of price discount led to a sales uplift of 1.44% within a given category. The effect was similar in size across SES groups, at 1.44 (95% CI: 1.31, 1.57; P < 0.01) for the high-SES group, 1.44 (95% CI: 1.32, 1.56; P < 0.01) for the middle-SES group, and 1.43 (95% CI: 1.29, 1.58; P < 0.01) for the low-SES group. Our results for the price elasticity between categories were nonsignificant for both the reference price and the price discount.
[Table 2 footnotes: Food categories that scored ≥4 on the nutrient profiling score and beverages that scored ≥1 were grouped in the less-healthy category and, otherwise, in the healthier category. Values are mean ± SD.]
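For a worked example of what these within-category elasticities imply (the 5% price change below is hypothetical):

    # Within-category elasticities reported in the text.
    ref_price_elasticity = {"high": -0.47, "middle": -0.63, "low": -0.82}
    discount_elasticity = 1.44  # similar across SES groups

    price_rise = 5.0  # a hypothetical 5% rise in the reference price
    for ses, e in ref_price_elasticity.items():
        print(f"{ses}-SES: predicted sales change {e * price_rise:+.1f}%")
    # A 5% deeper promotional discount lifts within-category sales by
    # about 1.44 * 5 = 7.2%.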
DISCUSSION
Despite earlier anecdotal evidence to the contrary, we showed that, overall, less-healthy items were no more frequently promoted than were healthier ones. However, after controlling for the price, price discount, and brand-specific effect, the sales uplift associated with price promotions was larger in less-healthy than in healthier food categories, which confirmed our main hypothesis. Products from less-healthy food categories are often nonperishable, whereas those from healthier food categories (in particular fruit and vegetables) are perishable. Therefore, stockpiling during a promotion may be more likely to happen for less-healthy food categories, which could explain the finding.
[FIGURE 2 caption: Effects of price promotions on sales by category-level NP score and socioeconomic group. Effects represented were predicted from the hierarchical regression analysis (see the regression model in the Analytic framework section and Supplemental Table 9). The gray bar shows the average percentage increase in sales when the frequency of promotions was raised by 10% [the bar corresponds to 10 times the coefficient of log(FoP)], presented separately by socioeconomic group. Black and white bars show effects on less-healthy and healthier food categories, respectively, in which the category-level NP score was greater or smaller, respectively, than the mean by 1 SD point, whereas other factors remained fixed. The effect size corresponds to the coefficient of log(FoP) × NP multiplied by the SD. The figure shows the between-category effect only. Within-category effects were indistinguishable from zero for all groups (Supplemental Table 9) and, therefore, are not visualized. See Supplemental Data sections 3 and 6 for additional technical details. FoP, frequency of promotion; NP, nutrient profiling.]
[FIGURE 1 caption (fragment): ... (see Supplemental Table 3 for complete regression results). For both panels A and B, 95% CIs of predictions are presented. The coefficient of the slope in panel A was -0.022 (P = 0.272; z test; n = 11,323; Supplemental Table 3). A positive gradient in panel B meant that promotions were more frequent in less-healthy than in healthier versions of foods within the category. The horizontal line and associated dashed lines show the overall size of effects with 95% CIs (0.0168; P = 0.462; z test; n = 11,323). NP, nutrient profiling.]
Higher-SES groups were more responsive than lower-SES groups to promotions for both healthier and less-healthy foods. The reasons for this could not be determined from these data but may be that the ability to respond to promotions is a function of shopping-related cognitive abilities, information, and skills [all of which have been shown to correlate with SES (30)] rather than the need to make monetary savings (31). In addition, making the most effective use of promotions may involve stockpiling items while they are on promotion, thereby requiring financial and spatial resources, which may also have contributed to the observed social patterning in the use of promotions.
These SES differences in the responsiveness to promotions were more pronounced in healthier than in less-healthy categories (Figure 2). Table 2 also revealed that there was a significant SES gap in the sales of healthier foods on promotion, whereas there was no such gap in the sales of less-healthy foods on promotion. These results suggested that the socioeconomic gap in on-promotion sales was driven by differences in purchases of healthier rather than less-healthy foods.
There was also an SES gap in the sales of both healthier and less-healthy foods that were made off promotion (Table 2). This result was broadly in line with the SES patterning in terms of overall purchasing shown in the previous literature (14,32,33). Hence, SES differences in off-promotion sales, which made up the larger proportion of purchases, laid the foundation of the SES gap in food purchasing, and this gap was exacerbated by promotional activities. Furthermore, elasticities of both the reference price and the price discount were larger for the low- than for the high-SES group.
Strengths and limitations
To our knowledge, the current study provides the first population-level quantitative assessment of the relation between the frequency of price promotions and healthiness of food purchases in the main supermarket chains in the United Kingdom. The analysis involved a considerably larger sample size than in existing studies on price promotions. Our study focused on temporary price changes that often augment the prominence of items in the store through tags and placement. Hence, the research usefully complemented existing studies on the role of price in healthy food purchasing more generally, which have had implications mainly for taxation or subsidization (i.e., permanent price changes).
To our knowledge, we also provided the first systematic assessment of a channel through which social disparity of food purchases and intake may occur. Although we and others previously showed social patterning of diet quality (proxied, for instance, by the proportion of less-healthy foods in total intake of energy) (14,(32)(33)(34), the underlying mechanisms as well as potential policy implications have rarely been tested, except for studies on food price (35)(36)(37)(38)(39)(40). Although we had hypothesized that price promotions on less-healthy foods may be a plausible mechanism, our findings led us to reject this hypothesis.
In interpreting the findings, several limitations need to be borne in mind. First, our measure of the frequency of promotions was inevitably limited. The construction of the variable relied on the national pricing policy operated in leading United Kingdom supermarkets. However, the policy is known to be imperfectly adhered to in places characterized by a highly competitive market, such as in central London (17). Moreover, because the original data were purchase based, we did not cover all products that were available in the market, which was a feature that could have biased our estimate of the distribution of the availability of price promotions.
Second, the current study highlighted differential responses to price promotions by social groups, with the assumption that different social groups were exposed to the same promotional environment at the national level. However, United Kingdom supermarket chains have different main target consumers and operate in different parts of the country, and hence, the promotional environment may be segmented by social groups. Our sensitivity analyses that looked at shoppers' exposure to promotions (by taking into account the usual shopping environment for different socioeconomic groups) showed similar socioeconomic patterning in responses to promotions (Supplemental Table 4, Supplemental Table 12). Moreover, we did not address potential differences in purchasing across social groups within a given store.
Third, we restricted our sample to sales data from the 11 main parties of the United Kingdom grocery retail market (which account for ~70% of the total grocery market share), thereby excluding relatively smaller grocery chains and privately owned stores. Purchasing patterns of consumers as well as marketing strategies in those stores may have been different from those of the main parties. Therefore, our findings may not be entirely generalizable.
Implications for future research
Our measure of the frequency of price promotions provided an indication of variability of price promotions for individual products at the national level, but more-refined ways to measure the frequency of price promotions (e.g., via direct routine observations) should be developed in future research (41). Moreover, retailers and manufacturers have their own target customers. Hence, their strategies of operationalizing price promotions differ according to the social characteristics (including income and food goals) of their target consumers (15). Future studies should fully take into account the potentially different food environments provided by different retailers.
In the current study, we examined overall differences in responses to price promotions by SES groups at the population level. Future research could investigate the issue at the store level so that consumers' responses to price promotions are analyzed within the same marketing strategy and variety of food products.
Detailed analysis of various types of price promotion (e.g., simple price reductions and multibuys) could be valuable. Recent evidence showed that restricting multibuys has failed to change the overall volume of alcohol purchased (42). However, it would be worthwhile to investigate if this finding applies to a broader set of (healthier and less-healthy) dietary categories.
Finally, the effects of price promotion on the whole food basket purchased (rather than individual products) should be evaluated. A sizeable proportion of total purchasing involved foods on promotion (Table 2), which made it at least conceivable that price promotions could have affected the overall food basket. Future analyses should involve a shopper level analysis of purchasing and diet quality (11,43) in response to price promotion.
Implications for policy
Our findings suggest that policies that restrict price promotions on less-healthy food categories could help achieve healthier nutrient profiles of shopping baskets for the population on average, which is likely to lead to improvements in the nutritional value of food consumed. However, we did not find evidence that restricting promotions on less-healthy versions of products within a given category would achieve a similar benefit.
In conclusion, our results imply an intriguing effect in relation to socioeconomic inequality. The SES difference in the responsiveness to promotions was more marked in healthier than in less-healthy food categories. Moreover, the SES gap in the sales of less-healthy foods was predominantly driven by differences in off-promotion sales. Hence, restricting price promotions on less-healthy food categories would be unlikely to reduce the SES gap in the healthiness of food purchasing. The quest continues for measures that improve diet quality for the population as a whole while simultaneously decreasing health inequalities.
The authors' responsibilities were as follows-RN and MS: conducted the research; RN: performed the statistical analysis; RN and MS: wrote the manuscript; TMM: had primary responsibility for the final content of the manuscript; and all authors: designed the research, critically revised the manuscript, and read and approved the final manuscript. None of the authors reported a conflict of interest related to the study. | 2018-04-03T00:33:49.555Z | 2015-02-11T00:00:00.000 | {
"year": 2015,
"sha1": "252717249af606c112167e351a408def8f6b885a",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/ajcn/article-pdf/101/4/808/23755582/ajcn094227.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "140416bdf800d21ab9d370b7dc447af40aaad658",
"s2fieldsofstudy": [
"Economics",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9282808 | pes2o/s2orc | v3-fos-license | Pretreatment with β-Boswellic Acid Improves Blood Stasis Induced Endothelial Dysfunction: Role of eNOS Activation
Vascular endothelial cells play an important role in modulating antithrombotic activity and maintaining normal vascular function by secreting many active substances. β-boswellic acid (β-BA) is an active triterpenoid compound from the extract of Boswellia serrata. In this study, it is demonstrated that β-BA ameliorates plasma coagulation parameters, protects the endothelium from blood stasis induced injury, and prevents blood stasis induced impairment of endothelium-dependent vasodilatation. Moreover, it is found that β-BA significantly increases nitric oxide (NO) and cyclic guanosine 3',5'-monophosphate (cGMP) levels in carotid aortas of blood stasis rats. To simulate blood stasis-like conditions in vitro, human umbilical vein endothelial cells (HUVECs) were exposed to transient oxygen and glucose deprivation (OGD). Treatment with β-BA significantly increased the intracellular NO level. Western blot and immunofluorescence as well as immunohistochemistry reveal that β-BA increases phosphorylation of endothelial nitric oxide synthase (eNOS) at Ser1177. In addition, β-BA mediated endothelium-dependent vasodilatation can be markedly blocked by the eNOS inhibitor L-NAME in blood stasis rats. In OGD-treated HUVECs, the protective effect of β-BA is attenuated by knockdown of eNOS. In conclusion, the above findings provide convincing evidence for the protective effects of β-BA on blood stasis induced endothelial dysfunction via the eNOS signaling pathway.
Effects of β-BA on in vitro endothelial function. Mesenteric artery rings from blood stasis model animals showed weakened endothelium-dependent vasodilator responses to acetylcholine after stimulation with phenylephrine compared with control rings 23. ACh-mediated vessel relaxation was significantly improved in the β-BA (100 mg/kg/d) and β-BA (200 mg/kg/d) groups compared with blood stasis rats (Fig. 1a). β-BA thus prevented the blood stasis induced impairment of endothelium-dependent vasodilatation. No differences were found among the experimental groups in the concentration-contractile response induced by phenylephrine in aortic rings without endothelium (Fig. 1b). However, the vasoconstrictor response to phenylephrine in intact mesenteric artery rings was increased by β-BA (Fig. 1a).
Effects of β-BA on vascular endothelium of carotid aortas and HUVECs. H&E staining revealed that the blood vessel endothelium in the normal group was intact, whereas it was not intact in the model group; β-BA protected the endothelium from this injury (Fig. 2). In addition, counts of circulating endothelial cells (CECs) were performed. The results clearly showed that β-BA treatment significantly diminished the CEC count in blood compared to the model group (see Supplementary Fig. S1 online). The levels of NO and cGMP were determined in rats' carotid aortas. β-BA significantly increased the production of both NO and cGMP in a dose-dependent manner (Fig. 3a,b). Moreover, NO production was directly investigated in cultured HUVECs with the NO indicator DAF-FM DA (Fig. 3c,d). Application of β-BA triggered a progressive rise in intracellular NO production in cultured HUVECs, as reflected by the increase in fluorescence intensity. The present results strongly indicate that β-BA can dose-dependently elevate NO production in HUVECs.
β-BA enhanced the phosphorylation of p-eNOS (Ser1177) in carotid aortas and HUVECs. As a key regulator of NO production, eNOS was investigated in terms of activity. Immunohistochemical analysis showed the staining intensity of p-eNOS (Ser1177) in the endothelium of the carotid aorta. A significant reduction of p-eNOS (Ser1177) expression was displayed in the outer vascular endothelial cells in the model group, whereas β-BA markedly increased this expression (Fig. 4a). Firstly, it was demonstrated that 6 h of OGD is enough to cause endothelial cell barrier dysfunction in HUVEC cells (see Supplementary Fig. S2 online).
[Figure 3 caption: After oral administration of β-BA (100 mg/kg/d or 200 mg/kg/d) for 7 times, NO (a) and cGMP (b) production in carotid aortas of rats was examined by Griess reaction and enzyme-linked immunosorbent assay. All data represent the results (Mean ± SD, n = 8) (#P < 0.05, ##P < 0.01 versus the control group; *P < 0.05, **P < 0.01 versus the model group). (c,d) HUVECs were pretreated with β-BA for 24 h before being subjected to 6 h OGD, then incubated with β-BA for an additional 24 h. NO production was measured by DAF-FM DA. The amount of NO was evaluated by measuring the fluorescence intensity excited at 495 nm and emitted at 515 nm. Representative images were taken by the confocal microscope (bar: 20 μm). All data represent the results (Mean ± SD) of triplicate independent experiments. ##p < 0.01, #p < 0.05 versus the control group; **p < 0.01, *p < 0.05 versus the OGD group.]
Phosphorylation of eNOS is essential for β-BA mediated protection of endothelium function.
In the presence of L-NAME, the relaxation observed in response to β-BA was significantly smaller than in the control and β-BA (200 mg/kg) groups (Fig. 5a). Pretreatment with the NO synthase inhibitor L-NAME reduced basal NO formation in the rats of the model group; β-BA treatment showed a better contractile response in the aorta compared with the model group, suggesting higher NO formation in the vessel. eNOS phosphorylation and cell viability were increased by β-BA under OGD treatment in HUVECs (Fig. 5b), and the protective effect of β-BA was attenuated by knockdown of eNOS (Fig. 5b) (P < 0.01). All the aforementioned results indicate that eNOS is essential for β-BA mediated protection of endothelium function.
Discussion
Boswellia serrata is one such plant; its gum resin is used in Indian Ayurvedic and folk medicine to treat blood disorders, curtail inflammatory diseases like rheumatoid arthritis, and promote cardiac health 24,25.
The present study aims to investigate the mechanism by which β-BA, an active triterpenoid compound from the extract of Boswellia serrata, protects endothelial function from blood stasis. Here, β-BA's effective protection of endothelial function against blood stasis insult is explained for the first time. The blood stasis model was built during the time interval between two injections of adrenaline hydrochloride into rats placed in ice-cold (0-2 °C) water. The data on blood coagulation parameters suggested that the injection of adrenaline hydrochloride and the exposure to ice-cold water could induce blood stasis 26,27. The possibility of causing endothelial cell barrier dysfunction 28 by OGD was demonstrated. HUVECs are suitable for studying endothelial barrier function because of their defined tight junction proteins 29,30. Thus, in vitro endothelial barrier breakdown models were established in endothelial cell lines under OGD conditions. Since 6 h of OGD destroys endothelial barrier function 28 according to the results of the relevant experiments, 6 h OGD was selected to build the endothelial barrier disruption models (Fig. 4).
[Figure 5 caption: (a) ... pre-contracted aortic rings. β-BA induced relaxation could be significantly attenuated when the endothelium-intact specimens were exposed to L-NAME (10^-4 M). The values are expressed as Means ± SD (n = 8). Experimental groups: control (■), model (®), model + β-BA 200 mg/kg (▲), model + β-BA 200 mg/kg + L-NAME (). ##p < 0.01, #p < 0.05 versus the control group; **p < 0.01, *p < 0.05 versus the model group. (b) si-con and si-eNOS were transfected into HUVECs for 12 h; the cells were then pretreated with β-BA for 24 h before being subjected to 6 h OGD and incubated with β-BA for an additional 24 h. Expression of eNOS was determined. Cell viability was measured with the MTT assay. The data represent the results (Mean ± SD) of triplicate independent experiments. ##p < 0.01 versus the control group; **p < 0.01 versus the OGD group; ※※p < 0.01 versus the β-BA group.]
As the key element in the interaction between blood flow and blood vessels, endothelial cells modulate a number of functions, such as vascular tension, platelet activity, the tendency to thrombosis, and fibrinolysis 31. After staining with H&E, the microscopic structures of rats' carotid aortas were observed; vascular endothelial cells in all treatment groups were protected (Fig. 2). Endothelial dysfunction, characterized by a decrease in the bioavailability of vasodilators such as NO, as well as vascular complications, has been observed in individuals 32. In particular, as a cerebrovascular protector, endothelium-derived NO is considered an important endogenous mediator of vascular homeostasis and blood flow 33. The loss of endothelial NO impairs vascular function, partially by promoting vasoconstriction, platelet aggregation, smooth muscle cell proliferation, and leukocyte adhesion 34. NO and cGMP jointly comprise a wide-ranging signal transduction system, considering the multiple roles of cGMP in physiological regulation, including smooth muscle relaxation, visual transduction, intestinal ion transport, and platelet function 35. For example, increased cGMP in vascular smooth muscle cells underlying the endothelium activates cGMP-dependent kinases that decrease intracellular calcium, producing relaxation 36. β-BA treatment significantly increased NO and cGMP levels both in carotid aortas of blood stasis rats and in OGD-treated HUVECs (Fig. 3).
Increased cGMP in platelets, through the action of NO released into the blood vessel lumen, can decrease platelet activation and adhesion to the surface of the endothelium 37. NO can also regulate the cellular environment within the vessel wall by inhibiting the activity of growth factors released from cells within the vessel wall and from platelets on the endothelial surface 38. Both water and hydroalcoholic extracts of Boswellia serrata's gum resin prolong PT and APTT coagulation times 39. Extracts of Boswellia serrata's gum resin can be considered an effective antiatherogenic resource for preventing coronary artery diseases and may serve as an ideal source for isolating lead compounds for antiplatelet and anticoagulant therapeutics 39. All this evidence raises the necessity of investigating the effects of β-BA on blood coagulation. As the results show, β-BA can significantly prolong TT, PT, and APTT, and decrease FIB (Table 1). PT is used to evaluate the overall efficiency of the extrinsic clotting pathway, and a prolonged PT indicates a deficiency in coagulation factors V, VII, and X. On the other hand, APTT indicates the intrinsic clotting activity, and a prolonged APTT usually represents a deficiency in factors VIII, IX, XI, XII and von Willebrand factor 40. According to the results, β-BA exerts anticoagulant effects through both the extrinsic and intrinsic pathways. Additionally, it has been reported that β-BA induces the release of arachidonic acid from platelets 41, which in turn can induce endothelin-1 (ET-1) expression in endothelial cells 42; ET-1 has been identified as a key player in endothelial dysfunction. Pretreatment with β-BA resulted in a significant decrease in the blood ET-1 level compared to the model group (see Supplementary Fig. S3 online), which provides better insight into β-BA's protective mechanism.
In endothelial cells, NO is synthesized from the substrate L-arginine via eNOS, and phosphorylation of a specific serine residue (Ser1177) in eNOS is significant for its enzymatic activity 43. eNOS is the predominant isoform of NO synthase in the vasculature and catalyzes the generation of NO 44. A hallmark of a dysfunctional endothelium is an impaired action of the enzyme eNOS 45. Western blot and immunofluorescence as well as immunohistochemistry revealed that β-BA could increase phosphorylation of eNOS at Ser1177 (Fig. 4). Endothelial dysfunction is mainly demonstrated by a reduction of NO bioavailability 46. In isolated aortic rings, acetylcholine (ACh) induced endothelium-dependent relaxations, and the relaxations were abolished by the eNOS inhibitor L-NAME 47. NO formation was markedly reduced by L-NAME in blood stasis rats. β-BA treatment showed a better contractile response in the aorta compared with the model group, suggesting higher NO formation in the vessel (Fig. 5a). Pretreatment with β-BA before OGD damage significantly increased cell viability (Fig. 5b). However, the protective effect was reduced by knockdown of eNOS, which suggests that eNOS is required for β-BA mediated endothelial protection.
In blood stasis rats, β-BA mediated endothelium-dependent vasodilatation was markedly blocked by the eNOS inhibitor L-NAME, and in OGD-treated HUVECs the protective effect of β-BA was attenuated by knockdown of eNOS. In conclusion, the findings convincingly support the protective effects of β-BA on blood stasis induced endothelial dysfunction via the eNOS signaling pathway.
In summary, the present study elucidates the cellular and molecular mechanisms of β-BA in blood vessels and human endothelial cells. Specifically, it is demonstrated for the first time that β-BA can attenuate endothelial cell injury in a blood stasis model and protect HUVECs against OGD-induced cell death by activating the eNOS/NO/cGMP pathway. Collectively, these findings reveal the unexplored potential of β-BA for the treatment of blood stasis damage and show that pharmacological activation of the NO/cGMP pathway can ensure endothelial protection.
Animal Care. β-BA (purity > 98%) was purchased from the Chinese National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China).
Methods
The Blood Stasis Syndrome model was produced as described previously 48. Briefly, rats were kept in plastic cages at 22 ± 2 °C with free access to pellet food and water and on a 12 h light/dark cycle. Rats were randomly divided into four groups (control, model, model + β-BA 100 mg/kg, and model + β-BA 200 mg/kg) with eight animals in each. Rats in the control and model groups were given blank solvent as the vehicle at the same volume. In the model + β-BA (100 mg/kg) group, rats were given 100 mg/kg β-BA; in the model + β-BA (200 mg/kg) group, rats were given 200 mg/kg β-BA. All treatments were performed by gavage and were administered seven times with an interval of 12 h. After the fifth administration, blood stasis was established in all rats except those in the control group by placing them in ice-cold water (0-2 °C) for 5 min during the interval between two injections of adrenaline hydrochloride (0.8 mg/kg). Rats were fasted overnight, and administration continued after the model was established. Blood samples and carotid arteries were collected 30 min after the last administration on the following day.
Cell Viability Assay. Cell viability was determined by the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] (Jiancheng, Nanjing, China) assay. Cells were seeded at a density of 1 × 10^4 cells/well in 96-well cell culture plates. After treatment, 20 μl of the MTT solution (5 mg/ml) was added to each well (0.5 mg/ml final concentration in medium), and the plates were incubated for an additional 4 h at 37 °C. Afterward, the medium was removed and the metabolized MTT was solubilized with 150 μl DMSO. The absorbance of the solubilized blue formazan crystals was read at 490 nm. Percent viability was defined as the relative absorbance of treated versus untreated control cells.
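The percent-viability calculation described above is simple arithmetic; a minimal Python sketch, with hypothetical absorbance readings:

    import numpy as np

    def percent_viability(a490_treated, a490_control):
        """Viability as relative absorbance of treated vs. untreated cells."""
        treated = np.asarray(a490_treated, dtype=float)
        control = np.mean(np.asarray(a490_control, dtype=float))
        return 100.0 * treated / control

    # Hypothetical triplicate readings at 490 nm:
    print(percent_viability([0.62, 0.58, 0.60], [0.81, 0.79, 0.83]))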
Oxygen glucose deprivation (OGD). OGD was achieved using previously published methods 49. Briefly, 24 h after HUVECs were seeded in culture plates, the culture medium was changed to glucose-free DMEM containing either β-BA at different final concentrations in 0.2% (w/v) DMSO (β-BA-treated groups) or 0.2% DMSO alone (model-treated groups) for 24 h. Then cells were placed into an anaerobic chamber that was flushed with 5% CO2 and 95% N2 (v/v). The cell cultures within the anaerobic chamber were kept in a humidified incubator at 37 °C for various time intervals in different experiments. To terminate the OGD, the culture medium was changed to normal medium containing the same concentration of β-BA in DMSO or DMSO alone before returning to normoxic incubating conditions. In the control groups, the cell cultures were subjected to the same experimental procedures with vehicle only and without exposure to glucose-free DMEM or anoxia.
Plasma anticoagulation assay. Thrombin time (TT), prothrombin time (PT), activated partial thromboplastin time (APTT), and fibrinogen content (FIB) were examined with commercial kits following the manufacturer's instructions on a coagulometer (Jiancheng, Nanjing, China). TT was determined by incubating 50 μl plasma solution for 3 min at 37 °C, followed by addition of 100 μl thrombin agent. PT was determined by incubating 50 μl plasma solution for 3 min at 37 °C, followed by addition of 100 μl thromboplastin agent. APTT was determined by incubating 50 μl plasma with 50 μl APTT activating agent for 3 min at 37 °C, followed by addition of 50 μl CaCl2. FIB was determined by incubating 10 μl plasma with 90 μl imidazole buffer for 3 min at 37 °C, followed by addition of 50 μl FIB agent. The anticoagulation activity was assessed from the prolongation of the plasma clotting times (TT, PT, and APTT) and the reduction of FIB content.
Vascular functional studies. One-millimeter ring segments of the mesenteric artery were dissected and mounted in individual organ chambers filled with Krebs buffer (composition in mM: NaCl 118, KCl 4.75, NaHCO3 25, MgSO4 1.2, CaCl2 2, KH2PO4 1.2, glucose 11). The Krebs solution was continuously gassed with a 95% O2 and 5% CO2 mixture and kept at 37 °C. Rings were stretched to 2 g of resting tension by means of two L-shaped stainless-steel wires, which were inserted into the lumen and attached to the chamber and to an isometric force-displacement transducer, as previously described 50. Rings were equilibrated for 60 to 90 min, and during this period, tissues were restretched and washed every 30 min with warm Krebs solution. Endothelium-dependent relaxation was assessed in response to increasing doses of acetylcholine (ACh) after precontraction with 10^-6 M phenylephrine. To evaluate the formation of basal NO, the contraction induced by 10^-6 M phenylephrine was obtained in rings incubated for 30 min with the NOS inhibitor NG-nitro-L-arginine methyl ester (L-NAME, 10^-4 M).
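Concentration-relaxation data of this kind are often summarized by fitting a sigmoidal (Hill-type) curve to estimate maximal relaxation and EC50. The paper does not state its curve-fitting procedure, so the following Python sketch, with invented data points, is purely illustrative:

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(log_conc, bottom, top, log_ec50, slope):
        """Four-parameter logistic curve in log10(concentration)."""
        return bottom + (top - bottom) / (1 + 10 ** ((log_ec50 - log_conc) * slope))

    # Invented ACh concentrations (M) and % relaxation of
    # phenylephrine-precontracted rings:
    log_c = np.log10([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
    relax = np.array([5.0, 18.0, 46.0, 72.0, 80.0])

    params, _ = curve_fit(hill, log_c, relax, p0=[0.0, 80.0, -7.0, 1.0])
    print("estimated log10 EC50:", params[2])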
Measurement of cGMP in carotid arteries.
At the end of the experimental period, the carotid arteries were immediately isolated from the rats and cut into segments of about 20 mg/tissue. The homogenate was centrifuged at 10,000 × g for 5 min, and the supernatant was removed and extracted three times with 1.5 ml of water-saturated diethyl ether. cGMP content was measured by equilibrated radioimmunoassay as described previously 51. In brief, standards or samples were introduced in a final volume of 100 μl of 50 mM sodium acetate buffer (pH 4.8). Then, 100 μl diluted cGMP antiserum and iodinated cGMP were added in succession and incubated for 24 h at 4 °C. The bound form was separated from the free form by charcoal suspension. Results were expressed as nanomoles of cGMP generated per milligram of protein (nmol/mg of protein).
Measurement of NO in carotid arteries.
An NO Assay Kit (Jiancheng, Nanjing, China) was used to measure newly synthesized NO from L-arginine by the action of eNOS in the presence of essential cofactors, according to the manufacturer's instructions. The final products of the reaction were nitrates, measured by a colorimetric method (540 nm), which indirectly represented eNOS activity. Nitrate concentrations were determined via the standard curve.
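Converting sample absorbances to nitrate concentrations via the standard curve amounts to a linear fit and its inversion; a sketch with hypothetical standards and readings:

    import numpy as np

    # Hypothetical nitrate standards (umol/L) and their A540 readings.
    std_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
    std_abs = np.array([0.02, 0.11, 0.20, 0.39, 0.78])

    slope, intercept = np.polyfit(std_conc, std_abs, 1)

    def nitrate_from_a540(a540):
        """Invert the linear standard curve: concentration from absorbance."""
        return (np.asarray(a540, dtype=float) - intercept) / slope

    print(nitrate_from_a540([0.15, 0.33]))  # sample concentrations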
Measurement of NO in HUVECs.
HUVECs (5 × 10^5 cells/well) in 6-well plates were incubated with or without various concentrations of β-BA. The stimulated NO production was confirmed by laser confocal fluorescence microscopy using a specific dye: 4-amino-5-methylamino-2',7'-difluorofluorescein diacetate (DAF-FM DA) (Beyotime, Haimen, China). Optical density was read in a microplate reader at 540 nm. Each experiment was performed in triplicate. Micrographs were taken with the confocal microscope.
Histological and morphometric evaluations. Rats' carotid aortas were isolated, fixed in formalin (10%), processed for paraffin sectioning (3 μm thick), and stained with hematoxylin-eosin (H&E). The lumen of the blood vessels, the vascular walls, and the vascular endothelial cells of the carotid aortas were observed with a microscope.
Immunohistochemical staining of rats' carotid aorta endothelium. At the end of the experiments, rats' carotid aortas were sampled and fixed in 4% phosphate-buffered formaldehyde for 2 to 3 days 20. After paraffin embedding, tissue blocks were cut into 4 μm slices, and tissue sections collected on poly-L-lysine-coated glass slides were treated by microwave for antigen unmasking. An anti-eNOS (phospho Ser1177) antibody (Abcam, Cambridge, UK) was used as the primary antibody at a dilution of 1:100 and incubated overnight at 4 °C, followed by incubation with the appropriate secondary horseradish peroxidase-labeled antibody in accordance with the instructions of the LSAB + System HRP kit (DAKO, Hamburg, Germany) and development using DAB as chromogen. The sections were examined by light microscopy (Zeiss Axioscop 40, Jena, Germany).
Immunofluorescence. Cells were grown in six-well slide chambers. After two washes with PBS, cells were fixed in 100% ethanol for 30 min. After 30 min of blocking of nonspecific binding with PBS containing 3% BSA, cells were incubated for 2 h at room temperature with a 1:100 dilution of the anti-eNOS (phospho Ser1177) antibody. After careful washing, cells were further incubated for 1 h at room temperature with goat anti-rabbit IgG conjugated to Dylight 549 (diluted 1:100). After washing with PBS, nuclei were stained for 2 min at room temperature with DAPI (5 μg/ml). Finally, cells were washed twice with PBS, mounted in aqueous mounting medium, and covered with coverslips. Specimens were evaluated with a microscope, and images were captured using a Spot charge-coupled device camera system.
Western blot analysis.
For western blotting, equal amounts of protein lysates were separated using 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The gels were blotted onto a nitrocellulose membrane and incubated with primary antibodies against phosphorylated eNOS (Ser1177) and total eNOS (Abcam, Cambridge, UK). Binding of the primary antibody was detected with a secondary anti-rabbit antibody and visualized by the enhanced chemiluminescence method. β-actin was used as a loading control.
Measurement of plasma endothelin-1 (ET-1) levels. Plasma ET-1 was examined by an ELISA (enzyme-linked immunosorbent assay) kit (Abcam, Cambridge, UK). Blood samples were collected into plastic tubes containing EDTA (ethylenediaminetetraacetic acid) and centrifuged at 3000 × g for 15 min, and the supernatant was assayed for ET-1 protein concentration in accordance with the manufacturer's instructions. The concentrations (pg/ml) were determined based on a standard curve prepared using a known set of serial dilutions of standard proteins.
Statistical Analysis. The statistical analyses were performed using SPSS 16.0 (SPSS Inc., Chicago, IL, USA). The results are expressed as Mean ± standard deviation (SD), and differences between groups were compared with one-way ANOVA or t-tests as appropriate. A P-value less than 0.05 indicated statistical significance. | 2018-04-03T00:00:34.947Z | 2015-10-20T00:00:00.000 | {
"year": 2015,
"sha1": "4f0598ec94037218ee3f5c3d15f3e10c9b108a0e",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep15357.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f0598ec94037218ee3f5c3d15f3e10c9b108a0e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
216127128 | pes2o/s2orc | v3-fos-license | Deterioration Prediction Modelling and Inspection Schedule Estimation for Concrete Bridge Decks
Accurate and reliable deterioration rate estimates for concrete bridge decks are an important part of the overall bridge condition assessment. The main objective of this paper is to determine the time in condition ratings (TICRs) of concrete bridge decks and assess the impact of average daily traffic (ADT), age, and deck area on the bridge deck condition. Condition ratings of bridge decks over 24 years for Michigan state were collected from the National Bridge Inventory (NBI) data. The Anderson-Darling statistical test was used to evaluate and rank five practical probability distribution functions to select the best fit for Michigan state data. The results indicate that the best statistical model for Michigan state data is the lognormal function. It was illustrated that the TICR decreases when the condition rating decreases. When a concrete bridge deck condition is rated at 8, it can take 11.29 years to drop to the lower rating of 7. However, when the concrete bridge deck condition is rated at 4, it may take 6.64 years to drop to the lower condition rating of 3. It was also observed that, on average, bridge decks in Michigan stay in a given condition rating much longer than the typical inspection interval (i.e., 2 years), suggesting that inspection intervals can be longer than 2 years for bridges in good condition ranges. The results also show that ADT, age, and deck area are important factors in the deterioration rates of concrete bridge decks.
Introduction
National Bridge Inspection Standards (NBISs) were established by the Federal-Aid Highway Act of 1968 and the Surface Transportation Assistance Act (STAA) of 1978, directly after the collapse of the Silver Bridge over the Ohio River in 1967 [1]. These standards describe the requirements for regular periodic inspection of bridges. The standards also outline the professional qualifications necessary to serve as an inspector.
Concrete decks are among the most susceptible parts of bridges and their service lives are typically shorter than those of other components because they are exposed to deterioration produced by direct contact with traffic and other environmental factors such as freeze/thaw cycles or deicing materials in cold weather regions [2,3]. Basically, the resources needed for rehabilitation, replacement, and repair of concrete bridge decks are typically inadequate [1,4]. The Federal Highway Administration (FHWA), therefore, is continuously working to support scientific and technological research to achieve both short and long-term results for required enhancements [4]. One of the significant issues that face transportation agencies is how to reduce the cost of bridge deck maintenance [5]. Researchers have suggested the use of accurate and reliable condition assessment techniques can assist in reducing the costs and increasing the efficiency of concrete bridge deck maintenance and repair [5,6].
Since routine inspection requirements were first established in the early 1970s, the regular two-year interval assigned by the NBIS has been effective in guaranteeing an acceptable level of protection and serviceability for highway bridges [7]. Currently, regardless of condition rating, most bridges in the United States are scheduled for inspection at a uniform calendar interval of two years. However, because of the fixed two-year inspection schedule of both newly constructed bridges with little or no deterioration along with old bridges with more deteriorated components, inefficiencies in the allocation of inspection resources are observed [8].
The fixed inspection intervals (i.e., the time between two consecutive inspections) can be reduced or increased based on conditions established by the bridge owner for bridges in certain condition rating ranges. Inspection intervals of up to six years can be acceptable for certain bridges that meet condition criteria. Commonly, bridges with low ADT and short spans that are in good condition can qualify for an extension of inspection intervals [8,9]. An earlier study used statistical models to analyse Oregon bridge condition data extracted from the National Bridge Inspection Standard (NBIS) [10]. The authors concluded that bridges with good condition ratings tend to stay in those ratings longer than two years and therefore suggested a possible extension of the inspection intervals.
Condition assessment of concrete bridge decks and causes of deterioration
Maintaining the safety and serviceability of concrete bridge decks is very important because bridges are a critical part of the transportation networks. FHWA has developed condition assessment processes for all bridges in the United States where they undergo inspection every two years based on NBISs to determine the condition ratings for each major element in a bridge [11]. In contrast, the inspection intervals in Europe may reach up to five or six years based on inspector qualifications and experience, and can go as long as nine years as in the case of France [12]. Mathematical and statistical models have been developed to assess and predict the condition of bridge elements using the NBI database [13]. Developing optimum prediction models is crucial for making maintenance, repair, or replacement decisions.
According to the NBI, there are more than 600,000 bridges in service around the United States. Half of these bridges were constructed before 1970, and 25% of the total require rehabilitation, repair, and/or reconstruction. The available resources are often very limited compared to the enormous quantity of work required to accomplish this repair, rehabilitation, and reconstruction [1]. Based on the reports available from the FHWA, more than 100 million m² of the entire 360 million m² of U.S. concrete bridge deck area is either structurally deficient (SD) or functionally obsolete (FO). The decision to rehabilitate or replace a bridge is influenced by technical and economic factors. A comprehensive inspection is needed for all types of bridges to provide appropriate information about their general condition. This information includes measurements and the accompanying defects that are discovered in the bridges under examination. Once the inspection phase is completed, analysis of the collected data commences. These data are used to determine the best maintenance or rehabilitation method to maintain the service life of the bridge [14].
Deterioration of concrete bridge decks occurs for many reasons. Therefore, it is very important to study the factors most likely to impact bridge condition [15]. Old bridges may deteriorate faster than newer ones [16,17]. Additionally, as the concrete bridge deck is the main component that provides the riding surface, it is exposed to deterioration more than the other parts of a bridge, making ADT an important factor for consideration [18]. Furthermore, large areas of bridge decks can be exposed to random types of defects that may result in an inaccurate condition rating [6]. For example, if a relatively small part of a large bridge deck has a defect level matching a rating of 5, an inspector may rate the entire deck as 5, possibly resulting in a non-optimal maintenance decision. In this study, the impact of these three factors (i.e., ADT, age, and deck area) on the deterioration rate will be investigated to help in developing future deterioration prediction models.
Currently, there are several methods that have been developed for bridge condition assessment, including the fuzzy-based analytic hierarchy approach [19], deterministic methods [17], probability distribution methods [6,10], and Markov chain models [13]. These methods can be used to measure the deterioration rates of bridges, and their use has consequently enhanced the prediction of the remaining service life of bridges.
Condition ratings of concrete bridge decks
The NBI database includes condition ratings for the major bridge elements, including the deck, superstructure, and substructure, for a period of 24 years (i.e., from 1992 to 2015) (Federal Highway Administration) [20,21]. According to the NBIS, condition ratings for each major element in a bridge vary from 0 to 9. Typically, a bridge is considered structurally deficient if the deck receives a condition rating of 3 or less, while a condition rating of 7 or more is very desirable [22]. Overall, these condition ratings are indicators of the level of bridge deck performance and required maintenance actions [23].
The condition ratings of bridges typically decrease over time due to the deterioration that occurs in the major elements. In other words, the major elements are rated at 9 when bridges are newly constructed. They stay in the same condition rating for a while, then start to deteriorate with time and drop down to the next condition rating. This process is repeated, and the condition rating drops further [17]. In this study, the change in the condition rating over time will be investigated in an attempt to identify and model deterioration trends to develop prediction models.
Goals and significance
The goal of the study described in this paper is to estimate the TICRs in each condition rating of Michigan bridge decks and investigate the impact of ADT, age, and deck area on bridge condition and on the inspection intervals. A concrete bridge deck is typically the primary load path, and its condition is a significant aspect of the integrity and serviceability of the bridge [14]. In Michigan, there are more than 10,000 bridges that require inspection every two years. The objective is to develop condition assessment schedules for bridge decks that address the maintenance needs of the more critical bridges within the resource constraints of transportation agencies. The study described in this paper will analyse condition data of Michigan bridges available through the NBI to determine how many years, on average, a bridge deck remains in a certain condition rating and to evaluate the impact of ADT, age, and deck area on the deterioration rates of bridge decks. Additionally, condition rating data over a 24-year period were analysed using the Anderson-Darling test to determine the best statistical model to represent the Michigan data.
Data sources and handling
Two main issues will be explored and discussed in this study: 1) the data needed to track bridge deterioration rates, and 2) the factors that affect bridge deterioration rates. NBI condition ratings are used to rank bridges and can be used to track concrete bridge deck deterioration rates [20,21]. Data from 1992 to 2015 were used in this study. Since inspections are performed yearly or biennially, NBI records of condition ratings are available for each of the main bridge elements. These condition ratings are then transformed into consistent NBI condition codes, which are also identified by the FHWA.
Data collection methods
The inspection and maintenance of bridges have become a priority for U.S. departments of transportation. Currently, visual inspection and chain drag are the main methods used to inspect bridge decks and can detect cracks, spalls, and delaminations. They are essentially the main techniques used to collect data about bridge condition. However, they are considered subjective and ambiguous because they depend on the experience of the inspector, the definition of the deteriorations, the condition level of the defects, and other factors. Non-destructive testing methods are beginning to gain acceptance but are still under research. Many studies have been conducted to determine if non-destructive techniques are adequate for bridge inspection. Some of these non-destructive techniques include Ground Penetrating Radar (GPR), Impact Echo, and infrared thermal imaging [3,[24][25][26][27].
Data pre-processing
For an accurate evaluation of deterioration rates, inspection data must be treated to eliminate the effects of issues other than uniform maintenance that may result in an increase or decrease in condition ratings. These issues include repair and miscoding [17]. This study is based on the NBI database, which includes numerical ratings of bridge decks for the period 1992 to 2015. The treatment of data was performed in several steps: 1) Because ratings of 0-3 are considered severe conditions that require immediate attention, such bridges typically undergo significant rehabilitation or replacement rather than staying in their current condition. Therefore, these condition ratings were not included in this study. It is also worth noting that there are few bridges with condition ratings of 0-3 as compared with other condition ratings. Additionally, data with condition ratings of 9 were removed from the analysis because they represent new bridge construction with no deterioration [10].
2) Data with condition ratings of N (for not applicable data) and inspection data with unusual rating drop were removed from the data set. For example, if the bridge was in the condition rating of 8 for 3 years and then dropped to a condition rating of 6 or 5 for 2 years and then jumped to a condition rating of 7 or 8 for 2 or 3 years and this process was replicated every 2 or 3 years for the period from 1992 to 2015 without any records about rehabilitation or repair, then these data were eliminated [1].
3) Bridges that did not have key parameter records such as deck rating, year built, ADT, deck width, and structure length were eliminated [17]. 4) All bridges that had recently undergone rehabilitation, repair, or reconstruction of the deck were clipped [6]. 5) If the time in condition rating (TICR) was 3 years or less for sequential inspection cycles at the beginning or the end of the study period interval (1992-2015), data were clipped from the original record. For example, if the bridge was in the condition rating of 6 in 1992 and 1994 (a TICR of 3 years) and then dropped to a condition rating of 5 or jumped to a condition rating of 7 due to rehabilitation or repair in the next inspection cycle in 1996, then the data from 1992 to 1994 were removed. Similarly, if the condition rating of the bridge changed to 6 in 2013 and stayed at 6 through 2015, these data were also clipped. The 3-year threshold for clipping was based on a sensitivity analysis of different probable trimming values ranging from 3 to 7 years. The results showed that a TICR of more than 3 years at the beginning or end of the available data set had the largest impact on the final analysis results. Consequently, 3 years was selected as a suitable value. A similar analysis was performed on the Oregon data set that resulted in a 5-year clipping threshold, suggesting that this criterion depends on the data set and the time interval under investigation [10].
6) Some of the NBI records for the concrete bridge deck showed an increase or decrease in the condition rating for 1 or 2 years, and then a return to the same condition rating without any recognized rehabilitation. These increases and decreases are considered inspector errors and the condition rating is manually revised to be identical to the previous value. For example, if the concrete bridge deck was at condition rating of 7 for 5 years, then dropped to a condition rating of 6 for 2 years, and then returned to the initial value of 7 for 4 years, we would consider this as an error and would correct the results to display the concrete bridge deck at condition 7 for 11 years [10,16].
Assumptions
A few assumptions were made to simplify the analysis as follows: 1) Only bridges with concrete decks were included in the analysis (item 107 in NBI data).
2) The original construction date of the bridge is determined from the year built (item 27 in NBI data). This is used as the base when determining if the overlay and the deck are part of the original construction.
3) The value of ADT is determined from the total ADT (item 29 in NBI data). Bridges with an ADT of 0 were eliminated from the data set, and the data for this factor were divided into three categories: less than 4,000, 4,000 to 10,000, and more than 10,000 (vehicles/day).
4) The age of the deck is determined by comparing the current year to the year built (item 27 in NBI data). Then, the year-built data for bridge decks have been updated with the new dates of reconstruction as listed in the NBI (item 106) when bridges have been reconstructed.
5) The deck areas were calculated from the structure length (item 49 in NBI data) and the deck width (item 51 in NBI data).
6) The age values in years were classified into two ranges: less than or equal to 25 years and more than 25 years (most bridges were designed for a lifespan of 50 years [28]; therefore, bridges within this range were analysed in this study). 7) The deck area values in square meters (m²) were divided into two groups: less than 500 m² and more than 500 m².
To verify whether the bins for the above factors are appropriate for the study, the Kruskal-Wallis test was performed. This test was chosen because the condition rating data are not normally distributed. The bins for each factor were therefore analysed under this test to evaluate whether there was a significant statistical difference between the factor groups. Table 1 shows the parameters of the Kruskal-Wallis test. Statistical significance between bins exists when the p-value is less than 0.05 [29]. As shown in the table, all p-values were less than 0.05, suggesting that all bins were classified appropriately. Additionally, the chi-square values further demonstrate that the TICR data vary between bins for each factor and support the appropriateness of the bin selections.
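For illustration, a bin check of this kind could be reproduced with SciPy as in the minimal sketch below; the variable names and the placeholder lognormal samples are assumptions standing in for the paper's per-bin TICR data.

```python
# Hypothetical check of the ADT bins with the Kruskal-Wallis H-test.
# ticr_low/mid/high stand in for TICR values (years) of bridges with
# ADT < 4,000, 4,000-10,000, and > 10,000 vehicles/day, respectively.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ticr_low = rng.lognormal(mean=2.4, sigma=0.5, size=200)   # placeholder data
ticr_mid = rng.lognormal(mean=2.2, sigma=0.5, size=200)
ticr_high = rng.lognormal(mean=2.0, sigma=0.5, size=200)

h_stat, p_value = stats.kruskal(ticr_low, ticr_mid, ticr_high)
print(f"chi-square (H) = {h_stat:.2f}, p-value = {p_value:.4f}")
# p-value < 0.05 -> the TICR distributions differ between bins, i.e.,
# the chosen binning captures a statistically significant effect.
```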
Analysis of national bridge inspection condition data
The study described in this paper is based on the NBI bridge inspection data for the period from 1992 to 2015. In this section, the best statistical model for the Michigan data is determined. This model is then used to evaluate the TICRs of Michigan concrete bridge decks. Table 2 shows the description of condition ratings for Michigan concrete bridge decks in addition to their ages in 2015.
Analysis of NBI condition data for Michigan bridge decks
In this study, statistical analyses were performed to determine the distribution model that best fits the TICR values for the Michigan bridge deck condition rating data. Three goodness-of-fit tests are most commonly used for evaluating and ranking candidate statistical models: Anderson-Darling (AD), Chi-Square, and Kolmogorov-Smirnov (K-S) [30]. The Anderson-Darling test is the most appropriate goodness-of-fit test for the bridge condition data at hand. It tests whether data are derived from a population with a particular distribution and gives more weight to the tails than the K-S test does. The AD test is also more precise in the tails than the Chi-Square and K-S tests. Because the AD statistic weights the differences between the empirical and fitted cumulative distribution functions more heavily in the tails, it gives more weight to outliers than K-S. Thus, the test considers the differences at the tail ends that may be neglected or missed by the other test methods [31]. The statistical models most frequently evaluated under the Anderson-Darling test are the exponential, Weibull, lognormal, normal, and gamma distributions [32]. In this study, these models were investigated.
The Michigan concrete bridge deck condition ratings used in this investigation ranged from 4 to 8 over the 24-year study period (1992-2015). Table 3 summarizes the Anderson-Darling test values of the five commonly used statistical models. In the table, the statistical analyses are grouped by condition rating values. The lognormal probability distribution function had the smallest Anderson-Darling test value for most of the CR values, making it the best statistical model for the TICR data set. The gamma distribution function was a better fit than (but fairly close to) the lognormal distribution for CR values of 6 and 8, but lagged behind for the other three CR values, resulting in the choice of the lognormal as the best fit. It is also clear from Table 3 that the exponential distribution is the least desirable model, with the highest Anderson-Darling test values. Moreover, goodness-of-fit tests were performed for each group of factors, and the lognormal distribution function was the best distribution for each factor in the Michigan dataset.
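A ranking of this kind follows directly from the definition of the AD statistic. The sketch below fits the five candidate distributions with SciPy and scores each against its own fitted CDF; the synthetic TICR sample is an assumption, used only so the example runs end to end.

```python
# Rank candidate distributions by the Anderson-Darling statistic,
# mirroring the comparison in Table 3 (smaller A^2 = better fit).
import numpy as np
from scipy import stats

def anderson_darling(data, dist, params):
    """A^2 of `data` against the CDF of `dist` with fitted `params`."""
    x = np.sort(data)
    n = len(x)
    cdf = np.clip(dist.cdf(x, *params), 1e-12, 1 - 1e-12)  # avoid log(0)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1 - cdf[::-1])))

rng = np.random.default_rng(1)
ticr = rng.lognormal(mean=2.0, sigma=0.4, size=300)  # placeholder TICR sample

candidates = {
    "exponential": stats.expon,
    "Weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "normal": stats.norm,
    "gamma": stats.gamma,
}
scores = {name: anderson_darling(ticr, d, d.fit(ticr))
          for name, d in candidates.items()}
for name, a2 in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} A^2 = {a2:.3f}")
```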
Condition prediction modelling for Michigan bridge deck data
The lognormal distribution function is considered one of the most appropriate and flexible models commonly used to describe failures produced by deterioration processes because of certain features of the lognormal random variable, such as its non-negative values and its skewness [33]. The lognormal cumulative distribution function is expressed as [34]:

F(t) = \Phi\left(\frac{\ln t - \mu}{\sigma}\right) \quad (1)

where t is an independent positive random variable, σ is the shape parameter, and exp(µ) is the scale parameter. σ and µ can be calculated as

\sigma = \sqrt{\ln\left(1 + \frac{V}{m^{2}}\right)} \quad (1a)

\mu = \ln(m) - \frac{\sigma^{2}}{2} \quad (1b)

where m is the mean, V is the variance, and S is the standard deviation of the TICR data. m and V can be calculated as

m = \frac{1}{n}\sum_{i=1}^{n} t_{i} \quad (2)

V = \frac{1}{n-1}\sum_{i=1}^{n} (t_{i} - m)^{2} \quad (3)

and S can be calculated from equation (4):

S = \sqrt{V} \quad (4)
In this study, t is a variable that represents the time interval during which a concrete bridge deck can remain in a condition rating before dropping to a lower one (i.e., the TICR). For example, if the concrete bridge deck changes from a condition rating of 8 to 7 and stays at the new rating for 9 years before dropping to 6, then the time interval in this equation is 9 years. The shape parameter σ (1a) is always greater than 0. The relationship between the value of σ and the skewness of the lognormal distribution is positive: an increase (or decrease) in the value of σ increases (or decreases) the skewness of the distribution, where skewness is a measure of the asymmetry of the distribution. Additionally, when the value of σ is less than 1, the lognormal distribution becomes very close to the normal distribution model. The scale parameter exp(µ) (1b) represents the typical (median) TICR of concrete bridge decks [35].
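Numerically, equations (1a)-(4) reduce to a few lines of NumPy. The sketch below recovers the lognormal parameters from an assumed TICR sample; the sample values are illustrative placeholders, not the Michigan data.

```python
# Recover the lognormal shape and scale parameters from the sample
# mean m and variance V of a TICR data set (equations (1a)-(4)).
import numpy as np

ticr = np.array([9.1, 10.5, 12.0, 11.3, 13.2, 10.8, 12.5])  # years, assumed
m = ticr.mean()                               # equation (2)
V = ticr.var(ddof=1)                          # equation (3)
S = np.sqrt(V)                                # equation (4)

sigma = np.sqrt(np.log(1.0 + V / m**2))       # shape parameter (1a)
mu = np.log(m) - 0.5 * sigma**2               # log-scale parameter (1b)
print(f"sigma = {sigma:.3f}, exp(mu) = {np.exp(mu):.2f} years")
# exp(mu) is the median TICR; with sigma < 1 the distribution is close
# to normal, as observed for the Michigan data.
```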
Evaluation of the Michigan bridge deck condition prediction model
The lognormal distribution method was used to model the deterioration rates of Michigan concrete bridge decks using historic data for the period from 1992 to 2015. The data records of concrete bridge decks are distributed on condition ratings (CRs) from 4 to 8. The results obtained from this study can be used to predict the time needed for periodical inspection of concrete bridge decks.
The values of the shape parameter for all CRs were less than 1, suggesting that the behaviour of the lognormal distribution model was very close to that of a normal distribution. Table 4 shows the values of this parameter in the lognormal distribution model for the different CRs of Michigan concrete bridge decks. As shown in Table 4, all of the values of the scale parameter (i.e., the TICRs) are greater than 2 years (the fixed inspection schedule for bridges according to the NBI). The TICR decreases when the condition rating decreases. For example, when a concrete bridge deck condition is rated at 8, it can take 11.29 years to drop to the lower CR of 7. However, when the concrete bridge deck condition is rated at 4, it may take 6.64 years to drop to the lower condition rating of 3. Essentially, concrete bridge decks that are in good condition (i.e., CRs of 7 and 8) tend to stay longer in that condition as compared to those that are in poor condition (i.e., CRs of 5 or less). Figure 1 shows an example of the lognormal probability density function for the Michigan concrete bridge deck condition rating of 5. Figure 2 shows the probability of deterioration of different concrete bridge deck condition ratings based on the cumulative distribution functions for CRs ranging from 4 to 8. Specifically, the figure shows the probability of a concrete bridge deck remaining in its condition rating before dropping to a lower CR. For example, there is a 10% probability that the time a deck at a CR of 4 stays in its condition rating will be less than 3.84 years, which is longer than the 2-year inspection schedule. For the same probability (i.e., 0.1), a CR of 8 will take 5.74 years before dropping to a CR of 7. As another example, consider that there is an interest in estimating how long it will take for a concrete bridge deck to deteriorate from a CR of 8 to a CR of 4. Looking at the 0.05 probability (a 5% probability of failure, i.e., of dropping from a given rating to the next lower one) in Figure 2, dropping from CR 8 to CR 4 will take around 16.55 years (4.74 + 4.07 + 4.21 + 3.53). This study, therefore, can support decision-makers as they examine the possibility of changing inspection intervals for concrete bridge decks in Michigan, especially for those that are in good condition.
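The cumulative reading of Figure 2 can be reproduced with scipy.stats.lognorm. In the sketch below, the per-rating (shape, scale) pairs are assumed stand-ins for the fitted Table 4 values, and the 5% quantiles are summed across ratings exactly as in the example above.

```python
# Estimate the years needed to deteriorate from CR 8 to CR 4 at a
# 5% probability of failure, summing the per-rating 5% quantiles.
from scipy.stats import lognorm

fitted = {  # CR: (shape sigma, scale exp(mu)); assumed values
    8: (0.45, 11.29), 7: (0.45, 9.50), 6: (0.45, 8.70),
    5: (0.45, 7.40), 4: (0.45, 6.64),
}
p_fail = 0.05  # probability of dropping to the next lower rating
total = sum(lognorm.ppf(p_fail, s, scale=sc)
            for s, sc in (fitted[cr] for cr in (8, 7, 6, 5)))
print(f"Estimated years from CR 8 to CR 4: {total:.2f}")
```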
The impact of factors on the TICR
The impact of each factor (ADT, age, and deck area) on the TICR was investigated. Since these factors are not independent from each other, the investigation was performed by keeping two factors constant and evaluating the effect of the third factor. The following sections will discuss the impact of each factor on the deterioration rates.
The impact of ADT
The ADT factor has a major effect on the deterioration rates of concrete bridge decks, as shown in Table 5. Specifically, it was found that ADT can significantly impact a concrete bridge deck while it is still in good condition but has only a slight effect on bridge decks that are in poor condition. For example, concrete bridge decks at a CR of 8 can stay in this condition for 13.35 years before dropping to a condition rating of 7 when the ADT is less than 4,000 vehicles/day, the age is 25 years or less, and the deck area is 500 m² or less; however, decks at the same condition rating may stay only 9.68 years before dropping when the ADT is more than 10,000 vehicles/day. The same pattern can be seen in the remaining condition ratings (Table 5).
Previous studies have recommended using a probability of failure (transitioning from one rating to the next lower one) that does not exceed 5%, a risk threshold that can be accepted by transportation authorities (Nasrollahi & Washer, 2015). Figure 3 is the 5% probability chart for TICR under the effect of ADT. This figure shows, for example, how many years it can take for concrete bridge decks to deteriorate from 8 to 4 under the effect of ADT. It may take around 20.49 years to deteriorate from 8 to 4 if the ADT is less than 4,000 vehicles/day, the age is 25 years or less, and the deck area is 500 m² or less, while it may take just 16.71 years to deteriorate from 8 to 4 if the ADT is more than 10,000 vehicles/day under the same conditions.
The impact of deck age
The effect of age on the deterioration rates of bridge decks is similar to the effect of ADT. Increasing age showed a significant effect on the deterioration of the bridge decks, as shown in Table 5. In fact, at the same ADT, age had a greater effect on the deterioration rate of concrete bridge decks in good condition than on those in poor condition. For example, bridge decks within 25 years of service life, with an ADT of less than 4,000 vehicles/day and a deck area of 500 m² or less, can stay for 13.35 years in the condition rating of 8, while those more than 25 years in service, under the same ADT and area, may stay just 8.20 years in the condition rating of 8 (see Table 5). The same conclusion can be observed in the other condition ratings (7 to 4). Figure 4 shows that, at the 5% probability of failure, bridges within 25 years of age, with an ADT of less than 4,000 vehicles/day and a deck area of 500 m² or less, take 20.49 years to deteriorate from a condition rating of 8 to 4, while it can take just 16.9 years for bridges more than 25 years in service under the same conditions.
The impact of deck area
Similar to ADT and age, the size of the bridge deck area can have a considerable effect on the deterioration rate for all condition ratings of concrete bridge decks (Table 5). Essentially, there is a direct relationship between the deck area and ADT that causes the deck area to have an impact on the deterioration of concrete bridge decks. For example, bridges with a deck area of less than 500 m², an ADT of less than 4,000 vehicles/day, and an age of 25 years or less can stay in the condition rating of 8 for 13.35 years, while bridges with deck areas of more than 500 m² will stay just 9.78 years. Also, bridges with a deck area of less than 500 m², an ADT of more than 10,000 vehicles/day, and an age of more than 25 years can stay in the condition rating of 8 for 7.74 years (Table 5). The same pattern can be seen in all condition ratings (4 to 7). Figure 5 shows the time intervals wherein concrete bridge decks can move from good condition to poor condition (i.e., from a condition rating of 8 to 4) under the effect of deck area at the 5% probability of failure. It may take around 20.49 years for a concrete bridge deck to deteriorate from 8 to 4 if the deck area is 500 m² or less, the ADT is less than 4,000 vehicles/day, and the age is 25 years or less, while it may take just 16.79 years to deteriorate from 8 to 4 if the area is more than 500 m² under the same conditions.
Concluding remarks
Bridge condition data from 1992 to 2015 were analysed to determine the TICR of Michigan concrete bridge decks and investigate the impact of ADT, age, and deck area on the deterioration rates of concrete bridge decks. The Anderson-Darling statistical test was used to assess and rank five practical probability distribution methods to choose the best fit for the state of Michigan in the USA. The results revealed that the lognormal distribution function was the best model for the Michigan data. This paper illustrated that Michigan concrete bridge decks that are in good condition can stay in those conditions longer than the typical two-year inspection schedules recommended by the NBISs. Additionally, this study revealed that concrete bridge decks in good condition deteriorate at a slower rate than decks in poor condition. Consequently, inspection schedules for concrete bridge decks can be extended beyond two years, especially for those decks that are in good condition (CRs of 7 and 8) or for those that are recently constructed. However, if such an action is to be adopted, it must initially be at a slow rate and be carefully monitored before fully extending inspection schedules to ensure bridge safety and to guarantee that proper and timely maintenance actions are not compromised. Essentially, while inspection intervals can be extended up to 6 years in Michigan, which parallels the conclusions of others, this study revealed that there is currently no universal statistical prediction model that can be developed in one state and used by others. Moreover, using a 5% probability of failure reduced the possible extension of inspection intervals to 4 years. Thus, more studies are required to select an acceptable threshold for this purpose to support decision-makers as they study the possibility of extending inspection intervals.
The results in this paper demonstrated that the ADT, age, and deck area factors have a significant impact on the deterioration rates of concrete bridge decks. Future studies are needed to further evaluate each of these factors in more detail. For example, more bins for ADT, age, and deck area can be established to provide a more detailed impact analysis for the deterioration rates of concrete bridge decks. | 2020-04-16T09:13:10.741Z | 2020-05-15T00:00:00.000 | {
"year": 2020,
"sha1": "c042960ede7589f204e31b72394e78cba5056da1",
"oa_license": "CCBY",
"oa_url": "http://www.xpublication.com/index.php/jcec/article/download/350/224",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "259d3b7b6b75add7ac4ff3a9442a6fc65847acf1",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
238770213 | pes2o/s2orc | v3-fos-license | Music emotion recognition using recurrent neural networks and pretrained models
The article presents experiments using recurrent neural networks for emotion detection in musical segments. Trained regression models were used to predict the continuous values of emotions on the axes of Russell's circumplex model. A process of audio feature extraction and creating sequential data for learning networks with long short-term memory (LSTM) units is presented. Models were implemented using the WekaDeeplearning4j package, and a number of experiments were carried out with data with different sets of features and varying segmentation. The usefulness of dividing the data into sequences, as well as the value of using recurrent networks to recognize emotions in music, whose results even exceeded those of the SVM algorithm for regression, were demonstrated. The author analyzed the effect of the network structure and the set of used features on the results of the regressors recognizing values on the two axes of the emotion model: arousal and valence. Finally, the use of a pretrained model for processing audio features and training a recurrent network with new sequences of features is presented.
Introduction
Music is an organization of sounds over time, and one of its more important functions is the transmission of emotions. The music created by a composer is ultimately listened to by a listener. The carriers of emotions are sounds distributed over time: their quantity, pitch, timbre, loudness, and their mutual relations. These sounds are described in music terminology by melody, instruments, dynamics, rhythm, and harmony. Before a person notices the emotions in music, they need some time to analyze the fragment being heard (Bachorik et al., 2009); depending on the changes in melody, timbre, dynamics, rhythm, or harmony, we can notice different emotions, such as happy, angry, sad, or relaxed.
The aim of this paper was to imitate the time-related perception of emotions in music by humans through the construction of an automatic emotion detection system using recurrent neural networks (RNN). Just as the human brain is "fed" subsequent sound information over time, on the basis of which it perceives the emotions in music, the neural network receives subsequent feature vectors in successive time steps to predict the emotion value of the analyzed musical fragment.
Related work
Division into categorical and dimensional approach can be found in papers devoted to music emotion recognition (MER). In the categorical approach, a number of emotional categories (adjectives) are used for labeling music excerpts (Lu et al., 2006;Grekow, 2015;Patra et al., 2017). In the dimensional approach, emotion is described using dimensional space, like the 2D model proposed by Russell (1980), where the dimensions are represented by arousal and valence (Weninger et al., 2014;Coutinho et al., 2015;Grekow, 2016;Delbouys et al., 2018;Grekow, 2018b).
MER task can also be divided into static or dynamic, where static MER detects emotions in a relatively long section of music of 15-60 s (Delbouys et al., 2018;Patra et al., 2018;Chowdhury et al., 2019), and dynamic MER examines changes in emotions over the course of a composition, for example, every 0.5 or 1 s. Dynamic MER task was conducted by MediaEval Benchmarking Initiative for Multimedia Evaluation, the results of which were presented by Aljanaki et al. (2017).
A comprehensive review of the current emotionally-relevant computational audio features used in MER was presented by Panda et al. (2020). They show the relations between eight musical dimensions (melody, harmony, rhythm, dynamics, timbre, expressivity, texture, and form) and specific emotions.
Long short-term memory recurrent neural networks were used in the dynamic MER task by Coutinho et al. (2015). Low-level acoustic descriptors extracted using openSMILE and psychoacoustic features extracted with the MIR Toolbox were used as input data. A multivariate regression performed by deep recurrent neural networks was used to model the time-varying emotions (arousal, valence) of a musical piece (Weninger et al., 2014). In this work, a set of acoustic features extracted from segments of 1 s length was used. Delbouys et al. (2018) used mel-spectrograms from audio and embedded lyrics as input vectors to the convolutional and LSTM networks. Chowdhury et al. (2019) used VGG-style convolutional neural networks to detect 8 emotional characteristics (happy, sad, tender, fearful, angry, valence, energy, tension). For network training, perceptual mid-level features (melodiousness, articulation, rhythmic stability, rhythmic complexity, dissonance, tonal stability, modality) were used, and spectrograms from audio signals were used as input vectors for the neural networks. Deep signal processing architectures and feature learning that can be used in content-based music information retrieval (MIR) challenges were presented by Humphrey et al. (2012).
The use of pretrained models in MIR classification tasks was presented in (Hamel et al., 2013; Oord et al., 2014). Choi et al. (2017) used a pretrained convolutional neural network for music classification and regression tasks. A model pretrained on mel-spectrograms is used as a feature extractor in six music information retrieval and audio-related tasks. The proposed approach uses features from every convolutional layer after applying average pooling to reduce their feature map sizes.
What distinguishes this work from others is that it uses a different segment length (6 s) than standard static MER, and proposes a method of preparing data for recurrent neural networks, which it tests with various low- and mid-level features. Because the studied segment is relatively short, using a sliding window also makes it possible to study changes in emotions throughout an entire composition, i.e., similar to dynamic MER. This article is an extension of a conference paper (Grekow, 2020) where the problem was preliminarily presented. In the presented article, the emotion detection method has been expanded to include the use of a pretrained model for processing audio features.
The rest of this paper is organized as follows. Section 3 describes the music data set and the emotion model used in the conducted experiments. Section 4 presents the tools used for feature extraction and preparation of data before building the models. Section 5 describes the details of the built recurrent neural networks. Section 6 presents the results obtained while building the models using the features obtained from audio tools. The use of pretrained models as feature extraction in connection with the recurrent neural network is described in Section 7. Finally, Section 8 summarizes the main findings.
Music data
A well-prepared database of learning examples affects the results and the correctness of the created models predicting emotions. The advantages of the obtained database are well-distributed examples on the emotion plane as well as congruity between the music experts' annotations. The data set consisted of 324 six-second fragments of different genres of music: classical, jazz, blues, country, disco, hip-hop, metal, pop, reggae, and rock. The tracks were all 22050 Hz mono 16-bit audio files in .wav format. The training data were taken from the publicly available GTZAN 1 data collection (Tzanetakis & Cook, 2002). After the selection of samples, the author shortened them to the first 6 seconds, which is the shortest possible length at which experts could detect emotions for a given segment. Bachorik et al. (2009) investigated the length of time required for participants to initiate emotional responses to musical samples. On average, participants with varying musical training required 8 seconds of music before initiating emotional judgments. In our experiment, we used five music experts, thus it was decided that the samples would be shortened to 6 seconds. Data annotation was done by five music experts with a university musical education. The musical education of the experts, people who deal with the creation and analysis of emotions in music on a daily basis, allows us to trust the quality of their annotations. Each annotator annotated all records in the data set: 324 six-second fragments. Each music expert heard all the examples in the database. As a result, during annotation each annotator was able to see all the shades of emotions in music, which is not always the case in emotion-annotated databases. This had a positive effect on the quality of the received data, which was emphasized by Aljanaki et al. (2017).
During annotation of music samples, we used the two-dimensional arousal-valence Russell's model (Fig. 1) to measure emotions in music, which consists of two independent dimensions of arousal (vertical axis) and valence (horizontal axis). Each music expert making annotations after listening to a music sample had to specify values on the arousal and valence axes in a range from −10 to 10.
Fig. 1 Russell's circumplex model (Russell, 1980)
Value determination on the arousal-valence (A-V) axes was straightforward, with a designation of a point on the A-V plane corresponding to the musical fragment. The data collected from the five music experts were averaged. Figure 2 presents the annotation results of the data set with A-V values. The number of examples obtained in the quarters of the A-V emotion plane is presented in Table 1.
Fig. 2 Data set on A-V emotion plane
A well-prepared database, i.e., one suitable for independent regressors predicting valence and arousal, should contain examples where the values of valence and arousal are not correlated. To check whether the valence and arousal dimensions are correlated in our music data, the Pearson correlation coefficient was used. The obtained value of r = −0.03 (i.e., close to zero) indicates that arousal and valence values are not correlated and that the music data are well spread across the quarters of the A-V emotion plane.
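Both this correlation check and the annotator-consistency statistic reported in the next paragraph are straightforward to reproduce; the sketch below does so with NumPy and SciPy, using random placeholder annotation matrices (324 fragments x 5 experts) in place of the real ratings.

```python
# Pearson correlation between averaged arousal and valence ratings,
# and Cronbach's alpha across the five annotators (raters as items).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
arousal = rng.uniform(-10, 10, size=(324, 5))   # placeholder annotations
valence = rng.uniform(-10, 10, size=(324, 5))

r, _ = pearsonr(arousal.mean(axis=1), valence.mean(axis=1))
print(f"Pearson r between arousal and valence: {r:.2f}")

def cronbach_alpha(ratings):
    """ratings: fragments in rows, raters in columns."""
    k = ratings.shape[1]
    totals = ratings.sum(axis=1)
    return k / (k - 1) * (1 - ratings.var(axis=0, ddof=1).sum()
                          / totals.var(ddof=1))

print(f"Cronbach's alpha (arousal): {cronbach_alpha(arousal):.2f}")
```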
All examples in the database were marked by five music experts and their annotations had good agreement levels. A good level of mutual consistency was achieved, represented by Cronbach's α calculated for the annotations of arousal (α = 0.98) and valence (α = 0.90). We can see that the experts' annotations for the arousal value show greater agreement than for the valence value, which is in line with the natural perception of emotions by humans (Aljanaki et al., 2017). Details on creating the music data were presented in a previous paper (Grekow, 2018a). The collected music data set is available on the web site.
Audio feature extraction
Tools for feature extraction
For feature extraction, tools for audio analysis and audio-based music information retrieval, Essentia (Bogdanov et al., 2013) and Marsyas (Tzanetakis & Cook, 2000), were used. Marsyas software, written by George Tzanetakis, has the ability to analyze music files and to output the extracted features. The tool enables the extraction of the following features: Zero Crossings, Spectral Centroid, Spectral Flux, Spectral Rolloff, Mel-Frequency Cepstral Coefficients (mfcc), and chroma features -31 features in total. For each of these basic features, Marsyas calculates four statistic features (mean, variance and higher-order statistics over larger time windows). The feature vector length obtained from Marsyas was 124.
Essentia is an open-source library created at the Music Technology Group, Universitat Pompeu Fabra, Barcelona. In the Essentia package, we can find a number of executable extractors computing music descriptors for an audio track: spectral, time-domain, rhythmic, and tonal descriptors. The features extracted by Essentia are divided into three groups: low-level, rhythm, and tonal features. A full list of features is available on the web site. Essentia also calculates many statistics over the values collected in an array: the mean, geometric mean, power mean, median of an array, all its moments up to the 5th order, its energy, and the root mean square (RMS). The feature vector length obtained from Essentia was 529.
Preparing data for RNN
Recurrent neural networks process sequential data and find relationships between the input data sequences and the expected output value. To be able to train the recurrent neural network, it is necessary to enter sequences of feature vectors. In this paper, to capture correlations over time in the studied music fragments, the fragments were segmented into smaller successive sections. The process of dividing a fragment of music (6 s) into smaller segments of a certain length t (1, 2 or 3 s) and overlap (0 or 50%) is shown in Fig. 3. To split the wav files, the sfplay.exe tool from the Marsyas toolkit was used. From the created smaller segments of music, feature vectors were extracted, which were used to build a sequence of learning vectors for the neural network. A program was written that allows selecting the segmentation option for a music fragment, performs feature extraction, and prepares the data to be loaded into a neural network.
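The windowing step itself is simple to express in code. In the sketch below, extract_features is a hypothetical stand-in for the Marsyas/Essentia extractors used in the paper, and the random signal stands in for a 6 s mono clip.

```python
# Cut a 6 s clip into fixed-length segments with optional 50% overlap
# and build one feature-vector sequence per clip for the RNN.
import numpy as np

def segment_clip(signal, sr, seg_len_s=2.0, overlap=0.5):
    """Yield successive raw-audio segments of seg_len_s seconds."""
    seg = int(seg_len_s * sr)
    hop = int(seg * (1.0 - overlap)) or seg
    for start in range(0, len(signal) - seg + 1, hop):
        yield signal[start:start + seg]

def extract_features(segment):
    # Placeholder: the real pipeline would call Marsyas/Essentia here.
    return np.array([segment.mean(), segment.std()])

sr = 22050
clip = np.random.default_rng(3).standard_normal(6 * sr)  # stand-in 6 s wav
sequence = np.stack([extract_features(s) for s in segment_clip(clip, sr)])
print(sequence.shape)  # (n_segments, n_features): one training sequence
```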
Recurrent neural networks
Long short-term memory (LSTM) units, which were defined in Gers et al. (2000), were used to build recurrent networks. LSTM units are special kinds of memory blocks that solve the vanishing gradient problem occurring with simple RNN units. Each LSTM unit consists of a self-connected memory cell and three multiplicative regulators -input, output, and forget gates. Gates provide LSTM cells with write, read, and reset operations, which allows the LSTM unit to store and access information contained in a data sequence that corresponds to data distributed over time. The weights of connections in LSTM units need to be learned during training.
Implementation of RNN
The WekaDeeplearning4j package (Lang et al., 2019), which is included with the Weka program (Hall et al., 2009), was used to conduct the experiments with recurrent neural networks. This package makes deep learning accessible through a graphical user interface. The WekaDeeplearning4j module is based on Deeplearning4j, a widely used open-source machine learning workbench implemented in Java. Weka with the WekaDeeplearning4j package enables users to perform experiments by loading data in the Attribute-Relation File Format (ARFF), configuring a neural network, and running the experiment.
To predict emotions in music files, a neural network was proposed with the structure shown in Fig. 4. Input data were given to the network in the form of a sequence set of feature vectors and then processed by a layer consisting of LSTM units (LSTM1-LSTMn). The last layer, built of densely connected neurons, converted the signals received from the LSTM layer and created an output signal.
ARFF data for RNN
The Weka program allows loading learning data in the ARFF format. During training, the recurrent neural network from the WekaDeeplearning4j package needs sequential data and output training values. The prepared data are slightly different from typical ARFF data because they contain a relational attribute that specifies the set of features found in each step of the sequence (listing below). The definition of the feature set ends with the @end keyword, which in our case refers to @attribute bag relational. The data at each time step are separated by \n. In the data section, the entire sequence of one example is written on one line enclosed in quotation marks ("") and terminated with the output value. To prepare data for the neural network implemented using the WekaDeeplearning4j package, the author wrote a script that converts vectors obtained during feature extraction into sequences saved in one ARFF file:

@relation Arousal_Sequential_Data
@attribute bag relational
  @attribute Mean_MFCC0 numeric
  @attribute Mean_MFCC1 numeric
  @attribute Mean_MFCC2 numeric
  ...
  @attribute feature_no_124 numeric
@end bag
@attribute output numeric
@data
"-48.145309,5.329454,-0.679031, ... 1.027434,\n -50.730044,6.186828,0.431127, ... 0.435338,\n -47.743233,6.319406,-0.482212, ... 0.505049,\n",0.29
"-55.545411,6.869730,1.128843, ... 0.106391,\n -55.178950,9.128733 ...

[Example of learning data with a 3-step sequence]
Parameters of the RNN
The structure of the neural network was built once with one LSTM layer, once with two layers, and with different amounts of LSTM units (124, 248). A tanh activation function was used for LSTM units. For our regression task (prediction of continuous values of arousal and valence), the identity activation function for a dense layer was used, in conjunction with the mean squared error loss function. For weight initialization, the Xavier method was used and the Nesterov updater helped to optimize the learning rate. The network was trained with 100 epochs and to avoid overfitting an early stopping strategy was used. The training process was stopped as soon as the loss did not improve anymore for 10 epochs. The loss was evaluated on a validation set (20% of the training data).
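The paper trains its networks in WekaDeeplearning4j; purely as an illustration of the described configuration (two LSTM layers, tanh cells, an identity-activation output layer, MSE loss, Nesterov momentum), an analogous model could be sketched in PyTorch as below. All tensor shapes and hyperparameter values not quoted in the text are assumptions.

```python
# A minimal PyTorch analogue of the "RNN4" variant (2 x 248 LSTM units).
import torch
import torch.nn as nn

class EmotionRNN(nn.Module):
    def __init__(self, n_features=529, n_units=248):
        super().__init__()
        # nn.LSTM uses tanh cell activations, matching the paper
        self.lstm = nn.LSTM(n_features, n_units, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(n_units, 1)  # identity activation (regression)

    def forward(self, x):                  # x: (batch, time_steps, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from the last time step

model = EmotionRNN()
loss_fn = nn.MSELoss()                     # mean squared error, as in the paper
opt = torch.optim.SGD(model.parameters(), lr=1e-3,
                      momentum=0.9, nesterov=True)  # Nesterov updater

x = torch.randn(8, 3, 529)                 # 8 clips, 3-step feature sequences
y = torch.rand(8, 1) - 0.5                 # targets scaled to [-0.5, 0.5]
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```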
Experiments and results
During the conducted experiments, regressors for predicting arousal and valence were built. As a baseline for comparing the results of the obtained regressors, a simple linear regression model (lr) was chosen. The data were also tested with a second baseline, the SMOreg algorithm with a polynomial kernel, which is an implementation of the support vector machine for regression. The author also tested the usefulness of SMOreg on the same database in previous papers (Grekow, 2016; 2017). In our experiments, both baseline algorithms (SMOreg, lr) were tested on the same music fragments as the neural networks but on non-segmented fragments. These two algorithms were trained using data obtained from the whole (6 s) music samples.
The regression algorithms were evaluated using the 10-fold cross-validation technique (CV-10). The coefficient of determination (R²) and mean absolute error (MAE) were used to assess model efficiency. Before constructing the regressors, the arousal and valence annotations were scaled to [−0.5, 0.5]. Before providing input data to the neural network, the data were standardized to zero mean and unit variance.
Tables 2 and 3 present the coefficient of determination (R²) and mean absolute error (MAE) obtained while building regressors using chroma and mfcc features. The best results for each regressor type (arousal, valence) are marked in bold. From the obtained results, we can see that the usefulness of the chroma features is small compared with the mfcc features. Table 4 presents the results for all Marsyas features. A simple linear regression model and support vector machine for regression (SMOreg) were outperformed by the RNN models, in two cases (RNN2 and RNN4) for both arousal and valence. The best results were obtained with RNN4 (2 layers x 248 LSTM): R² = 0.67 and MAE = 0.12 for arousal, R² = 0.17 and MAE = 0.15 for valence. We see that the RNN with two LSTM layers gives better results for both arousal and valence. As expected, the results show that the sequential modeling capabilities of the RNN are useful for this task.
The use of all features gives the best results; however, in the case of arousal, the set of mfcc features gives quite comparable results, similar to the whole set of features (R² = 0.66 and MAE = 0.12, Table 3). The best results were obtained at a segment length of 2 s and without overlap, and those are presented here.
RNN with Essentia features
Experiments with the features obtained from the Essentia package were also conducted. These features include the mfcc and chroma features, which are also in the Marsyas tool, but additionally contain many higher-level features such as rhythm or harmony. Table 5 shows the results of the experiments. The experiments were expanded by two networks with an increased number of LSTM units, similar to the number of features in the sequence: RNN5 (1 layer x 529 LSTM units) and RNN6 (2 layers x 529 LSTM units each). Looking at the results (Table 5) for the Essentia feature set, we can see a significant improvement compared with the baseline algorithms (RNN1-RNN6 for arousal, RNN2-RNN6 for valence). Better features from the Essentia toolkit give better neural network results. The best results were obtained with RNN4: R² = 0.69 and MAE = 0.11 for arousal, R² = 0.40 and MAE = 0.13 for valence. The improvement is also significant for the valence regressors compared with the results from the Marsyas features (Table 4), where the best result was R² = 0.17 and MAE = 0.15.
In regard to the different numbers of layers and LSTM units, the best results were obtained using the RNN4 network (2 layers x 248 LSTM) for both arousal and valence. Two-layer networks recognized emotions better than one-layer networks.
Quite interestingly, in the case of arousal (R² = 0.69, MAE = 0.11), the results are comparable with those obtained from the Marsyas package (R² = 0.67, MAE = 0.12, Table 4). Mfcc features are quite good for detecting arousal, and adding new features improved the results only slightly. A significant result of these experiments is that features from the Essentia package, like rhythm and tonal features, significantly improved the detection of valence. In the case of arousal, it is not necessary to use such a rich set of features, which is why the model for arousal is not so complex.
Using pretrained models as feature extraction
The results obtained in previous experiments with Essentia features were not bad, but one could always find a method that would improve these results. As we have noticed, the set of features describing our data (music files) has a significant impact on the learned model results. The better the features we take, the more satisfactory the results, an example of which was the use of features from Essentia. The efforts presented below to improve the results focused on the data that worked best in previous experiments, that is, the data obtained with Essentia as the audio feature extraction tool (Section 6.2).
A known method used in machine learning is feature selection (Witten et al., 2016), which finds the more suitable feature sets among the output features. The method used in the next experiment was using a pretrained model as a feature extractor, and then training RNN on a new set of features.
Pretrained models
To build the pretrained model, a simple neural network (NN) with a dense layer was used. The pretrained model was trained on a task slightly different from the target task, because the training was on non-segmented music fragments describing the entire length of the music sample (6 s), i.e., not on the same segment lengths as for the RNN. During the construction of the pretrained model, the use of one dense layer with 248 neurons was tested. The trained NN processed the features obtained from the audio feature extraction tool Essentia into a new set of features, which were the activation values of the dense layer.
The structure of the neural network (Fig. 5) was built once with one dense layer with 248 units. A ReLU activation function was used for neurons. For our regression task, the identity activation function for a dense layer was used in conjunction with the mean squared error loss function. The Xavier method was used for weight initialization and the Adam updater helped to optimize the learning rate. The network was trained with 50 epochs and an early stopping strategy was used to avoid overfitting. The training process was stopped as soon as the loss did not improve anymore for 10 epochs and the loss was evaluated on a validation set (20% of the training data). Due to the fact that regressors were built for two tasks, arousal prediction and valence prediction, separate pretrained models were created for each of them.
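Conceptually, the extractor is just the dense layer of the pretrained regressor applied independently to each vector of a sequence. The sketch below (again in PyTorch rather than the paper's WekaDeeplearning4j, with assumed tensors) illustrates the idea.

```python
# Pretrain a dense regressor on whole-clip features, then keep only
# its 248-unit ReLU layer as a per-step feature extractor for the RNN.
import torch
import torch.nn as nn

pretrained = nn.Sequential(
    nn.Linear(529, 248), nn.ReLU(),   # dense layer whose activations are kept
    nn.Linear(248, 1),                # regression head used only in pretraining
)
# ... pretraining on non-segmented 6 s clips would happen here ...

extractor = pretrained[:2]            # drop the head, keep the dense layer
extractor.eval()

seq = torch.randn(1, 3, 529)          # one 3-step Essentia feature sequence
with torch.no_grad():
    new_seq = extractor(seq)          # each time step transformed separately
print(new_seq.shape)                  # (1, 3, 248): input to the LSTM layers
```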
Model construction using a pretrained model
Feature vector transformation and the connection with the RNN were conducted in the Weka program using Dl4jMlpFilter (Lang et al., 2019). This tool enabled the use of a pretrained model as a feature extractor. Connecting the pretrained model with the RNN is shown in Fig. 6.
Fig. 6 Recurrent neural network architecture using the pretrained model
Activations in the last layer of the pretrained model were used as input data in the RNN. The input data were given to the network in the form of a sequence set of feature vectors. The pretrained model was then used to transform the feature vectors into new feature vectors. Each vector from the input sequence was transformed separately, so that a sequence set of new feature vectors was obtained at the LSTM layer input. The new feature vectors were processed by a layer consisting of LSTM units (LSTM1-LSTMn). The last layer, built of densely connected neurons (1-n), converted the signals received from the LSTM layer and created an output signal. Just as in the previous experiment (Section 6.2), the structure of the neural network was built once with one LSTM layer, once with two layers, and with different amounts of LSTM units (124, 248, 529), which resulted in the 6 variants RNN1-RNN6. Table 6 shows the results obtained during the experiments using the pretrained models as feature extraction. We can notice a significant improvement in the coefficient of determination (R²) and a reduction of the mean absolute error (MAE) in the regressors for both arousal and valence compared with the experiments with the Essentia feature set and without the use of pretrained models (Table 5).
Results
The best results were obtained with RNN2 (2 layers x 124 LSTM) and RNN4 (2 layers x 248 LSTM): R² = 0.73 and MAE = 0.11 for arousal. For valence, the best results were obtained with RNN4 and RNN6 (2 layers x 529 LSTM): R² = 0.46 and MAE = 0.12. We can see the advantage of RNNs with two LSTM layers over networks with one LSTM layer. As in previous experiments, the arousal regressors are more accurate than the valence ones. By using feature vectors obtained from the pretrained model, we obtained a 6% relative improvement in the R² of the best models in the case of arousal, and 15% in the case of the valence regressor (Tables 6 and 5). Thus, a greater improvement was obtained for the valence regressor than for the arousal regressor. The conducted experiments confirmed the value of using the pretrained model as a way to find even better combinations of features based on audio features for training the RNN.
Conclusions
This article presents experiments using recurrent neural networks for emotion detection in musical segments. The sequential capabilities of the models turned out to be very useful for this type of task, as the obtained results exceeded those of algorithms such as the support vector machine for regression, not to mention the weaker linear regression. In all the built models, the accuracy of arousal prediction exceeded the accuracy of valence prediction. There was more difficulty detecting emotions on the valence axis than on the arousal axis. Similar difficulties were noted when the music experts were annotating files, which was confirmed during annotation compliance testing. It is significant that the use of higher-level features (features from the Essentia tool) had a very positive effect on the models, especially the accuracy of the valence regressors. Interestingly, to predict arousal, even a small set of features (mfcc from the Marsyas tool) provided quite good results, similar to those of the large feature set from Essentia. Low-level features, like mfcc, are generally sufficient for predicting arousal.
It appears that the use of pretrained models as feature extraction for the Essentia feature set creates a more favorable set of features that can be used for emotion detection by RNN. The obtained results confirm the positive impact of using feature extraction to create even more useful features. Adding new features, such as melody features, to the audio feature extraction tools in the future would be a way to get even better results in detecting emotions in music files, although it turns out that pretrained models also discover useful features for emotion detection in musical segments.
A shortcoming of using pretrained models is the more complicated analysis of which input features were used during feature extraction. The extracted features are activations from the last layer of the pretrained model, and to find out which input features were used to create the new features, one would have to analyze the weights of the layers in the pretrained model.
The experimental results presented in this paper can be used in building automated systems for music emotion recognition. Such systems are applied in all tasks connected with analyzing music files in terms of emotions, such as searching for files with a given emotion, tracking the emotional development of soundtracks, and comparing the emotional distribution of musical compositions.
"year": 2021,
"sha1": "b067de7602484327e82b486c52b2a295719c8491",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10844-021-00658-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "0c10e0a42ef245a86e85ac88ef5d954785c433f9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Point of Care Diagnostics in Resource-Limited Settings: A Review of the Present and Future of PoC in Its Most Needed Environment
Point of care (PoC) diagnostics are a focus of government initiatives, NGOs and fundamental research alike. In high-income countries, the hope is to streamline the diagnostic procedure, minimize costs and make healthcare processes more efficient and faster, which, in some cases, can be more a matter of convenience than necessity. However, in resource-limited settings such as low-income countries, PoC diagnostics might be the only viable route when the next laboratory is hours away. It is therefore especially important to focus research on novel diagnostics for these countries in order to alleviate suffering due to infectious disease. In this review, the current research describing the use of PoC diagnostics in resource-limited settings is summarized, along with the potential bottlenecks along the value chain that prevent their widespread application. To this end, we will look at literature that investigates different parts of the value chain, such as fundamental research and market economics, as well as actual use by healthcare providers. We aim to create an integrated picture of potential PoC barriers, from the start of research at universities to patient treatment in the field. Results from the literature will be discussed with the aim of bringing all important steps and aspects together in order to illustrate how effectively PoC is being used in low-income countries. In addition, we discuss what is needed to improve the situation further, in order to use this technology to its fullest advantage and avoid "leaks in the pipeline", when a promising device fails to take the next step of the valorization pathway and is abandoned.
Introduction
Low- and middle-income countries (LICs/MICs) face severe challenges due to limited economic opportunities. In addition to the economic struggles, LICs also bear a large burden of transmittable diseases, posing severe risks to the population's wellbeing [1]. Healthcare systems and healthcare providers in LICs are often ill-equipped to treat patients in the best possible way, especially in rural areas. Given the economic and infrastructural challenges in LICs, PoC diagnostics, which are often characterized by independence from laboratory or medical infrastructure as well as high affordability, hold considerable promise to improve the situation. Yet the actual commercialization of PoC diagnostic tests lags well behind the innovative research and development done in laboratories.
Due to this strange dichotomy between promising, innovative research and very limited valorization into real products, several review articles on the topic have been written in past years. However, most are specialized on one specific aspect: for example, some authors looked in depth at logistical shortcomings [2], while others investigated funding and collaboration considerations [3]. Reviews have also summarized the topic from different viewing angles, such as the technological aspects and their implications, as seen in Figure 1 [4]. Another approach is to distinguish between different usage profiles, from use at home up to use in a laboratory, arguing that what point-of-care means depends tremendously on how and where it is used, as shown in Figure 2 [5].
This review aims to investigate the topic from a different angle. Here, we will look at the barriers to PoC diagnostics along the entire value chain, from the first idea in a laboratory to the use of the final product by a healthcare provider. With this review we aim to locate the "leaks in the pipeline" of PoC commercialization, which typically occur when PoC devices do not manage to proceed to the next step in the value chain. Such leaks include problems with funding that prevent the design of a prototype, or intellectual property (IP) considerations preventing market access. In this way, it may be possible to determine at which discrete steps the transition fails and a once-promising PoC device becomes abandoned or underused. For this, the value chain of PoC devices is separated into three distinct domains, which themselves consist of separate steps. These domains are labeled research, market and usage, respectively, and any device has to achieve success in each of the steps within these domains to be able to reach the market and benefit the patient at the end of the value chain. These domains can be further subdivided into subdomains (Figure 3), with research consisting of, e.g., "Fundamental Research" and "Proof of Concept and Prototyping". The second part of our review investigates "The Market", which looks at fundamental economic problems in the steps "Market Introduction" and "Market Penetration". Finally, the "Usage Environment" gives an insight into the last barriers to actual use by healthcare providers. In this way, we aim to cover the entire value chain by examining both original research and review articles.
Figure 3. Segmentation of the stages a PoC device has to pass to be able to bring a benefit to the patient.
Fundamental Research: Funding Availability and Focus
Fundamental research is the first step in the development of a PoC device and concerns all the basic research that is not yet directly related to a prototype or a product. Historically, the research of PoC devices started with simple dipstick tests with immobilized reagents, for example, for the detection of glucose [6]. Later, the laboratory use of immunoassays, especially the high sensitivity of radioimmunoassays and enzyme-linked immunoassays, created interest in improving those methods into rapid tests, which eventually became the lateral flow immunoassay, the most abundant type of PoC device [7]. While lateral flow devices are the most common, the trend in research goes towards devices of higher complexity, which are able to handle more complicated samples, can multiplex and can detect challenging analytes [6]. The greater complexity of these devices is reflected in their mechanisms of action, which are often optical, electro(chemical) or magnetic [8], but also include other modes of action, e.g., thermal sensor systems [9,10]. Optical systems include more straightforward UV/Vis or fluorescence sensors, as well as more complex systems using quantum dot and surface-plasmon-resonance technology or even genetically encoded biosensors [11-14]. Examples of electro(chemical) systems range from basic amperometric or voltammetric systems, via graphene-based field-effect transistors, to DNA-annealing-based redox-reporter assays [15-18]. However, compared to the abundance of new research, actual commercialization lags behind, and market examples welcoming these new innovations are rare [19].
Choosing a Suitable Design Philosophy
There are several factors that may misguide new developments, an important one being the design philosophy of new devices. With respect to this issue, a stark contrast between the research philosophy in high-income countries (HICs) and the needs of LICs is evident. Research in HICs is rarely aimed at inexpensive technologies with wider impact [4,20], which are direly needed in resource-limited settings. Instead, research in HICs tends to focus on high-efficiency devices of even higher complexity, which is not a problem for affluent countries with good infrastructure; however, due to their complexity, such devices are often not usable (or affordable) in LICs [21]. Several studies point out the differences in approach between high-complexity PoC diagnostic test platforms (HCTs), such as the GeneXpert, and low-complexity PoC tests (LCTs), such as widely available lateral flow tests, for example, for pregnancy and malaria [4,20,21]. While HCTs can conduct more difficult and sensitive diagnostic tests, they are also more complex and thus require more training, maintenance and infrastructure. LCTs, on the other hand, lack sensitivity and diagnostic power, but are highly affordable and low maintenance. This is not the only example of differing design philosophies and approaches. In the design of biosensors in HICs, the most suitable biomarker for a given illness is chosen without much regard for the needed infrastructure. The ability to perform a venipuncture, for example, is a given in HICs, which means the tests developed have no strong restrictions on the needed sample size. This can pose a problem in rural LIC settings: without a trained phlebotomist, venipuncture is not possible, and the test has to work either with easier-to-acquire samples, such as sputum or urine, or with the much smaller blood quantities of finger-prick or heel-stick samples. Heel-stick sample sizes are usually under 5% of the size of venipuncture samples [22]. This illustrates that the research approach for LICs needs to be different: instead of building a system around the best biomarker, the system needs to be created around the available infrastructure first, which is a considerably different design philosophy [22,23].
Taking Aim: Proper Target Analytes
Next to the different design philosophies, the targets of research differ between HICs and LICs. Resource-rich settings do not focus research on neglected tropical diseases (NTDs), which is already evident from their name. The burden of NTDs is mostly nonexistent in affluent countries; instead, they predominantly affect the world's poor and are therefore less interesting for commercial research, as the chance of a return on investment is limited. There is little incentive for affluent countries to deal with many NTDs, mainly due to the low impact these diseases have on their populations [24]. This is also visible in drug development, where, from 1975 to 2004, only 1% of drugs were developed for NTDs (21 out of 1556) [3].
Other gaps in knowledge might be easier to miss-for example, local differences in diseases, such as geographical variability in antigen presentation and DNA/RNA signals. In addition, other diseases prevalent in LICs can have an effect on a target analyte. For example, immunosuppression due to HIV leads to lower host response signals and reduces the sensitivity of some nucleic acid tests for the detection of pulmonary TB [25,26]. Another report remarks that, even when there is willingness, funding and know-how about adequate research and design philosophy, there are practical problems as simple as not being able to acquire appropriate samples for assay development [26]. In general, research can only make an impact in LICs if a bottom-up approach is used that takes the infrastructure and environment of LICs into account from the start [4].
Funding in LICs
Research in LICs themselves is beneficial as it focuses directly on the regional circumstances and problems, with an LIC-centered philosophy. However, PoC funding in LICs is highly inconsistent [20,27]; this leads to dependence on other funding opportunities, such as NGOs and development partnerships. This is the case not only for the funding of PoC research, but also for treatment research and treatment itself. For some countries in Sub-Saharan Africa, HIV expenditures are strongly reliant on external sources, despite HIV being one of the region's most important health risks. Kenya and Uganda contribute less than 15% of the funding to their own national HIV-relief efforts; Mozambique contributes only 3% [20,27]. In cases where there is secure funding through outside sources, it naturally tends to focus on treatment or prevention, e.g., vaccine development, while diagnostic research receives much less funding, leading to an over-reliance on clinical symptoms due to a lack of adequate diagnostics [4,28].
Incentives to Change Focus in HICs
Possible solutions are new programs that are being implemented to incentivize product development for neglected tropical diseases [3]. However, such programs are mostly aimed at drug development and include, amongst others, the "Priority Review Voucher (PRV)" program in the United States, which gives out transferable priority FDA-review vouchers; it is one example of an implemented "pull mechanism" to motivate development in neglected areas [3,29,30]. Next to such pull mechanisms, there are "push incentives", such as the Global Health Innovative Technology (GHIT) Fund, a partnership that connects the Japanese Government, several NGOs, and large drug and diagnostic manufacturers, and targets poverty-connected diseases and NTDs [3,31]. Other specifically named initiatives are the "Global Health Investment Fund (GHIF)" and the "Wellcome Trust Pathfinder Award" [3]. Incentive programs such as these may have a large effect when used specifically to incentivize PoC diagnostics.
NGOs-e.g., the Bill & Melinda Gates Foundation-have contributed a substantial amount of funding for NTDs research and play an important part in integrating into such programs [32][33][34][35][36].
Having large industrial PoC developers on board may not only improve the device situation but may have further benefits in implementing the devices from a market penetration standpoint (vide infra). An increased cooperation due to the mentioned pull mechanisms can also help to prevent a "HIC-Bubble" when stakeholders from LICs are involved.
To change the philosophy of research, "frugal development" must be considered from the start to build devices of relatively low complexity that are not only affordable and usable in LIC settings but also maintainable and repairable [37-40]. This not only serves as a base requirement, but also makes several aspects, such as acquiring spare parts and stock management, more straightforward down the line. Additionally, 3D printing and other new technologies might be of use to create spare parts on demand, as such printers have already been shown to be able to create simple microfluidic parts and were able to supply hospitals with respirator valves during the COVID-19 pandemic [41,42].
Proof-of-Concept and Prototypes: The Importance of Appropriate Device Characteristics
Device Characteristics
Engineers in affluent countries tend to design devices that assume HIC infrastructure standards, meaning well-funded laboratories in regulated environments with quality control. This can be problematic in LICs, and therefore special design considerations must be taken into account [20]. Devices that perform well in the controlled settings in which the prototype is tested often fail when challenged with tropical conditions in LICs [26]. Especially in rural settings, access to electricity can be problematic for powered PoC devices [22,26,43], and a lack of cold-storage options can also pose a significant challenge [9,22,23,43]. Limited refrigeration and power supply therefore demand that the device and its disposables be stable in the long term, even at high temperatures, while powered PoC devices need to be able to run on battery or solar power [4,26,44].
Being ASSURED
Many authors agree that beneficial device characteristics follow the WHO guidelines for PoC diagnostics, symbolized by the acronym ASSURED, which stands for Affordable, Sensitive, Specific, User-friendly, Rapid and robust, Equipment-free and Delivered to those who need them. These requirements are regularly mentioned in publications [20,22,45-48], but it also has to be noted that they are ideal, demanding requirements that only a select few devices can meet [21].
While progress has been made on PoC tests for syphilis, chlamydia and gonococcal infections, there is still no test available that complies with all ASSURED criteria [46]. In particular, good sensitivity and selectivity, two of the most important points, become increasingly difficult to achieve the closer the system gets to a "perfect" system with an accuracy of 100% [21,22,26]. In rural LIC settings, a PoC diagnostic tool needs to be small, portable and highly affordable [43]. While many authors put great emphasis on affordability, others take a different position and argue that zero cost is not an important parameter: they see reliability and standardization as more crucial, remark that it might be misguided to assume that "poverty reduces the value individuals place on their well-being", and would therefore put the focus on efficiency and reliability before anything else [28].
How Necessary Is It to Be ASSURED?
Many scientists cite ASSURED as a necessity, and a large consensus for the ASSURED criteria has been found among healthcare workers in, for instance, Uganda [49]. Pai et al. give another, more pragmatic and contextual viewpoint: ASSURED, they argue, imposes artificial restrictions that may not be necessary, depending on the context in which the devices are used. For example, a device that is used for first-line screening with the aim of referral to another, more specialized healthcare provider for further diagnosis and treatment can have a lower specificity than a test that makes decisions about, or monitors, the treatment itself [5]. Therefore, putting efficiency and reliability above all might be shortsighted. This is also indicated by other research: a study compared 12 different combinations of hepatitis C virus diagnostics that were either PoC, lab-based or a combination. The cheapest strategy turned out to be a two-test combination, first using a lower-specificity PoC antibody test followed by confirmation via an RNA PoC test. All one-step strategies showed higher false-positive rates and were not cost-effective under base-case assumptions. However, two-step strategies are highly dependent on the loss-to-follow-up (LTFU) rate [50]. Gift et al. reported as early as 1999 on the "rapid test paradox": PoC tests with limited sensitivity can nonetheless lead to better treatment outcomes when LTFU is high [51]. Other authors agree with this notion and argue that a very affordable test with very good stability at high temperatures, but with suboptimal sensitivity, may still be extremely beneficial in tropical settings [26].
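To make the rapid-test-paradox argument concrete, the following back-of-the-envelope sketch compares a sensitive laboratory test whose results require a return visit against a less sensitive PoC test that allows treatment on the spot. All numbers are illustrative assumptions, not values from the cited studies.

```python
# Illustrative sketch of the "rapid test paradox": a less sensitive PoC test
# treated in the same visit can outperform a more accurate laboratory test
# whose results require a return visit. All numbers are assumptions.

def cases_treated(sensitivity: float, return_rate: float) -> float:
    """Fraction of true cases that end up being treated."""
    return sensitivity * return_rate

lab_test = cases_treated(sensitivity=0.99, return_rate=0.70)  # 30% LTFU before results
poc_test = cases_treated(sensitivity=0.85, return_rate=1.00)  # treated on the spot

print(f"lab: {lab_test:.2f}, PoC: {poc_test:.2f}")  # lab: 0.69, PoC: 0.85
```

Under these assumed numbers, the nominally weaker PoC test treats more true cases simply because no patient is lost between testing and treatment.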
Another consideration, indicating that a focus on raw diagnostic prowess might be overrated and that the important part may be access to diagnostics in the first place, is the fact that only 28% of the inhabitants of Africa have access to advanced healthcare facilities. Tests that need only minimal infrastructure could give an additional 47% of the African population access to diagnostic tests. While improving the accuracy of bacterial pneumonia tests in advanced healthcare facilities would only lead to 119,000 more disability-adjusted life-years (DALYs) saved per year, 263,000 more DALYs could be saved annually if such a test were made available to rural sites with minimal resources [43]. This was also found in another study, using syphilis as an example: a PoC device requiring minimal laboratory infrastructure could prevent 138,000 congenital syphilis cases and 148,000 stillbirths per year, while a PoC device requiring no laboratory infrastructure at all would prevent 201,000 cases and 215,000 stillbirths [22]. In the end, a perfectly ASSURED device might be unrealistic, as several of the ASSURED criteria work against each other, and especially against affordability: the higher the accuracy, the less affordable the device will be. Therefore, it is important to choose the right battles.
Another aspect criticized is that PoC tests often focus on a single disease; however, healthcare workers in LICs are concerned with syndromes of unknown etiology [20]. Additionally, having one test measure several factors would give the healthcare provider more diagnostic security, and save time.
Steps towards an Effectively Usable PoC Device
There are several possibilities that can help tackle these challenges. Multiplexing might be a large step forward, and nowadays several methods for multiplexing are possible and being developed [4,22]. This would reduce the relative cost of each test, since the tests can use the same framework and be regulated at the same time. Healthcare workers are often interested in many different factors connected to one disease, and putting these together in one test shows how hugely beneficial multiplexing can be. Multiplexing also has logistical benefits, as there is less to track: associated equipment can be shared, and several tests can be performed without increasing work time.
Several authors suggest modern microfluidics to alleviate some of these problems. In sample preparation, for example, all the technology for DNA/RNA extraction and purification can be included in a small cartridge with very easy usability [43], reducing the need for external equipment and thus reducing cost [21]. Improvements in biosensor and microfluidic fabrication have already helped make HCT devices possible, through the optimized microstructuring of transducers (nano-wires, nano-pores, nano-particles), microspotting of the sensitive element and integration into low-volume microfluidic channels [4]. Many other authors agree and see great potential in microfluidics for LIC PoC [52-55]. The widespread availability of smartphones is considered in many publications as a possible readout device for optical platforms, as well as an ICT connector [56-60].
To prepare nucleic acid assays for LIC environments, one major problem is assay stabilization, as PCR mixes require cold storage. Lyophilization would be an option to create assays that do not require a cold chain. The problematic part is the diverse array of compounds used that are incompatible with freeze-drying, such as glycerol. There are mixes that can be freeze-dried; however, the amplification efficiency will likely suffer [43]. To improve ruggedness, functions should be reduced to the essentials with integrated quality control, and local production may help with access to support and consumables [61].
Market Introduction
The step from prototype to market introduction seems to be the most taxing one, as even low-cost PoC devices are being developed in many laboratories in HICs but are not being adopted correspondingly in LICs [28].
Funding and IP
Funding is not only a consideration in research, but also in valorization. Even when funding for novel research is available, additional funding for economic aspects, such as manufacturing, distribution and maintenance, is harder to find, resulting in the abandonment of projects by the original researchers due to cost, as well as deterring investment and interest from companies, which favor secure, established products [26]. Bringing new medical devices to the market is a costly enterprise. This has been investigated more in depth for drug development, where it is estimated that developing a new drug exceeds USD 1 billion in cost, takes more than 10 years, and that only 11.8-21% of new drugs are approved. There are no discrete numbers for medical devices, given that the field is highly diverse, with an estimated over 500,000 different types of devices spanning from X-ray machines to hip implants, each with diverse aspects that have to be taken into account for regulation, such as repair, maintenance and wear [62]. These processes can take years until a patient reaps the benefit of a new product, if it reaches the market at all [63]. This might make it unprofitable for a company to conduct fundamental research for LIC PoC if there is no immediate benefit within reach [3].
Next to funding, intellectual property (IP) proves to be a large barrier to the development of systems built upon existing technologies. Especially IP on molecules and genes makes this increasingly difficult [4]. Existing patents cover biomarkers and even entire organisms, and there are large IP barriers around diagnostic platforms and/or components of diagnostic platforms [21,22]. One solution for IP considerations is new private-public partnerships, such as the World Intellectual Property Organization (WIPO) Re:Search consortium. WIPO is the UN agency tasked with developing international IP systems fostering innovation for NTDs that benefits everyone. WIPO Re:Search enables members royalty-free use of infrastructure, compound libraries, IP assets and know-how; it consists of 107 members in 30 countries. IP licenses can be used royalty-free, and newly generated IP is retained by the recipient member, not the member holding the original IP asset. Often, WIPO Re:Search members are from LICs. This creates benefits for everyone: LICs contribute local know-how and access to patient samples, while companies, often from HICs, contribute their research power and IP. The companies benefit from heightened corporate social responsibility, access to other IP, business opportunities, access to other experts and know-how in the field, and networking possibilities in LICs, which are emerging markets [3,64,65].
Regulations
High regulatory barriers and strict healthcare standards are also a significant barrier to the introduction of innovations [4,5,20]. While good regulations are important, they are difficult to navigate and require substantial expertise, which discourages innovation, especially in LICs where profitable returns are uncertain. Clear and straightforward national policies for diagnostic evaluation and certification are key [20]. The pharmaceutical world is more advanced in setting up harmonization infrastructure, which is still lagging behind for diagnostics. In total, 23 countries in Africa have banded together and pledged to harmonize the approval process for diagnostics, which is a huge step, as companies then do not have to acquire approval for each country separately [66]. Another possible solution is offered by a WHO program that examines the quality and safety of HIV and malaria tests. This "prequalification" is a helpful guide for LICs, helping governments to speed up the approval process [26].
Integrated Market Expertise
Missing collaboration between academia and industry might be another bottleneck encountered when bringing inventions from the lab to the market. A multidisciplinary team for market introduction is, just as for research, an important part of the valorization trajectory [4,21]. A study of 358 medical devices for LICs found that only 134 met the study requirements to count as "commercialized" [61]. In addition, of the hundreds of devices beyond this particular study, many likely failed to commercialize because of, among other reasons, a failed transition from prototype to market introduction [61]. Studies have also indicated that good policy plans for quality systems and supply chains improve accessibility if put in place prior to the POCT program [20]. Quality control and assurance and supply chain management are precisely the areas in which companies have strong expertise. These aspects are usually not considered enough in the prototyping phase, when scaling up is still not around the corner, and so they become a barrier later in commercialization (vide infra). The expertise of companies can help to navigate the complexities of scaling and implementing a medical device and effective delivery mechanisms [21,61].
For this, it is not only necessary to have a research and valorization team, but also specialists for regulation, culture and policy [21]. Researchers in low-income countries especially often lack knowledge spanning all involved fields, from discovery and research to market introduction [3]. The device must consider local and regional constraints, the involved stakeholders and their needs, and the capacity of the local healthcare workforce, but also social and cultural contexts for things that can easily be overlooked, e.g., whether blood sampling is readily accepted [5,22,61]. One target group that should be especially focused on, but is often not included, is the end-users, who should be integrated into each step of the design process [20,28,45]. To this end, limited international (especially transcontinental) collaboration is problematic [21].
Device Quality
After the introduction of the product, other aspects become important for wider market penetration. One problem is poor batch-to-batch reproducibility, especially in lateral flow devices, which hampers upscaling [21]. Low device quality and low reproducibility can decrease trust among healthcare providers and thus negatively affect the adoption of other devices [28]. This is one of the greatest concerns of rural LIC healthcare practitioners. In interviews, key stakeholders expressed the need for diagnostic scale-up, but they also had concerns about reliability, as well as supply chain management and staff training [45]. This was confirmed by another study reporting doubts from healthcare practitioners regarding test trustworthiness, whether in the accuracy, the robustness or the clarity of results. Adding to the distrust is the concern of counterfeit tests being delivered [49]. Participants of the study found PoC critical for improving healthcare but judged its current form as not suitable for the local context.
Economic and Social Placement
Even a theoretical PoC device that achieves all ASSURED criteria perfectly might not be sustainable and gain market acceptance if it does not have a viable business model attached to it [45]. It is important that this model fits in with real-life workflow patterns. This means the PoC devices need to be integrated into the existing healthcare settings and real-life contexts of LICs to be successful.
The large importance of proper economic placement is shown by the fact that even inaccurate devices can be a market success if the stakeholders along the value chain profit from their use. For example, inaccurate serological tuberculosis tests, which the WHO advises against, are very common in private healthcare facilities in India, as well as in 17 of the top 22 other countries most affected by the disease. Private doctors earn referral money and other incentives for ordered tests, leading to an overreliance on inaccurate diagnostics due to economic incentives [5,67]. While quality is of course of the utmost importance, this example shows that, for market penetration, the importance of correct economic placement and integration into existing value systems is a key factor, which sadly can even outweigh actual clinical performance. Therefore, when aiming at maximum market penetration, the whole market with all economic influences, costs and workflow has to be understood. This is just as important as delivering a device with the best quality possible [45].
Other examples of the misplacement of PoC testing in the market can be found in India and South Africa, where many well-working and cheap (USD 1 per test) PoC tests are available for different diseases, such as HIV, malaria, dengue, syphilis and hepatitis. Yet many of those tests are still not commonly used at home, at the physician's office or even in rural healthcare clinics, arguably the target environments for PoC. Instead, testing happens foremost in laboratories and hospitals, with small independent laboratories being the major users of PoC devices [5]. Lab personnel are often skeptical of testing outside of controlled settings, as they lose control over testing and quality assurance and it interferes with their business model [20,45]. This shows how devices that are meant to be PoC can, due to mismatched interests in the established market, end up in a position where they cannot fulfill their intended purpose.
The placement of a new PoC device into an already existing healthcare system requires satisfying the challenges and questions raised by every stakeholder. Who bears which part of the financial cost? What economic incentives are offered to the various stakeholders? How is training handled, and how are information and communication technologies used for reporting? Answering these questions in a satisfying manner might be as crucial as the device itself [5,45,68]. Clear national guidelines for the essential steps, i.e., evaluation, certification, supply chain management, financing, training and expertise, need to be put in place [20]. It also has to be pointed out that healthcare providers often feel that they do not have any influence on decisions regarding the availability of PoC devices, but instead "use what they are given" [49]. This shows that they are often not taken into account in this process. Healthcare practitioners expect decision makers to lay out the plan and define the use of PoC devices in the context of local epidemiology. Decision makers also need to take care of training before deployment and give adequate guidelines on how to proceed after a positive or negative result [49].
Market penetration is also heavily dependent on price, and willingness to pay is an important aspect here. A guideline to evaluate affordability is 1-3 times the GDP per capita per quality-adjusted life year (QALY) gained through the intervention [50]. However, this calculation is not universally accepted as a good estimate, and it may not lead to the best investment if it means a loss of funding in other areas. Society might not be willing to contribute the necessary sums just because of cost-effectiveness [69,70].
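Written out, the cited guideline treats an intervention as affordable when

$$\frac{\Delta\text{cost}}{\Delta\text{QALY}} \;\le\; (1\text{ to }3) \times \text{GDP per capita},$$

where $\Delta$cost and $\Delta$QALY are the incremental cost and the quality-adjusted life years gained through the intervention; as noted above, this heuristic is contested.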
Another factor might be that patients themselves are not willing to pay the suggested price, as they lack money and cannot invest in long-term benefits. In Kenya, 51% of healthcare expenses are paid out of pocket, and healthcare costs are often covered through "harambee" fundraising events in the community; 46% of the population has only USD 1 or less to spend each day [28]. HCT platforms have high fixed costs for the PoC device itself; therefore, the cost per test is strongly dependent on the use case, and the workload of the device must be sufficient for it to be economically justifiable [71]. Consequently, such diagnostic devices, with their high implementation cost, are considered too expensive for widespread use [70]. A good example of this is HIV diagnosis and monitoring. Since viral counts are difficult to perform in the field, a laboratory procedure is often required. While PoC HCT devices are established on the market, widespread use is limited by the high initial cost of the device. However, research showed that such PoC HCT devices for HIV detection can be viable for clinics with a moderate or large number of patients, as the initial high cost of the device can be distributed over more patients. The same holds for usage time: the longer the device can be maintained, the more cost-effective it becomes. With 50 patients per month, a reasonable assumption for a clinic in South Africa, the overall cost for anti-retroviral therapy (ART) monitoring would be only USD 45 higher (USD 210 compared to USD 166) than the laboratory procedure over a time span of 5 years. Assuming only 10 patients per month, however, this would increase to USD 183 of additional cost over the same timeframe. Price also depends on which biomarker is analyzed: in the 50-patients-per-month case, viral load was just as cost-effective as in the laboratory, while the CD4+ count and creatinine tests were more expensive [71]. The notion of cost-effective PoC ART monitoring is supported by several mathematical simulations from South Africa, Zimbabwe and Mozambique, especially when all the costs are taken into account. A PoC test that enables better ART linkage can be tremendously more expensive per test and still save follow-up costs in the long run, due to better immediate and consistent treatment, as well as greater reach [72-74].
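The amortization logic behind the ART-monitoring example can be sketched in a few lines. Only the patient volumes (50 versus 10 per month) and the 5-year horizon come from the cited study [71]; the device price and per-test consumable cost below are purely hypothetical.

```python
# Back-of-the-envelope amortization of a fixed device cost over patient volume.
# Device and consumable prices are hypothetical; volumes and horizon follow [71].

def cost_per_patient(device_cost: float, consumable_cost: float,
                     patients_per_month: int, years: int = 5) -> float:
    total_patients = patients_per_month * 12 * years
    return device_cost / total_patients + consumable_cost

# hypothetical: USD 15,000 device, USD 20 consumables per monitoring visit
print(cost_per_patient(15_000, 20, patients_per_month=50))  # 25.0 USD per patient
print(cost_per_patient(15_000, 20, patients_per_month=10))  # 45.0 USD per patient
```

The same consumable price yields very different per-patient costs once the fixed device cost is spread over fewer patients, which is why busy clinics can justify HCT platforms that low-volume sites cannot.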
While many simulations of cost-effectiveness base their assumptions on high-prevalence areas, others remark that, especially for "the last mile" in areas lacking infrastructure, PoC might be one of the only viable alternatives for the hardest-to-reach 10% of patients, as transport networks become more and more difficult to establish in remote areas [75]. Despite the low volume of patients making cost-effectiveness more difficult, it is estimated that an optimal placement of PoC viral load tests on-site and in PoC hubs can still reduce the price of a test by 6-35% by avoiding high transport costs in remote areas [76]. Finally, it has to be noted that, although PoC tests could significantly improve the healthcare systems of LICs, their impact will depend on the specific disease or condition they are employed for. Therefore, the successful implementation of PoC will require a rigorous study of the overall cost-benefit ratio of any proposed PoC test, specifically addressing the disease it is meant to diagnose.
Product Distribution
Limited infrastructure in LICs not only results in low return on investment for companies, but also makes the distribution of devices and technical support more difficult, which might discourage companies or hamper market penetration [26]. Stock-outs and supply network problems are a massive obstacle to market penetration. Studies on PoC accessibility and supply chain management reported several stock-outs of PoC devices. In a scale-up of syphilis testing from an NGO-led pilot to a ministry-of-health-operated large-scale operation in Zambia, half of the pilot sites suffered at least one stock-out. PoC programs for pregnant women also reported stock-outs in several stages of the study, with up to 60% of sites affected; the longest a device was out of stock was a median of 6 weeks. In Uganda, malaria diagnostic tests were available in only 24% of 125 lower healthcare facilities, and 72% of community healthcare workers did not receive malaria testing kits for 6 months [2]. In antenatal clinics in Guatemala, almost half of women could not be tested for HIV, syphilis and hepatitis B, in part because of stock-outs [2,77]. Test kit stock-outs are also reported from Uganda and Tanzania [78] and are a major concern to healthcare providers [49].
Here, the supply chain is a big point of failure, mostly due to irregular supply, poor forecasting, poor selection of diagnostics, insecure procurement systems, delayed distribution systems, poor quality assurance and inadequate stocks [20,79]. This has also been confirmed by healthcare workers, who are concerned about the reliability of the supply chain [45]. Human resources are often not considered in supply chain management, leading to bad planning and overstretched systems [66].
An innovative solution for the fast distribution of medical products in LICs is provided by the company Zipline, which uses remote drones to distribute blood preserves to hospitals in need all over Rwanda. Replacing a delivery that takes hours by motorbike with mere minutes by drone, this innovation reduces the number of blood stocks a hospital needs and thus reduces waste due to expiry. In emergencies, matching blood can be delivered within minutes, something impossible with motorcycle rides that could take up to 5 hours [80,81]. In Zipline's system, the blood packs are dropped from the drone via a small parachute; hence, pickup of blood samples from remote areas is difficult, since the drone cannot land independently. However, in the recent COVID-19 pandemic, Zipline collected COVID-19 test samples by car and sent them bundled to large hospitals via its drones, while also distributing other COVID-19-related necessities to hospitals [82,83]. Innovations such as this might help counteract shortcomings in delivery planning and infrastructure, or deliver tests and consumables with limited shelf life or limited temperature resistance. A pickup service for test samples from rural healthcare clinics for further diagnostic procedures, for example in a two-step process, might become possible with drones capable of vertically taking off and landing, which are currently being developed, for example, by DHL [84].
The Usage
Limited testing capabilities are often a bottleneck for adequate therapy. This is especially observable in HIV treatment, where CD4+ counts and viral load are used to monitor antiretroviral therapy (ART). In Sub-Saharan Africa, the median proportion of patients retained between HIV diagnosis and CD4+ count was 59% [85]. HIV is a good example of a disease that is difficult to conclusively diagnose in the field, as it needs a nucleic acid test that usually has to be performed in a laboratory by trained personnel [86-88]. While several HCT PoC platforms are available that can conduct HIV monitoring, such as the GeneXpert (Cepheid), the PIMA CD4+ (Abbott) or the Alere q (Abbott) [89-91], their high initial cost is still a problem. This makes ART therapy challenging to start and monitor in rural areas. Therefore, not only is supplying tests an important factor, but so is the whole infrastructure of usage in combination with treatment, especially in rural healthcare settings, where the different parts need to act together to create sensible plans for PoC testing and treatment delivery. Due to this interconnectedness of the factors surrounding the end-users, several bottlenecks can appear. HIV is, therefore, a good example of how healthcare as well as patient management are integral factors that can negate any positive effects PoC can bring if they are mishandled.
Healthcare Management
Political will towards PoC might be reduced when PoC tests lead to more demand for treatment while treatment capabilities are scarce [5]. On the other hand, PoC diagnosis might not be feasible (for a test-and-treat scenario) when adequate treatment capabilities are not in place [70]. When treatment is available, patients might also simply opt out of tests in favor of the direct use of medication, such as over-the-counter antibiotics. This has been reported in Thailand, where missing information about disease origin among the public leads to a preference for medication instead of proper diagnosis, as medication is associated with symptoms rather than disease origin. For example, patients associate antibiotics with the symptoms of a bacterial infection instead of the infection itself and thus demand antibiotics even when the illness is not bacterial in nature but has similar symptoms [92]. PoC testing might offer proper diagnoses, preventing people from self-diagnosing and taking inadequate medicine. This also shows the importance of looking at PoC applications not independently but in the whole context of the healthcare system, which acts on and is acted upon by various factors. Some researchers assess that the introduction of a PoC device into a system that is already in place changes its role from a technical to a social device. However, this also has its upsides: a more elusive reason why healthcare workers in rural areas might want to use PoC testing is the psychological effect it can have on patients, giving healthcare workers more certainty which, in turn, transfers to the patient and improves compliance. Patients may also overestimate the capabilities of the tests, which encourages compliance [92]. PoC can provide evidence without the need for laboratory infrastructure and highly trained lab technicians [21].
Patient Management
In addition to the infrastructural bottlenecks related to the availability of electricity and water, which were discussed in the research section, one of the most immediate problems is patient management. In rural areas, traditional laboratory diagnostics are limited by the distance to central laboratories. For laboratory diagnostics, samples, for example in the form of dried blood spots, have to be transported by motorcycle, creating problems of long turnaround times for test results of up to two weeks and the danger of sample damage or loss. Another problem is loss to follow-up when patients have to either wait for their results or go to another facility. This may be a problem for two-step diagnostic processes in which the second step is not PoC and the patient has to return or travel to a second facility [50]. Especially in rural LIC areas, there are large barriers to reaching healthcare facilities, due to poor transport infrastructure and time constraints that might prevent a second visit [70].
For example, a third of the women in Ghana live further than two hours away from facilities with emergency obstetric and neonatal care capabilities [20,93]. Barriers such as these create gaps in the diagnostic and treatment pipeline and can lead to high levels of LTFU if patients do not return to collect their results and start treatment [44,88]. However, this also shows that highly effective laboratory diagnostics might not be suitable for LICs, even if their sensitivity and selectivity are far superior to those of PoC devices; if they pose a risk of LTFU, a one-step PoC device might be the more pragmatic and better solution [46]. In HIV diagnosis and treatment, the decentralization of diagnostics from large hospitals to rural healthcare centers (RHCs) proved essential to give people outside of urban areas access to therapies such as ART. Before the implementation of testing in RHCs, the rate of LTFU was unacceptably high [94], as over half of patients did not return to get their results [95].
The large potential of PoC diagnostics in this context is, in part, due to their ability to ensure the start of treatment in the same encounter, which is essential, as the rapid initiation of treatment is immensely important in diseases such as AIDS and tuberculosis [45]. In India, just one round of combined screening and treatment for HPV reduced the cervical cancer rate and mortality among women over 30 by 50% [96]. For this reason, the WHO recommends a screen-and-treat strategy for 30-39-year-old women [70]. A PoC test time of under one hour from test to result would be ideal, as treatment can follow in the same encounter [22]. In a healthcare worker survey, participants indeed considered a sample-to-answer time of less than an hour optimal [43]. This forms a barrier for most nucleic-acid-based tests, which take several hours [43].
Some researchers therefore argue for more holistic thinking in terms of health services: there needs to be better linkage connecting testing, diagnosis and treatment [66]. Given the large impact LTFU has, resources could also be used to prevent LTFU instead of perfecting diagnostic devices [70].
For example, getting infants on ART could be achieved with combinations of tests of different sensitivity, given proper linkage. An initiation rate of 71% could be achieved either with a PoC device with a limited sensitivity of only 72% but a successful linkage rate of 99%, or with a test of 100% sensitivity and 70% successful linkage [86,88]. Some researchers suggest that PoC tests should be evaluated just as much on their ability to facilitate linkage as on their performance [88]. As demonstrated before, the cost-benefit also favors a two-step system. The absence of functional referral systems is seen as a huge roadblock by other authors as well [95]. Therefore, information technology plays a key role in the context of PoC, maybe even more so than new and better devices themselves. The rapid reporting of results and counseling via mobile phones is essential for the decentralized use of PoC. Mobile-phone-linked PoC devices can also assist in data capture and quality control, medication distribution, PoC tracking and data storage [5,43,97]. The usage of mobile phones in this way is generally categorized as mHealth, an expanding subfield of eHealth, which concerns itself with the use of wireless technology instead of connection through the ordinary landline infrastructure used in eHealth. This is especially interesting for LMICs, as mobile phone usage outperforms the usage of other communication infrastructure [98]. In total, 70% of the 7.4 billion users of cellular phones reside in LMICs, and mHealth has expanded rapidly, especially in sub-Saharan Africa, making this approach hugely promising [20,99,100]. The response to Ebola in the last epidemic might serve as an example, as PoC deployment was very fast thanks to the effective surveillance systems in place. However, healthcare systems are slow to use this connectivity to its full potential [66]. This is starting to change as mHealth is utilized more. In a review of 255 studies of mHealth applications, 93 fell into the realm of health monitoring and surveillance; the second largest group, with 88 publications, concerned raising health awareness [98]. Another study found the most common use contexts to be increased patient follow-up and patient compliance [100].
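The arithmetic behind these initiation rates is simply the product of test sensitivity and linkage rate:

$$0.72 \times 0.99 \approx 0.71 \qquad \text{and} \qquad 1.00 \times 0.70 = 0.70,$$

so a near-perfect linkage rate fully compensates for a markedly less sensitive test.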
Training
While large hospitals in central areas might have an appropriate workforce, the staff of rural healthcare clinics, which are the main access points to healthcare for the rural population, consists mainly of untrained individuals or inadequately small workforces: often just one doctor, nurse or pharmacist, with the possible addition of lay healthcare workers (LHWs) [43,61,95,101].
Human resources are in surprisingly short supply when it comes to healthcare workers, which may stretch the system, especially at rural sites [26,66]. Additional onsite testing could put even more strain on the already overworked staff; as already mentioned, this might be a major problem if no additional staff or incentives are available, and it might discourage PoC use [20,43,102]. There is also a lack of educated healthcare personnel, especially in Africa, which carries over 24% of the worldwide disease burden while having only 2% of the world's physicians [61,103-105]. The shortage of skilled healthcare workers was seen as a problem by several authors [20,49].
Surveyed healthcare workers assessed PoC diagnostics as easy to use; however, they still expressed fear of knowledge gaps among users and concern about incorrect use. For example, the use of a wrong buffer solution, or no buffer solution at all, has been observed [49]. Other researchers also reported reluctance toward PoC in LICs due to the need for training and the costs associated with implementation as well as diagnosis [45]. It is especially feared that lay healthcare workers will not have adequate training or knowledge to conduct even simple PoC tests, which could lead to inaccurate results and damage the perception of PoC in these settings [20]. Despite these fears, the World Health Organization recommends task shifting to LHWs to meet human resource needs [78,85], and it is a tool increasingly used to combat the estimated shortage of 7.2 million healthcare workers, which is most severe in Sub-Saharan Africa [106].
The question is whether task shifting from healthcare professionals to LHWs can be achieved without a loss in the reliability of test results. Lay health workers provide an important opportunity to give more people access to healthcare, especially in rural areas. However, medical devices are usually not designed to account for task shifting [61].
Some argue that task shifting has to be accommodated when designing the PoC device. The interface has to be straightforward and user-friendly, even for laymen, which is often not considered in the design phase [21]. Ideally, the device should be a fully autonomous, robust "black box" [4], which is fully automated, with a simple interface and everything integrated into a simple "sample to answer" process [43]. Others suggest that the technological complexity must be as low as that of a home pregnancy test [22], or assess modern PoC diagnostic platforms used for CD4+ testing as too sophisticated for use in LICs [44]. Laboratory professionals also doubt that diagnostics can be performed by lay healthcare workers with appropriate quality assurance [5,45]. This viewpoint might be understandable, given the findings of doctors observing the incorrect use of lateral flow devices [49]. The question, therefore, remains whether PoC devices demand usage by healthcare professionals.
Research suggests otherwise. A study on task shifting in the use of the Pima CD4+ Analyzer (Alere) for HIV counseling and testing (HCT) in Namibia showed that lay health workers can produce tests as valid as those of nurses.
In a large study of 1429 CD4+ tests, in which 500 were performed by nurses and 929 by LHWs, the reception of test results by patients was in favor of LHWs, with 98.1% versus 95.6%. LHWs were only slightly slower, with a median turnaround time of 21 minutes compared to 20 minutes for nurses. However, both were a tremendous improvement over the turnaround time of a laboratory test, which had a median of 4 days (IQR 2-8). Therefore, task shifting to LHWs may be an appropriate choice, even for more complex tests [85]. Other studies agree that LHWs can perform rapid testing just as well as trained laboratory staff, if trained properly. However, when implementing a training program, care must be taken that the training package is adapted to the local environment [78]. LHWs in Malawi have named a lack of disease- and job-specific training as a key problem hindering their role as TB care providers [106,107]. Lateral flow tests for HIV testing were so successful in their ease of use that task shifting could be widely implemented, and these tests in LMICs are now often done by expert patients or trained lay healthcare workers [95].
Use by Trained Doctors
In cases where trained medical professionals are available, the bottlenecks present themselves differently. One issue is time constraints. In India, doctors prefer clinical diagnoses coupled with empiric treatments over higher diagnostic certainty. Broad-spectrum antibiotic prescription after only a short symptomatic observation is a common example; this is faster than performing an additional PoC test that might not even be necessary, just in order to make the diagnosis more certain [5]. The general overburdening of doctors in India is one factor in this. Visits generally last only a few minutes, which is usually not enough time for PoC testing. From the doctor's perspective, it is better for their reputation to treat several other waiting patients in this time and keep waiting lines short. Many doctors only have a single room with one nurse as an assistant, and even simple lateral flow tests are difficult to conduct under such conditions [5].
Awareness is another issue suggested by the literature. Healthcare providers might not be aware of PoC tests on the market [5]. A lack of knowledge about PoC testing among people living in rural areas could be addressed by, for example, advertising and explaining the use of PoC tests for the rapid diagnosis of specific diseases. This could raise awareness and prevent last-minute visits to the doctor. However, general awareness of PoC devices in Kenya was surveyed and is high throughout high-, mid- and low-tier healthcare providers, and it does not seem to be a critical barrier. In total, 95% of healthcare providers in the survey could name a disease that can be diagnosed with PoC tests (71% could name two and 24% could name three); only 5% were not able to name any disease for which PoC tests are available. However, only 10% of healthcare workers who named more than three PoC devices had actually applied them in their practice [28]. This indicates that the bottleneck is systemic rather than knowledge-based. Doctors in richer hospitals showed greater knowledge and use of HIV tests than rural doctors. In malaria diagnostics, for example, there is a wide gap between knowledge about (57%) and actual use of PoC diagnostics (36%), independent of socioeconomic factors. It is suggested that this is because PoC devices for malaria are seen as of limited usefulness, since the disease has strong symptoms and its prevalence is foreseeable due to the seasons. Other possible reasons might be a lack of availability, the devices not complementing other diagnostic methods, or greater success with other diagnostic methods, such as a symptomatic approach. For the other identified diagnostics, knowledge about the device was 1 to 3 times higher than actual use [28]. Healthcare workers in other studies could identify various PoC tests. The best-known ones were for malaria, HIV, syphilis, blood glucose and pregnancy. These findings concurred with other surveys [49,108].
View on PoC
With regard to patients' views on PoC, it was found that, according to healthcare personnel, patients were satisfied with PoC results (97%) and would recommend them (96%). However, only half of clinicians thought they would give reliably accurate results; 46% were unsure and 4% considered them not accurate. In total, 65% of healthcare workers used medication even on a negative test, showing that trust in the test is limited. This concurs with only 20% of healthcare workers stating that they rely on the test alone. The majority sees the test as a complement to other means of diagnosis, such as symptoms [28]. In total, 54% of those surveyed encountered barriers preventing PoC use. The likelihood of encountering barriers correlated with hospital tier (45% in high-end hospitals and 53% in mid-tier hospitals). Of the personnel who encountered barriers, 50% named reliability issues as a large problem; the second largest obstacle named was availability, with 46%. Only a smaller percentage saw cost (14%) and awareness or training deficiencies (12%) as major obstacles. When asked about improvements to increase PoC use, the respondents replied with improved tests (44%), improved reliability (22%) and standardization (20%), the last of which was mentioned specifically even though it was not offered as an answer in the survey. Oddly, increased availability was named by just 22%, despite it being the second most identified barrier. In total, 85% agreed that PoC is an opportunity for more affordable healthcare in Kenya [28].
From the start of research to the view of end-users at the patient side, the bottlenecks of PoC diagnostics along the value chain seem to be as diverse and as different from each other as the actors and circumstances that present themselves along the way, as summarized in Figure 4. However, as Figure 5 shows, there are many possible solutions at each step as well, which are, next to technological advancements, often based upon the connection of different stakeholders.
Main Findings
Fundamental research always starts with funding, and it is, therefore, an obvious consideration. However, funding is not only needed there. Additional funding, as well as incentives for valorization, is direly needed to actually make the jump from a research principle to a medical device. Push- and pull-incentives are used with considerable success in drug development and might prove valuable if systems directed at PoC are in place. Connecting all stakeholders, such as research groups, companies, healthcare professionals, as well as governments and NGOs, is essential and enables IP considerations and licenses to be negotiated to everyone's benefit. Healthcare professionals' needs can be shared and taken into account, and company expertise in scale-up and distribution can be applied.
For the development of the device, one can ask how important it is to be ASSURED. While many argue for sensitivity, specificity and reliability as the main points, other voices argue for a more integrated view. For example, it can be argued that the needed criteria depend solely on the use case. This notion has been supported by original research showing that a two-step system can be cheaper, as well as more specific, despite the first stage not having optimal characteristics. However, for this to work, a proper integration into the healthcare system, with reasonable referral structures and minimal LTFU, has to be achieved. This might be realized by improved ICT structures and the use of mobile phones for diagnostic readouts. Tests can have shortcomings as long as proper linkage to further tests and treatment is in place and LTFU is kept to a minimum. However, practitioners need to be aware of the test's shortcomings to have a safe basis for decision making. Perceived low device quality can, and does, hinder effective PoC usage due to mistrust by doctors. Clearer communication of what PoC can and cannot achieve might give healthcare providers more certainty in their decision-making and empower them to trust the device and correctly interpret the received results. Clear guidelines for healthcare workers on how to use the results of a PoC device in the grand scheme of things need to be in place, including what the next steps for patients are, either for treatment or further diagnosis. These guidelines need to be beneficial to the healthcare worker as well, instead of just adding additional work. At this moment, it is more in the interest of a doctor in India with 30 waiting patients to simply give a patient antibiotics and send him or her home after a brief symptomatic assessment than to perform a PoC diagnosis, which takes more time. If the patient does not come back, the doctor may assume the patient cured and treat it as a success. However, this benefits neither society nor the patient. Therefore, test and referral systems must be integrated in a way that lifts the burden from the practitioner instead of adding to it.
On the market side, the tremendous influence of proper placement within the healthcare system and its incentive structure is shown by the use of subpar PoC devices in India and other countries. If malfunctioning PoC devices can achieve widespread use, surely working diagnostics can too, if they are properly integrated into an incentive structure. The importance of the incentive structure is also shown by PoC use in diagnostic laboratories in India and South Africa, where PoC devices are misused as cheap laboratory alternatives, instead of serving their intended goal as fast, patient-side diagnostic instruments.
A social problem that was identified as a large bottleneck is the lack of an available workforce for testing. While researchers' opinions regarding task shifting vary from skeptical to enthusiastic, there are interesting insights into its feasibility, arguing that lay healthcare workers can conduct tests even on more difficult platforms if trained properly.
Outlook
As research on PoC diagnostics continues, we will get closer and closer to versatile, accurate and cheap detection methods that are more in line with the desirable ASSURED criteria. However, for the immediate success of PoC and for the benefit of patients in LICs, the research part might already be more advanced compared to the other aspects of the value chain. This is shown by the fact that already established PoC tests and resources are not nearly used to their full potential. A wide range of reasons can be identified, from missing funding for scale-up and a lack of corporate incentives, as well as delivery problems and stock-outs, to the problem of integration into healthcare systems and a lack of trust by doctors. The lack of valorization of PoC in LICs seems to be a social and economic problem more than a problem of research.
Researchers will continue to make progress towards improved devices, which will be easier to implement. However, the burden of implementation should not be put solely on the shoulders of scientists to discover novel advanced technical solutions for a perfectly ASSURED system. Tests that might not be perfect, but that are instead perfectly adequate for use, are within reach, but missing incentive structures and a lack of political attention will prevent their effective use.
While Zipline is a company concerned with infrastructure and not with diagnostics, it still shows, in an impressive manner, how new technology can innovate a whole market if there is political will and there are no old stakeholders that benefit from the status quo. In the case of Zipline, there was no real alternative for emergency blood delivery in an appropriate amount of time; the stage was free for Zipline's new and improved technology. In diagnostics, old infrastructure and incentive structures, from centralized laboratories to skeptical or time-constrained doctors, have to be overcome. While developments in research and innovation in PoC diagnostics over recent years are even more impressive than Zipline's drone systems, they also have to conquer larger infrastructural barriers. Zipline's CEO, Keller Rinaudo, stated that the technology is the easy part; it is more difficult to resolve regulatory issues, acquire and train the necessary workforce locally, and create awareness of the services among doctors and healthcare workers [80]. Mabey et al. provided another success story that further demonstrates that PoC tests need to address the whole value chain in order to be successful [109]. They implemented PoC tests for syphilis diagnosis and were successful because they not only addressed a need in the healthcare system and offered solutions that adhered to the ASSURED criteria, but also managed to improve healthcare worker training, ensured that effective treatment was available, and improved the local medical supply chain. These requirements are further illustrated by the case of blood glucose testing, the most striking example of a PoC success story in HICs. Blood glucose meters and test strips are often unavailable in LICs, and even when present, other factors hamper their implementation, such as poor diabetes education, economic constraints and regulatory issues [110]. Therefore, sadly, one of the most impactful PoC tools in HICs has not yet been able to achieve the same impact in rural, low-income settings.
Faced with the healthcare challenges in LICs, a transformation to a smart healthcare system, with real-time information flow and referral structures, will be necessary to make the most out of the innovations that come out of the lab.
Thus, there is a case to be made that scientific progress and innovation may not be the limiting factor; other limiting steps seem to hinder valorization at least as much. Those factors might improve together with the economic development of LICs, but for now, researchers need to take these social factors into account just as much as the technical aspects of new devices. The research and implementation of a new device have to be designed in synergy with its target location, instead of merely being adapted to it later on. Advances in fast-throughput devices, stable and long-lasting reagents, frugal design, the integration of mHealth capabilities, and devices that have task shifting in mind will be of great help in this challenge, as long as those technical advancements are connected to the local realities. An mHealth connection will only be as potent as the referral structures that are in place to accept it. A task-shifting-enabled device is only as beneficial as the availability of LHWs. Therefore, rollout, training and supply plans have to be integrated in and developed together with the device and in close connection with the target environment, which might be, at least in part, an aspect of a researcher's work too. Research needs to break down as many barriers and enable the connection of as many stakeholders as possible in order for policy makers and companies to take the leap and bring PoC diagnostics to the patients.
Conflicts of Interest:
The authors declare no conflict of interest.
An Improved Cohesive Zone Model for Interface Mixed-Mode Fractures of Railway Slab Tracks
Abstract: The interface crack of a slab track is a mixed-mode fracture that experiences a complex loading-unloading-reloading process. A reasonable simulation of the interaction between the layers of slab tracks is the key to studying the interface crack. However, the existing models of interface defects of slab tracks have problems, such as stress oscillation at the crack tip and self-repairing, and do not simulate the mixed mode of interface cracks accurately. Aiming at these shortcomings, we propose an improved cohesive zone model, combined with an unloading/reloading relationship, based on the original Park-Paulino-Roesler (PPR) model. It is shown that the improved model guarantees the consistency of the cohesive constitutive model and describes the mixed-mode fracture better. This conclusion is based on the assessment of the work-of-separation and the simulation of the mixed-mode bending test. Through tests of loading, unloading, and reloading, we observed that the improved unloading/reloading relationship effectively eliminated the issue of self-repairing and preserved all essential features. The proposed model provides a tool for the study of the interface cracking mechanism of ballastless tracks and theoretical guidance for the monitoring, maintenance, and repair of layer defects, such as interfacial cracks and slab arches.
Introduction
Chinese railway track systems (CRTS) have successfully served for more than 10 years in China's high-speed railway (CRH) and have performed well during the period. However, with increasing operation time and the influence of complex temperature and environmental conditions, hundreds of interfacial cracks (as shown in Figure 1) between track slab and cement asphalt mortar (CA mortar) have appeared on the high-speed railway tracks [1,2]. Under extremely high temperatures in summer, defects of slab arching also occur.
Typical interlayer defects, such as slab arching [3], are closely related to the interfacial cracks between the track slab and the under-layer. During operation, the track directly bears the cyclic loads from high-speed trains and the environmental temperature, which increases the possibility of interface cracking. As a vertically multilayered and longitudinally heterogeneous structure, the slab ballastless track has weak parts at the interfaces between new and old concrete and at composite connection surfaces. Therefore, a reasonable simulation of interlayer interactions is the key to studying the defects of track structures.
The cohesive zone model (CZM), an effective and favored crack model in interface fracture mechanics, has been widely used to simulate crack initiation and propagation in various materials, such as metals [4-6], polymers [7], ceramics [8], concrete [9-11], and fiber-reinforced composites. Various cohesive zone models may have different applicable conditions due to their different initial assumptions. For instance, the trapezoidal cohesive zone model proposed by Tvergaard [23] could not consider the situation where the mode I fracture energy is not equal to the mode II fracture energy. The exponential cohesive zone model proposed by Xu and Needleman [27] could consider different values of the normal and tangential fracture energies, but when the two fracture energies differ, there is a "self-repairing" problem at the crack tip under mixed-mode loading and unloading.
The Park-Paulino-Roesler (PPR) model is a kind of polynomial traction-separation law for mixed-mode fractures that was proposed by Park et al. [32] in 2009. This model is versatile because it can consider different fracture energies with respect to fracture modes and can be applied to represent various material softening responses, i.e., ductile, brittle, and quasi-brittle, due to the controllable softening given by the shape parameters [13,32]. More significantly, the model guarantees the consistency of the cohesive constitutive relationship under mixed-mode conditions [30,33,34].
Due to the above advantages and the convenient implementation in commercial software ABAQUS as a user subroutine [34][35][36], the PPR model has been utilized to investigate a wide range of failure phenomena and cited in many papers. The model was found to still have limitations that need to be improved. Nguyen et al. [37] indicated that due to the different cohesive interaction regions between the normal and tangential tractions when fracture energies are different, one traction component might become zero while the other traction component had not yet vanished. This situation does not conform to reality in which normal and tangential tractions typically fail simultaneously when a fracture happens.
In addition, Spring et al. [38] noted that the unloading/reloading relationship, which was commonly utilized in conjunction with the PPR model, produced self-healing behavior when the crack underwent unloading/reloading. To address this issue, a new coupled unloading/reloading relationship, which maintained the thermodynamic consistency of the PPR cohesive model, was developed [38]. More recently, the research by Gilormini et al. [39] showed that the new unloading/reloading relationship prevented the questionable features that might appear when the original model [34,35] was used, but also bred a new issue regarding damage initiated from the very beginning of the loading process. This model ignores the initial elastic region.
In this paper, an alternative simplified PPR traction-separation law and an improved unloading/reloading relationship are developed and validated using multiple cases; they effectively eliminate the above issues and preserve all essential features of the original model. The modeling method for the connections between the layers of the slab track proposed in this paper can contribute to understanding the mechanism of high-speed railway (HSR) interlayer defects, as well as to on-site monitoring, inspection, and maintenance. This paper is organized as follows. The review of the original PPR model (traction-separation law) and the unloading/reloading relationship is presented in Section 2. Section 3 shows the modification of the original PPR model and the comparison of the modified model with the original through example cases. Section 4 introduces the improvement of the unloading/reloading relationship and demonstrates that the improved one is effective with the example used in [39]. Then, Section 5 presents the application of the proposed model to analyze interface damage of railway slab tracks. Finally, the paper is summarized in Section 6.
Original Models
The PPR model was designed for pure loading conditions and does not contain a built-in unloading/reloading relationship [38]. To simulate fracture under general loading conditions, such as loading, unloading, and reloading, the PPR model is combined with an unloading/reloading relationship [34]. The original PPR model and unloading/reloading relationship are briefly introduced in the following subsections.
Original PPR Model
The fundamental issue in cohesive zone modeling is the definition of the traction-separation law, which gives the constitutive behavior of the fracture. The original PPR model defines the traction-separation law by taking the derivative of the cohesive fracture potential. The potential consists of polynomials formulated in terms of a normal separation ($\Delta_n$) and a tangential separation ($\Delta_t$), and it is expressed as [32]:

$$\Psi(\Delta_n, \Delta_t) = \min(\phi_n, \phi_t) + \left[\Gamma_n \left(1-\frac{\Delta_n}{\delta_n}\right)^{\alpha} \left(\frac{m}{\alpha}+\frac{\Delta_n}{\delta_n}\right)^{m} + \langle \phi_n - \phi_t \rangle\right] \left[\Gamma_t \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n} + \langle \phi_t - \phi_n \rangle\right] \quad (1)$$

Therefore, the traction-separation law is calculated as

$$T_n(\Delta_n, \Delta_t) = \frac{\partial \Psi}{\partial \Delta_n} = \frac{\Gamma_n}{\delta_n} \left[ m \left(1-\frac{\Delta_n}{\delta_n}\right)^{\alpha} \left(\frac{m}{\alpha}+\frac{\Delta_n}{\delta_n}\right)^{m-1} - \alpha \left(1-\frac{\Delta_n}{\delta_n}\right)^{\alpha-1} \left(\frac{m}{\alpha}+\frac{\Delta_n}{\delta_n}\right)^{m} \right] \left[\Gamma_t \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n} + \langle \phi_t - \phi_n \rangle\right] \quad (2)$$

$$T_t(\Delta_n, \Delta_t) = \frac{\partial \Psi}{\partial \Delta_t} = \frac{\Gamma_t}{\delta_t} \left[ n \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n-1} - \beta \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta-1} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n} \right] \left[\Gamma_n \left(1-\frac{\Delta_n}{\delta_n}\right)^{\alpha} \left(\frac{m}{\alpha}+\frac{\Delta_n}{\delta_n}\right)^{m} + \langle \phi_n - \phi_t \rangle\right] \frac{\Delta_t}{|\Delta_t|} \quad (3)$$

where $\langle \cdot \rangle$ is the Macaulay bracket, i.e., $\langle x \rangle = 0$ if $x \leq 0$, and $\langle x \rangle = x$ if $x > 0$. There are eight basic parameters (φ_n, φ_t, σ_max, τ_max, α, β, λ_n, and λ_t) involved in the PPR model [32]. The PPR model considers different normal and tangential fracture energies (φ_n and φ_t) and different cohesive strengths (σ_max and τ_max), and it controls the shape of the traction-separation law using the shape parameters α and β and the initial slope indicators λ_n and λ_t. The influence of α, β, λ_n, and λ_t on the material softening response was discussed in detail in [32].
These eight parameters could be obtained by fitting the interface stress-displacement relation measured in the splitting and shearing model tests of concrete and mortar bonded composite specimens [40]. From these eight parameters, the following quantities can be deduced, which are used in (1), (2), and (3):

$$m = \frac{\alpha (\alpha - 1) \lambda_n^2}{1 - \alpha \lambda_n^2}, \qquad n = \frac{\beta (\beta - 1) \lambda_t^2}{1 - \beta \lambda_t^2} \quad (4)$$

$$\delta_n = \frac{\phi_n}{\sigma_{max}} \alpha \lambda_n (1-\lambda_n)^{\alpha-1} \left(\frac{\alpha}{m}+1\right) \left(\frac{\alpha}{m}\lambda_n+1\right)^{m-1}, \qquad \delta_t = \frac{\phi_t}{\tau_{max}} \beta \lambda_t (1-\lambda_t)^{\beta-1} \left(\frac{\beta}{n}+1\right) \left(\frac{\beta}{n}\lambda_t+1\right)^{n-1} \quad (5)$$

$$\Gamma_n = (-\phi_n)^{\langle \phi_n - \phi_t \rangle / (\phi_n - \phi_t)} \left(\frac{\alpha}{m}\right)^{m}, \qquad \Gamma_t = (-\phi_t)^{\langle \phi_t - \phi_n \rangle / (\phi_t - \phi_n)} \left(\frac{\beta}{n}\right)^{n} \qquad (\phi_n \neq \phi_t) \quad (6)$$

$$\Gamma_n = -\phi_n \left(\frac{\alpha}{m}\right)^{m}, \qquad \Gamma_t = \left(\frac{\beta}{n}\right)^{n} \qquad (\phi_n = \phi_t) \quad (7)$$

where δ_n and δ_t are the normal final crack opening width and the tangential final crack opening width, respectively. If Δ_n ≥ δ_n or Δ_t ≥ δ_t, the tractions T_n and T_t are set to zero. Therefore, the traction-separation law is only valid in a region. To keep things simple, the separations (Δ_n, Δ_t) are assumed to be positive here. Then, the region can be expressed as

$$\{ (\Delta_n, \Delta_t) \mid 0 \leq \Delta_n \leq \delta_n,\ 0 \leq \Delta_t \leq \delta_t \} \quad (8)$$

Considering the region, the normal and tangential cohesive tractions of the PPR model are plotted in Figure 2 with different fracture energies (e.g., φ_n = 100 N/m, φ_t = 200 N/m, and other cases), cohesive strengths (e.g., σ_max = 40 MPa, τ_max = 30 MPa), shape parameters (e.g., α = 5, β = 1.3), and initial slope indicators (e.g., λ_n = 0.1, λ_t = 0.2). The normal cohesive traction (on the left in Figure 2) illustrates the fracture behavior of a typical quasi-brittle material, while the tangential cohesive traction (on the right in Figure 2) describes a plateau-type behavior. If φ_n < φ_t (Figure 2a,b), the tangential cohesive traction is properly defined in the rectangular region corresponding to the final crack opening widths (δ_n, δ_t), as mentioned above, while in the same region the normal cohesive traction T_n(Δ_n, Δ_t) becomes negative (Figure 2a), which is contradictory to the nature of cohesive tractions. Similarly, if φ_n > φ_t, the normal cohesive traction is properly defined in the rectangular region, while the tangential cohesive traction is negative in some areas, as illustrated in Figure 2c,d. If φ_n = φ_t (Figure 2e,f), the normal and tangential tractions are non-negative in the same region.
To prevent the unphysical response, Park et al. [32] redefined the region by narrowing it, so as to make the cohesive traction non-negative in the new region, and the traction was set to zero if it fell outside the new region. Taking φ_n < φ_t as an example, the change of region for the normal traction is demonstrated in Figure 3 (separations are assumed positive here). The parameter δ̄_t in Figure 3 is the tangential conjugate final crack opening width, and it is obtained as the tangential separation at which the normal traction in (2) vanishes, i.e., by solving $\Gamma_t (1 - \bar{\delta}_t/\delta_t)^{\beta} (n/\beta + \bar{\delta}_t/\delta_t)^{n} + \langle \phi_t - \phi_n \rangle = 0$ [32]. For the new cohesive interaction region (on the right in Figure 3), one border of the new region is the normal final crack opening width δ_n. The other border is the tangential conjugate final crack opening width δ̄_t. Due to δ̄_t < δ_t, the new region is smaller than the original one $\{ (\Delta_n, \Delta_t) \mid 0 \leq \Delta_n \leq \delta_n,\ 0 \leq \Delta_t \leq \delta_t \}$ (on the left in Figure 3), whereas the region of the tangential traction remains the original one, as shown in Figure 2b when φ_n < φ_t. This means the cohesive interaction regions of the normal and tangential tractions are different, and the tangential traction may still be large while the normal traction has already vanished in some regions. In other words, when a fracture happens, the normal and tangential tractions will not fail simultaneously. This is unrealistic for most interfaces encountered in engineering practice.
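As a concrete illustration of the parameter deduction above, the following minimal Python sketch evaluates the reconstructed Equations (4) and (5). It is our own illustration, not code from the original works; the function name and the example values are ours.

```python
def ppr_parameters(phi_n, phi_t, sig_max, tau_max, alpha, beta, lam_n, lam_t):
    """Non-dimensional exponents (m, n) and final crack opening widths
    (delta_n, delta_t) of the PPR model, per the reconstructed Eqs. (4)-(5)."""
    m = alpha * (alpha - 1.0) * lam_n ** 2 / (1.0 - alpha * lam_n ** 2)
    n = beta * (beta - 1.0) * lam_t ** 2 / (1.0 - beta * lam_t ** 2)
    delta_n = (phi_n / sig_max) * alpha * lam_n * (1.0 - lam_n) ** (alpha - 1.0) \
        * (alpha / m + 1.0) * (alpha / m * lam_n + 1.0) ** (m - 1.0)
    delta_t = (phi_t / tau_max) * beta * lam_t * (1.0 - lam_t) ** (beta - 1.0) \
        * (beta / n + 1.0) * (beta / n * lam_t + 1.0) ** (n - 1.0)
    return m, n, delta_n, delta_t

# Example with the Figure 2 parameter set (energies converted to N/mm and
# strengths in MPa = N/mm^2, so the widths come out in mm):
# m, n, dn, dt = ppr_parameters(0.1, 0.2, 40.0, 30.0, 5.0, 1.3, 0.1, 0.2)
```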
Unloading/Reloading Relationship
The original unloading/reloading relationship, which was commonly used with the PPR model, was linear to the origin [35], and it is expressed as follows:

$$T_n^{v}(\Delta_n, \Delta_t) = T_n(\Delta_n^{max}, \Delta_t)\, \frac{\Delta_n}{\Delta_n^{max}} \quad (9)$$

$$T_t^{v}(\Delta_n, \Delta_t) = T_t(\Delta_n, \Delta_t^{max})\, \frac{\Delta_t}{\Delta_t^{max}} \quad (10)$$

where $T_n^{v}$ and $T_t^{v}$ denote the tractions during unloading/reloading, and $\Delta_n^{max}$ and $\Delta_t^{max}$ are the largest values of Δ_n and Δ_t reached so far. They are updated only once the corresponding separation has passed the separation at the peak cohesive strength (δ_n^peak for Δ_n^max and δ_t^peak for Δ_t^max). That is to say, the original unloading/reloading relationship is activated when the normal or tangential separation is past the peak cohesive strength.
Spring et al. [38] found that the original unloading/reloading relationship was not thermodynamically consistent and produced self-healing behavior. To address this issue, a new coupled unloading/reloading relationship was proposed.
$$T_n^{v}(\Delta_n, \Delta_t) = T_n(\Delta_n^{max}, \Delta_t^{max})\, \frac{\Delta_n}{\Delta_n^{max}} \quad (11)$$

$$T_t^{v}(\Delta_n, \Delta_t) = T_t(\Delta_n^{max}, \Delta_t^{max})\, \frac{\Delta_t}{\Delta_t^{max}} \quad (12)$$

where $\Delta_n^{max}$ and $\Delta_t^{max}$ are updated as soon as Δ_n > 0 and Δ_t > 0. This means the linear unloading/reloading response applies even before any peak has been passed.
Gilormini et al. [39] compared the two unloading/reloading relationships. They demonstrated that the new unloading/reloading relationship performed better than the original one and did not have the above questionable features. However, they also indicated that the new one did not include an initial elastic region, since the energy was dissipated by increasing the damage from the very beginning of the loading process. To address this issue, our paper improves the unloading/reloading relationship (see Section 4).
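For later reference, the two relationships can be written as small Python helpers that take any traction law T(Δ_n, Δ_t) as input. This is a sketch of Equations (9)-(12) as reconstructed above, in our own notation, not the authors' implementation.

```python
from typing import Callable, Tuple

TractionLaw = Callable[[float, float], Tuple[float, float]]

def unload_model_i(T: TractionLaw, dn, dt, dn_max, dt_max):
    """Model (i), Eqs. (9)-(10): each component is scaled linearly to the
    origin, but the *current* value of the other separation stays inside T,
    which allows the questionable behavior discussed in the text."""
    Tn = T(dn_max, dt)[0] * (dn / dn_max) if dn_max > 0.0 else 0.0
    Tt = T(dn, dt_max)[1] * (dt / dt_max) if dt_max > 0.0 else 0.0
    return Tn, Tt

def unload_model_ii(T: TractionLaw, dn, dt, dn_max, dt_max):
    """Model (ii), Eqs. (11)-(12): both components are anchored at the
    running maxima, so unloading follows straight lines in both directions."""
    Tn_max, Tt_max = T(dn_max, dt_max)
    Tn = Tn_max * (dn / dn_max) if dn_max > 0.0 else 0.0
    Tt = Tt_max * (dt / dt_max) if dt_max > 0.0 else 0.0
    return Tn, Tt
```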
Simplified PPR Traction-Separation Law
The traction-separation law of the PPR model is adjusted here to avoid the issues mentioned in Section 2.1. The modifications of the traction-separation law are explained below. Then, based on previous studies [32], the path dependence of the work-of-separation is investigated with respect to proportional and non-proportional paths to demonstrate the consistency of the simplified PPR traction-separation law. Finally, the simplified model is verified by simulating a mixed-mode bending test and comparing it with the original model.
Modification
From Figure 2, we concluded that the cohesive interaction regions for the normal and tangential tractions are the same only if φ_n = φ_t. Substituting φ_n = φ_t into Equations (2) and (3), we obtain the traction-separation law as follows:

$$T_n(\Delta_n, \Delta_t) = \frac{\phi_n}{\delta_n} \left(\frac{\alpha}{m}\right)^{m} \left(\frac{\beta}{n}\right)^{n} \left[ \alpha \left(1-\frac{\Delta_n}{\delta_n}\right)^{\alpha-1} \left(\frac{m}{\alpha}+\frac{\Delta_n}{\delta_n}\right)^{m} - m \left(1-\frac{\Delta_n}{\delta_n}\right)^{\alpha} \left(\frac{m}{\alpha}+\frac{\Delta_n}{\delta_n}\right)^{m-1} \right] \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n}$$

$$T_t(\Delta_n, \Delta_t) = \frac{\phi_n}{\delta_t} \left(\frac{\alpha}{m}\right)^{m} \left(\frac{\beta}{n}\right)^{n} \left[ \beta \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta-1} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n} - n \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n-1} \right] \left(1-\frac{\Delta_n}{\delta_n}\right)^{\alpha} \left(\frac{m}{\alpha}+\frac{\Delta_n}{\delta_n}\right)^{m} \frac{\Delta_t}{|\Delta_t|}$$
The traction-separation law then only depends on the mode I fracture energy φ_n. To account for different values of φ_n and φ_t, the mode II fracture energy φ_t is substituted for φ_n in the equation for the tangential traction. Therefore, the final form of the simplified PPR traction-separation law is given by keeping T_n as above and taking

$$T_t(\Delta_n, \Delta_t) = \frac{\phi_t}{\delta_t} \left(\frac{\alpha}{m}\right)^{m} \left(\frac{\beta}{n}\right)^{n} \left[ \beta \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta-1} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n} - n \left(1-\frac{|\Delta_t|}{\delta_t}\right)^{\beta} \left(\frac{n}{\beta}+\frac{|\Delta_t|}{\delta_t}\right)^{n-1} \right] \left(1-\frac{\Delta_n}{\delta_n}\right)^{\alpha} \left(\frac{m}{\alpha}+\frac{\Delta_n}{\delta_n}\right)^{m} \frac{\Delta_t}{|\Delta_t|}$$

The simplified PPR traction-separation law is similar to the original PPR model and can also consider different fracture energies, cohesive strengths, and various material softening behaviors. The noteworthy merits of the simplified model are that the energy constants Γ_n and Γ_t are omitted (the other parameters are the same as in the original model), and that the formulas are unified regardless of what the fracture energies are. Taking φ_n = 100 N/m and φ_t = 200 N/m as an example, the normal and tangential cohesive tractions of the simplified model are plotted in Figure 4, which shows that the normal and tangential tractions are both properly defined in the same region, as expected. In the following section, the applicability of the simplified model is demonstrated using multiple cases.
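The simplified law transcribes directly into a few lines of code. The sketch below reuses ppr_parameters from the earlier sketch and implements the two SPPR tractions as reconstructed above; it is an illustration, not the UEL subroutine used later, and contact (negative Δ_n) is not handled.

```python
def sppr_tractions(dn, dt, phi_n, phi_t, sig_max, tau_max,
                   alpha, beta, lam_n, lam_t):
    """Normal and tangential tractions of the simplified PPR (SPPR) law."""
    m, n, delta_n, delta_t = ppr_parameters(phi_n, phi_t, sig_max, tau_max,
                                            alpha, beta, lam_n, lam_t)
    if dn < 0.0 or dn >= delta_n or abs(dt) >= delta_t:
        return 0.0, 0.0  # outside the cohesive region (contact not modeled)
    c = (alpha / m) ** m * (beta / n) ** n  # constant replacing Gamma_n*Gamma_t
    x, s = dn / delta_n, abs(dt) / delta_t
    fn = (1.0 - x) ** alpha * (m / alpha + x) ** m
    dfn = (alpha * (1.0 - x) ** (alpha - 1.0) * (m / alpha + x) ** m
           - m * (1.0 - x) ** alpha * (m / alpha + x) ** (m - 1.0))
    ft = (1.0 - s) ** beta * (n / beta + s) ** n
    dft = (beta * (1.0 - s) ** (beta - 1.0) * (n / beta + s) ** n
           - n * (1.0 - s) ** beta * (n / beta + s) ** (n - 1.0))
    Tn = (phi_n / delta_n) * c * dfn * ft
    Tt = (phi_t / delta_t) * c * dft * fn * (1.0 if dt >= 0.0 else -1.0)
    return Tn, Tt
```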
Path Dependence of Work-of-Separation
The analysis of work-of-separation is a way to study the behavior of a coupled cohesive zone model [13,32,41]. In this paper, we compare the work-of-separation of the simplified PPR traction-separation law (SPPR) with the original PPR model for proportional separation paths and non-proportional paths. The fracture parameters in [32] were utilized in this investigation: φ n = 100 N/m, φ t = 200 N/m, σ max = 3 MPa, τ max = 12 MPa, α = 3, β = 3, λ n = 0.01, and λ t = 0.01.
Proportional Separation
The proportional separation path is shown in Figure 5. The variable θ in Figure 5 is the separation angle between the path direction and the tangent, and Δ_r is the separation along the proportional path. With the increase in Δ_r, the interface gradually debonds. The work-of-separation is calculated with the following expression [32]:

$$W_{sep} = \int_0^{\delta_r} T_n(\Delta_r \sin\theta, \Delta_r \cos\theta)\, \sin\theta \, \mathrm{d}\Delta_r + \int_0^{\delta_r} T_t(\Delta_r \sin\theta, \Delta_r \cos\theta)\, \cos\theta \, \mathrm{d}\Delta_r$$
where $\delta_r = \sqrt{\delta_n^2 + \delta_t^2}$. The first term in the work-of-separation expression is the work conducted by the normal traction (W_n), and the second term is the work conducted by the tangential traction (W_t). W_sep = W_n = φ_n when the separation angle θ is 90°. When θ = 0°, the work-of-separation W_sep and W_t are the same as the mode II fracture energy φ_t. Figure 6 illustrates the variation of W_sep, W_n, and W_t with respect to the separation angle. The results for the PPR model are on the left and for the SPPR model on the right. The changing laws of W_sep, W_n, and W_t with respect to the separation angle are the same for the two models. In particular, when the separation angle is 0° or 90°, the curves for the SPPR model are exactly the same as for the PPR model.
If θ is equal to 0°, W_sep and W_t increase from 0 to the mode II fracture energy (200 N/m) with the increase in Δ_r, while W_n remains zero. When θ is equal to 90°, W_sep and W_n reach the mode I fracture energy (100 N/m), and W_t stays at zero. For intermediate angles (0° < θ < 90°), the W_sep, W_n, and W_t of both models change monotonically with respect to the increase in the separation angle θ. These results verify that the PPR and SPPR models both guarantee the consistency of the cohesive constitutive model.
There is a difference between the PPR model and the SPPR model. When 0° < θ < 90°, the work conducted by the normal traction W_n for the PPR model only changes slightly with increasing separation angle. In contrast, for the SPPR model it changes more markedly and uniformly over the whole range of separation angles. This is due to the fact that the cohesive interaction region for the normal traction of the PPR model is smaller than that of the SPPR model here (φ_n < φ_t), leading to a smaller W_n for the PPR model under mixed-mode fracture conditions.
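As a numerical sanity check on the proportional-path integral above, midpoint quadrature with the earlier sketches should return W_n ≈ φ_n at θ = 90° and W_t ≈ φ_t at θ = 0°. The helper below is our own illustration, not the code used to produce Figure 6.

```python
import math

def work_of_separation(theta_deg, params, steps=5000):
    """Midpoint-rule quadrature of W_n and W_t along a proportional path."""
    _, _, delta_n, delta_t = ppr_parameters(*params)
    theta = math.radians(theta_deg)
    dr_max = math.hypot(delta_n, delta_t)  # tractions vanish beyond this
    h = dr_max / steps
    Wn = Wt = 0.0
    for i in range(steps):
        r = (i + 0.5) * h
        Tn, Tt = sppr_tractions(r * math.sin(theta), r * math.cos(theta), *params)
        Wn += Tn * math.sin(theta) * h
        Wt += Tt * math.cos(theta) * h
    return Wn, Wt, Wn + Wt

# Example with this section's parameter set (energies in N/mm, strengths in MPa):
# work_of_separation(90.0, (0.1, 0.2, 3.0, 12.0, 3.0, 3.0, 0.01, 0.01))
```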
Non-Proportional Separation
The non-proportional separation paths are shown in Figure 7. In path 1, the interface is loaded in the normal direction until Δ_n = Δ_n,max; then, complete tangential separation occurs. Accordingly, in path 2, the interface is first loaded in shear up to Δ_t,max, and then completely broken in the normal direction [41]. The expressions of the work-of-separation for the two paths were given by [32]:

$$W_{sep} = \int_0^{\Delta_{n,max}} T_n(\Delta_n, 0)\, \mathrm{d}\Delta_n + \int_0^{\delta_t} T_t(\Delta_{n,max}, \Delta_t)\, \mathrm{d}\Delta_t \quad (18)$$

$$W_{sep} = \int_0^{\Delta_{t,max}} T_t(0, \Delta_t)\, \mathrm{d}\Delta_t + \int_0^{\delta_n} T_n(\Delta_n, \Delta_{t,max})\, \mathrm{d}\Delta_n \quad (19)$$

For the first path (Figure 7a), Δ_n,max = 0 represents the pure mode II fracture, while Δ_n,max = δ_n describes the pure mode I fracture. Similarly, for the second path (Figure 7b), when Δ_t,max is zero, the separation path illustrates the pure mode I failure, while Δ_t,max = δ_t represents the pure mode II fracture. The change of Δ_t,max from 0 to δ_t (resp. Δ_n,max from 0 to δ_n) demonstrates the gradual change of the mode mixity from the mode I fracture to the mode II fracture (resp. from the mode II fracture to the mode I fracture). Based on Equations (18) and (19), the work-of-separation may change with increasing Δ_n,max or Δ_t,max. If the work-of-separation varies monotonically from one fracture mode to the other, this demonstrates the consistency of the cohesive constitutive model [32,41]. Figure 8 shows the variation of W_sep, W_n, and W_t with respect to the two paths, under the condition φ_n < φ_t. The results for the PPR model are on the left and for the SPPR model on the right. W_sep, W_n, and W_t all change monotonically for both models. For path 1 (Figure 8a,b), the curves of W_sep, W_n, and W_t for the SPPR model are exactly the same as for the PPR model. Figure 8a,b show that the work conducted by the tangential traction W_t gradually decreases from φ_t to 0, while the work conducted by the normal traction W_n increases from 0 to φ_n. The work-of-separation W_sep is the sum of W_n and W_t, and it varies monotonically from the value of φ_t to the value of φ_n as Δ_n,max increases from 0 to δ_n. For path 2 (Figure 8c,d), the change rules of W_sep, W_n, and W_t are the exact opposite of those in path 1. There is a kink point on the curves of W_n and W_sep, as shown in Figure 8c, but not in Figure 8d.
The separation at the kink point corresponds to the border Δ_t = δ̄_t of the original PPR model, where δ̄_t is the tangential conjugate final crack opening width, as previously described in Section 2.1. When Δ_t is smaller than δ̄_t, the normal cohesive interaction is obtained based on Equation (2). When Δ_t is greater than δ̄_t, the normal traction is set to zero. The normal cohesive interaction is then not smooth but piece-wise continuous at Δ_t = δ̄_t. As a result, W_n and W_sep also have the kink point at the same location. In contrast, the curves of W_sep, W_n, and W_t for the SPPR model, as shown in Figure 8d, are continuous and smooth. This is because both the normal and tangential cohesive interactions for the SPPR model are continuous and smooth in the region $\{ (\Delta_n, \Delta_t) \mid 0 \leq \Delta_n \leq \delta_n,\ 0 \leq \Delta_t \leq \delta_t \}$. This indicates that the SPPR model describes the mixed-mode fracture better.
Additionally, the same conclusion can be reached when the mode I fracture energy is greater than the mode II fracture energy, as shown in Figure 9. For the PPR model, the kink point then occurs in path 1, because the tangential cohesive interaction is piece-wise continuous, while it is continuous and smooth for the SPPR model.
Mixed-Mode Bending (MMB) Test Verification
The simplified PPR traction-separation law is verified here and compared to the original PPR model by simulating the mixed-mode bending (MMB) test. The MMB test has been widely used to validate the applicability of CZMs for mixed-mode fracture [37]. The configuration of the test is shown in Figure 10. The following geometry parameters of the MMB test specimen were considered: L = 51 mm, h = 1.56 mm, a0 = 33.7 mm, c = 60 mm, and B = 25.4 mm.
Numerical simulations of the mixed-mode fracture were implemented using the commercial software ABAQUS with a user-defined element (UEL) subroutine, and such a subroutine for the PPR model was given in the work of [34]. In this paper, user element subroutines (UEL) were also utilized to implement the simplified PPR traction-separation law. Since the mesh, FE element type, boundary conditions, as well as the solving method are all the same as in [34], those items are not covered again here.
In this study, two cases were tested, one with the same fracture energy (φ n = φ t = 1 N/m) and another with different fracture energies (φ n = 1 N/m and φ t = 2 N/m). The cohesive strength σ max = τ max = 200 MPa, shape parameter α = β = 3, and the initial slope indicator λ n = λ t = 0.02 were the same for both cases. The numerical results were compared to the analytical solution given in [32].
For the same fracture energy, the computational results for different models are illustrated in Figure 11a. The results for the SPPR model and PPR model were the same, and coincided with the analytical solutions. For the case of different fracture energies (Figure 11b), the computational results for the SPPR model were in better agreement with analytical solution compared with the PPR model under the same conditions. The results for the PPR model were relatively small, as shown in Figure 11b. The reason is that the effective region for the PPR model is smaller than for the SPPR model when φ n = φ t , leading to the smaller tractions and energies under mixed-mode fractures as mentioned before.
Improved Unloading/Reloading Relationship
Previous studies [38,39] demonstrated that the original unloading/reloading relationship was not thermodynamically consistent and produced self-healing behavior. In addition, the new unloading/reloading relationship proposed by Spring et al. [38] did not include the initial elastic region. To prevent these issues, an improved unloading/reloading relationship was developed. The modifications of the unloading/reloading relationship are explained below. Then, the comparison of the three models is presented in Section 4.2. For convenience in the presentation of the results, the original unloading/reloading relationship is referred to as model (i) here. The new unloading/reloading relationship developed in [38] is referred to as model (ii), while the improved one proposed in this paper is referred to as model (iii).
Modification
The reason why model (ii) lacks an initial elastic region is that the variables Δ_n^max and Δ_t^max in Equations (11) and (12) are updated from the very beginning. Referring to the definition of model (i), Δ_n^max and Δ_t^max should not be updated unless certain conditions are met; for example, the peak cohesive strength should be passed. Therefore, how to determine the peak becomes the key. For model (i), δ_n^peak and δ_t^peak are used. However, δ_n^peak and δ_t^peak are the separations corresponding to the peak cohesive strength under pure mode I and mode II fractures, respectively. Under mixed-mode fracture conditions, the separations corresponding to the peaks are not δ_n^peak and δ_t^peak, as illustrated in Figure 12. Figure 12 also shows that the peaks change with the variation of the mode mixing. Thus, the separations corresponding to the peak under mixed-mode fractures are not convenient to obtain. For this reason, an alternative method is presented here to estimate the peak, which is based on the gradients of the tractions.
The improved unloading/reloading relationship (model (iii)) is expressed as

$$T_n^{v}(\Delta_n, \Delta_t) = T_n(\Delta_n^{\chi}, \Delta_t^{\gamma})\, \frac{\Delta_n}{\Delta_n^{\chi}}, \qquad T_t^{v}(\Delta_n, \Delta_t) = T_t(\Delta_n^{\chi}, \Delta_t^{\gamma})\, \frac{\Delta_t}{\Delta_t^{\gamma}}$$

where $\Delta_n^{\chi}$ and $\Delta_t^{\gamma}$ are state variables, and $\Delta_n^{\chi} = \Delta_n$ and $\Delta_t^{\gamma} = \Delta_t$ by default. This indicates that $T_n^{v}(\Delta_n, \Delta_t) = T_n(\Delta_n, \Delta_t)$ and $T_t^{v}(\Delta_n, \Delta_t) = T_t(\Delta_n, \Delta_t)$, until the following conditions are met: the gradients of the tractions along the loading path become non-positive, i.e., $\partial T_n / \partial \Delta_n \leq 0$ or $\partial T_t / \partial \Delta_t \leq 0$, which signals that the peak cohesive strength has been passed. Then, $\Delta_n^{\chi} = \Delta_n^{max}$ and $\Delta_t^{\gamma} = \Delta_t^{max}$, where $\Delta_n^{max}$ and $\Delta_t^{max}$ are the largest separations reached so far, updated as in model (ii).
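In code, the switching logic of model (iii) is compact. The sketch below reuses sppr_tractions and unload_model_ii from the earlier sketches and detects the peak from forward-differenced traction gradients; it is one possible reading of the reconstructed conditions above, not the authors' UEL implementation.

```python
def tractions_model_iii(dn, dt, state, params, eps=1e-9):
    """Model (iii): follow the SPPR law until a traction gradient first turns
    non-positive (the peak has been passed), then switch to the coupled linear
    branch. `state` is a dict persisting between calls at one material point."""
    state.setdefault("softening", False)
    T = lambda a, b: sppr_tractions(a, b, *params)
    if not state["softening"] and (dn > 0.0 or abs(dt) > 0.0):
        Tn0, Tt0 = T(dn, dt)
        Tn1 = T(dn + eps, dt)[0]       # forward difference in Delta_n
        Tt1 = T(dn, abs(dt) + eps)[1]  # forward difference in Delta_t
        if Tn1 - Tn0 <= 0.0 or Tt1 - Tt0 <= 0.0:
            state["softening"] = True  # peak passed: activate linear branch
    if not state["softening"]:
        return T(dn, dt)  # reversible branch: state variables track (dn, dt)
    state["dn_max"] = max(state.get("dn_max", 0.0), dn)
    state["dt_max"] = max(state.get("dt_max", 0.0), abs(dt))
    return unload_model_ii(T, dn, dt, state["dn_max"], state["dt_max"])
```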
Comparison
In this section, comparisons of the three models are drawn using the example in [39], where the following set of parameters is used: φ n = 100 N/m, φ t = 300 N/m, σ max = 2 MPa, τ max = 4 MPa, α = 3, β = 5, λ n = 0.20, and λ t = 0.25. The loading process consists of three steps. First, a proportional mixed-mode loading where ∆ n = ∆ t is applied up to a predefined value ∆. Then, a proportional mixed-mode unloading where ∆ n = ∆ t is carried out down to 0. Finally, a mode I reloading (keeping ∆ t = 0) is conducted.
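The three-step protocol lends itself to a small driver that integrates the dissipated work numerically, reusing the earlier sketches; each call corresponds, in principle, to one point of the dissipated-energy curve discussed below (Figure 13). It is our illustration, under the assumption that the total work along a path ending in complete failure equals the dissipated energy.

```python
def dissipated_energy(amplitude, params, steps=4000):
    """Trapezoid-rule work along: proportional loading to `amplitude`,
    proportional unloading to zero, then mode I reloading to failure."""
    _, _, delta_n, _ = ppr_parameters(*params)
    path = [(amplitude * i / steps,) * 2 for i in range(1, steps + 1)]             # load
    path += [(amplitude * (steps - i) / steps,) * 2 for i in range(1, steps + 1)]  # unload
    path += [(delta_n * i / steps, 0.0) for i in range(1, steps + 1)]              # mode I reload
    state, W = {}, 0.0
    dn0 = dt0 = Tn0 = Tt0 = 0.0
    for dn, dt in path:
        Tn, Tt = tractions_model_iii(dn, dt, state, params)
        W += 0.5 * (Tn + Tn0) * (dn - dn0) + 0.5 * (Tt + Tt0) * (dt - dt0)
        dn0, dt0, Tn0, Tt0 = dn, dt, Tn, Tt
    return W
```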
Unlike the work in [39], the simplified PPR traction-separation law proposed in this paper is used here instead of the PPR model. Therefore, δ̄_t is not used, and the fracture occurs for either Δ_n = δ_n or Δ_t = δ_t. Based on the parameters above, δ_n = 0.099 mm, δ_t = 0.171 mm, δ_n^peak = 0.020 mm, and δ_t^peak = 0.043 mm. Figure 13 shows how the dissipated energy for the three models changes with the increase in the proportional loading amplitude Δ. The three models give the same energy values at the beginning and at the end. The variation of the energy value for model (i) is quite different from models (ii) and (iii), while models (ii) and (iii) are almost the same. The dissipated energy given by model (iii) is identical to that of model (i) when Δ < 0.017 mm, a constant equal to the mode I fracture energy φ_n = 100 N/m.
When Δ ≥ 0.017 mm, the evolution of the energy value for model (iii) is exactly the same as for model (ii). There are three discontinuities for model (i), at Δ = δ_n^peak = 0.020 mm, Δ = δ_t^peak = 0.043 mm, and Δ = δ_n = 0.099 mm, whereas model (ii) gives a smooth continuous curve, and model (iii) only has one discontinuity, at Δ = 0.017 mm. These differences are explained below by analyzing the changes of the traction components during the loading process.
First, consider proportional loading amplitudes Δ around 0.017 mm. Figure 14 presents the variations of the traction components during the loading process for a proportional loading amplitude of Δ = 0.016 mm. The computation results for model (i) and model (iii) are identical; thus, both are presented in Figure 14a. As can be observed in Figure 14a, during unloading both tractions trace back along the same curves that they followed during loading, since the traction peaks have not been reached.
Consequently, the final energy is only dissipated in the pure mode I reloading, and it is equal to φ_n = 100 N/m. In contrast, both unloading curves follow straight lines with model (ii), as shown in Figure 14b. This is because no peak value needs to be passed before the linear response is used. That is to say, damage is assumed to occur from the beginning, and the initial elastic region is ignored. Due to this assumed damage, the energy dissipated in the pure mode I reloading for model (ii) is smaller than for models (i) and (iii). As a result, the total dissipated energy (97.0 J/m²) for model (ii) is lower than φ_n.
When ∆ = 0.017 mm, the peak of normal traction is reached under the mixed-mode loading (Figure 15a,b), and therefore, the linear unloading response of model (iii) is activated. As a consequence, the variations of the traction components during the whole loading process for model (iii) become the same as for model (ii), both presented in Figure 15b. They still apply for larger proportional loading amplitudes ∆ > 0.017 mm.
Thus, in the following analysis, the results for model (ii) and model (iii) are displayed in the same diagrams. Due to this change of response, the dissipated energy for model (iii) drops from 100 J/m 2 (φ n ) at ∆ = 0.016 mm to 97.0 J/m 2 at ∆ = 0.017 mm. Such an energy discontinuity is inherent to any cohesive zone model that follows a curved traction-separation line in the reversible range and a straight unloading line once irreversibility has appeared, as discussed in detail by Gilormini et al. [39]. Therefore, the small energy jump is accepted.
When ∆ = 0.019 mm, the peak of the normal traction is exceeded, as shown in Figure 16a,b. However, because ∆ < δ peak n = 0.020 mm, the normal traction for model (i) still returns along the loading path during the unloading process, leading to the questionable response that the traction increases while the separations decrease. When ∆ = 0.020 mm, the δ peak n value is reached, and thus the linear unloading response of model (i) is activated. Similar to model (iii) at ∆ = 0.017 mm mentioned above, there is a small energy jump for model (i) due to the change from an elastic region to a softening region. In contrast, for model (ii) and model (iii), the dissipated energy varies continuously.
Consider now proportional loading amplitudes ∆ around δ peak t = 0.043 mm. Figure 17a, for ∆ = 0.042 mm, shows that the tangential traction component T t still returns along the loading path and increases significantly during the unloading process. When ∆ = 0.043 mm (Figure 17c), the δ peak t value is reached, and therefore the tangential unloading response in model (i) is activated as well. On account of the added energy dissipated by T t during the proportional loading/unloading process, the total energy given by model (i) increases sharply, which induces the jump at ∆ = 0.043 mm in Figure 13. In contrast, model (ii) and model (iii) show a smooth evolution of the dissipated energy, as can be observed in Figure 17b,d.
Finally, consider proportional loading amplitudes ∆ around δ n = 0.099 mm. When ∆ = 0.098 mm (Figure 18a,b), which is slightly below the critical value δ n , both tractions are near 0 at the end of loading. For model (i), there is an increase in T t during unloading, because ∆ n , which appears in the tangential term of model (i), varies during proportional unloading. When the proportional loading amplitude reaches ∆ = 0.099 mm, the critical value δ n is attained, and hence fracture is complete (Figure 18a,b). As a result, the unloading and reloading phases no longer exist, and the dissipated energy for the three models becomes the same, equal to 155.2 J/m 2 .
From the above analysis, the original unloading/reloading relationship (referred to as model (i) here) may induce questionable responses, such as increasing traction during unloading. The new unloading/reloading relationship proposed by Spring et al. [38] (referred to as model (ii) here) may introduce damage from the very beginning and ignores the initial elastic region. The improved unloading/reloading relationship proposed in this paper (referred to as model (iii) here) combines the merits of the above two models: it prevents the issues mentioned above and defines an elastic region before the softening regime.
Application
Interface damage, which can occur even during the construction phase, has become a major problem for the China Railway Track System (CRTS-II) slab track. To reveal the behavior of the slab track under temperature variations, the effect of daily temperature changes on the curling behavior and interface stress of the slab track in the construction stage was investigated by the authors [2]. As a follow-up study, the interface damage of the slab track under daily temperature changes is analyzed in this section by implementing the improved cohesive zone model.
The CRTS-II slab track consists of a precast slab, CA mortar, and a concrete base, as shown in Figure 19. All of these components are modeled at their actual size. The dimensions, material properties, mesh, finite element type, and boundary conditions of each component are the same as in [2]; those items are not covered again here.
Interface cracks usually occur between the track slab and the CA mortar, as shown in Figure 1. The interlaminar cracking is modeled based on the constitutive model proposed in this paper, using the commercial software ABAQUS with a user-defined interaction (UINTER) subroutine. The validated interface parameters [42] are φ n = 2.6 N/m, φ t = 4 N/m, σ max = 0.015 MPa, τ max = 0.015 MPa, α = 2, β = 2, λ n = 0.1 and λ t = 0.1. Due to the symmetry of the geometry and loading conditions, only a quarter of the slab track is modeled. The 3-D finite element model of the CRTS-II slab track is presented in Figure 20.
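ABAQUS's UINTER subroutine is written in Fortran and its exact argument list is not reproduced here; the Python sketch below only mirrors the role such a routine plays at each increment of the analysis: mapping the current separation and the state stored at the previous increment to a traction and an updated state. A bilinear stand-in replaces the paper's traction expressions, with its final opening chosen so that the enclosed area equals φ n and its peak located using λ n (our assumption about the meaning of λ n ).

```python
# Normal direction only, using the validated parameters from [42]:
# phi_n = 2.6 N/m, sigma_max = 0.015 MPa, lambda_n = 0.1.
PHI_N, SIGMA_MAX, LAMBDA_N = 2.6, 0.015e6, 0.1
D_FINAL = 2.0 * PHI_N / SIGMA_MAX   # bilinear stand-in: enclosed area = phi_n
D_PEAK = LAMBDA_N * D_FINAL         # assumed: lambda_n locates the peak

def softening(delta):
    """Bilinear stand-in law (not the paper's traction expressions)."""
    d = abs(delta)
    if d <= 0.0 or d >= D_FINAL:
        return 0.0
    if d <= D_PEAK:
        return SIGMA_MAX * d / D_PEAK
    return SIGMA_MAX * (D_FINAL - d) / (D_FINAL - D_PEAK)

def interface_update(state, delta_n):
    """What a user-defined interface routine must deliver per increment:
    the traction at the current separation and the updated history state."""
    chi = max(state.get("chi", 0.0), delta_n)  # largest separation so far
    if chi <= D_PEAK or delta_n >= chi:
        t = softening(delta_n)                 # reversible range / loading
    else:
        t = softening(chi) * delta_n / chi     # linear unloading past the peak
    return t, {"chi": chi}

# Two increments at one interface point: load past the peak, then unload.
state = {}
t1, state = interface_update(state, 5.0e-5)    # loading (past D_PEAK)
t2, state = interface_update(state, 2.0e-5)    # unloading on the straight branch
```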
Based on the proposed model, the interface damage is simulated under gravity load and the measured temperature. The measured temperature was input into the model as a temperature load using the user-defined subroutine UTEMP [2]. In the analysis, the summer temperature (Figure 9 in [2]) is taken as an example. As the initial stress field has an influence on the stress history and stress level, the time of 14:30, with the maximum temperature difference, is selected as the starting time. Figure 21 shows the interface crack opening (COPEN) distribution of the slab track system as a result of the temperature change. The damage at the four corners is found to be the most pronounced. This damage mode is exactly the same as that observed on high-speed railway lines.
The normal and the two shear stresses of the interface at the slab corner are shown in Figure 22. It can be observed that the interface damage is mainly caused by the presence of normal and lateral shear stresses. It is worth noting that the stresses change smoothly for the model proposed in this paper and for the PPR model, whereas they change only piece-wise continuously for the cohesive zone model in ABAQUS. Moreover, the problem of self-repair for the PPR model is found in Figure 22b,c; the cause was discussed earlier. Figure 23 shows the interface normal stresses (CPRESS) between the slab and the CA mortar layer when interface cracking happens. The stress distribution for the model proposed in this paper and for the PPR model is almost the same, changing continuously with location. However, the distribution for the cohesive zone model in ABAQUS is rugged and unreasonable; for example, tensile and compressive stresses occur simultaneously around the slab corner. This may be due to stress oscillation [33].
Conclusions
A simplified cohesive zone model combined with an improved unloading/reloading relationship was proposed in this paper to overcome certain shortcomings of the original model, and was validated using multiple cases.
First, the traction-separation laws of the PPR model under different conditions of fracture energies were compared. We concluded that the cohesive interaction regions for the normal and tangential traction components were different when the mode I fracture energy was not equal to the mode II fracture energy. This may lead to an undesired response where one traction component is still very large while the other traction component has vanished, which is unrealistic for most interfaces encountered in civil engineering practice. To address this issue, the simplified PPR model was developed based on the original model. We found that the simplified model had unified formulas and cohesive interaction regions regardless of the fracture energies. The investigations of the path dependence of the work-of-separation and the simulation of the mixed-mode bending test both demonstrated that the simplified model guaranteed the consistency of the cohesive constitutive model and performed better in modeling mixed-mode fracture.
When a loading/unloading/reloading process was applied, we observed that the original unloading/reloading relationship, which is commonly utilized with the PPR model, induced questionable responses, such as the traction increasing during unloading. The new unloading/reloading relationship proposed by Spring et al. [38] ignored the initial elastic region. By analyzing these issues and their causes, the unloading/reloading relationship was improved based on the gradient of the traction. We verified that the improved unloading/reloading relationship prevented the above issues and defined an elastic region before the softening regime.
The proposed model provides a tool for research on the interface cracking mechanism of ballastless tracks. Following the above analysis and verification, the proposed model solves the problem of "self-repair" in existing models and can correctly simulate the interface damage and cracking process under reciprocating loads. By using the UINTER user-defined interface subroutine of ABAQUS/Standard, a module for interlaminar cracking analysis based on the constitutive model proposed in this paper could be constructed.
After coupling this module with the main structural model of the ballastless track, a nonlinear finite element model of the multilayer slab ballastless track system that can accurately simulate interlayer compound-mode cracking was constructed. Based on this model, the mechanism of interface cracking can be analyzed in detail [42]. The results of research on the defect mechanism of the ballastless track can provide a scientific basis for the maintenance of ballastless track defects and guide research on the monitoring of track service status, such as monitoring point placement and data analysis.
The proposed model can simulate the initiation and propagation of interface cracks under coupled thermo-mechanical operating conditions; however, it does not take into account the time/temperature dependency of the interfacial fracture parameters, which is regarded as our future work. | 2021-01-07T09:08:19.343Z | 2021-01-05T00:00:00.000 | {
"year": 2021,
"sha1": "d7e2f7ec3ec1606c57ee18560f54ce5909f9b899",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/1/456/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7b453d38ea0fb852771ccaf3c0f65f5a3e701bf8",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
252608009 | pes2o/s2orc | v3-fos-license | Evaluation of Higher-Order Skills Development in an Asynchronous Online Poster Session for Final Year Science Undergraduates
Preparing a scientific poster and presenting it at a conference supports the development of a range of skills in undergraduates that are relevant to further study and the workplace. This investigation focused on an asynchronous online poster session in a final year undergraduate science module at a UK university to assess evidence of higher-order skills development and determine student perceptions of the benefits and challenges of participating in the session. The study analysed 100 randomly selected posters from the 2020 session for evidence of scientific understanding, application, and critical evaluation, together with the feedback received on them. While 73% of the posters demonstrated understanding and 70% application, a lower proportion (42%) demonstrated critical evaluation skills. Seventy-eight percent of posters were considered to have received feedback from peers that gave an effective or partially effective evaluation of scientific content. Focus group discussions involving nine students led to the identification of themes relating to constraints, academic challenges, skills and experience, and personal development. Students recognized the value of the conference for skills development and the experience it gave of “real” science, while acknowledging the challenges involved in producing posters, giving feedback to peers, and managing their time. The asynchronous online poster session enabled students to develop higher-order cognitive and communication skills that are valued by employers. This format provides a pragmatic and easy-to-implement alternative to synchronous online conferences, which is relevant to the shift toward online learning in higher education, due to the COVID-19 pandemic and increase in distance learning and international students.
Introduction
Participation in academic conferences provides an opportunity for undergraduate students to expand their knowledge while developing skills in networking and communication, both of which are increasingly valued by employers (Kneale et al., 2016). Poster sessions in conferences enable students to present their work and receive feedback but can feel less intimidating than oral presentations. Hence, they may be particularly suitable for novice presenters (Halligan, 2008). Preparing and presenting a poster enables "situated learning" to take place (Lave & Wenger, 1991), providing a safe environment for the novice, supported by collaboration with peers and more experienced members of the academic community (Kneale et al., 2016).
The process of preparing a poster and presenting it at a conference supports the development of creative, scientific, and communication skills (Holt et al., 2020). Through communicating their research to others, students can apply their knowledge and demonstrate a deeper understanding of the subject (Spronken-Smith et al., 2013). The ability to review, synthesise, and clearly articulate ideas can be considered an essential skill to help graduates transition into careers and be successful life-long learners (Jewell et al., 2020). Giving and receiving feedback engages students actively, which enhances their learning (Liu & Carless, 2007) and can develop critical evaluation and reflection skills (Little, 2020). Careful thought is needed to construct explanations when giving feedback, which helps consolidate the giver's own knowledge and understanding (Van Popta et al., 2017). Both preparing a poster and giving and receiving feedback can therefore support deeper learning, with a greater focus on understanding and constructing meaning (Mathieson, 2014). Previous studies report that students value poster sessions (Kinikin & Hench, 2012; Kneale et al., 2016; Mabrouk, 2009), recognising their benefits, including developing science communication skills and interacting with others at the poster session (Holt et al., 2020).
Bloom's taxonomy (Bloom, 1956) provides a hierarchical framework to assess skills development in producing posters and giving feedback on them. Students need to know and understand concepts to be able to apply that knowledge, for example in considering the wider implications of research findings and critically evaluating study methodologies. Thus, application and evaluation can be considered higher-order skills than knowledge and understanding (Zheng et al., 2008). While some studies have evaluated student academic performance in posters (e.g., Gosselin & Golick, 2020), few have focused on student posters in terms of higher-order cognitive skills, such as critical evaluation and the application of knowledge. These higher-order skills will be the focus of this study, together with the understanding and explanation of the ideas and concepts that underpin them (Zheng et al., 2008). Student poster sessions can take place online and have increasingly done so because of the COVID-19 pandemic, which necessitated a rapid shift to "virtual" delivery of higher education (HE) tuition throughout the world. This shift has accelerated the removal of boundaries between traditional and online education, which were already becoming blurred prior to the pandemic (Lockee, 2021). While the online format has some advantages, such as lower costs (Freeze et al., 2020; Holt et al., 2020) and increased accessibility and equity (Saribipour et al., 2021), the lack of in-person interaction can make it more difficult to discuss the research outlined in posters. This lack of interaction can be particularly challenging for distance learners, who can already feel somewhat isolated (Gillett-Swan, 2017). Despite the challenges of delivering online learning, it is likely that it will continue to be offered throughout the HE sector as a delivery mode (Lockee, 2021).
Several online poster sessions that have taken place since the start of the pandemic have been wholly or partly synchronous. For example, Freeze et al. (2020) report on a student poster session involving a combination of pre-recorded video presentations on YouTube and a live session using Zoom breakout rooms, while Holt et al. (2020) describe a synchronous poster session hosted on Mozilla Hubs involving a virtual poster hall, with students using avatars to stand by their posters and interact with viewers. Synchronous online sessions provide a degree of social presence and can give a sense of community (Holt et al., 2020), but there can be issues with connectivity and Internet speed (Basaran & Yalman, 2020; Freeze et al., 2020).
Online poster sessions can also take place in an asynchronous format. Although they may feel less personal and interactive and lack the immediate feedback that can reduce miscommunication (Wang & Wang, 2021), asynchronous platforms can be more convenient for distance learning (Kear et al., 2012). They give students from different time zones or with other commitments an opportunity to participate that might not be possible with synchronous sessions. Furthermore, the flexibility of asynchronous platforms can make for a more comfortable learning environment for students with disabilities (Terras et al., 2015) and give more time and space for participation (Wang & Wang, 2021).
The Open University (OU) is one of the largest universities in Europe, with over 150,000 students (Open University, 2021). It is an established and respected provider of online HE, which it delivers through a combination of synchronous and asynchronous platforms. OU students have an average age of 27 when commencing their degrees and are often employed in full- or part-time work or have family and caring responsibilities. They study at a flexible intensity, ranging from 8 to 36 hours per week depending on the number of modules studied. Here, we focus on an asynchronous online student poster session that is a core component of a third-year multidisciplinary science module. Through analysis of poster content and student perceptions of the poster session we will address the following questions:
• How can an asynchronous online poster session help develop science students' understanding, application, and critical evaluation skills?
• What do students consider to be the key benefits and challenges of participating in the asynchronous online poster session and how does this relate to the skills evidenced in their posters?
These questions will be relevant in terms of planning and improving online activities for distancelearning students. They are also more widely relevant as HE institutions expand their online tuition in response both to the COVID-19 pandemic and to increasing numbers of international and distancelearning students, for whom participation in face-to-face activities is not always feasible.
The Online Student Poster Session
The OU runs an online student poster session as part of the third-year undergraduate multidisciplinary "Evaluating Contemporary Science" module, which has up to 250 students in each cohort. The module is recommended to be studied for 8 to 10 hours per week, with three study weeks allocated for researching and preparing the poster and accompanying materials. Each student prepares a poster on a subject of their choosing within one of five topics (antibiotic resistance, diesel vehicles, nuclear legacy, moons and asteroids, and rare earth elements), through which they compare the scientific approaches and research findings in two recent primary research papers of their choice. They also produce a four-minute audio commentary of the poster, key words, and an image that is used to promote their poster.
A series of live online tutorials are offered prior to the poster session on each of the topics, which are recorded so that students can review them as required. These provide instruction on how to search for relevant literature and emphasise the science aspect, which complements the written guidance the students are given on what to include in their posters. Students are instructed to produce their poster in portrait mode and in a font size that is legible, but they are otherwise encouraged to develop their own style and format.
The poster and accompanying material are uploaded onto OpenStudio. This is an online platform where artefacts (e.g., posters and images) are shared and students can add feedback comments, together with more immediate feedback in the form of icons such as "smile" and "favourite." In this way, OpenStudio supports a form of social learning (Jones et al., 2017).
The poster session takes place over a two-week period. During this time, students select at least two other posters through browsing titles and thumbnail images or through a keyword search, and they provide feedback as comments in OpenStudio. They are given a set of structured questions and are encouraged to use the CORBS (clear, owned, regular, balanced, specific) approach when giving feedback (Hawkins & Shohet, 2012).
The student poster and feedback given on other posters contribute to approximately 10 percent of the assessment score for the module. Following the poster session, students develop the research carried out for their poster over an eight-week period, leading to the production of a briefing document and research proposal that forms a major component of their final examined assessment.
Methodology
The research used a mixed-methods approach involving two phases. In the first phase, we analysed student poster content for evidence of scientific understanding, application, and critical evaluation. In the second phase we considered student perceptions of the benefits and challenges of participating in the poster session through synchronous online focus group discussions. Ethical approval for both phases of the research was gained from the OU's Human Research and Ethics Committee prior to commencement.
Analysis of Poster Content and Feedback
We randomly selected 100 posters from the 198 that were uploaded by the 2020 student cohort. This was considered a sufficiently large sample size to capture the variation in the posters while being pragmatic to analyse within the time and resources available for the study. Following anonymisation, they were assessed using eight criteria (Table 1) covering scientific understanding (understanding), application (application), and critical evaluation (evaluation). Each criterion was assigned a score on a Likert scale from 1 (very poor / no attempt) to 5 (excellent). For example, for "use of language," a score of 3 indicated it was satisfactory in meeting the criteria of being clear, concise, and having appropriate use of terminology. "Use of language" that scored 4 (good) and 5 (excellent) also recognised which terms needed to be explained to students from outside their discipline in a manner appropriate to a generally scientifically educated audience. The criteria were grouped into overarching criteria for understanding, application, and evaluation, and the individual criterion scores totalled for each of the three overarching groups. These overarching criteria are hierarchical and reflect elements of Bloom's taxonomy (Bloom, 1956), with understanding (Bloom's "comprehension") underpinning application, above which sits evaluation.
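As an illustration of this scoring pipeline, the short sketch below groups per-criterion Likert scores into the three overarching totals. The criterion names and their assignment to groups are placeholders, since Table 1 is not reproduced here and only six of the eight criteria are mocked up.

```python
import pandas as pd

# One row per poster, one column per criterion (Likert scores 1-5);
# the criterion names are illustrative placeholders for those in Table 1.
scores = pd.DataFrame({
    "use_of_language":   [4, 3, 5],
    "use_of_figures":    [3, 3, 4],
    "interpretation":    [4, 2, 4],
    "further_research":  [3, 2, 5],
    "study_limitations": [2, 1, 4],
    "study_comparison":  [3, 2, 4],
})

# Hypothetical mapping of individual criteria onto the overarching groups.
groups = {
    "understanding": ["use_of_language", "use_of_figures"],
    "application":   ["interpretation", "further_research"],
    "evaluation":    ["study_limitations", "study_comparison"],
}

# Total the criterion scores within each overarching group, per poster.
overarching = pd.DataFrame(
    {name: scores[cols].sum(axis=1) for name, cols in groups.items()}
)
print(overarching)
```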
Scientific understanding can be demonstrated through the students' use of language; if they conveyed the key points from the studies in concise and non-technical language, this indicated that they understood them. The presentation of data from the studies can also indicate understanding, with students who successfully produced their own figures and/or annotated figures to indicate key points considered to show a greater understanding than those who simply copied figures from the original papers.
In terms of Bloom's taxonomy, interpretation has been treated as part of both "understanding" or "application" in previous studies (Stanny, 2016), but we considered it a measure of "application" for the purpose of this study. Students' application of knowledge and understanding was evaluated through how they interpreted the research findings from the two papers they compared and drew conclusions from them. They were also required to suggest future research based on their interpretation of the research findings and apply their understanding to contextualise the research. They can demonstrate evaluation skills both through evaluating each study, for example in terms of their limitations, and comparing the two studies.
We assessed the feedback received on each poster in terms of (i) whether the feedback focused on appearance or scientific content (assigned to one of three categories: appearance, content, or equally) and (ii) whether the feedback was considered to give an effective evaluation of the poster's scientific content. This was also assigned to one of three categories: yes (constructive criticism and engagement with points made in the poster), partially (some attempt to give feedback on scientific content) or no (lack of feedback on scientific content).
Each of the study authors assessed approximately half the posters, with a standardisation exercise undertaken prior to analysis to ensure consistency. This involved both study authors, together with a third, independent researcher, analysing the same 10 posters and comparing criteria scores and assessment of the feedback. This showed there were minimal differences between the researchers in their assessment of the posters.
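The comparison is reported only qualitatively ("minimal differences"); if one wanted to quantify such a standardisation exercise, inter-rater statistics such as the mean absolute difference or a weighted Cohen's kappa would be natural choices. The scores below are invented for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Likert scores (1-5) given by two raters to the same 10 posters on one
# criterion -- values are made up for illustration.
rater_a = np.array([3, 4, 2, 5, 3, 3, 4, 2, 5, 4])
rater_b = np.array([3, 4, 3, 5, 3, 2, 4, 2, 5, 4])

print("mean absolute difference:", np.mean(np.abs(rater_a - rater_b)))
# Quadratic weights penalise large disagreements more than adjacent ones.
print("weighted kappa:", cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```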
Student Perceptions
Student participants were recruited by contacting all those studying the module in 2020, of whom nine volunteered for the focus group discussions. Two one-hour discussions took place via an online platform (Adobe Connect), with four students in one group and five in the other. The discussions were held after the final module assignment was submitted but before the results were released, to avoid this influencing student views in the discussions.
The focus groups were facilitated by two student volunteers. The volunteers were experienced in using the Adobe Connect platform so they could assist with any technical problems, but they were not part of the student cohort for the module. Prompts for discussion related to:
• how students prepared for the poster session
• how students experienced the poster session
• what students thought they gained from the poster session
Thematic analysis was undertaken on the transcripts from the discussion recordings, which were coded using NVivo software. This helped identify groupings within the initial codes and led to the identification of key themes and subthemes (Braun & Clarke, 2006).
Results
Analysis of Poster Content and Feedback
Table 2 shows the percentage of posters gaining each score for the three overarching criteria (understanding, application, evaluation). Posters generally scored highly in terms of the understanding and application criteria, with 73 percent and 70 percent of the posters scoring in the 3 to 5 range (i.e., considered satisfactory, good, or excellent) and both criteria having a mean Likert score of 3.1. Scores for the evaluation criteria were somewhat lower, with 42% of posters scoring in the 3 to 5 range and with a mean Likert score of 2.3. Nearly three-quarters of posters received feedback that focused mainly on scientific content (19%) or had an equal focus on content and appearance (53%). Over three-quarters of posters received feedback that was considered to provide an effective evaluation of scientific content (53%) or at least partially so (25%).
Student Perceptions
Four main themes emerged from the student focus group discussions: constraints, academic challenges, skills and experience, and personal development. These themes and the underlying subthemes (Table 3) are discussed below, illustrated by anonymised quotations from focus group participants.
Constraints
Several participants experienced time pressures, particularly those who were studying other modules with competing deadlines or had other commitments that limited the time they could devote to preparing their poster and participating in the poster session. To add to these pressures, the 2020 poster session took place between March 21 and April 3, which coincided with the start of the first COVID-19 lockdown in the United Kingdom. The run-up to the March 20 deadline for uploading posters to OpenStudio involved a period of considerable uncertainty, with schools and workplaces shutting in the week preceding the lockdown.
Some participants felt somewhat constrained by the fact that their poster and the feedback they gave on others was assessed. For example, one participant noted that "you were being graded based on how you presented what was said" and that "you were pandering to what you felt was required some of the time as well."
Academic Challenges
The participants had some experience searching for suitable papers from previous study, including earlier on in the module, but some found it challenging to choose suitable papers to base their poster on. The large number of potentially suitable papers available in the literature made it difficult for students to know when to stop searching and finalise their choice of papers. Another challenge was synthesising and comparing the two papers and communicating the findings to a wider audience within the limited space available. One participant noted that "a lot of it was down to how much detail to include … you don't want to give too much but you don't want to give too little -so it's getting the balance right." Choosing posters to give feedback on was sometimes challenging, as there were numerous posters to choose from. The participants wanted to choose posters for which they could provide constructive feedback. One commented that they "wanted one I could actually provide feedback for" and not one which they looked at and thought "I don't really know what to say about this." Giving feedback on posters that were weaker overall was particularly challenging as the participants were aware of how much work had gone into each poster and did not want to cause offence. As one participant put it, "it was a good exercise in how to be tactful -knowing how to tell somebody that they can improve an aspect not in a way to cause offence but that could actually help them." It was considered more challenging to give feedback on a poster's scientific content than its appearance. However, the importance of giving feedback on content was recognised, with one participant noting that "you've got to try and concentrate on the actual science -obviously the display is part of the process but it's looking at the science -that's the main focus."
Skills and Experience
The participants appreciated the value of giving and receiving feedback and recognised where this fitted into their studies and how this could help improve their work. One participant observed that "it's difficult when people are giving you constructive criticism, but you've just got to take it on board and actually give it some reflection … and try to move forward and incorporate that into your future work." They also recognised the role of the conference in developing skills, including those needed for further study, dissertations, and work-related projects. According to one participant, "You are learning or improving the [skills] you've already got-things like evaluating, making sure work is concise, making sure you are doing it to the right audiences-lots and lots of skills to get your teeth into." The participants appreciated the role of conference poster sessions and the feedback process in real science. As one participant noted, "It's how they learn as well, doing a poster, because they are getting feedback from other scientists, which helps to build and develop whatever you are talking about at conferences and that's how they learn and progress." Linked to this was a more general feeling that they were experiencing how real science operates, for example that "people really do just talk to each other and that's how they develop their ideas."
Personal Development
Recurring themes throughout the discussions were those of interest and enjoyment, with one participant stating that they "enjoyed the creative side" of making a poster as well, while others commented on the interesting science that was presented and how they enjoyed the opportunity to broaden their knowledge. As one participant put it, "It was really interesting to learn about other subjects … I never expected to be reading a poster about volcanoes and satellites, for instance." While the poster session was challenging and took some participants out of their comfort zones, it also helped build confidence. One participant noted that they "gained confidence, otherwise I wouldn't be contributing to this focus group now, so I think it is certainly going to help me in the future." The participants also appreciated the social aspect of the session through interacting with fellow students with similar interests and learning from them: "One of the nicest things for me was actually getting to see other students' work because you never normally get to see something another student [has produced] and I think it's quite beneficial to see how other students approach things."
Some participants mentioned that they would have liked to have had the opportunity to discuss each other's posters in real time: that is, for there to have been a synchronous element to the poster session. However, another noted that "because everybody has different timetables, I don't know how it would have been possible to bring everybody together." Furthermore, the asynchronous format meant that students could take their time to look at the posters, which remained accessible in the weeks after the poster session had finished. One participant stated that "you do what you need to do at the time and then you can go back at your leisure which is really nice to have a look through them all."
Discussion
Analysis of the poster content showed that nearly three-quarters (73%) of students demonstrated their understanding through use of language and use and/or adaptation of figures. A slightly lower proportion (70%) demonstrated their application skills through interpreting results, drawing conclusions, proposing further research and contextualising the research. A lower proportion (42%) critically evaluated the studies they investigated, which was also evidenced in the focus group discussions, where the academic challenges in producing posters were highlighted. Understanding provides the foundation for higher-order cognitive skills such as application and evaluation, with a solid understanding of the material needed to apply these skills (Zheng et al., 2008). It is therefore not surprising that evaluation-the highest-order skill out of those assessed according to Bloom's taxonomy-was the least well demonstrated skill, and that the converse was the case with understanding. However, the abilities to apply knowledge and understanding, evaluate information, and think critically are needed for the workplace (Gasper & Gardner, 2013), so development of these skills is particularly important for students.
From the focus group discussions, it was clear that students recognised the role of the poster session in developing key skills such as communication and critical evaluation, which is supported by the wider literature on student conferences (Kneale et al., 2016; Little, 2020; Walkington et al., 2017). They also recognised the relevance of these skills to their future study and work, which also emerged as a key theme in another study investigating the value of student poster presentations (Kneale et al., 2016). The students appreciated the insight the poster session gave them into real science, for example, by experiencing the types of discussion that take place at conferences. This can act as a motivator through enabling them to see themselves as part of an academic community (Little, 2020).
Several academic challenges were mentioned in the focus group discussions, such as choosing suitable papers to base the poster on and communicating findings in the limited space afforded by the poster format. The limited space and time available might have contributed to the poorer performance overall in terms of evaluation, which may have been considered less of a priority by students when having to cover several elements in their posters. The investigation focused on a final year module, where a higher level of learner autonomy and discipline knowledge was expected. Students were therefore provided with less comprehensive guidance than they would be at an earlier stage of study, but we nevertheless recommend this is consolidated and made more prominent for future poster sessions. Some focus group participants commented on the difficulties in selecting posters to give feedback on. Students might not necessarily select the highest quality posters to comment on, instead being drawn to "middling" posters where there is more of an opportunity to give constructive, critical feedback (Lotz et al., 2018).
Over three-quarters of the posters received feedback that was considered to make at least some attempt to effectively evaluate their scientific content. In addition, the majority of posters received feedback that either focused on scientific content, or had an equal focus on content and appearance. However, nearly 30 percent received feedback that focused on the poster's appearance rather than its content, which could be considered an "easier" option to give. Possible reasons for this were not explored in the current study but could be the result of a reluctance to give critical feedback (McMahon, 2010) and risk causing offence, as noted in the focus group discussions. Students might be more comfortable giving critical feedback on poster appearance, such as font size or layout, than on the scientific content when they are aware how much effort went into researching and creating it. Given the challenges students faced producing their posters and their weaker performance in terms of evaluation, it is unsurprising that they found it difficult to give feedback on the scientific content of other posters, whose contents they were not familiar with and might not have felt qualified to judge. Furthermore, some students might have adopted a "surface" approach (Mathieson, 2014) to giving feedback through finding something to say to "tick a box" rather than engaging more deeply with the poster content. Workloads and their perception can influence student approaches to learning, with heavy workloads associated with the adoption of a surface approach (Scully & Kerr, 2014). Some focus group participants commented on the time pressures they were under, and it is possible that students with less time available might have engaged less deeply with poster content when giving feedback.
The focus group participants appreciated the value of receiving critical feedback in terms of improving their future work. The benefits of receiving feedback are widely recognised, both in terms of improving students' research work (Van Popta et al., 2017) and preparing them for developing academic careers: for example, through exposure to the peer review process (Kneale et al., 2016). However, the benefits of giving feedback are less widely recognised, despite contributing to improved understanding (Van Popta et al., 2017) and improving students' self-assessment skills when evaluating their own work in the future (Yucel et al., 2014).
The student posters and the feedback students gave were assessed as part of the module. This was considered a constraint by some focus group participants in that they did not feel they could take any risks in producing their posters, and the feedback format they were expected to use was rather formal. This contrasts with optional, non-assessed student conferences, described by Little (2020) as a "risk-free space, away from the determinants and pressures of summative assessments," which enable research to be reported in an interesting and engaging manner (Walkington et al., 2017). A non-assessed poster session may also give space for students to undertake more challenging conversations with each other, which could support the development of critical thinking skills (Little, 2020). However, such non-assessed activities might involve lower levels of participation, particularly from students experiencing time pressures.
The poster session provided a confidence boost for some participants, which could help reduce anxiety with any future presentations, both in their studies and employment (Little, 2020) and improve their sense of self-worth. Producing a poster and participating in a conference can be an enjoyable experience and give students a feeling of ownership and achievement (Kinikin & Hench, 2012). It can also enable students to gain ideas and inspiration (Kneale et al., 2016), as shown in the focus group discussions where one student described a "lightbulb moment" as to how real science operates. Indeed, some researchers have described student experience of a research conference as being "transformative," both in the short and longer term (Little, 2020; Walkington et al., 2017).
The poster session used an asynchronous format. Although there were a few reported issues with the OpenStudio interface, such as the need to download the audio commentary before listening, the session ran smoothly, with the asynchronous format less reliant on Internet connectivity than a synchronous format (Holt et al., 2020). An asynchronous poster session may lack the informal and spontaneous conversations that may take place in real time, with immediate feedback including from social cues such as facial expressions (Walkington et al., 2017). However, there is some evidence for text-based nonverbal communication through electronic cues such as the frequency and tone of postings and use of emoticons, which could have a positive influence on student engagement (Al Tawil, 2019). Such peer interaction was commented on positively in the focus groups and can help combat feelings of isolation among students (Al Tawil, 2019), particularly those studying at a distance. Asynchronous poster sessions can therefore provide a pragmatic and flexible alternative to synchronous online sessions, which is relevant not just in times of pandemics, but more widely with the increase in distance learning and international students in HE.
The study had some limitations. Firstly, the audio commentaries that students submitted to accompany their posters were not analysed due to time constraints. This may have influenced the findings regarding evidence for skills development, as it is possible the audio might have provided additional evidence, such as for critical evaluation. Analysing the audio commentaries would be a worthwhile follow up to gain a further insight, while recognising that oral communication is a key employability skill. Secondly, although the sample size was large, representing just over half the posters from the 2020 student cohort, it represented a snapshot from a single, perhaps somewhat atypical, year as the conference coincided with the start of the COVID-19 pandemic and first UK lockdown. This meant that students were experiencing considerable stress and uncertainty, both when producing and uploading their posters and during the two-week poster session, which might have compromised their efforts. A longitudinal study following the same approach, but in a more "normal" year would be a worthwhile follow up to this investigation. Thirdly, the number of focus group participants was low, and students volunteered to participate in them. The focus group participants might therefore not have been representative of the wider student population for the module and are likely to have been those that are more actively engaged to start with.
Conclusions
The HE landscape is rapidly changing, with online tuition and learning no longer an exception. This study demonstrated that an asynchronous online format could provide an effective, pragmatic, and flexible alternative to synchronous online poster sessions. The study showed that an asynchronous poster session enabled students to develop and demonstrate a range of higher-order skills relating to understanding, application, and critical evaluation. Students recognised the role of the poster session in developing these skills while being aware of the challenges involved in producing the poster and giving feedback. They appreciated the insight it gave them into real science, together with the personal benefits they gained in the form of enjoyment and increased confidence. Such confidence, together with the skills developed, will be of key importance as they complete their degrees and enter the future workplace. | 2022-09-03T15:37:04.969Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "d4cb35f7551455f33370605e17551ebd2f1ec0d2",
"oa_license": "CCBY",
"oa_url": "https://www.irrodl.org/index.php/irrodl/article/download/6238/5756",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4cb35f7551455f33370605e17551ebd2f1ec0d2",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
269709138 | pes2o/s2orc | v3-fos-license | Identifying the Differences in Symmetry of the Anthropometric Parameters of the Upper Limbs in Relation to Manual Laterality between Athletes Who Practice Sports with and without a Ball
The purpose of this study was to identify the asymmetries between the dimensions of the upper limbs, in relation to manual laterality, of the athletes who practice team sports with a ball and those who practice other sports without a ball. We consider the fact that ball handling influences the development of anthropometric parameters at the level of the upper limbs and especially at the level of the hand in correlation with the execution technique and with the characteristics of the practiced sport. This study included 161 student-athletes, who were male and right-handed, divided into two groups: the group of athletes practicing ball sports (G_BS) with 79 (49%) subjects and the group of athletes practicing non-ball sports (G_NBS) with 82 (51%) subjects. The anthropometric measurements of the upper limbs were performed on both sides (right and left): upper limb length, hand length, palm length, hand breadth, hand span, pinky finger, ring finger, middle finger, index finger and thumb. The most relevant symmetries, between the two groups, were recorded in the following anthropometric parameters on the right side (recording the smallest average differences): ring finger 0.412 cm and thumb 0.526 cm; for the left side, they were the ring finger 0.379 cm and thumb 0.518 cm. The biggest asymmetries between the two groups were recorded, for both the right and left sides, for the following parameters: upper limb length > 6 cm; hand span > 2 cm; and hand length > 1 cm. For all the anthropometric parameters analyzed, the athletes from the ball sports group (G_BS) recorded higher average values than those from the other group (G_NBS) for both upper limbs. The results of this study reflect the fact that handling the ball over a long period of time, starting from the beginning of practicing the sport until the age of seniority, causes changes in the anthropometric dimensions of the upper segments, leading to asymmetries between the dominant (right) and the non-dominant (left) side.
Introduction
1. General Information about Asymmetries in Sports
Recent research focuses on the identification of symmetry and proportional relationships between different anthropometric body parameters [1][2][3]. A series of studies have highlighted numerous minor asymmetries between different human anthropometric parameters, comparing the morphological development of the right and left sides of the body [4,5]. Sports performance is influenced by the individual characteristics of physical development and by the level of motor and technical ability of athletes in relation to the specifics of the sport practiced [6,7]. Somatic growth and development are influenced by endogenous and exogenous factors embodied by the following aspects: genetic, morphological, endocrine, metabolic, environmental, physical activity level, nutritional, quality of life, etc. [8,9]. Studies have highlighted the impact of physical exercise on physical growth and development in different stages of ontogeny [10,11]. The diversification of the forms of physical exercise and the modernization of sports equipment and technologies required the adaptation of the training process, with an impact on the physical development of the practitioners [12,13]. Studies have shown that perceptual asymmetries can be beneficial (as in the case of eye acuity for shooting), as can the development of some anthropometric dimensions of the upper and lower limbs that result from a long process of preparation in relation to the sport practiced, involving mainly unilateral executions in the regime of force, speed and coordination [1,2,14,15]. In these cases, the dominant segment develops asymmetrically compared to the non-dominant one; on the one hand, this can facilitate the efficiency of some technical executions, but on the other hand, it can cause the appearance of musculoskeletal disorders and negatively influence mobility, technique, aesthetics and body posture [16,17].
Specific Information on Asymmetries in Sports That Involve the Use and Non-Use of Implements with the Hands
Sports that use objects, such as a ball, require the athletes to adapt both to the specifics of the sport and the effort, as well as to the dimensions and characteristics of the ball or the equipment used [18][19][20]. The technical skills specific to team games with a ball, such as catching, passing, throwing, etc., determine the adaptation of the way the ball is held or handled with one or both hands, as well as to the characteristics and different sizes of the ball [21]. These adaptations require, from the players, a certain arrangement of the palms and fingers on the ball in relation to the dimensions of the ball and the execution technique. Prolonged sports training for handling the ball can influence how the transverse or longitudinal dimensions of the hand develop [22,23]. A series of studies have highlighted asymmetries in the development of anthropometric parameters between the dominant and the non-dominant hand [21,24]. Other studies have focused on identifying the differences in the anthropometric parameters of the upper and lower limbs according to different age categories or gender [25][26][27].
The specificity of the practiced sport requires adapting both the preparation and the technical executions to the object of the game. In the case of sports games, the size of the ball is adapted to the age characteristics of the athletes, with balls of different sizes depending on the sports category (the size and weight of the ball increase in relation to the age of the players). Perfecting technical skills requires efficient handling of the ball when catching, holding, passing, throwing, etc. We consider that adapting to the characteristics of the ball influences the level of development of the dimensions of the upper limbs, especially at the level of the palm.
Statement of the Problem, Where the Problematic Situation Is Clearly Identified and the Importance of this Study Is Justified
Numerous studies have aimed at measuring the anthropometric dimensions of athletes in relation to the practiced sport [10,28,29], but studies that identify how sports training specific to team games with a ball influences the level of development of the hand are extremely few in number; we have not identified a specialized study on this topic. We consider the long training interval, from childhood through the junior and senior levels, in which the technical executions of players in team sports with a ball require continuous adaptation to the characteristics of the ball. Long-term sports training with a ball determines the development and adaptation of certain anthropometric parameters of the hand to the dimensions and characteristics of the ball and to the playing technique. Based on the arguments presented above, we consider that the novel aspect of our study consists in the identification of symmetries and asymmetries between the anthropometric parameters of the right and left upper limbs of athletes who practice sports with a ball compared to those who practice sports without a ball.
Asymmetry of the upper limbs can determine asymmetries in the posture of the whole body [30,31]. The asymmetry of the upper limbs and the hand can have an influence on the structure of the body, involving muscles, joints, tendons, ligaments, nerves, bones, the circulatory system, etc. [32,33]. Also, the asymmetries of the upper limbs and the hand can have a major impact on subjects regarding body aesthetics [34,35]. In athletes, the inequalities of the longitudinal and transversal anthropometric dimensions of the upper limb, combined with the preponderant involvement of the dominant segment in handling the ball, can cause the appearance of some medical conditions. Studies have shown that in athletes the most common disorders of the upper limbs appear as a result of long repetitive demands, among which sprains or strains, carpal tunnel syndrome, tendinitis and white finger syndrome (Raynaud's syndrome) have been identified [36-38]. Prolonged handling of the ball, mainly with the dominant upper segment, influences the development of motor parameters, such as strength, joint mobility and coordination [39-41]. The anthropometric evaluation of the upper limbs and the hand allows for the identification of asymmetries in order to correct them through physical therapy exercises and to prevent the risk of injuries [42-44]. The identification and correction of the asymmetries of the upper limbs and the hand contribute to maximizing the motor potential of the athletes [45,46].
Objectives of this Study and Hypotheses
The aim of this study was to identify the asymmetries between the dimensions of the upper limbs, in relation to manual laterality, of the athletes who practice team sports with a ball and those who practice other sports without a ball. The hypothesis of this study was based on the assumption that athletes who practice team sports with a ball, compared to those who practice other sports without a ball, have asymmetries of the upper limbs, in relation to manual laterality, as a result of handling the ball for a long time.
Participants
The present cross-sectional study included 161 student-athletes, all male and right-handed (dominant hand), divided into two groups: the group of athletes practicing ball sports (G_BS) with 79 (49%) subjects and the group of athletes practicing non-ball sports (G_NBS) with 82 (51%) subjects. The characteristics of the G_BS were the following: age (arithmetic mean ± SD), 20.73 ± 1.32 years; height, 1.83 ± 0.05 m, with a coefficient of variation (CV) of 3.22%, minimum 170 cm and maximum 192 cm. The characteristics of the G_NBS were the following: age (arithmetic mean ± SD), 20.91 ± 1.18 years; height, 1.79 ± 0.06 m, with a coefficient of variation (CV) of 3.35%, minimum 169 cm and maximum 188 cm. The subjects of the G_BS are active athletes from the following team games (with a ball): handball, 58 (73.4%), and basketball, 21 (26.6%). The subjects of the G_NBS are active athletes from the following sports (without a ball): athletics, swimming, sports dance, karate and gymnastics. The sample size calculated for this study was 148 subjects, for a confidence level of 95% and a margin of error of ±5%. Initially, 165 subjects were included in this study; 161 subjects were kept, and 4 subjects were eliminated because it was found that they had injuries to a hand and could not perform the anthropometric measurements under the specific conditions of this study. The inclusion criteria were the following: active athletes; students in the bachelor's or master's programs in the field of physical education and sports; performance of all anthropometric measurements; and age 20-24 years. The subjects of this study participated voluntarily on the basis of informed consent, in compliance with the principles of the Declaration of Helsinki. This study was approved (no. 11.1/11 April 2023) by the Review Board of the Physical Education and Sports Program of the "G.E. Palade" University of Medicine, Pharmacy, Science and Technology of Targu Mures, Romania.
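The reported sample size of 148 at a 95% confidence level and ±5% margin of error is consistent with Cochran's formula with a finite-population correction. The sketch below, in R, is only an illustrative check, not the authors' calculation: the accessible population size N is not reported in the text, so the value used here is a hypothetical assumption.

```r
# Hypothetical sample-size check (the population size N is an assumption).
z  <- qnorm(0.975)                 # two-sided 95% confidence level
e  <- 0.05                         # margin of error
p  <- 0.5                          # most conservative proportion
n0 <- z^2 * p * (1 - p) / e^2      # Cochran's formula, infinite population
N  <- 240                          # assumed accessible population size
n  <- n0 / (1 + (n0 - 1) / N)      # finite-population correction
ceiling(n)                         # 148
```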
Study Design
This study took place between November and December 2023, aiming to measure the anthropometric parameters of the upper limbs of the study subjects (Figure 1). The anthropometric measurement sessions were carried out under similar conditions and with the same measuring instruments for all the subjects in the two groups. The order of the anthropometric measurements was identical for all the subjects. The anthropometric measurements of the upper limbs were performed on both sides of the body (right and left): upper limb length, hand length, palm length, hand breadth, hand span, pinky finger, ring finger, middle finger, index finger and thumb. The height measurement was performed with a digital height measuring scale, and the measurement of the anthropometric dimensions of the hands was performed with a digital caliper. The collection of the anthropometric data of the subjects of this study was carried out by the authors in the same institutions and using the same equipment.
Measures

The 11 anthropometric parameters measured for this study were as follows (Figure 2):
- Height: the distance between the vertex and the level of the sole (support surface) in the orthostatic position.
- Upper limb length: the distance between the acromion and the dactylion in the orthostatic position with the upper limb in maximum extension.
- Hand length: the distance between the styloid line and the dactylion.
- Palm length: the distance between the styloid line and the proximal phalanges between the middle and ring finger.
- Hand breadth: the direct distance from the most lateral point on the head of the second metacarpal to the most medial point on the head of the fifth metacarpal.
- Hand span: the distance between the proximal phalanges of the pinky finger and the distal phalanges of the thumb, with the fingers spread to the maximum angles.
- Pinky finger: the distance between the proximal and distal phalanges of the pinky finger.
- Ring finger: the distance between the proximal and distal phalanges of the ring finger.
- Middle finger: the distance between the proximal and distal phalanges of the middle finger.
- Index finger: the distance between the proximal and distal phalanges of the index finger.
- Thumb: the distance between the proximal and distal phalanges of the thumb.
Statistical Analysis
The results of this study were processed statistically with the IBM SPSS 22 software. To highlight the relevance of the results, we calculated the following statistical parameters: the mean (X); standard deviation (SD); mean difference (ΔX); Std. Error Difference (SED); Fisher test value (F); Student T-test value (t); coefficient of variance for the homogeneity of the group (CV); and the confidence interval with lower and upper levels (95% CI). The reference value selected for statistical significance was p < 0.05.
The standardized Limb Symmetry Index (SI) and the standardized directional asymmetry (DA) were calculated for all the anthropometric parameters targeted in this study.The DA score is a qualitative indicator that indicates the direction of asymmetry of the anthropometric parameters toward the right and the left (a positive value indicates the right side, and a negative value indicates that the left side has higher values).
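As an illustration of these per-parameter summaries, the R sketch below computes the directional asymmetry, a symmetry index, the coefficient of variation, and the paired Student t-test for one anthropometric parameter of one group. DA follows the sign convention stated above (positive values indicate the right side); the exact SI formula is not given in this excerpt, so a commonly used normalized right-left difference is assumed, and the data are simulated placeholders.

```r
# Per-parameter asymmetry summaries for one group; 'right' and 'left' are
# vectors (in cm) of the same anthropometric parameter for all subjects.
asymmetry_summary <- function(right, left) {
  da <- mean(right - left)                        # directional asymmetry (+ = right)
  si <- mean((right - left) / pmax(right, left))  # assumed symmetry index formula
  cv <- 100 * sd(right) / mean(right)             # coefficient of variation (%)
  tt <- t.test(right, left, paired = TRUE)        # paired Student t-test
  c(DA = da, SI = si, CV_right = cv, t = unname(tt$statistic), p = tt$p.value)
}

set.seed(1)
right <- rnorm(79, mean = 19.5, sd = 0.9)           # e.g. hand length, G_BS
left  <- right - rnorm(79, mean = 0.05, sd = 0.10)  # slightly smaller left side
round(asymmetry_summary(right, left), 4)
```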
Results
Table 1 shows the results recorded by the two groups in this study regarding the anthropometric parameters of the right and left upper limbs. In Table 2, we present the comparative results recorded between the right and left upper segments for each group; in Table 3, we show the comparative results between the two groups. In Table 4, we present the results of the symmetry and directional asymmetry indexes of the anthropometric parameters between the right and left upper limbs. For the group of athletes who do not practice ball sports (G_NBS), the variance values of the anthropometric measurements of the right and left upper limbs indicate a relatively small spread for the sizes of all the fingers and for the palm lengths and hand breadths; for the upper limb lengths, hand lengths and hand spans, the spread is very high. The values of the coefficient of variation were <10%, which indicates a very good homogeneity of this group for all the analyzed anthropometric parameters. For the group of athletes who practice ball sports (handball and basketball), the results of the anthropometric measurements of the right and left upper limbs likewise indicate a relatively small spread for the sizes of all the fingers and for the palm lengths and hand breadths, and a very high spread for the upper limb lengths, hand lengths and hand spans. The values of the coefficient of variation for all the anthropometric parameters of the upper limbs were <10%, which reflects a very good homogeneity for the group of players who practice ball sports (Table 1).
Table 2 shows the results of the statistical analysis of the anthropometric measurements between the upper right and left segments for athletes who practice sports without a ball (G_NBS). Analyzing the results, it can be seen that the differences recorded between the right and left sides are not statistically significant at the reference threshold p < 0.05 for the following parameters: upper limb length, hand length, hand breadth, hand span, pinky finger, middle finger and index finger. Statistically significant differences were identified for the palm length, ring finger and thumb. The dimensions of the upper right segment are larger than those of the left side only for the following three anthropometric parameters: the hand length by 0.060 cm, hand breadth by 0.005 cm and thumb by 0.021 cm; the other anthropometric dimensions are larger for the left side compared to the right. The biggest differences, identifying the asymmetries between the right and the left side, were recorded for the upper limb length with −0.067 cm and the hand length with 0.060 cm; the closest symmetries were registered for the hand breadth with 0.005 cm and the pinky finger and index finger with 0.002 cm. The differences in the arithmetic averages between the two segmental parts for all the measured anthropometric parameters fell between the two limits of the 95% CI.
Analyzing the results between the upper right and left segments for the athletes who practice ball sports (G_BS), it can be noticed that the differences recorded are statistically significant (p < 0.05) for all the anthropometric parameters with two exceptions: the hand breadth (p = 0.765) and hand span (p = 0.946). The differences in the arithmetic averages between the right and left sides, for all the anthropometric parameters measured, fell between the two limits of the 95% CI (Table 2). For the G_BS, the dimensions of the upper right segment (the dominant side, with which the ball is predominantly handled) are larger than those of the left (non-dominant) side for the following anthropometric parameters: the hand length by 0.053 cm; palm length by 0.025 cm; hand breadth by 0.001 cm; pinky finger by 0.041 cm; ring finger by 0.019 cm; middle finger by 0.027 cm; index finger by 0.022 cm; and thumb by 0.029 cm. Larger anthropometric dimensions for the left side compared to the right side were recorded for the following parameters: the upper limb length by 0.253 cm and the hand span by 0.001 cm. The biggest asymmetries between the right and the left side were recorded for the upper limb length with −0.253 cm, hand length with 0.053 cm and pinky finger with 0.041 cm; the closest symmetries were registered for the hand breadth and hand span with 0.001 cm (Table 2).
Table 3 shows the statistical processing of the results between the two study groups. By analyzing the T-test values recorded in this study, it is evident that the differences between the two groups, for each anthropometric parameter and for each side (right and left), are statistically significant. The differences in the arithmetic averages recorded for each anthropometric parameter on each side fell between the lower and upper limits of the 95% CI. Comparing the results between the two groups on the right side, the ball sports group (G_BS) recorded larger dimensions than the non-ball sports group (G_NBS) for the following anthropometric parameters: palm length by 0.904 cm; pinky finger by 0.674 cm; ring finger by 0.412 cm; middle finger by 0.708 cm; index finger by 0.584 cm; thumb by 0.526 cm; upper limb length by 6.246 cm; hand length by 1.077 cm; hand breadth by 0.617 cm; and hand span by 2.294 cm.
The most relevant symmetries between the two groups were recorded for the following anthropometric parameters on the right side (recording the smallest average differences): ring finger, 0.412 cm, and thumb, 0.526 cm; for the left side, they were the ring finger, 0.379 cm, and thumb, 0.518 cm. The biggest asymmetries between the two groups were recorded, for both right and left sides, for the following parameters: upper limb length (>6 cm); hand span (>2 cm); and hand length (>1 cm). For all the analyzed parameters, the athletes from the ball sports group (G_BS) recorded higher average values than those from the non-ball sports group (G_NBS) for both upper segments, which reflects the fact that handling the ball over a long period of time, from the beginning of sports practice up to senior age, determines changes in the dimensions of the upper segments, especially of the hand.
Analyzing the Limb Symmetry Index (SI) results from Table 4, for the G_NBS we found that the largest asymmetries were in the following parameters: hand length with 0.325, thumb with 0.388 and palm length with −0.218; for the G_BS, the biggest asymmetries were identified in the following anthropometric parameters: pinky finger with 0.641, thumb with 0.489 and middle finger with 0.336. Analyzing the limb directional asymmetry (DA) values, we found that for the G_NBS the asymmetries are directed toward the dominant right side for the hand length, hand breadth and thumb, while most of the parameters are directed toward the non-dominant left side: upper limb length, palm length, hand span, pinky finger, ring finger, middle finger and index finger. For the G_BS, only two parameters are directed toward the non-dominant left side, the upper limb length and hand span; all the other parameters are oriented toward the right side of the upper limb, which also represents the dominant side of the subjects in the G_BS.
Discussion
The present study focused on the identification of asymmetries between the dimensions of the upper limbs, in relation to manual laterality, of athletes who practice team sports with a ball and those who practice other sports without a ball. The results of this study reveal that there are significant differences between the ball sports group (G_BS) and the non-ball sports group (G_NBS) for all the measured anthropometric dimensions of the right and left upper segments. Analyzing the results between the right and left upper segments for the athletes from the G_BS, it can be seen that the differences recorded are statistically significant (p < 0.05) for all the anthropometric parameters with two exceptions, the hand breadth and hand span, where the differences were not statistically significant. Analyzing the G_NBS results, we find that the differences recorded between the right and left sides are not statistically significant for the following parameters: upper limb length, hand length, hand breadth, hand span, pinky finger, middle finger and index finger. Statistically significant differences for the G_NBS were identified for the palm length, ring finger and thumb. For both groups, the dimensions of the anthropometric parameters on the dominant (right) side were greater than on the non-dominant (left) side.
The results of our study facilitate the understanding of how the practice of ball sports influences the anthropometric parameters of the upper limb, especially at the level of the hand, in relation to the size of the ball, the level of technical mastery and the ball handling requirements specific to the respective sport [49,50]. The results of our study are in line with previous studies that identified asymmetries between the anthropometric parameters of the upper limbs depending on the different characteristics of the groups of subjects and in relation to different aptitudes and motor skills [51,52]. Our study extends the current knowledge regarding how practicing ball sports influences the development of anthropometric parameters with respect to symmetries and asymmetries in the upper limbs [53,54].
A series of studies have highlighted the link between the anthropometric dimensions and handgrip strength of the players, as well as with the execution level of technical skills, concluding that there are positive correlations between these three parameters [55,56]. The studies highlighted that there is an interdependence between the motor (strength and endurance) and functional capacity and the anthropometric ratios of the fingers and the hand, differentiated between male and female groups [21,57-59].
A study carried out on 343 men and 290 women, all adults, focused on the measurement of four anthropometric dimensions of the right and left hand and identified significant differences for all parameters, in correlation with the preferred hand [60]. The results of the mentioned study substantiate the results of our study, in which statistically significant differences were identified between different anthropometric parameters of the right hand (dominant, in the case of the present study) and the left hand. Numerous studies have highlighted anthropometric differences between the right-hand and the left-hand parameters, depending on gender [61-63]; ethnicity [64,65]; occupation [66,67]; and laterality [68-70]. A study conducted on 161 university student subjects identified significant differences between the male and female samples, with the male group recording an average hand width of 7.57 cm [71]. The results recorded in the previously mentioned study [71] were very similar to those of our male sample of athletes who do not practice ball sports (hand breadth: right, 7.759 cm; and left, 7.754 cm). The identification of the factors that influence the anthropometric development of the body and the influence of practicing different physical activities on body symmetry must be approached in an interdisciplinary manner to facilitate their complex understanding from the perspectives of health [72-74]; physical exercise [1,75]; education, etc. [76,77].
The results of our study regarding limb directional asymmetry (DA) highlight that the asymmetry is directed predominantly toward the dominant right side for the G_BS group in eight anthropometric parameters, and only in two parameters (upper limb length and hand span) is the asymmetry directed toward the non-dominant left side. For the G_NBS group, we identified that only 3 parameters out of the 10 show an asymmetry directed toward the dominant right side, and 7 anthropometric parameters show a direction toward the non-dominant left side. The Limb Symmetry Index (SI) values of the G_BS highlight large asymmetries (SI > 0.15) in five anthropometric parameters: hand length, palm length, hand span, ring finger and thumb; in the case of the G_NBS, large asymmetries were identified in eight anthropometric parameters: upper limb length, hand length, palm length, pinky finger, ring finger, middle finger, index finger and thumb. A series of studies carried out on athletes have identified asymmetry between the dominant and the non-dominant segment, which confirms the results of our study [46,78]. A study conducted on 36 handball players (average age 26.1 years) observed that the muscle mass and grip strength of the right upper limb are greater than those of the left; handball influences the asymmetric growth of body muscle hypertrophy [41]. The results of our study are confirmed by other studies that identified that the most frequent inter-limb discrepancies between the dominant and non-dominant side are determined by the frequent unilateral use of the dominant segment in performing technical skills depending on the specifics of the sport [41,79,80]. In another study, conducted on 34 young male handball players in which inter-limb asymmetry was evaluated, the authors highlighted the need to adapt training in order to reduce inter-limb asymmetries in relation to long training periods [81]. A series of studies highlighted the relationship between the asymmetric development of the muscle mass of the upper limbs and the dimensions of the bones in subjects who practice sports that involve predominantly unilateral technical executions [82-84]. Studies have shown that asymmetries between the upper and lower segments increase the risk of injury, with an impact on health and sports performance [85,86].
The limitations of this study include the following: only the longitudinal and transversal anthropometric parameters were measured, without the circular parameters (circumferences); female subjects were not included; the age of the subjects was limited to 20-24 years; the proportionality indices between different anthropometric parameters of the upper limbs, and in relation to height, were not calculated; a gold standard measure (DXA) was not used to evaluate the anthropometric parameters; only athletes practicing handball and basketball were included in the ball sports group; and the G_BS results were not correlated with the dimensions of the ball, because these change depending on the level of the sports category (age and gender).
Practical Implications
The practical implications of the results of this study concern the modeling of sports training and the adaptation and implementation of exercises to symmetrize executions in order to ensure symmetry and harmonious physical development. Identifying the asymmetries of the upper limbs in relation to the sport practiced can contribute to adapting the training in order to correct these asymmetries and prevent some musculoskeletal disorders. During sports training, coaches and athletes can perform corrective and restorative exercises with a compensatory role to optimize physical potential. The asymmetries of the upper limbs resulting from practicing sports that involve handling the ball or other objects, and whose technique is predominantly unilateral, also determine inequalities in the involvement of physical abilities (usually, the dominant hand has superior strength, coordination and other parameters compared to the non-dominant hand) and in the efficiency of technical skills. Studies have shown that the symmetrization of physical development has positive effects on harmonious physical development, body aesthetics and motor potential [75,87-89]. The relevant results of the present study can inform the adaptation of sports training by including corrective, compensatory and recovery exercises in order to reduce body asymmetries.
Conclusions
For the G_BS, the biggest asymmetries between the right and left sides were recorded for the upper limb length, hand length and pinky finger; the greatest symmetries were recorded for the hand breadth and hand span. For the G_NBS, the biggest differences regarding the asymmetries between the right and the left side were recorded for the upper limb length and hand length; the best symmetries were registered for the hand breadth, pinky finger and index finger. The most relevant symmetries between the two groups were recorded for the ring finger and thumb, on both the right and left sides. The biggest asymmetries between the two groups were recorded, for both right and left sides, for the following parameters: the upper limb length, hand span and hand length. For all the analyzed parameters, the athletes from the ball sports group (G_BS) recorded higher average values than those from the non-ball sports group (G_NBS) for both upper segments. The results of this study reflect the fact that handling the ball over a long period of time, from the beginning of sports practice until senior age, causes changes in the anthropometric dimensions of the upper segments, causing asymmetries between the dominant (right) and the non-dominant (left) side. Analyzing the Limb Symmetry Index (SI), for the G_NBS we find that the positive (right-directed) values were recorded for the hand length, hand breadth and thumb; for the G_BS, they were recorded for the hand length, palm length, hand breadth, pinky finger, ring finger, middle finger, index finger and thumb. The closest upper inter-limb symmetry was identified for the hand span and hand breadth for the G_BS, and for the index finger and hand breadth for the G_NBS. The limb directional asymmetry (DA) highlights that the asymmetry is directed predominantly toward the dominant right side for the G_BS group in eight anthropometric parameters, and only in two parameters (upper limb length and hand span) is the asymmetry directed toward the non-dominant left side. For the G_NBS group, we identified that only 3 parameters out of the 10 show an asymmetry directed toward the dominant right side, and 7 anthropometric parameters show a direction toward the non-dominant left side.
Table 1. Descriptive statistics of the anthropometric measurements of the upper limbs of the group practicing non-ball sports (G_NBS) and the group practicing ball sports (G_BS). X: mean; SD: standard deviation; CV: coefficient of variance.

Table 2. Statistical analysis of the anthropometric measurements of the upper limbs of the group practicing non-ball sports (G_NBS) and the group practicing ball sports (G_BS).

Table 3. Independent T-test of the anthropometric parameters of the upper limbs between the two study groups. G_BS: group of ball sports; G_NBS: group of non-ball sports; ΔX: mean difference; SED: Std. Error Difference; F: Fisher test value; t: value of Student T-test; p: Sig. level (2-tailed).

Table 4. Limb Symmetry Index (SI) and limb directional asymmetry (DA) of the upper limbs of the group practicing non-ball sports (G_NBS) and the group practicing ball sports (G_BS).
"year": 2024,
"sha1": "443213e86e2ca4ae63400fef618ad9c6b705a191",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/16/5/558/pdf?version=1714989083",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1c312afb319ddf823ff6e8fff4177d165682f9b4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Limit distribution of the quartet balance index for Aldous's β ≥ 0-model
This paper builds on T. Martinez-Coronado, A. Mir, F. Rossello and G. Valiente's work "A balance index for phylogenetic trees based on quartets", introducing a new balance index for trees. We show here that this balance index, in the case of Aldous's β ≥ 0-model, converges weakly to a distribution that can be characterized as the fixed point of a contraction operator on a class of distributions.
Introduction
Phylogenetic trees are key to evolutionary biology. However, they are not easy to summarize or compare, as it might not be obvious how to tackle their topologies, understood as the internal branching structure of the trees. Therefore, many summary indices have been proposed in order to "project" a tree into R. Such indices have as their aim to quantify some property of the tree, and one of the most studied properties is the symmetry of the tree. Tree symmetry is commonly captured by a balance index. Multiple balance indices have been proposed: Sackin's (Sackin, 1972), Colless' (Colless, 1982) or the total cophenetic index (Mir et al., 2013). A compact introduction to phylogenetics, containing in particular a list of tree asymmetry measures (pp. 562-564), can be found in Felsenstein (2004)'s book. This work accompanies a newly proposed balance index, the quartet index (QI, Martínez-Coronado et al., 2018b).
One of the reasons for introducing summary indices for trees is to use them for significance testing: does a tree come from a given probabilistic model? Obtaining the distribution of indices (for a given number n of contemporary species, i.e. leaves of the tree, or in the limit n → ∞) is usually difficult and often is done only for the "simplest" pure-birth (Yule, 1924) tree case and sometimes the uniform model (see e.g. Aldous, 1991; Steel and McKenzie, 2001).
Using the contraction method, central limit theorems were found for various balance indices, like the total cophenetic index (in the Yule model case; Bartoszek, 2018) and jointly for Sackin's and Colless' (in the Yule and uniform model cases). Furthermore, it was shown that Sackin's index has the same weak limit as the number of comparisons of the quicksort algorithm (Hoare, 1962), both after normalization of course. Chang and Fuchs (2010) consider the number of occurrences of patterns in a tree, where a pattern is understood as any subset of the set of phylogenetic trees of fixed size k. For a tree with n leaves such a pattern will satisfy the recursion $X_{n,k} \overset{D}{=} X_{L_n,k} + X^*_{n-L_n,k}$, where $X_{n,k}$, $X^*_{n,k}$ and $L_n$ are independent, $X_{n,k} \overset{D}{=} X^*_{n,k}$, and $L_n$ is the size of the left subtree branching from the root. For the Yule and uniform models they derived central limit theorems (normal limit distribution) with Berry-Esseen bounds and Poisson approximations in the total variation distance.
Even though the pure-birth model seems to be very widespread in the phylogenetics community, more complex models need to be studied, especially in the context of tree balance. From Roch and Snir (2013)'s Lemma 4 it can be deduced that Yule trees have to be rather balanced, as the maximum quartet weight (the maximum, over induced subtrees on four leaves, of the number of randomly placed marks along branches) is asymptotically proportional to the expectation of the tree's height.
In this work, using the contraction method, we show convergence in law of the (scaled and centred) quartet index and derive a representation (as a fixed point of a particular contraction operator) of the weak limit. Remarkably, this is possible not only for the Yule tree case but for Aldous's more general β-model (in the β ≥ 0 regime).
The paper is organized as follows. In Section 2 we introduce Aldous's β-model and the quartet index. In Section 3 we prove our main result, Thm. 3.1, via the contraction method. When studying the limit behaviour of recursive-type indices for pure-birth binary trees, one has that for each internal node the leaves inside its clade are uniformly split into two sub-clades as the node splits. However, in Aldous's β-model this is not the case, the split is according to a BetaBinomial distribution, and a much finer analysis is required to show weak convergence, with n, of the recursive-type index to the fixed point of the appropriate contraction. Theorem 3.1 is not specific to the quartet index but covers a more general class of models, where each internal node split divides its leaf descendants according to a BetaBinomial distribution (with β ≥ 0). In Section 4 we apply Thm. 3.1 to the quartet index and characterize its weak limit. Then, in Section 5 we illustrate the results with simulations. Finally, in the Appendix we provide the R code used to simulate from this weak limit.
Aldous's β-model for phylogenetic trees
Birth-death models are popular choices for modelling the evolution of phylogenetic trees. However, Aldous (1996, 2001) proposed a different class of models, the so-called β-model for binary phylogenetic trees. The main idea behind this model is to consider a (suitable) family $\{q_n\}_{n=2}^{\infty}$ of symmetric, $q_n(i) = q_n(n-i)$, probability distributions on the natural numbers. In particular $q_n : \{1, \ldots, n-1\} \to [0,1]$. The tree grows in a natural way. The root node of an n-leaf tree defines a partition of the n leaves into two sets of sizes i and n − i (i ∈ {1, . . . , n − 1}). We randomly choose the number of leaves of the left subtree, $L_n = i$, according to the distribution $q_n$, and this induces the number of leaves, $n - L_n$, in the right subtree. We then repeat recursively in the left and right subtrees, i.e. splitting according to the distributions $q_{L_n}$ and $q_{n - L_n}$ respectively. Notice that due to $q_n$'s symmetry the terms left and right do not have any particular meaning attached. Aldous (1996) proposed to consider a one-parameter, −2 ≤ β ≤ ∞, family of probability distributions,

$$q_n(i) = \frac{1}{a_n(\beta)} \cdot \frac{\Gamma(\beta + i + 1)\,\Gamma(\beta + n - i + 1)}{\Gamma(i + 1)\,\Gamma(n - i + 1)}, \qquad i = 1, \ldots, n - 1, \tag{1}$$

where $a_n(\beta)$ is the normalizing constant and $\Gamma(\cdot)$ the Gamma function. We may actually recognize this as the BetaBinomial(n, β + 1, β + 1) distribution and represent

$$q_n(i) \propto \binom{n}{i} \frac{1}{B(\beta+1, \beta+1)} \int_0^1 \tau^{i+\beta} (1-\tau)^{n-i+\beta}\, \mathrm{d}\tau, \tag{2}$$

where $B(a, b)$ is the Beta function with parameters a and b and the n-dependent constants are absorbed into $a_n(\beta)$. Writing informally, from the form of the probability distribution function, Eq. (2), we can see that if we would condition under the integral on τ, then we obtain a binomially distributed random variable. This is a key observation that is the intuition for the analysis presented here. Particular values of β correspond to some well known models. The uniform tree model is represented by β = −3/2, and the pure birth, Yule, model by β = 0.
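For concreteness, the split probabilities of Eqs. (1)-(2) can be computed in base R via the BetaBinomial form; the sketch below is a minimal illustration in which the normalizing constant $a_n(\beta)$ is absorbed by the final renormalization.

```r
# Beta-splitting probabilities q_n(i), i = 1, ..., n-1, via the BetaBinomial
# form of Eq. (2); n-dependent constants are absorbed by the renormalization.
qn_beta_split <- function(n, b) {
  i <- 1:(n - 1)
  w <- choose(n, i) * beta(i + b + 1, n - i + b + 1)  # unnormalized weights
  w / sum(w)
}

q <- qn_beta_split(10, 0)     # Yule case (beta = 0): uniform on 1..9
all.equal(q, rep(1 / 9, 9))   # TRUE
sample(1:9, 1, prob = q)      # one draw of L_n for n = 10
```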
Of particular importance to our work is the limiting behaviour of the scaled size of the left (and hence right) subtree, $n^{-1} L_n$. Aldous (1996) characterizes the asymptotics in his Lemma 3.

Lemma 2.1 (Aldous, 1996, Lemma 3 for β > −1) For β > −1, as n → ∞, $n^{-1} L_n$ converges weakly to a random variable τ with the Beta(β + 1, β + 1) distribution.
Quartet index
Martínez-Coronado et al. (2018b) introduced a new type of balance index for discrete (i.e. without branch lengths) phylogenetic trees: the quartet index.
This index is based on considering the number of so-called quartets of each type made up by the leaves of the tree. A (rooted) quartet is the induced subtree from choosing some four leaves. We should make a point here about the used nomenclature. Usually in the phylogenetic literature a quartet is an unrooted tree on four leaves (e.g. Semple and Steel, 2003). However, here we consider rooted trees and, following Martínez-Coronado et al. (2018b), by (rooted) quartet we mean a rooted tree on four leaves. We will from now on write quartet for this, dropping the "rooted" clarification. For a given tree T, let $P_4(T)$ be the set of quartets of the tree. Then, the quartet index of T is defined as

$$QI(T) = \sum_{Q \in P_4(T)} QI(Q),$$

where QI(Q) assigns a predefined value to a specific quartet. When the tree is a binary one (as here) there are only two possible topologies on four leaves (see Fig. 1). Following Martínez-Coronado et al. (2018b) (their Table 1) we assign the value 0 to $K_4$ quartets and 1 to $B_4$ quartets. Therefore, the QI for a binary tree (QIB) will be

$$QIB(T) = \text{number of } B_4 \text{ quartets of } T. \tag{5}$$

Figure 1: The two possible rooted quartets for a binary tree. Left: $K_4$, the four-leaf rooted caterpillar tree (also known as a comb or pectinate tree); right: $B_4$, the fully balanced tree on four leaves (also known as a fork; see e.g. Chor and Snir, 2007, for some nomenclature).
Importantly for us, Martínez-Coronado et al. (2018b) show in their Lemma 4 that for n > 4 the quartet index has a recursive representation as

$$QIB(T_n) \overset{D}{=} QIB(T_{L_n}) + QIB(T^*_{n - L_n}) + \binom{L_n}{2}\binom{n - L_n}{2}, \tag{6}$$

where $T_n$ is the tree on n leaves (choosing two leaves on each side of the root yields a $B_4$ quartet, while every other quartet lies entirely inside one of the two root subtrees). Martínez-Coronado et al. (2018b) considered various models of tree growth: Aldous's β-model, Ford's α-model (Ford, 2005; but see also Martínez-Coronado et al., 2018a) and Chen-Ford-Winkel's α-γ-model (Chen et al., 2009). In this work we will focus on Aldous's β ≥ 0-model of tree growth and characterize the limit distribution, as the number of leaves, n, grows to infinity, of the QI. We will take advantage of the recursive representation of Eq. (6), which allows for the usage of the powerful contraction method.
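Under the recursion of Eq. (6) as reconstructed above, a single QIB value can be simulated directly from the splitting distribution, without building the tree itself. The sketch below is a minimal illustration in R, taking the recursion at face value for all n ≥ 4.

```r
# Simulate QIB for an n-leaf beta-model tree via the recursion of Eq. (6):
# two leaves chosen on each side of the root form one B4 quartet.
qib_sim <- function(n, b) {
  if (n < 4) return(0)
  i <- 1:(n - 1)
  w <- choose(n, i) * beta(i + b + 1, n - i + b + 1)  # split weights, Eq. (1)
  L <- sample(i, 1, prob = w)                         # left subtree size
  qib_sim(L, b) + qib_sim(n - L, b) + choose(L, 2) * choose(n - L, 2)
}

set.seed(2)
qib_sim(50, 0)   # one realization for a 50-leaf Yule tree
```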
We require knowledge of the mean and variance of the QI; Martínez-Coronado et al. (2018b) derive these for Aldous's β-model in their Corollaries 4 and 7.

3 Contraction method approach

Consider the space D of distribution functions with finite second moment and first moment equalling 0. On D we define the Wasserstein metric

$$d(F, G) = \inf \|X - Y\|_2,$$

where $\|\cdot\|_2$ denotes the $L_2$ norm and the infimum is over all X ∼ F, Y ∼ G. Notice that convergence in d induces convergence in distribution.
For r ∈ N₊ define the transformation S : D → D as

$$S(F) = \mathcal{L}\left(\tau^r Y + (1 - \tau)^r Y' + C(\tau)\right),$$

where Y, Y′, τ are independent, Y, Y′ ∼ F, and τ ∈ [0, 1] is a random variable whose distribution is not a Dirac δ at 0 nor 1, satisfying, for all n, the moment condition of Eq. (9), where $p_{n,i} = P((i-1)/n < \tau \le i/n)$. The function C(·) is of the form

$$C(\tau) = \sum_{r_1, r_2} C_{r_1, r_2}\, \tau^{r_1} (1 - \tau)^{r_2}$$

for some constants $C_{r_1, r_2}$, and furthermore satisfies E[C(τ)] = 0. By Rösler (1992)'s Thms. 3 and 4, S is well defined, has a unique fixed point, and for any F ∈ D the sequence $S^n(F)$ converges exponentially fast in the d metric to S's fixed point. Using the exact arguments used to show Rösler (1991)'s Thm. 2.1, one can show that the map S is a contraction. Only the Lipschitz constant of convergence will differ, being $C_\tau = \left(E\left[\tau^{2r} + (1-\tau)^{2r}\right]\right)^{1/2}$ in our case. Notice that as τ ∈ [0, 1] and is non-degenerate at the edges, then $C_\tau < 1$ and we have a contraction.
We now state the main result of our work (Thm. 3.1): weak convergence, with a characterization of the limit, for a class of recursively defined models.
Notice that, as $Y_1 = 0$ and by the definition of the recursion, we will have $E[Y_n] = 0$ for all n.
The Yule tree case corresponds to β = 0, and in this case the proof of the result will be more straightforward (as commented on in the proof of Thm. 3.1).
Notice that $L_n/n \overset{D}{\to} \tau$. It would be tempting to suspect that Thm. 3.1 should be the conclusion of a general result related to the contraction method (as presented in Eq. (8.12), p. 351, Drmota, 2009). However, to the best of my knowledge, general results assume $L_2$ convergence of $L_n/n$ (e.g. Thm. 8.6, p. 354, Drmota, 2009), while in our phylogenetic balance index case we will have only convergence in distribution. In such a case it seems that convergence has to be proved case by case (e.g. examples in Rachev and Rüschendorf, 1995). Here we show the convergence of Thm. 3.1 along the lines of Rösler (1991).
We first derive a lemma that controls the non-homogeneous part of the recursion, i.e. C n (·) as defined in Eq. (11).
Proof For $1 \le \lfloor (n-1)x \rfloor + 1 \le n - 1$, writing $i = \lfloor (n-1)x \rfloor + 1$, we have, due to the representation of Eqs. (10) and (11), a bound on the difference between $C_n(i/n)$ and $C(x)$ in terms of its individual components. Bounding the individual components, and as by construction x cannot differ from i/n by more than 1/n, the claim follows.

Lemma 3.2 (cf. Rösler, 1991, Prop. 3.3) Let $a_n$, $b_n$, $p_{n,i}$, n ∈ N be three sequences such that $0 \le b_n \to 0$ with n, $0 \le p_{n,i} \le 1$, and a recursive bound $a_n \le C \sum_{i < n} p_{n,i}\, a_i + b_n$ holds for some constant 0 < C < 1. Then $\lim_{n \to \infty} a_n = 0$.
Proof The proof is exactly the same as Rösler (1991)'s proof of his Proposition 3.3. In the last step we will have, with $a := \limsup a_n < \infty$, the sandwiching, for all ε > 0,

$$0 \le a \le C(a + \varepsilon).$$
Having Lemmata 3.1 and 3.2 we turn to showing Thm. 3.1.
Proof [Proof of Thm. 3.1] Denote the law of $Y_n$ as $\mathcal{L}(Y_n) = G_n$. We take $Y_\infty$ and $Y'_\infty$ independent and distributed as $G_\infty$, the fixed point of S. Then, for i = 1, . . . , n − 1 we choose independent versions of $Y_i$ and $Y'_i$. We need to show $d^2(G_n, G_\infty) \to 0$. As the metric is the infimum over all pairs of random variables that have marginal distributions $G_n$ and $G_\infty$, the obvious choice is to take $Y_n$, $Y_\infty$ such that $L_n/n$ will be close to τ for large n. Rösler (1991) was considering the Yule model (β = 0), and there τ ∼ Unif[0, 1] and $L_n$ is uniform on {1, . . . , n − 1}. Hence, $\lfloor (n-1)\tau \rfloor + 1$ will be uniform on {1, . . . , n − 1} (remember P(τ = 1) = 0), and $L_n/n \overset{D}{=} (\lfloor (n-1)\tau \rfloor + 1)/n$. However, when β > 0 the situation complicates. For a given n, $(L_n - 1)$ is BetaBinomial(n − 2, β + 1, β + 1) distributed (cf. Eq. (1) and Aldous (1996), Eqs. 1 and 3). Hence, if τ ∼ Beta(β + 1, β + 1) and $(L_n - 1) \sim$ BetaBinomial(n − 2, β + 1, β + 1), we do not have $L_n/n \overset{D}{=} (\lfloor (n-1)\tau \rfloor + 1)/n$ exactly. We may bound the Wasserstein metric by any coupling that retains the marginal distributions of the two random variables. Therefore, from now on we will be considering a version where, conditional on τ, the random variable $(L_n - 1)$ is Binomial(n − 2, τ) distributed. Let $r_n$ be any sequence such that $r_n/n \to 0$ and $n/r_n^2 \to 0$, e.g. $r_n = n \ln^{-1} n$. Then, by Chebyshev's inequality, the probability that $L_n$ deviates from its conditional expectation by more than $r_n$ is of the order $n/r_n^2 \to 0$. We now want to show $d^2(G_n, G_\infty) \to 0$, and we will exploit the above coupling in the bound so that the expectation of the cross products disappears. Our main step is to have a bound where the $L_n/n$ term is replaced by some transformation of τ. Let $\tilde{r}_n$ be an (appropriate) random integer in $\{\pm 1, \ldots, \pm r_n\}$; we may then write (with the chosen coupling of $L_n$ and τ) a mean value theorem expansion of $(L_n/n)^r$, where $\xi_{\tilde{r}_n} \in (0, \tilde{r}_n)$ is (a random variable) such that the mean value theorem holds (for the function $(\cdot)^r$). As $Y_n$, $Y_\infty$ have uniformly bounded second moments, $0 \le \xi_{\tilde{r}_n} \le \tilde{r}_n \le r_n \le n$, and by assumption $r_n/n \to 0$ and $n/r_n^2 \to 0$, we have that

$$\left(\frac{r\, \tilde{r}_n}{n}\right)^2 E\left[\left(\frac{\lfloor (n-1)\tau \rfloor + 1 + \xi_{\tilde{r}_n}}{n}\right)^{2(r-1)}\right] \to 0,$$

and hence for some sequence $u_n \to 0$ the corresponding error term is bounded by $u_n$. Remembering the assumption $\sup_i n^{-r} h_n(i) \to 0$, the other component can be treated in the same way as $E\left|\left(\frac{L_n}{n}\right)^r Y_{L_n} - \tau^r Y_\infty\right|^2$, with conditioning on τ and then controlling, via $r_n$ and Chebyshev's inequality, $L_n$'s deviation from its expected value. We therefore have, for some sequence $v_n \to 0$, a recursive bound on $d^2(G_n, G_\infty)$. Consider the first term of the right-hand side of the inequality and denote $d^2_{n-1} := \sup_{i \in \{1, \ldots, n-1\}} d^2(G_i, G_\infty)$, where $p_{n,i} = P((i-1)/(n-1) < \tau \le i/(n-1))$. Invoking Lemmata 3.1 and 3.2 and using the assumption of Eq. (9) with R = 2r, we obtain $d(G_n, G_\infty) \to 0$.

4 Limit distribution of the quartet index for Aldous's β ≥ 0-model trees

We show here that the QIB of Aldous's β ≥ 0-model trees satisfies the conditions of Thm. 3.1 with r = 4 and hence the QIB has a well characterized limit distribution. We define a centred and scaled version of the QIB for an Aldous's β ≥ 0-model tree on n ≥ 4 leaves,

$$Y^Q_n = n^{-4}\left( QIB(T_n) - E\left[QIB(T_n)\right] \right).$$

We now specialize Thm. 3.1 to the QIB case and assume $Y_1 = Y_2 = Y_3 = 0$ for completeness.

Theorem 4.1 The sequence of random variables $Y^Q_n$ for trees generated by Aldous's β-model with β ≥ 0 converges with n → ∞ in the Wasserstein d-metric (and hence in distribution) to a random variable $Y^Q \sim \mathcal{Q} \equiv G_\infty$ satisfying the following equality in distribution:

$$Y^Q \overset{D}{=} \tau^4 Y^Q_{(1)} + (1 - \tau)^4 Y^Q_{(2)} + C(\tau),$$

where $Y^Q_{(1)}$, $Y^Q_{(2)}$, τ are independent, $Y^Q_{(1)} \overset{D}{=} Y^Q_{(2)} \overset{D}{=} Y^Q$, τ ∼ Beta(β + 1, β + 1), and C(·) is a polynomial in τ and 1 − τ with E[C(τ)] = 0.

Proof Denote by $P_3(x, y)$ a polynomial of degree at most three in the variables x and y. From the recursive representation of Eq. (6), for n > 4 the recursion for $Y^Q_n$ takes the form required by Thm. 3.1 with r = 4, up to a remainder term of the order $n^{-4} P_3(n, i)$. We verify the assumptions by distinguishing two cases. 1. β > 0:
As in our case r ≥ 1, for R = 2r ≥ 2 the assumptions of Lemma 3.2 are satisfied and the statement of the theorem follows through.
2. β = 0: then directly $p_{n,i} = n^{-1}$, Eq. (9) and the assumptions of Lemma 3.2 are immediately satisfied, and the statement of the theorem follows through. This is the Yule model case, in which the proof of the counterpart of Thm. 3.1 is much more straightforward, as mentioned before.
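As an illustration of sampling from the weak limit, the R sketch below iterates the fixed-point equation on an empirical sample, restricted to the Yule case (β = 0). The cost function used is derived informally from Eq. (6) together with the Yule-case mean E[QIB(T_n)] = (1/3)·C(n, 4); it is an assumption-laden stand-in for the Appendix code, not a copy of it.

```r
# Yule-case (beta = 0) cost function, derived from Eq. (6) and the Yule mean
# (1/3) * choose(n, 4); note that E[C_yule(tau)] = 0 for tau ~ Unif(0, 1).
C_yule <- function(tau) {
  tau^2 * (1 - tau)^2 / 4 + (tau^4 + (1 - tau)^4 - 1) / 72
}

set.seed(3)
m <- 1e5
y <- rep(0, m)             # start from the point mass at 0
for (k in 1:30) {          # the map is a contraction: few iterations suffice
  tau <- runif(m)
  y <- tau^4 * sample(y) + (1 - tau)^4 * sample(y) + C_yule(tau)
}
c(mean(y), sd(y))          # the mean should be close to 0
```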
Remark 4.1 When β < 0 the process $L_n/n$ seems to have a more involved asymptotic behaviour (cf. Lemma 3 of Aldous, 1996, in the β ≤ −1 case). Furthermore, the bounds applied here do not hold for β < 0. Therefore, this family of tree models (including the important uniform model, β = −3/2) deserves a separate study with respect to its quartet index.
Comparing with simulations
To verify the results we compared the simulated values from the limiting theoretical distribution of $Y^Q$ with scaled and centred Yule tree QI values. The 500-leaf Yule trees were simulated using the rtreeshape() function of the apTreeshape R package and Tomás Martínez-Coronado's in-house Python code. Then, for each tree the QI value was calculated by Gabriel Valiente's and Tomás Martínez-Coronado's in-house programs. The raw values $QIB(\mathrm{Yule}_{500})$ were scaled and centred as

$$Y^Q_n = 500^{-4}\left( QIB(\mathrm{Yule}_{500}) - \tfrac{1}{3}\binom{500}{4} \right).$$
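A one-line version of this normalization in R, with hard-coded placeholder values standing in for a vector of raw simulated QIB(Yule_500) outputs (e.g. from the qib_sim() sketch above):

```r
# Centring and scaling raw QIB values as in the text (Yule, n = 500).
qib_vals <- c(8.60e8, 9.00e8, 9.35e8)    # placeholder raw QIB(Yule_500) values
n <- 500
y_q <- n^-4 * (qib_vals - choose(n, 4) / 3)
round(y_q, 5)
```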
Royal Swedish Academy of Sciences (grants no. MG2015-0055, MG2017-0066) and The Foundation for Scientific Research and Education in Mathematics (SVeFUM).
"year": 2018,
"sha1": "fc0460b9ed42d66871de68d8c96238a372e25c73",
"oa_license": null,
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2018/11/26/277376.full.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fc0460b9ed42d66871de68d8c96238a372e25c73",
"s2fieldsofstudy": [
"Biology",
"Mathematics"
],
"extfieldsofstudy": [
"Biology",
"Mathematics"
]
} |
Pancreas MRI Segmentation Into Head, Body, and Tail Enables Regional Quantitative Analysis of Heterogeneous Disease
Background: Quantitative imaging studies of the pancreas have often targeted the three main anatomical segments, head, body, and tail, using manual region of interest strategies to assess geographic heterogeneity. Existing automated analyses have implemented whole-organ segmentation, providing overall quantification but failing to address spatial heterogeneity.

Purpose: To develop and validate an automated method for pancreas segmentation into head, body, and tail subregions in abdominal MRI.

Study Type: Retrospective.

Subjects: One hundred and fifty nominally healthy subjects from UK Biobank (100 subjects for method development and 50 subjects for validation). A separate 390 UK Biobank triples of subjects including type 2 diabetes mellitus (T2DM) subjects and matched nondiabetics.

Field Strength/Sequence: 1.5 T; three-dimensional two-point Dixon sequence (for segmentation and volume assessment) and a two-dimensional axial multiecho gradient-recalled echo sequence.

Assessment: Pancreas segments were annotated by four raters on the validation cohort. Intrarater agreement and interrater agreement were reported using Dice overlap (Dice similarity coefficient [DSC]). A segmentation method based on template registration was developed and evaluated against annotations. Results on regional pancreatic fat assessment are also presented, by intersecting the three-dimensional parts segmentation with one available proton density fat fraction (PDFF) image.

Statistical Tests: Wilcoxon signed rank test and Mann–Whitney U-test for comparisons. DSC and volume differences for evaluation. A P value < 0.05 was considered statistically significant.

Results: Good intrarater (DSC mean, head: 0.982, body: 0.940, tail: 0.961) agreement and interrater (DSC mean, head: 0.968, body: 0.905, tail: 0.943) agreement were observed. No differences (DSC, head: P = 0.4358, body: P = 0.0992, tail: P = 0.1080) were observed between the manual annotations and our method's segmentations (DSC mean, head: 0.965, body: 0.893, tail: 0.934). Pancreatic body PDFF was different between T2DM and nondiabetics matched by body mass index.

Data Conclusion: The developed segmentation's performance was no different from manual annotations. Application on type 2 diabetes subjects showed potential for assessing pancreatic disease heterogeneity.

Level of Evidence: 4. Technical Efficacy: Stage 3.
organs like the heart, liver, and pancreas. 1 While nonalcoholic fatty liver disease (NAFLD) is a well-recognized disease entity, now affecting one fourth of the worldwide population and one third of US adults, 2 nonalcoholic fatty pancreas disease (NAFPD) was only coined relatively recently 1,3 despite showing similar prevalence described in a recent meta-analysis. 4 Analogously to NAFLD, NAFPD triggers inflammatory processes that, if left untreated, may lead to chronic pancreatitis and pancreatic cancer. 5,6 NAFPD has also been linked to type 2 diabetes. 7,8 Early detection of pancreatic disease is therefore important; however, these are often "silent" conditions that only become symptomatic at a late stage, when they may already be irreversible. 5 Incidental findings, where the target organ is near the pancreas, for instance, in quantitative imaging of the liver, potentially offer a way to detect pancreas pathology early.
Pancreatic disease processes, including fat infiltration, fibro-inflammation, and pancreatic cancer, are also spatially inhomogeneous. 9,10 There is increasing interest in studying pancreatic disease and the implications of disease heterogeneity, aiming to describe regional differences and localize pancreatic lesions. Early work using computed tomography (CT) classified uneven pancreatic fat infiltration into multiple subtypes or patterns, depending on the affected regions. 10 Uneven distribution of the islet cells that are responsible for insulin secretion and blood sugar regulation has been reported using histology. 11 Fibrosis has been more commonly found in the ventral pancreas than in the dorsal pancreas in patients with ampullary carcinoma. 12 The frequency of pancreatic cancer also differs regionally, with 60%-70% occurrence in the head of the pancreas, and the symptoms also vary by location. 13,14 Of the imaging modalities commonly used for pancreatic assessment, including histology, endoscopic ultrasound, contrast-enhanced CT, and MRI, only MRI can provide safe, noninvasive quantitative information on pancreas state while providing full coverage and measures of spatial heterogeneity. Quantitative MRI biomarkers such as proton density fat fraction (PDFF) and T1 have shown potential in detecting pancreas steatosis and early-stage chronic pancreatitis, respectively 15,16; PDFF has been used for longitudinally monitoring total pancreatic fat deposition in a diabetes remission trial. 17 The apparent diffusion coefficient from diffusion-weighted imaging has shown potential at grading the malignancy of a certain pancreatic neoplasm type. 18 While some studies using MRI have reported clinically important quantitative differences between pancreas subsegments, 19,20 other studies have not found such differences. 21 The pancreas is anatomically divided into three segments: head, body, and tail. The pancreas head sits within a C-shaped structure formed by the duodenum and joins with the pancreas body via the pancreas neck, a narrowing or "isthmus" that bends around the superior mesenteric vessels. The pancreas neck is typically approximately 2 cm long and is commonly included as part of the head. The pancreas body spans from the left border of the superior mesenteric vein to the left border of the aorta, where it is joined to the tail. It is generally considered that the body-tail boundary is at the midpoint lengthwise of the two segments. 22 Other pancreas subsegment classification systems have been proposed for the purposes of surgical resection based on embryological foundations. 22,23 Most studies of pancreas pathology using MRI have analyzed the images using regions of interest (ROIs), particularly a standard 3-ROI placement strategy targeting pancreatic head, body, and tail, 20,21,24,25 although some have placed an extra ROI in the pancreatic neck. 19 While ROIs have the advantage of avoiding artifactual regions, their choice of placement inevitably adds interobserver variation that may obscure clinically important differences between pancreatic segments.
Pancreas segmentation, which aims to delineate the whole organ using two-dimensional or volumetric scans, has been proposed as an alternative analysis method to the 3-ROI placement strategy; it may reduce observer-dependent bias and provide more advanced metrics for the spatial assessment of chronic disease. 26 Pancreas segmentation can be performed using widely differing amounts of user intervention; however, such is the variability in size and shape of the pancreas that it is often considered too tedious to delineate manually in practice. Manual segmentation is also too costly and generally infeasible in large databases such as the UK Biobank. 27 Metrics derived from pancreas segmentations are clinically important, for instance, total pancreatic volume or the irregularity of the pancreas contour in the context of diabetes. 28,29 Pancreas segmentations may also be used for subsequent characterization of the pancreas in functional or structural quantitative imaging data acquired separately during the same imaging session.
Automated pancreas segmentation methods that have been proposed to date have been based on traditional multiatlas methodology or, more recently, convolutional neural networks. 30,31 However, while these may provide whole-organ measurements, they do not characterize disease regionally by pancreas subsegments. One automated method for pancreas subsegmentation was reported based on k-means clustering 32 and was applied to pancreas motion analysis under radiation therapy. However, this method is dependent on initial seed points and multiple images from multiple breathing phases, and was not validated for accuracy. For these reasons, the validation of a robust, automated approach for pancreas subsegmentation is desirable, with the potential to bridge the gap between currently available technology and standard clinical assessment.
Starting from a whole-organ segmentation, landmark-based approaches have been proposed for subsegmentation into the organ's constituent parts, e.g., the Couinaud segments in the case of the liver, where "landmarks" define planes of separation between the liver segments. 33 However, landmark localization is relatively sensitive to noise and overall image quality. Other methods have addressed organ subsegmentation as a single task, in which segmentation models create a multilabel segmentation, each label corresponding to an individual subsegment. For example, atlas-based segmentation uses image registration to propagate labels from a probabilistic template (constructed offline) to a target dataset. 34 Multiatlas segmentation (MAS) or deep learning (DL) segmentation methods may also be used; however, these typically need individual annotations on training subjects and may require large amounts of annotations. 34 Some DL methods have drawn inspiration from traditional atlas-based methodology. 35 Thus, the aims of this study were to: 1. develop and validate an automated imaging-based method for pancreas subsegmentation and 2. show initial application of the method in regional assessment of pancreatic disease.
Materials and Methods
First, the data that were used for template creation are described together with preprocessing of the training and validation data. Then, a groupwise registration-based parts segmentation method is presented, and the validation experiment is described. Finally, the application of the method to a type 2 diabetes cohort of UK Biobank is shown.
MRI Data
MRI data from the UK Biobank imaging substudy were used. 27 UK Biobank received ethical approval from the North West Multi-center Research Ethics Committee, and written informed consent was obtained for all subjects. One hundred subjects were used for method development, 44 females and 56 males. All were nominally healthy subjects aged 50-70, with a mean age of 55 years for females and 57 years for males. The mean body mass index (BMI) was 25.5 kg/m² for females and 27.1 kg/m² for males. An additional 50 UK Biobank subjects were used for validation, 21 males and 29 females, with a mean age of 53 years and 57 years and a BMI of 25.9 kg/m² and 26.5 kg/m², respectively. As an initial exploration of fat heterogeneity in diseased subjects, a separate dataset of UK Biobank subjects was developed, comprising 390 triples of 1) self-reported type 2 diabetes mellitus (T2DM) subjects; 2) gender-, age-, and BMI-matched nondiabetic subjects; and 3) gender- and age-matched nondiabetic subjects with a chosen BMI of <25 kg/m². These groups of subjects will be referred to as T2DM, matched high BMI nondiabetics, and matched low BMI nondiabetics throughout this work. A total of 390 × 3 = 1170 subjects were selected. Age was matched to within 5 years, and BMI was matched within one point in all cases. The mean age and the mean BMI for the three groups were 57 years and 31.0 kg/m², respectively, for T2DM; 57 years and 30.8 kg/m², respectively, for matched high BMI nondiabetics; and 56 years and 23.0 kg/m², respectively, for matched low BMI nondiabetics.
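As a rough illustration of the matching just described, the following Python sketch selects gender-, age- (within 5 years), and BMI- (within one point) matched triples from a hypothetical subject table. The DataFrame columns and the greedy one-pass strategy are assumptions for illustration, not the study's actual selection code.

```python
# A minimal sketch of matched-triple selection, assuming a hypothetical
# pandas DataFrame with columns: eid, sex, age, bmi, t2dm (bool).
import pandas as pd

def select_matched_triples(df: pd.DataFrame) -> pd.DataFrame:
    """Greedily pair each T2DM subject with two nondiabetic controls:
    one sex-, age- (within 5 y), and BMI- (within 1 kg/m^2) matched control,
    and one sex- and age-matched control with BMI < 25 kg/m^2."""
    cases = df[df["t2dm"]]
    controls = df[~df["t2dm"]].copy()
    triples = []
    for _, case in cases.iterrows():
        pool = controls[(controls["sex"] == case["sex"])
                        & (controls["age"].sub(case["age"]).abs() <= 5)]
        high = pool[pool["bmi"].sub(case["bmi"]).abs() <= 1]
        low = pool[pool["bmi"] < 25]
        if high.empty or low.empty:
            continue  # skip cases without both kinds of match
        hi_id, lo_id = high.iloc[0]["eid"], low.iloc[0]["eid"]
        if hi_id == lo_id:
            continue  # need two distinct controls
        triples.append((case["eid"], hi_id, lo_id))
        # remove used controls so each control is selected at most once
        controls = controls[~controls["eid"].isin([hi_id, lo_id])]
    return pd.DataFrame(
        triples, columns=["t2dm", "matched_high_bmi", "matched_low_bmi"])
```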
All subjects had been scanned with a 1.5 T Siemens Aera scanner (Siemens Healthineers, Erlangen, Germany) using a two-point Dixon protocol covering neck to knee, acquired using six overlapping slabs and uploaded to the UK Biobank as Data-Field 20201. Each slab was acquired using TE = 2.39/4.77 msec, TR = 6.69 msec, 10° flip angle, and pixel bandwidth = 440 Hz. Only datasets from the first imaging session of UK Biobank (instance 2) were used. Slabs were stitched together, and the resulting neck-to-knee volume was cropped to the abdominal region, resulting in a subvolume that generally included slabs 2-4 (more details are available in the study by Owler et al 36 ). Slabs 2-4 each had a reconstructed voxel size of 2.23 mm × 2.23 mm × 4.5 mm, an image matrix size of 224 × 174 with 44 slices, a phase resolution percentage of 71%, and a slice resolution percentage of 100%.
Multiecho gradient-recalled echo (GRE) two-dimensional single-slice data were also obtained from a separate breath-hold scan for the calculation of confounder-corrected PDFF maps, uploaded to the UK Biobank as Data-Field 20260. The GRE scan had a reconstructed voxel size of 2.5 mm × 2.5 mm × 6 mm and an image matrix size of 160 × 160, 10 echoes, TE1 = ΔTE = 2.38 msec, TR = 27 msec, 20° flip angle, and pixel bandwidth = 710 Hz. A confounder-corrected magnitude-based chemical-shift encoding method 37 was used to reconstruct PDFF maps from the raw 10-echo GRE data, which uses a multipeak spectral model of liver fat 38 and accounts for R2* decay.

The whole pancreas had been delineated manually on all 150 training and validation datasets by AB (5 years of experience) as part of previous work. 36 Figure 1 shows three-dimensional renderings of the whole-pancreas segmentations for all subjects in both the template creation dataset and the validation dataset. The volumes and the corresponding whole-organ segmentations were resampled to 2 mm isotropic resolution. We also minimally co-registered the subjects by translating them to align their centroids. The centroid of subject 1 was arbitrarily used as a reference. The prealignment provided a better starting point for the nonlinear registration algorithm, both for template creation and for method inference. Currently, the software is compatible with images that are isotropic, have identical size, and are in approximate alignment with each other.
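A minimal sketch of the preprocessing just described, ie, resampling to 2 mm isotropic voxels and centroid-based prealignment. Voxel-spacing handling is simplified to pure array-space operations, and the function names are illustrative rather than the study's actual pipeline.

```python
# Resample a binary segmentation to isotropic resolution and translate it so
# that its centroid matches a reference centroid (both in voxel coordinates).
import numpy as np
from scipy import ndimage

def resample_isotropic(mask: np.ndarray, spacing: tuple,
                       new_spacing: float = 2.0) -> np.ndarray:
    """Resample a binary mask given its (z, y, x) voxel spacing in mm."""
    zoom = [s / new_spacing for s in spacing]
    out = ndimage.zoom(mask.astype(float), zoom, order=1)  # trilinear
    return (out >= 0.5).astype(np.uint8)                   # rebinarize

def align_centroid(mask: np.ndarray, ref_centroid: np.ndarray) -> np.ndarray:
    """Translate a mask so its centroid coincides with ref_centroid."""
    shift = ref_centroid - np.array(ndimage.center_of_mass(mask))
    out = ndimage.shift(mask.astype(float), shift, order=0)
    return (out >= 0.5).astype(np.uint8)
```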
Method Description
An overview of the groupwise registration method is shown in Fig. 2. The method takes a whole-pancreas segmentation as input, either delineated manually or with an automated approach; in this part of the study, the input whole-pancreas segmentations were manually obtained. First, an average pancreas template is constructed offline using groupwise registration from the N = 100 method development dataset of manual whole-pancreas segmentations. Then, the pancreas parts (head, body, and tail) are manually annotated on the constructed template, resulting in a pancreas parts template. Method inference (parts segmentation) is performed by registration of the pancreas parts template to a new target whole-pancreas segmentation. Then, the registered parts template labels are propagated to the target whole-pancreas segmentation, obtaining a pancreas parts segmentation for that subject. Offline parts template construction and parts segmentation inference steps are detailed in the following paragraphs.
The backbone for template construction is the large deformation diffeomorphic metric mapping (LDDMM) via geodesic shooting algorithm developed by Ashburner and Friston 39 and available under the "Shoot" toolbox of the SPM12 software (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/). The toolbox uses diffeomorphic transformations to co-register all the template construction segmentations iteratively into a population average, ie, the "template" image. MATLAB R2021a (The MathWorks, Inc, Natick, MA, USA) and the batch processing capability of SPM12 were used to run template creation. A probabilistic template (values 0-1) was obtained after four iterations and was binarized by thresholding at 0.5.
Pancreas head, body, and tail were annotated on the template image by AB. Note this template-based approach enables annotation of parts on the constructed template, instead of annotating each of the training subjects individually, thus requiring a single annotation step. The initial assumption was that this approach would be substantially equivalent to annotating each "training" subject individually. One additional advantage of this annotation strategy was that some salient features appear on the template after groupwise registration, which correspond to the landmarks defining the pancreas subsegments. These landmarks may otherwise be difficult to identify in individual cases, and correct landmark identification is highly dependent on image quality. Annotation was performed by defining one boundary plane between head and body and another boundary plane between body and tail.

FIGURE 1: Manual whole-pancreas segmentations from the template construction ("training") dataset and the validation dataset, sorted by subjects' age from youngest to oldest (females in red and males in blue).

FIGURE 2: (1) Offline groupwise registration of the whole-pancreas segmentations generated a population average ("template"), on which (2) the parts were manually annotated ("parts template," head: blue, body: green, tail: yellow). (bottom-right) For a new subject, the method (1) computes a registration transformation from the subject's segmentation to the template, (2) applies the inverse transformation on the parts template, and (3) propagates the warped parts template labels to the segmentation.

Given a whole-pancreas segmentation for a new subject, which can be either manually delineated or computed automatically, the method first computes a registration transformation from the subject's whole-pancreas segmentation to the template (again initialized by aligning the centroids). The method then applies the inverse of that registration transformation to the parts template. Finally, it propagates the labels of the warped parts template to the whole-pancreas segmentation, obtaining a parts segmentation for that new subject.
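The inference steps can be approximated in a few lines of SimpleITK. Note the hedges: a generic B-spline registration stands in for the paper's LDDMM geodesic shooting, the template is registered toward the subject directly (which avoids the explicit transform inversion described above), and all names are illustrative.

```python
# A simplified stand-in for the parts-segmentation inference step.
import SimpleITK as sitk

def segment_parts(subject_mask: sitk.Image, template_mask: sitk.Image,
                  parts_template: sitk.Image) -> sitk.Image:
    fixed = sitk.Cast(subject_mask, sitk.sitkFloat32)
    moving = sitk.Cast(template_mask, sitk.sitkFloat32)

    # Prealignment by matching centroids (moments), mirroring the paper.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.MOMENTS)
    moving_pre = sitk.Resample(moving, fixed, initial, sitk.sitkLinear, 0.0)

    # Deformable registration of the (prealigned) template to the subject.
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()
    reg.SetInterpolator(sitk.sitkLinear)
    bspline = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])  # mesh size assumed
    reg.SetInitialTransform(bspline, inPlace=True)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=50)
    deform = reg.Execute(fixed, moving_pre)

    # Propagate the parts labels with nearest-neighbor interpolation.
    parts_pre = sitk.Resample(parts_template, fixed, initial,
                              sitk.sitkNearestNeighbor, 0)
    warped = sitk.Resample(parts_pre, fixed, deform,
                           sitk.sitkNearestNeighbor, 0)
    # Keep labels only inside the subject's whole-pancreas mask; voxels that
    # receive no label are simply left as 0 in this simplified sketch.
    return sitk.Mask(warped, sitk.Cast(subject_mask, sitk.sitkUInt8))
```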
Validation
The initial manual pancreas segmentations from the validation dataset were subsegmented using the described groupwise registration method and also using the method based on k-means proposed by Fontana et al. 32 The latter was implemented for a single image (single "breathing phase"), choosing the initial cluster centroids using the k-means++ algorithm. A dedicated annotation protocol based on the three-dimensional "scalpel" tool of ITK-SNAP (http://www.itksnap.org/) 40 was developed that described how to manually annotate a whole-pancreas segmentation into parts. The protocol gave instructions for the drawing of two separation planes, one plane at the head-body boundary and one plane at the body-tail boundary, both as perpendicular to the pancreas centerline as possible. The protocol was distributed to four separate medical imaging scientists with varying degrees of experience in annotating abdominal medical images for research: AB (5 years of experience), MB (25 years of experience), PA (10 years of experience), and JR (<1 year of experience), referred to as R1 to R4, to produce reference annotations. R4, whom we refer to as the naïve observer, was a recently hired technologist with no prior experience with pancreas anatomy or pancreas imaging and was included in the study so that the robustness of the annotation protocol to rater experience could be estimated. Of the 50 subjects, 10 were randomly included twice in the dataset for the purpose of assessing intraobserver variability (referred to as annotation a and annotation b). This yielded a total of 60 annotations per rater. Interobserver variability was also assessed by comparing annotations over multiple raters. The interobserver performance may be used as a comparative benchmark for the automatic results.
For automated and manual segmentations, the volumes of individual parts were determined, as was the pancreatic fat by parts from PDFF maps (when available). For the latter, the median PDFF values from head, body, and tail were reported after reslicing the parts segmentation onto the reconstructed PDFF map. The three-dimensional parts segmentation volume was intersected with the two-dimensional PDFF map using the DICOM Reference Coordinate System information, as illustrated in Fig. 3. Differences in volumes and PDFF between the automated parts segmentation and the manual parts segmentations were reported for each subject. As a quality control (QC) step, segment masks with an area of ≤30 pixels were excluded from the comparisons. The median PDFF of the segment masks was reported after excluding pixels with values exceeding 50%, followed by morphological opening with a disk structuring element of 3 pixels in diameter. The 50% PDFF threshold aimed to exclude nonparenchymal pancreatic tissue, eg, surrounding visceral adipose tissue that could have been introduced due to slight subject motion between breath-holds. Individual segments were excluded from further statistical analysis if they did not meet these QC criteria.
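The QC and median-PDFF step can be expressed compactly; the sketch below assumes a 2D PDFF map in percent and a resliced 2D segment mask as numpy arrays. The thresholds and the 3-pixel-diameter disk follow the text, while re-checking the 30-pixel criterion after opening is an assumption of this sketch.

```python
# Per-segment median PDFF with the QC exclusions described in the text.
import numpy as np
from scipy import ndimage

def disk(radius: int) -> np.ndarray:
    """Discrete disk structuring element (radius 1 gives a 3-pixel diameter)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def segment_median_pdff(pdff: np.ndarray, mask: np.ndarray):
    """Return the median PDFF for one segment, or None if QC fails."""
    mask = mask.astype(bool)
    if mask.sum() <= 30:              # exclude tiny resliced segments
        return None
    keep = mask & (pdff <= 50.0)      # drop likely non-parenchymal pixels
    keep = ndimage.binary_opening(keep, structure=disk(1))
    if keep.sum() <= 30:              # re-check size after opening (assumption)
        return None
    return float(np.median(pdff[keep]))
```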
Pancreatic Fat Quantification by Parts in Type 2 Diabetes
Since manual whole-pancreas delineations were not available for these subjects, automated whole-pancreas segmentations obtained previously in the study by Owler et al 36 were used, computed using the Attention U-Net model based on the work of Schlemper et al. 31 The groupwise registration-based automated parts segmentation method was run on the automated whole-pancreas segmentations. The reslicing plus QC approach explained in the previous section was run to measure median fat accumulation in the pancreatic head, body, and tail. Pancreatic fat quantification by parts was compared between the three subject groups.
Statistical Analysis
Direct validation of automated pancreas subsegmentation was performed using generally accepted segmentation performance metrics, namely the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (95%HD), as well as the reported volume of each part. Intraobserver agreement, interobserver agreement, and "manual vs. automated" agreement were evaluated using Bland-Altman analysis and right-tailed Wilcoxon signed rank statistical testing. For the 10 subjects used in intrarater variation assessment, the following comparisons were generated for the experienced raters and combined: R1a vs. R1b, R2a vs. R2b, R3a vs. R3b, yielding a total of 30 datapoints (10 subjects × 3 raters). Intrarater agreement of the inexperienced rater R4 (R4a vs. R4b) was reported separately and compared to the intrarater agreement of R1-R3. For interobserver variation assessment, three comparisons among raters were combined, R1 vs. R2, R1 vs. R3, and R2 vs. R3, with 50 parts segmentations in each comparison, yielding 150 datapoints. The robustness of the annotation protocol to rater experience was tested by comparing the segmentation performance in terms of DSC overlap of the naïve observer R4 vs. themselves and vs. R1-R3. For manual vs. automated (Auto), the following comparisons were performed and combined for each automated method separately: R1 vs. Auto, R2 vs. Auto, and R3 vs. Auto, each with 50 parts segmentations, resulting in 150 datapoints. Indirect validation was performed by evaluating agreement between automated and manual parts segmentation quantification of PDFF using Bland-Altman analysis and the Wilcoxon signed rank test. The Mann-Whitney U-test was used to compare quantification by parts across groups of subjects with and without type 2 diabetes.
A P value < 0.05 was considered to be statistically significant.
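For concreteness, here is a sketch of the two segmentation metrics named above (DSC and 95%HD) for binary masks on the 2-mm isotropic grid. The directed distances are measured from each mask's surface voxels to the other mask's foreground via Euclidean distance transforms, a common approximation of surface-to-surface distance; this is not necessarily the exact implementation used in the study.

```python
# Dice similarity coefficient and 95th percentile Hausdorff distance.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing: float = 2.0) -> float:
    a, b = a.astype(bool), b.astype(bool)
    # EDT of the complement gives, at each voxel, the distance (in mm) to the
    # nearest foreground voxel of the respective mask.
    dt_a = ndimage.distance_transform_edt(~a, sampling=spacing)
    dt_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    surf_a = a ^ ndimage.binary_erosion(a)   # surface voxels of A
    surf_b = b ^ ndimage.binary_erosion(b)   # surface voxels of B
    d_ab = dt_b[surf_a]                      # directed distances A -> B
    d_ba = dt_a[surf_b]                      # directed distances B -> A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```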
Results

Direct Validation
Manual and automated parts segmentations for the first 10 subjects in the validation set are displayed in Fig. 4. Three-dimensional renderings of parts segmentations from all four raters, R1-R4, including the naïve rater (R4), as well as automated parts segmentations from the k-means method (A1) and the groupwise registration method (A2), are shown. The intrarater agreement of the experienced observers R1 (R1a vs. R1b), R2 (R2a vs. R2b), and R3 (R3a vs. R3b) was not significantly higher than the intrarater agreement of the naïve observer R4 (R4a vs. R4b) (eg, for R1, head: P = 0.3848). Intrarater agreement and interrater agreement were reported for each rater, separately by pancreatic head, body, and tail (Table 1). While no predefined time gap was specified between repeat annotations for intraobserver assessment, the actual time between repeat annotations varied from 2 hours for R2 to 1 month for R3. Excellent intraobserver (Dice overlap, head: 0.982, body: 0.940, tail: 0.961, N = 30) as well as interobserver (Dice overlap, head: 0.968, body: 0.905, tail: 0.943, N = 150) agreement was observed in terms of segmentation performance for the combined R1, R2, and R3 metrics.
"Manual vs. automated" differences in DSC, 95% HD, and volumes were reported combined across raters 1-3 for both the k-means method and the groupwise registration method (Table 2). A statistically significant difference was found between "manual vs. k-means method" agreement and "manual vs. groupwise registration method" agreement for the head and body segments, using DSC, but not for the tail (P = 0.6237).
"Manual vs. k-means method" agreement from Table 2 was significantly different to inter-rater agreement from Table 1 in the head and body, using DSC, but not for the tail (P = 0.3965). No significant difference was found between "manual vs. groupwise registration method" agreement from Table 2 and interrater agreement from Table 1, using DSC (head: P = 0.4358, body: P = 0.0992, tail: P = 0.1080).
Indirect Validation
Thirty-eight of the validation set subjects (76%) had available multiecho gradient echo data that enabled PDFF measurement. Note that, since the pancreatic PDFF scan is a single-slice acquisition, the pancreatic head will not always be present in the image due to variable slice positioning. Similarly, when the slice position is too low, the pancreatic tail will not be visible. After processing and QC, a total of 14 subjects with visible pancreatic head, 34 with visible body, and 29 with visible tail remained for quantification.
Excellent interobserver agreement in PDFF quantification was observed in the Bland-Altman comparisons by pancreatic segment. Both automated segmentation methods, the k-means method and the groupwise registration method, showed PDFF quantification agreement comparable to the interobserver comparisons (Fig. 6). No significant differences were found between PDFF quantification from the k-means method and PDFF quantification from the groupwise registration method, reported separately by parts (head: P = 0.5186, body: P = 0.1313, tail: P = 0.5841).
An example of a subject's PDFF map with the resliced parts segmentations from rater 1, rater 2, rater 3, the groupwise registration method, and the k-means method is shown in Fig. 7.
In terms of PDFF differences between parts within given subject groups, a similar pattern was observed for all three cohorts: significant differences were observed between head PDFF and body PDFF, as well as between head PDFF and tail PDFF, whereas the difference between body PDFF and tail PDFF was not significant in any of the groups: T2DM (P = 0.90072), matched high BMI nondiabetics (P = 0.071693), and matched low BMI nondiabetics (P = 0.65078).
Discussion
This work presented and validated a fully automated method based on groupwise registration to subsegment the pancreas into its main anatomical parts: head, body, and tail. The method is based on a single population average or "template" image and a single annotation stage on the template, which yields a parts template that may be used for pancreas subsegmentation in new subjects. The method was validated against manual annotations from expert observers in subjects from the UK Biobank imaging substudy and was compared to previously proposed methodology based on k-means clustering. 32 Validation metrics included segmentation performance metrics as well as more clinically meaningful metrics like volume of parts and fat quantification by parts, which was obtained by intersecting the parts segmentations with PDFF maps. Then, as an initial exploration of the clinical value of parts segmentation, the method was applied to a separate UK Biobank cohort including type 2 diabetics (self-reported) as well as gender-, age-, and BMI-matched nondiabetic individuals, where the spatial distribution of pancreatic PDFF was evaluated.
Note that automated whole-pancreas segmentation could have been used to generate both the template creation dataset and the validation dataset. However, using manual whole-pancreas segmentations minimized introducing errors in the annotation of parts. This modular approach, in which segmentation of the whole pancreas and the constituent parts are treated separately, expedites validation of the subsegmentation method and allows for the introduction of improved whole-pancreas segmentation methods when they become available. In the final experiment, which showed the potential of parts segmentation, automated segmentations were used.
Excellent intrarater and interrater agreement was observed among all raters for the proposed head, body, and tail annotation protocol. This was true not only for the three experienced raters (R1, R2, and R3) but also for the "naïve" rater (R4), suggesting that the annotation protocol is robust and repeatable and can be deployed by raters with a wide range of experience. High interobserver agreement facilitates (rather than discourages) automation, because it ensures consistent training labels for a specific task.
Most literature quantifies imaging biomarkers by head, body, and tail, 20,21,24 as in the work presented here, although some researchers have considered the pancreatic neck separately in the quantification. 19 Considering the neck as an additional segment during annotation, for instance, by subdividing the head further into head and neck, could lead to increased interrater variation. In any case, considering the image resolution of the PDFF map in UK Biobank, the pancreatic neck area would comprise few pixels, diminishing the reliability of neck PDFF quantification. Other acquisitions and applications may be more suitable for separate neck quantification, which we will revisit in future work. Other pancreas subsegmentation systems, for instance, those incorporating an embryological basis, 22,23 should also be considered in the future, for they may provide complementary regional assessment of the pancreas.
Excellent agreement was observed between the manual annotations and the automated groupwise registration method, in terms of segmentation performance and derived PDFF quantification. Significant differences were observed between manual raters and the automated k-means method at partitioning pancreatic head and body, although these did not significantly impact derived PDFF quantification. The agreement between expert raters' and the automated methods' quantification suggests that the latter may be used in databases like the UK Biobank, where manual annotation is too costly or infeasible. Automation also reduces friction for a method's deployment into a clinical setting.
The k-means method has the advantage of being unsupervised; however, the surrogate identification of pancreatic segments through clustering may not align well with the actual anatomical definition, compared to, for instance, using a template as in the groupwise registration method. This could explain the observation that the k-means method overestimated the head segment in the qualitative comparison. One other advantage of groupwise registration methods is that they may be used for subsequent statistical analysis of biological variation across the population. Furthermore, since the three-dimensional parts segmentations themselves could eventually provide clinically important information, for instance, individual pancreatic segment volumes, direct segmentation performance metrics are important, for which the k-means method did not provide results comparable to manual annotations. For these reasons, the groupwise registration method was used in the subsequent experiment, which characterized regional quantification of fat in type 2 diabetics.
As an initial exploration of the clinical application of our parts segmentation, we considered three matched groups: self-reported T2DM subjects, BMI-matched nondiabetics (mean BMI: 31.0 kg/m²), and age- and gender-matched nondiabetics with low BMI (mean BMI: 23.0 kg/m²). Significantly higher whole-pancreas PDFF in diabetics than in nondiabetics has been reported previously. 25 However, we have shown that PDFF in the pancreatic body is significantly different between T2DM and BMI-matched nondiabetics, demonstrating the potential importance of parts segmentation beyond whole-pancreas measurements, which may obscure subtle but clinically important differences. One other study showed PDFF in the pancreatic tail to be most predictive for T2DM development within 4 years. 20 Our finding needs to be examined in more detail in future validation, eg, using dedicated T2DM cohorts with longitudinal follow-up. The significant differences in pancreatic fat content between the pancreatic parts reported in this study emphasize the importance of segmentation-based approaches over ROI protocols, which should at least be "balanced" when used, meaning they should target all pancreatic segments, for instance, using multiple slices at different positions.
One method simplification could be introduced based on detecting the body-tail boundary using the pancreas segmentation centerline: the midpoint in length between the head-body boundary and the tip of the pancreatic tail would define the body-tail boundary, similar to the anatomical definition used in this work, that is, the midpoint of the total length of the body and tail, from the work of Suda et al. 22 We may also choose to fit each predicted boundary to a plane, similar to the planes drawn in manual annotation, that is orthogonal to the pancreas centerline; in this scenario, the scalar distance between the manual and predicted boundary planes may be used as the validation endpoint.
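A sketch of this midpoint simplification is given below: geodesic distance is propagated from the head-body boundary through the body-plus-tail mask, and voxels beyond half the maximum distance are labeled as tail. The 6-connectivity and the seeding from a precomputed boundary mask are illustrative assumptions, not the proposed implementation.

```python
# Split body+tail at the geodesic midpoint between the head-body boundary
# and the tip of the pancreatic tail.
from collections import deque
import numpy as np

def geodesic_distance(mask: np.ndarray, seeds: np.ndarray) -> np.ndarray:
    """6-connected BFS distance (in voxels) inside `mask` from `seeds`."""
    dist = np.full(mask.shape, -1, dtype=int)
    q = deque()
    for p in np.argwhere(seeds & mask):
        dist[tuple(p)] = 0
        q.append(tuple(p))
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                    and mask[n] and dist[n] < 0:
                dist[n] = dist[(z, y, x)] + 1
                q.append(n)
    return dist

def split_body_tail(body_tail: np.ndarray, head_boundary: np.ndarray):
    """Label voxels past the geodesic midpoint as tail, the rest as body."""
    d = geodesic_distance(body_tail.astype(bool), head_boundary.astype(bool))
    midpoint = d.max() / 2.0      # half the distance to the tail tip
    tail = d > midpoint
    body = body_tail.astype(bool) & ~tail
    return body, tail
```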
To date, we have studied regional differences in pancreatic PDFF, but note that the method is also suited to reporting differences in other biomarkers, such as T1, as long as the corresponding parametric maps are acquired within the same imaging session.
Limitations
While our PDFF reconstruction accounted for major confounders, such as R2* decay, multipeak fat modeling, and phase errors (although some T1 bias remained owing to the flip angle), the two-dimensional nature of the PDFF scan created some limitations; namely, some pancreatic segments were not visible on the two-dimensional PDFF map for a given subject, and in such cases, only the visible segments were included in further statistical analysis. Most frequently, the pancreatic head was not visible on the two-dimensional PDFF map, which yielded an unpaired, imbalanced dataset of segmental PDFF values. This also weighted "whole" PDFF quantification toward the body and tail PDFF, relative to the head PDFF. Moreover, the fact that PDFF came from a separate breath-hold scan may have introduced unwanted misalignment between the three-dimensional segmentation and the PDFF map, leading to errors in PDFF quantification. The two-point Dixon scan from the UK Biobank readily provided three-dimensional water and fat images (with which fat fraction may be computed); however, the presence of the mentioned confounders, as well as potential fat-water swap artifacts, discouraged their use for regional pancreatic fat quantification. The lower resolution of two-point Dixon also complicates any postprocessing steps that are taken to avoid surrounding structures that spuriously affect fat quantification. In the future, three-dimensional multiecho GRE acquisitions could be set up for simultaneous pancreas segmentation and confounder-corrected three-dimensional PDFF mapping, which would partially address these concerns.
One criticism of templates is that they might average out differences between subjects. An approach that considers multiple templates based on major components of variation may be useful, eg, using clinical metadata or imaging-based and radiomics features. 30 However, this increases the number of templates that need separate annotation. At the extreme of this approach sit MAS methods and DL methods, for which individual subjects in the training set need manual annotation of parts. 34 Our approach seemed to balance performance and annotation efficiency well and may also generalize more robustly to various scan settings, compared to, for instance, DL methods. The template method's segmentations on the subjects it was trained with may provide good estimations of subsegmentations that could be used if labeling individual subjects is required, eg, in MAS or DL methods, speeding up the annotation process. The agreement observed between expert annotations and our automatic method supports this claim.
One limitation of applying our method on type 2 diabetics is that the method was developed on UK Biobank data comprising nominally healthy volunteers aged 50-70 with no self-reported diabetes of any type. Applying the method to the type 2 diabetes cohort might impair method performance and needs more careful evaluation. We plan to expand the method development cohort in a future version.
Conclusion
This study demonstrated the feasibility of automated pancreas parts segmentation and downstream pancreatic imaging biomarker quantification by using groupwise registration of whole-organ segmentations to a template and subsequent annotation of the template image. This enables segmental characterization of heterogeneous pancreatic disease.

Acknowledgments

We also thank the Engineering and Physical Sciences Research Council (EPSRC) for the doctoral studentship award. This research was conducted using the UK Biobank Resource under Application Number 9914.
"year": 2022,
"sha1": "4bfbd48c636c9bfabe6d601eea2d02cc2589ab90",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1101/2021.11.30.21266158",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "289c3f3ed0c710805c6bae1b230726348dc7d96b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Association of Primary Humoral Immunodeficiencies With Psychiatric Disorders and Suicidal Behavior and the Role of Autoimmune Diseases
This cohort study assesses whether primary humoral immunodeficiencies that affect antibody level and function are associated with lifetime psychiatric disorders and suicidal behavior, and whether this association is explained by the co-occurrence of autoimmune diseases among adults in Sweden.
Mounting evidence suggests that immune disruption may be etiologically important in psychiatric disorders through a range of mechanisms, such as altered neurodevelopment, postinfectious priming of microglia, or microbial dysbiosis. [1][2][3] However, little is known about the neuropsychiatric consequences resulting from the underproduction of homeostatic antibodies. Thus, primary humoral immunodeficiencies (PIDs) 4 could provide an interesting model to disentangle the effects of humoral immunodeficiency and autoimmune diseases on psychiatric disorders.
Most PIDs affect antibodies 5 and are associated with increased risk of recurrent infections as well as with a markedly increased risk of developing autoimmune diseases. 6,7 Selective IgA deficiency (the most common PID among white individuals) 8 has been linked to increased infections within the mucosa-associated lymphoid tissue (MALT), an important immune barrier. 8 In turn, infections within the MALT have long been suspected of association with certain forms of psychopathology in children, particularly obsessive-compulsive disorder and chronic tic disorders. 9 Preclinical evidence suggests that repeated inoculation of group-A streptococci in the homologous tonsil region of mice leads to T helper 17 cell proliferation, blood-brain barrier breakdown, and inflammation within the brain, even without evidence of bacterial invasion. 10 A large population-based study showed that chronic inflammation within the MALT (using tonsillectomy as a proxy) was robustly associated with a broad range of psychiatric disorders and suicidal behavior, 11 suggesting that the association between immune dysfunction and psychopathology may not be uniquely relevant to obsessive-compulsive disorder and tic disorders. In fact, a growing body of evidence suggests that all studied forms of psychopathology are associated with autoimmune disease. [12][13][14][15][16][17][18] This study aimed to explore whether PIDs that affect immunoglobulins are associated with a broad range of psychiatric disorders and suicidal behavior. Further, we explored the potential contribution of co-occurring autoimmune diseases to the observed associations and used a within-family design to account for shared familial confounders. Given that PIDs are rare disorders, the use of the Swedish nationwide registers provides a unique opportunity to extend our knowledge on the association among immune deficiencies, autoimmune diseases, and psychiatric disorders and suicidal behavior.
Methods
The study was approved by the regional ethical review board in Stockholm. The board waived the requirement for informed consent because the study was register based and the included individuals were not identifiable at any time. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.
Data Sources
Swedish nationwide health and administrative registers were linked using the unique national identification number. 19 Data on demography and migration were extracted from the Total Population Register and the Migration Register. 20 Data on kinship were extracted from the Multi-Generation Register, 21 and causes and dates of death were collected from the Cause of Death Register. 22 Data covering prescribed and dispensed medications from July 2005 were extracted from the Prescribed Drug Register. 23 The National Patient Register provided information on diagnoses given in both inpatient (from 1964, with nationwide coverage for psychiatric disorders from 1973) and outpatient (since 2001) specialist services. 24 Diagnoses are based on the International Classification of Diseases, Eighth Revision (ICD-8; 1969-1986), the International Classification of Diseases, Ninth Revision (ICD-9; 1987-1996), and the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10; 1997 onward).
Study Population
A population-based cohort included all individuals living in Sweden anytime from January 1, 1973, to December 31, 2013. Register-based data on exposure, outcomes, and covariates were collected through December 31, 2013. Individuals with a record of PID were linked to their full siblings, and a family identification number was created. A sibling comparison model accounts for unmeasured familial confounders, given that full siblings share a mean of 50% of genetic factors and much of the early environment. Siblings were identified within the same cohort and were included in the family-level analysis if they had at least 1 full sibling discordant for PID.
Variables
Exposure

Individuals with any PID diagnosis affecting immunoglobulin levels ever recorded in the National Patient Register from 1973 to 2013 were considered exposed (see ICD codes in eTable 1 in the Supplement). Lifetime exposure to PIDs was dichotomized as any PID vs no PID. In addition, we identified individuals with a lifetime diagnosis of selective IgA deficiency from 1997 to 2013 (ICD-10 only; eTable 1 in the Supplement). Individuals were considered unexposed if no records of any PID were identified.
Key Points
Question Are primary humoral immunodeficiencies associated with psychiatric disorders and suicidal behavior?
Findings In this population-based cohort study of 8378 patients in Sweden, having a record of primary humoral immunodeficiencies was associated with greater odds of psychiatric disorders and suicidal behavior, even after controlling for autoimmune diseases and familial confounding. The associations were significantly stronger in women and among those exposed to primary humoral immunodeficiencies and autoimmune diseases.
Meaning Primary humoral immunodeficiencies were robustly associated with psychiatric disorders and suicidal behavior, particularly in women; the association could not be fully explained by co-occurring autoimmune diseases or by familial confounding, and the mechanisms remain to be elucidated.
We collected lifetime records of autoimmune diseases based on specific diagnoses (see ICD codes in eTable 2 in the Supplement), from which we constructed a dichotomous variable for any autoimmune disease. Then, we constructed a variable for joint exposure, which was categorized as any PID only, any autoimmune disease only, any PID plus any autoimmune disease, or none.
Outcomes
A lifetime record of a psychiatric disorder or a suicide attempt in the National Patient Register (as inpatient or outpatient care) or a record of death by suicide in the Cause of Death Register constituted the outcome. Psychiatric disorders included 12 individual disorders (see diagnoses, ICD codes, and median [interquartile range (IQR)] age at the first record of diagnosis in eTable 3 in the Supplement). We created dichotomous variables for each individual disorder and constructed a combined variable for any psychiatric disorder. For suicidal behavior outcomes, we retrieved data on all deaths by suicide and all lifetime records of suicide attempts (see diagnoses, ICD codes, and median [IQR] age at the first record of diagnosis in eTable 4 in the Supplement). We constructed separate dichotomous variables for suicide attempts and death by suicide and a combined variable for any suicidal behavior. Minimal age limits were applied for identification of outcomes to reduce the risk of diagnostic misclassification (eTables 3 and 4 in the Supplement).
Covariates
Lifetime records of autoimmune diseases (as described above) were considered as a potential confounder. From the Total Population Register, we collected information on the individuals' birth year and sex.
Statistical Analysis
Data were analyzed from May 17, 2019, to February 21, 2020. All tests used 2-tailed, unpaired testing, with a significance level set at P < .05.
Cohort Analyses
For the main analysis, logistic regression models were fitted to estimate odds ratios (ORs) and 95% CIs for the association between lifetime records of PID and the following outcomes: (1) any psychiatric disorder, (2) individual psychiatric disorders, (3) any suicidal behavior, (4) suicide attempts, and (5) death by suicide. A model minimally adjusted for birth year and sex was followed by a model with an additional adjustment for lifetime records of any autoimmune disease.
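As an illustration of the modeling strategy, the following statsmodels sketch fits the minimally adjusted and fully adjusted logistic models and reports the odds ratio for PID. The DataFrame and its 0/1 indicator columns (outcome, pid, autoimmune) and covariates (birth_year, sex) are hypothetical stand-ins for the register-derived variables; this is not the study's actual analysis code.

```python
# Minimally and fully adjusted logistic regression models for a lifetime
# outcome, with odds ratios and 95% CIs for the PID exposure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_cohort_models(df: pd.DataFrame):
    # Model 1: minimally adjusted for birth year and sex
    minimal = smf.logit("outcome ~ pid + birth_year + C(sex)",
                        data=df).fit(disp=0)
    # Model 2: additionally adjusted for any autoimmune disease
    full = smf.logit("outcome ~ pid + birth_year + C(sex) + autoimmune",
                     data=df).fit(disp=0)
    for name, model in [("minimal", minimal), ("fully adjusted", full)]:
        or_pid = np.exp(model.params["pid"])
        lo, hi = np.exp(model.conf_int().loc["pid"])
        print(f"{name}: OR = {or_pid:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
    return minimal, full
```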
Sibling Analyses
Conditional (fixed-effect) logistic regression models were fitted in the subcohort of full siblings discordant for PID, for which each family was considered a stratum. Within a family, all full siblings were compared with each other, with exposed siblings having their unexposed siblings as controls with the same adjustment strategy as in the cohort analysis.
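A corresponding sketch of the sibling analysis uses statsmodels' conditional (fixed-effect) logistic regression, with each family as a stratum. Column names are again hypothetical, and sex is coded as a 0/1 indicator so that it can enter the conditional model directly.

```python
# Conditional logistic regression with one stratum per family.
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

def fit_sibling_model(df: pd.DataFrame):
    # Families in which the outcome does not vary contribute nothing to the
    # conditional likelihood; only discordant sibling sets are informative.
    exog = df[["pid", "birth_year", "sex_male", "autoimmune"]]
    model = ConditionalLogit(df["outcome"], exog, groups=df["family_id"])
    return model.fit()
```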
Additional Analyses
Four additional analyses were performed. First, to determine the possible additive effect from multiple disruptions of the immune system, we assessed the effect of joint and single exposure to any PID and any autoimmune disease in comparison with none of these diagnoses. A logistic regression model was applied to the whole study cohort and adjusted for birth year and sex.
Second, we focused on selective IgA deficiency, a less severe PID than common variable immunodeficiency. A logistic regression model was fitted in the subcohort of those living in Sweden from 1997 to 2013, with the same adjustment strategy as in the main analysis. Individuals with PIDs other than selective IgA deficiency were excluded from this analysis. A sibling analysis was conducted by comparing individuals with selective IgA deficiency with their full siblings with no records of any PID.
Third, the analyses for outcomes in association with joint and single exposure to any PID and any autoimmune disorder were stratified by sex. Fourth, to address a potential bias from cases with more severe outcomes, we repeated the main analyses after excluding individuals with an exposure and outcome diagnosed before 2001 (ie, when the National Patient Register records were based solely on inpatients visits). This analysis was conducted for individuals living in Sweden in 2001 onward and based on data from both inpatient and outpatient outcome records to ensure generalizability of the results to patients with less severe exposures or outcomes.
Results
In the initial cohort of 14 306 315 individuals, we identified 8378 patients (4947 women [59.0%] and 3431 men [41.0%]; median age at first diagnosis, 47.8 [IQR, 23.8-63.4] years) with a record of PID affecting immunoglobulin levels. Among the patients with PID, 2309 (27.6%) had a diagnosis of an autoimmune disease, whereas among the unexposed individuals, 967 774 (6.8%) had a diagnosis of an autoimmune disease, with a statistically significant difference in proportions (P < .001). Table 1 reports the descriptive characteristics of the full cohort and sibling subcohort by exposure status.
Cohort Analyses
A total of 1720 individuals who had a record of PID (20.5% of all individuals with PID) and 1 524 737 of the unexposed individuals (10.7%) had at least 1 diagnosis of a psychiatric disorder (Figure).
Selective IgA Deficiency
In a subcohort of individuals living in Sweden from 1997 onward, 3123 had a record of selective IgA deficiency and were compared with 11 593 012 individuals without any PID. In the analyses of the whole cohort, fully adjusted models revealed increased likelihoods for the aggregated outcomes, including any psychiatric disorder.
Sex-Stratified Analyses
The associations between PID and any psychiatric disorder and any suicidal behavior were significantly larger for women.
Discussion
The main finding of the study is that PIDs are significantly associated with a wide range of psychiatric conditions and suicidal behavior, particularly in women. The associations could not be explained by co-occurring autoimmune diseases or by familial confounders shared by siblings. Furthermore, the association with psychiatric disorders and suicidal behavior was markedly stronger for joint exposure to PID and autoimmune disease than for single exposure to any of these disorders, suggesting an additive effect from these immune-related conditions. Overall, this study is the first, to our knowledge, to provide robust evidence of an association between PIDs and a wide range of psychiatric disorders and the first to demonstrate an association between these immune conditions and suicidal behavior.
Multiple aspects of this study are worth noting. First, although PIDs were associated with an increased likelihood of most psychiatric disorders, the strongest association was found with autism spectrum disorders, a finding that remained significant, albeit attenuated, in the sibling comparison. Multiple lines of evidence suggest that immunological disruption may be involved in the etiopathogenesis of autism spectrum disorders, either through altered maternal immune function in utero or through immune disruption after birth. 25 Although the current findings cannot inform on any pathological interaction between PIDs and autism spectrum disorders, previous reports of increased inflammatory bowel disease 26 and gut microbiota dysbiosis 27 in autism spectrum disorders may be related to PIDs because both selective IgA deficiency and common variable immunodeficiency are associated with an increased risk for inflammatory bowel disease and microbiota dysbiosis. [28][29][30] At present, no controlled studies have reported the rate of PIDs in autism spectrum disorders or any effect of PID on the clinical course of autism spectrum disorders, although further research in this area is warranted based on our data.
Another unique finding is the association of PIDs with suicidal behavior, as well as evidence that individuals with joint exposure to PID and autoimmune disease displayed the highest association with suicidal behavior compared with individuals with single exposures. To our knowledge, this report is the first to delineate an association between immune deficiency and suicidal behavior, although autoimmune diseases, inflammation, and infections have all been previously shown to increase the risk of suicide. 31,32 The mechanisms underlying these associations deserve further study, and the results suggest that psychiatric screening and behavioral health maintenance may be necessary in patients with PIDs. In our sex-stratified analyses, women exposed to PID only, but not those exposed to autoimmune disease only, appeared particularly vulnerable to psychopathology, suggesting that sex-specific mechanisms may be at play. These mechanisms require further investigation, and clinicians should be aware that women with PID may be in particular need of careful long-term monitoring of psychiatric disorders and suicidal behavior.
The broader implications of this study are worth considering. Our data indicate that PID is significantly associated with most of the analyzed psychiatric conditions and suicidal behavior, and the associations could not be explained by autoimmune diseases or shared familial confounders. This finding suggests that antibody dysfunction may play a role in psychiatric disorders. However, the mechanisms that may underlie the association between PID and psychiatric outcomes are likely complex and cannot be directly determined by the present study. Two major clinical implications of PID are recurrent infections and an increased risk of autoimmune diseases. It is plausible that the lifetime burden of repeated infections or autoimmune conditions may create significant stress, further increasing the risk of psychopathology. Chronic stress may further increase the risk of developing autoimmune disease, 13 thus creating a vicious circle. Interestingly, analysis of selective IgA deficiency, a less severe subtype of PID, suggested that the association with psychiatric disorders and suicidal behavior was not exclusive to cases with severe PID; this analysis might indicate that immune dysfunction per se (vs psychological consequences of being chronically or recurrently ill) is associated with psychopathology. The potential for chronic inflammation in patients with PID may also result in an increased risk of psychopathology, as suggested by a study by Isung et al 11 in which chronic inflammation within the MALT was robustly associated with both psychiatric disorders and suicidal behavior. Finally, the observed associations could be a consequence of the increased susceptibility of patients with PID to central nervous system infections with detrimental neurodevelopmental consequences 33 and/or secondary to the repeated use of antibiotic treatments with plausible microbial dysbiosis and associated negative cerebral effects. 3,34 To date, little is known regarding the role of immunoglobulins in neurodevelopment or homeostatic brain function, although the results of the present study suggest that further investigation of the role of the humoral immune system in the development of psychiatric disorders and suicidal behavior is necessary.
Strengths and Limitations
The main strengths of the present study are the uniquely large, population-based sample of individuals with PID, a rare set of conditions that are routinely diagnosed and verified through laboratory analyses; the use of nationwide Swedish registers with prospective and uniform data collection, which minimizes the risk of selection, recall, and report biases; and the use of a sibling design, which accounts for familial confounding. Furthermore, our specific focus on a PID subtype with lower severity, as well as a comparison with individuals affected by autoimmune diseases alone, strengthens our hypothesis that PIDs per se are associated with the outcomes of interest.
Study limitations are inherent to register-based data. First, the date of the recorded diagnosis of the exposure and the outcomes may not correspond to the actual date of onset. Thus, we could not confidently establish temporality and make use of the longitudinal data in the registers and, instead, chose to report on associations of lifetime diagnoses. We are therefore not making assumptions of directionality or causality. Second, the potential effect from factors such as recurrent infections or other adverse clinical manifestations, as well as potential immune modulation from antibiotics or psychotropic drugs that may contribute to the reported associations, could not be assessed from our data. Third, our ability to test for specificity of the observed associations was limited. It would have been ideal to use a clinically relevant comparison group, such as patients with an early exposure to another chronic disease associated with increased risk of infections and with immune disruption, which is distinct from antibody dysfunction, such as childhood leukemia. Such comparison, however, was not viable because register data on malignant neoplasms were not available to us under the present project. Fourth, because the National Patient Register only includes records from outpatient specialist care since 2001 and no information from primary care, less severe cases may be underestimated. Such bias was in part accounted for through a sensitivity analysis, in which exposures and outcomes were only collected from 2001 and later. In addition, surveillance bias cannot be fully ruled out because individuals with PID are much more likely to have contact with clinicians and thus have higher chances of receiving psychiatric diagnoses. However, many of the measured psychiatric outcomes are serious and require specialist care in their own right.
Conclusions
Primary humoral immunodeficiencies are associated with a broad range of psychiatric disorders and suicidal behavior, particularly in women, even after controlling for autoimmune diseases, suggesting a role for antibody dysfunction in psychiatric disorders. However, several other mechanisms are possible. The strength of these associations increased when PID and autoimmune conditions were analyzed in tandem, suggesting a multiple-hit scenario. Additional research should explore the underlying mechanisms behind these associations.
"year": 2020,
"sha1": "c1d61247058d08db72a5cfdd13272ffc3f3468b0",
"oa_license": "CCBY",
"oa_url": "https://jamanetwork.com/journals/jamapsychiatry/articlepdf/2767220/jamapsychiatry_isung_2020_oi_200031_1603916072.97871.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5fb2ec9ae2e77a9e6ea2bee5c3e6854113fdf356",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Similarity Patterns in Words
Words are important both in historical linguistics and natural language processing. They are not indivisible abstract atoms; much can be gained by considering smaller units such as morphemes, phonemes, syllables, and letters. In this presentation, I attempt to sketch the similarity patterns among a number of diverse research projects in which I participated.
Introduction
Languages are made up of words, which continuously change their form and meaning. Languages that are related contain cognates -reflexes of proto-words that survive in some form in the daughter languages. Sets of cognates regularly exhibit recurrent sound correspondences. Together, cognates and recurrent sound correspondences provide evidence of a common origin of languages.
Although I consider myself more a computer scientist than a linguist, I am deeply interested in words. Even though many NLP algorithms treat words as indivisible abstract atoms, I think that much can be gained by considering smaller units: morphemes, phonemes, syllables, and letters. Words that are similar at the sub-word level often exhibit similarities on the syntactic and semantic level as well. Even more important, as we move beyond written text towards speech and pronunciation, the make-up of words cannot be ignored anymore.
I commenced my NLP research by investigating ways of developing computer programs for various stages of the language reconstruction process (Kondrak, 2002a). From the very start, I aimed at proposing language-independent solutions grounded in the current advances in NLP, bioinformatics, and computer science in general. The algorithms were evaluated on authentic linguistic data and compared quantitatively to previous proposals. The projects directly related to language histories still form an important part of my research. In Section 2, I refer to several of my publications on the subject, while in Section 3, I focus on other NLP applications contributions that originate from my research on diachronic linguistics.
Diachronic NLP
The comparative method is the technique applied by linguists for reconstructing proto-languages. It consists of several stages, which include the identification of cognates by semantic and phonetic similarity, the alignment of cognates, the determination of recurrent sound correspondences, and finally the reconstruction of the proto-forms. The results of later steps are used to refine the judgments made in earlier ones. The comparative method is not an algorithm, but rather a collection of heuristics, which involve intuitive criteria and broad domain knowledge. As such, it is a very time-consuming process that has yet to be accomplished for many language families.
Since the comparative method involves detection of regularities in large amounts of data, it is natural to investigate whether it can be performed by a computer program. In this section, I discuss methods for implementing several steps of the comparative method that are outlined above. The ordering of projects is roughly chronological. For an article-length summary see (Kondrak, 2009).
Alignment
Identification of the corresponding segments in sequences of phonemes is a necessary step in many applications in both diachronic and synchronic phonology. ALINE (Kondrak, 2000) was originally developed for aligning corresponding phonemes in cognate pairs. It combines a dynamic programming alignment algorithm with a scoring scheme based on multi-valued phonetic features. ALINE has been shown to generate more accurate alignments than comparable algorithms (Kondrak, 2003b). A different method of alignment adapts Profile Hidden Markov Models developed for biological sequence analysis; Profile HMMs were found to work well on the tasks of multiple cognate alignment and cognate set matching.
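To make the dynamic-programming idea concrete, the sketch below implements a global alignment with a toy substitution score. ALINE's actual scoring scheme over multi-valued phonetic features is much richer, and ALINE computes a semi-local rather than strictly global alignment; everything here is illustrative.

```python
# Needleman-Wunsch-style alignment with a toy vowel/consonant score.
def align(x: str, y: str, gap: float = -1.0):
    def sub(a: str, b: str) -> float:
        if a == b:
            return 2.0
        vowels = set("aeiou")
        return 1.0 if (a in vowels) == (b in vowels) else -1.0

    n, m = len(x), len(y)
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(score[i - 1][j - 1] + sub(x[i - 1], y[j - 1]),
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Traceback to recover one optimal alignment
    i, j, pairs = n, m, []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
                score[i][j] == score[i - 1][j - 1] + sub(x[i - 1], y[j - 1]):
            pairs.append((x[i - 1], y[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            pairs.append((x[i - 1], "-")); i -= 1
        else:
            pairs.append(("-", y[j - 1])); j -= 1
    return score[n][m], list(reversed(pairs))

# Example: aligning the orthographic cognates "colour" and "couleur"
print(align("colour", "couleur"))
```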
Phonetic Similarity
In many applications, it is necessary to algorithmically quantify the similarity exhibited by two strings composed of symbols from a finite alphabet. Probably the most well-known measure of string similarity is the edit distance, which is the number of insertions, deletions and substitutions required to transform one string into another. Other measures include the length of the longest common subsequence, and the bigram Dice coefficient. Kondrak (2005b) introduces a notion of n-gram similarity and distance, and shows that edit distance and the length of the longest common subsequence are special cases of n-gram distance and similarity, respectively.
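Two of the baseline measures mentioned above are easily written down; the Dice coefficient here is the common set-based bigram variant.

```python
# Edit distance (two-row dynamic programming) and bigram Dice coefficient.
def edit_distance(x: str, y: str) -> int:
    prev = list(range(len(y) + 1))
    for i, a in enumerate(x, 1):
        cur = [i]
        for j, b in enumerate(y, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # substitution
        prev = cur
    return prev[-1]

def dice_bigrams(x: str, y: str) -> float:
    bx = {x[i:i + 2] for i in range(len(x) - 1)}
    by = {y[i:i + 2] for i in range(len(y) - 1)}
    return 2 * len(bx & by) / (len(bx) + len(by)) if bx and by else 0.0

print(edit_distance("night", "nacht"))  # 2
print(dice_bigrams("night", "nacht"))   # 0.25
```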
Another class of similarity measures are specifically for phonetic comparison. The ALINE algorithm chooses the optimal alignment on the basis of a similarity score, and therefore can also be used for computing phonetic similarity of words. Kondrak (2001) shows that it performs well on the task of cognate identification.
The above algorithms have the important advantage of not requiring training data, but they cannot adapt to a specific task or language. Researchers have therefore investigated adaptive measures that are learned from a set of training pairs. Mackay and Kondrak (2005) propose a system for computing string similarity based on Pair HMMs. The parameters of the model are automatically learned from training data that consists of pairs of strings that are known to be similar. Kondrak and Sherif (2006) test representatives of the two principal approaches to computing phonetic similarity on the task of identifying cognates among Indoeuropean languages, both in the supervised and unsupervised context. Their results suggest that given a sufficiently large training set of positive examples, the learning algorithms achieve higher accuracy than manually designed metrics.
Techniques such as Pair HMMs improve on the baseline approaches by using a set of similar words to re-weight the costs of edit operations or the score of sequence matches. A more flexible approach is to learn from both positive and negative examples of word pairs. Bergsma and Kondrak (2007a) propose such a discriminative algorithm, which achieves exceptional performance on the task of cognate identification.
Recurrent Sound Correspondences
An important phenomenon that allows us to distinguish between cognates and borrowings or chance resemblances is the regularity of sound change. The regularity principle states that a change in pronunciation applies to sounds in a given phonological context across all words in the language. Regular sound changes tend to produce recurrent sound correspondences of phonemes in corresponding cognates.
Although it may not be immediately apparent, there is a strong similarity between the task of matching phonetic segments in a pair of cognate words, and the task of matching words in two sentences that are mutual translations. The consistency with which a word in one language is translated into a word in another language is mirrored by the consistency of sound correspondences. Kondrak (2002b) proposes to adapt an algorithm for inducing word alignment between words in bitexts (bilingual corpora) to the task of identifying recurrent sound correspondences in word lists. The method is able to determine correspondences with high accuracy in bilingual word lists in which less than a third of the word pairs are cognates. Kondrak (2003a) extends the approach to the identification of complex correspondences that involve groups of phonemes by employing an algorithm designed for extracting non-compositional compounds from bitexts. In experimental evaluation against a set of correspondences manually identified by linguists, it achieves approximately 90% F-score on raw dictionary data.
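As a crude stand-in for the bitext-style alignment models described above, the sketch below scores phoneme co-occurrences across putative cognate pairs with pointwise mutual information (PMI). The toy data and the count threshold are illustrative, and the actual method is considerably more sophisticated.

```python
# Rank phoneme pairs by PMI over all co-occurrences in putative cognates.
import math
from collections import Counter
from itertools import product

def correspondence_scores(word_pairs):
    joint, left, right, total = Counter(), Counter(), Counter(), 0
    for w1, w2 in word_pairs:
        for a, b in product(w1, w2):  # every co-occurring phoneme pair
            joint[(a, b)] += 1
            left[a] += 1
            right[b] += 1
            total += 1
    # Keep pairs seen at least twice to suppress chance co-occurrences.
    return {(a, b): math.log(c * total / (left[a] * right[b]))
            for (a, b), c in joint.items() if c >= 2}

pairs = [("pater", "father"), ("ped", "fot"), ("piskis", "fish")]
for pair, score in sorted(correspondence_scores(pairs).items(),
                          key=lambda kv: -kv[1])[:3]:
    print(pair, round(score, 2))  # inspect the highest-scoring pairs
```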
Semantic Similarity
Only a fraction of all cognates can be detected by analyzing Swadesh-type word lists, which are usually limited to at most 200 basic meanings. A more challenging task is identifying cognates directly in bilingual dictionaries, which define the meanings of words in the form of glosses. The main problem is how to quantify semantic similarity of two words on the basis of their respective glosses. Kondrak (2001) proposes to compute similarity of glosses by augmenting simple string-matching with a syntactically-informed keyword extraction. In addition, the concepts mentioned in glosses are mapped to WordNet synsets in an attempt to account for various types of diachronic semantic change, such as generalization, specialization, and synecdoche. Kondrak (2004) presents a method of combining distinct types of cognation evidence, including the phonetic and semantic similarity, as well as simple and complex recurrent sound correspondences. The method requires no manual parameter tuning, and performs well when tested on cognate identification in the Indoeuropean word lists and Algonquian dictionaries.
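A minimal sketch of gloss matching as keyword overlap is given below. The syntactically informed keyword extraction and the WordNet mapping described above are not reproduced; the stopword list and example glosses are illustrative assumptions.

```python
# Gloss similarity as Jaccard overlap of content keywords (a toy
# stand-in for syntactically-informed keyword extraction).

STOP = {"a", "an", "the", "of", "kind", "sort", "to"}

def gloss_similarity(g1: str, g2: str) -> float:
    k1 = {w for w in g1.lower().split() if w not in STOP}
    k2 = {w for w in g2.lower().split() if w not in STOP}
    if not k1 or not k2:
        return 0.0
    return len(k1 & k2) / len(k1 | k2)

print(gloss_similarity("a kind of small water bird", "small bird"))  # ~0.67
```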
Cognate Sets
When data from several related languages is available, it is preferable to identify cognate sets simultaneously across all languages rather than perform pairwise analysis. In one project, several of the algorithms described above were applied to a set of diverse dictionaries of languages belonging to the Totonac-Tepehua family in Mexico, showing that by combining expert linguistic knowledge with computational analysis, it is possible to quickly identify a large number of cognate sets within the family, resulting in a basic comparative dictionary. The dictionary subsequently served as a starting point for generating lists of putative cognates between the Totonacan and Mixe-Zoquean families. The project eventually culminated in a proposal for establishing a super-family dubbed Totozoquean (Brown et al., 2011). Bergsma and Kondrak (2007b) present a method for identifying sets of cognates across groups of languages using the global inference framework of Integer Linear Programming. They show improvements over simple clustering techniques that do not inherently consider the transitivity of cognate relations. Hauer and Kondrak (2011) present a machine-learning approach that automatically clusters words in multilingual word lists into cognate sets. The method incorporates a number of diverse word similarity measures and features that encode the degree of affinity between pairs of languages.
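For contrast with the ILP formulation, the sketch below shows the kind of naive single-linkage baseline it improves upon: union-find clustering imposes transitivity blindly, so a single borderline pair can pull an otherwise dissimilar word into a set, which is exactly what a global formulation can weigh against the pairwise evidence. The similarity function and threshold are illustrative choices.

```python
# Naive cognate-set clustering via union-find (single linkage).

def bigram_dice(a: str, b: str) -> float:
    xa = {a[i:i + 2] for i in range(len(a) - 1)}
    xb = {b[i:i + 2] for i in range(len(b) - 1)}
    return 2 * len(xa & xb) / (len(xa) + len(xb)) if xa or xb else 1.0

def cluster_cognates(words, sim, threshold=0.25):
    parent = list(range(len(words)))

    def find(i):                            # set representative
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            if sim(words[i], words[j]) >= threshold:
                parent[find(i)] = find(j)   # merge the two sets

    clusters = {}
    for i, w in enumerate(words):
        clusters.setdefault(find(i), []).append(w)
    return list(clusters.values())

words = ["nacht", "night", "nicht", "noche", "star", "stern"]
print(cluster_cognates(words, bigram_dice))
# [['nacht', 'night', 'nicht', 'noche'], ['star', 'stern']]
```

Note how "noche" joins the first set only through its borderline resemblance to "nacht" and "nicht"; it shares no bigram with "night" at all, yet transitivity forces them together.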
Phylogenetic Trees
Phylogenetic methods are used to build evolutionary trees of languages given data that may include lexical, phonological, and morphological information. Such data rarely admits a perfect phylogeny. Enright and Kondrak (2011) explore the use of the more permissive conservative Dollo phylogeny as an alternative approach that produces an output tree minimizing the number of borrowing events directly from the data. The approach, which is significantly faster than the more commonly known perfect phylogeny, is shown to produce plausible phylogenetic trees on three different datasets.
NLP Applications
In this section, I mention several NLP projects which directly benefitted from insights gained in my research on diachronic linguistics.
Statistical machine translation in its original formulation disregarded the actual forms of words, focusing instead exclusively on their cooccurrence patterns. In contrast, Kondrak et al. (2003) show that automatically identifying orthographically similar words in bitexts can improve the quality of word alignment, which is an important step in statistical machine translation. The improved alignment leads to better translation models, and, consequently, translations of higher quality. Kondrak (2005a) further investigates word alignment in bitexts, focusing on identifying cognates on the basis of their orthographic similarity. He concludes that word alignment links can be used as a substitute for cognates for the purpose of evaluating word similarity measures.
Many hundreds of drugs have names that either look or sound so much alike that doctors, nurses and pharmacists sometimes get them confused, dispensing the wrong one in errors that may injure or even kill patients. Kondrak and Dorr (2004) apply a number of similarity measures to the task of identifying confusable drug names. They find that a combination of several measures outperforms all individual measures.
Cognate lists can also assist in second-language learning, especially in vocabulary expansion and reading comprehension. On the other hand, the learner needs to pay attention to false friends, which are pairs of similar-looking words that have different meanings. Inkpen et al. (2005) propose a method to automatically classify pairs of words as cognates or false friends, with a focus on French and English. The results show that it is possible to achieve very good accuracy even without any training data by employing orthographic measures of word similarity.
Transliteration is the task of converting words from one writing script to another. Transliteration mining aims at automatically constructing bilingual lists of names for the purpose of training transliteration programs. The task of detecting phonetically similar words across different writing scripts is quite similar to that of identifying cognates. Sherif and Kondrak (2007) apply several methods, including ALINE, to the task of extracting transliterations from an English-Arabic bitext, and show that ALINE performs better than edit distance, but not as well as a bootstrapping approach to training a memoryless stochastic transducer. ALINE has also been employed for aligning transliterations from distinct scripts by mapping every character to the phoneme that it is most likely to produce; even such an imprecise mapping proves sufficient for ALINE to produce high-quality alignments. In addition, ALINE has been applied to the task of grapheme-to-phoneme conversion, which is the process of producing the correct phoneme sequence for a word given its orthographic form, where it proves an excellent substitute for the expectation-maximization (EM) algorithm when the quantity of training data is small. Jiampojamarn and Kondrak (2010) confirm that ALINE is highly accurate on the task of letter-phoneme alignment. When evaluated on a manually aligned lexicon, its precision was very close to the theoretical upper bound, with fewer than one incorrect link per thousand.
Lastly, ALINE has also been used for the mapping of annotations, including syllable breaks and stress marks, from the phonetic to orthographic forms (Bartlett et al., 2008).
Conclusion
The problems involved in language reconstruction are easy to state but surprisingly hard to solve. As such, they lead to the development of new methods and insights that are not restricted in application to historical linguistics. Although the goal of developing a program that performs a fully automatic reconstruction of a proto-language has yet to be attained, the research conducted towards this goal has influenced, and is likely to continue to influence, other areas of NLP. | 2014-07-01T00:00:00.000Z | 2012-04-01T00:00:00.000 | {
"year": 2012,
"sha1": "1dda5d8bcbdab216b2eb8b65f7ff6a0127f6b7d9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "1dda5d8bcbdab216b2eb8b65f7ff6a0127f6b7d9",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
256937774 | pes2o/s2orc | v3-fos-license | BREAST CANCER WITH MICROCALCIFICATES: A BIBLIOMETRIC ANALYSIS.
bibliometric
INTRODUCTION
Breast cancer is one of the most common cancers worldwide. The disease is the leading cause of death from cancer in women in more than 100 countries [1,2].

Microcalcifications in the tissue sample are an important marker of the pathological process. The presence of microcalcifications in tumor tissue is a criterion for determining the stage of the disease and for early diagnosis of this pathology [3]. Detection of microcalcifications in the breast using mammography is crucial in diagnosing breast cancer, especially in the early stages. Breast cancer microcalcifications are usually associated with degenerative-necrotic changes in tumor tissue [4]. Microcalcifications in the breast correlate with a worse prognosis, especially due to a higher frequency of lymph node invasion and rapid metastasis [5].
Objective. The work aims to carry out a bibliometric analysis and to study data on the pathomorphological characteristics of BC with biomineralization.
Materials and methods
We searched electronic databases, including PubMed, Scopus, Web of Science, and Google Scholar, for information on breast cancer (BC) with microcalcifications for the period 1967-2022, using key terms such as "breast cancer," "calcification," and "microcalcifications." For the bibliometric analysis, we used SciVal (Scopus), an online platform for monitoring and analyzing international scientific research with visualization tools and current citation metrics, and VOSviewer, a tool for building and visualizing bibliometric networks.
We used Scopus database bibliometric tools to analyze the year, source, type of study, subject area, and country of the publication. The VOSviewer system from the University of Leiden (http://www.vosviewer.com/) was used to generate and visualize the bibliometric network.
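The year-by-year counts behind Fig. 1 can be reproduced from a standard Scopus CSV export with a few lines of code. The sketch below is illustrative: the file name and the "Year" column label are placeholders, not the authors' actual data files.

```python
# Count publications per year from a (hypothetical) Scopus CSV export.

import csv
from collections import Counter

def publications_per_year(path: str) -> Counter:
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(int(row["Year"]) for row in csv.DictReader(f))

counts = publications_per_year("scopus_export.csv")   # placeholder path
for year in sorted(counts):
    print(year, "#" * counts[year])                   # crude text histogram
```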
Physico-chemical classification of biominerals
Biomineral deposits are divided into two main types: type I, consisting of calcium oxalate (CO), and type II, consisting of hydroxyapatite (HA). Classification is based on chemical composition and mammographic characteristics, including morphology, distribution, and density. Research data indicate that type II is often associated with malignant lesions of the breast [5,6].

CO is produced by apocrine cells and is most often associated with benign changes in breast tissue. CO cannot be metabolized by mammalian cells; its presence metabolically affects epithelial cells and can induce proliferation and c-Fos overexpression in MCF-7 cells [6,7].

Biominerals of type I have an amber color, are partially transparent, and take the form of pyramidal structures with a flat surface. Type II minerals are white-gray, opaque, and spindle- or egg-shaped with an irregular surface [5,8]. Most studies indicate that calcium oxalate is more common in benign breast pathology or non-invasive carcinoma in situ, whereas the presence of HA is associated with both benign and malignant breast pathology [9].

Type II calcifications can be associated with both benign and malignant breast lesions; they are present in benign tumors such as fibroadenomas, fibroadenosis, and sclerosing adenosis, and in experimental models they are related to invasive cancer, necrosis, and fibrosis [10-12].

Not only the detection of microcalcifications but also their specific properties are important. The morphology of biomineral deposits can indicate a malignant process in the breast. Recently, many studies have indicated a connection between histopathological variants and the physicochemical composition of microcalcifications.
Pathomorphological classification of breast pathology
According to pathomorphological characteristics, breast pathology is divided into malignant and benign, which is the basis for verifying the diagnosis, treatment, and prognosis. Benign pathology of the breast is represented by benign epithelial proliferative and precancerous diseases (ductal hyperplasia, atypical ductal hyperplasia, columnar epithelium disease), adenomas, benign sclerosing diseases, benign papillary tumors, epithelial-myoepithelial tumors, fibroepithelial tumors (fibroadenoma and phyllodes tumor), and hamartomas. The most frequent malignant diseases of the breast include invasive carcinoma of the breast, ductal carcinoma in situ (DCIS), non-invasive lobular neoplasia, malignant papillary tumors, neuroendocrine tumors, and cancers of rare types (acinic cell, adenoid cystic, secretory, mucoepidermoid, polymorphous, and tall cell carcinoma with reversed polarity) [13,14]. We paid considerable attention to the problem of biomineralization in this study (from physicochemical features to the mechanisms of their formation and clinical diagnostic features), since there are data on the important role of microcalcifications, and calcification in general, in the diagnosis and prognosis of the course of breast tumors.
X-ray criteria for the differences in calcifications
Currently, 30-50% of non-palpable breast cancers are detected exclusively by identifying calcifications on mammography [3,6,15]. Well-described radiological criteria help distinguish benign calcifications from potentially malignant ones. Mammography is the primary method for assessing these changes. According to the fifth edition of the Breast Imaging Reporting and Data System (BI-RADS), biominerals are classified as benign or suspicious. There are five categories of distribution: diffuse, segmental, regional, grouped, and linear. Benign calcifications on mammography are usually more extensive, rougher, rounder with smooth edges, and easier to see than malignant calcifications. Calcifications associated with malignancy are typically small and require magnification to be well visualized. Suspicious morphology includes a gross heterogeneous appearance, amorphous nature, thin pleomorphic elements, and finely branched calcifications [3]. Morphologically, biominerals with thin linear branches are associated with worse outcomes than non-linear biominerals [6].

Detection and interpretation of calcifications represent a complex problem, so radiological and pathological evaluations are crucial for accurately diagnosing these lesions. The type and composition of biominerals, including the determination of their biochemical nature, may improve their predictive value.
Mechanism of formation of calcifications
The researchers' immediate attention is focused on studying the molecular mechanisms involved in forming biominerals. The mechanism regulating pathological biomineralization may be similar to that of physiological bone mineralization [9,15].

Overexpression of bone matrix proteins (sialoprotein, osteopontin (OPN), and osteonectin) was detected in BC biopsies [16]. Rizwan et al., in their studies, indicate that inhibition of the OPN gene reduces the formation of calcium hydroxyapatite in BC cells. This study describes a direct relationship between calcium deposition and the ability of BC cells to metastasize to distant organs and lymph nodes. Under the influence of specific stimuli, breast epithelial cells that undergo epithelial-mesenchymal transition (EMT) and transform into cells with an osteoblast-like phenotype can influence the formation of biominerals in breast tissue [17]. The main molecular mechanism of phenotype change in EMT is the loss of epithelial cell markers, such as E-cadherin and cytokeratin, and their replacement by mesenchymal markers (vimentin, nuclear β-catenin, smooth muscle actin, and fibronectin). This pathological transformation leads to the activation of signaling pathways, reorganization of the cytoskeleton, and increased expression of genes encoding MMPs, which participate in the degradation of the extracellular matrix and basement membrane [9].

The role of OPN in the metastatic spread of breast cancer cells is still being studied, but studies show that OPN binds to cell-surface integrins (β1 and β2 integrins) and CD44 [16,17]. It is the binding of OPN to the cell-surface receptor CD44, together with disruption of the epithelial-mesenchymal transition and subsequent cell transformation, that triggers the initiation of cell-matrix adhesion in various types of tumors, leading to the invasion and metastasis of malignant tumors [9,18].

The primary mechanism of biomineral formation is still poorly understood. According to recent studies, bone morphogenetic protein 2 (BMP-2) plays a role in the formation of microcalcifications. BMPs are growth factors of the TGF-β superfamily and are specific and key regulators of osteoblasts. BMP-2 can induce BC cells to acquire osteoblastic characteristics, which leads to the formation of microcalcifications [19]. A recent study also showed that active processes of microcalcification are caused by osteoimmunological disorders [20]. Tumor-associated macrophages (TAMs) are the main type of immune cell infiltrating the tumor microenvironment and accumulating around microcalcifications in BC. High TAM levels are associated with a poor prognosis. TAMs implicated in breast cancer span a spectrum of M1-like and M2-like phenotypes: they may exhibit antitumor potential (M1-like phenotype) or be responsible for increased cancer cell growth (M2-like phenotype), and most have an M2-like phenotype (CD163). Studies suggest that BMP-2 is mainly secreted by cells of the tumor microenvironment rather than by the breast cancer tumor cells themselves. Tumor-associated macrophages are an essential component of the tumor microenvironment and can secrete BMP-2, which contributes to calcification [19].
Bibliometric analysis of scientific literature
We analyzed the Scopus database, which included 924 publications. These electronic sources were filtered by the keywords "breast cancer," "calcification," and "microcalcifications." The results of the bibliometric analysis indicate that the number of publications on this topic has increased significantly over the past ten years, which shows the relevance of the problem among scientists (Fig. 1).
The pathological biomineralization of breast cancer is actively studied by scientists from the United States of America, China, and Great Britain.
After studying the results of the bibliometric analysis of 924 Scopus publications using the tools of the SciVal service for the keywords "breast cancer" and "calcification" for the period 1967-2022, it was established that the vast majority belong to the field of medicine. In addition, 27 thematic clusters can be identified in this area, most of which belong to medicine, computer science, engineering, materials science, and physics, and a few to the mathematical sciences. Among the most interesting areas of publishing activity are works devoted to BC: the classification of breast tumors, early diagnosis of BC, and classification of biomineral deposits (Fig. 2).
Figure 2 - The result of visualization of the distribution of publications by topics and clusters using SciVal bibliometric analysis tools
We also analyzed the publication activity for 1967-2022 on the research topic using the VOSviewer tool for building and visualizing bibliometric networks. As a result of the bibliometric analysis of 924 publications in the Scopus database using the keywords "breast cancer" and "calcification," we identified four chronological stages: 1) radiological research methods (mammography) together with clinical and histological methods, 2) pathomorphological evaluation of BC and calcifications, 3) study of biomarkers of tumor progression of BC, and 4) predictive assessment of BC depending on metastasis and survival (Fig. 3). The publication data were also divided into seven thematic clusters: 1) classification of biominerals, 2) mammography, 3) physicochemical composition of calcifications, 4) ductal neoplasia of the breast, 5) biopsy, 6) metastases of BC, and 7) calcium hydroxyapatite (Fig. 4).
CONCLUSIONS / ВИСНОВКИ
The presence of biominerals in tumor tissue is an important marker in diagnosing BC. It is a criterion for determining the disease's stage and for early diagnosis.

The results of the analysis of scientific sources in the Scopus database by keywords for the period from 1967 to 23 April 2022 indicate that the number of publications on this subject has tended to increase over the past ten years, which shows the relevance of these issues among scientists.

Among the most interesting publication areas, we single out works devoted to BC: the classification of breast tumors, early diagnosis of BC, and classification of biomineral deposits.
Using the VOSviewer tool for building and visualizing bibliometric networks on the publication activity for the period 1967-2022 in the researched topic of BC calcification, we identified four chronological stages: 1) radiological research methods (mammography) together with clinical and histological methods, 2) pathomorphological evaluation of BC and calcifications, 3) research on biomarkers of tumor progression of BC, and 4) prognostic evaluation of BC depending on metastasis and survival, as well as published data on seven thematic clusters: 1) classification of biominerals, 2) mammography, 3) physicochemical composition of calcifications, 4) ductal neoplasia of the mammary gland, 5) biopsy, 6) metastasis of BC, and 7) calcium hydroxyapatite.
Most relevant today are the early diagnosis of breast cancer and the factors that worsen prognostic criteria, such as survival and metastasis in these patients, which are associated with the formation of biomineral deposits in breast tissue.
Figure 1 - The result of visualization of the publication chronology for 1967-2022 using the tools of bibliometric analysis of the Scopus database

Figure 3 - The result of visualization of the patterns of the chronological development of this topic using VOSviewer bibliometric analysis tools | 2023-02-17T16:03:17.869Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "4bfb0bb55aaddb9575fdeec46aac6632e9a4a502",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21272/eumj.2022;10(4):300-308",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "014e930d1facb82569cbc46b69702869755ad4be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
225327767 | pes2o/s2orc | v3-fos-license | Genetic characterization of indigenous Tswana pig population using microsatellite markers
The indigenous Tswana pig is currently listed as an endangered animal genetic resource and if not conserved, might go extinct. The objective of this study was to assess the genetic diversity (genetic characterization) of the indigenous Tswana pig population. Blood samples were collected from 30 randomly selected Tswana pigs in Kgatleng and South-East districts of Botswana for the assessment of genetic diversity using a panel of 12 FAO-recommended microsatellite markers. All the microsatellite markers screened in indigenous Tswana pigs were polymorphic and the number of observed alleles per marker varied between 3 (SW2406) and 9 (SW225) with mean number of alleles per marker of 6.33. The observed heterozygosity ranged from 0.16 (SW2405) to 0.875 (SW2465) with average observed heterozygosity across all 12 loci of 0.647. The expected heterozygosity was lower than the observed heterozygosity and ranged between 0.143 (SW2405) and 0.776 (S0385) with mean expected heterozygosity across all loci of 0.603. The allelic diversity and levels of heterozygosity indicate high levels of genetic diversity in the Tswana pig population. The within-locus inbreeding coefficient (Fis) ranged between -0.321 (S0120) and 0.234 (SW35) with an inbreeding coefficient of the entire population of -0.012, indicating that the Tswana pig population is relatively outbred.
INTRODUCTION
Indigenous pigs are kept by the rural populace under the low-output free-range production system. Indigenous Tswana pigs are mostly owned by women, usually survive in harsh, low-input environments, and thrive despite high disease and parasite prevalence and nutrient shortages (Chabo et al., 2000). During the 1980s, indigenous Tswana pigs were found in South East, Kgatleng and Kweneng districts of Botswana, while nowadays they are fairly well distributed in the South East district of the country in and around Ramotswa village (Nsoso et al., 2006). The farmers who keep indigenous pigs in Botswana have a tendency to keep low numbers to match herd size with available feed resources. Notable attributes of indigenous Tswana pigs include disease resistance, high fertility, parasite and heat tolerance, low protein requirements, the ability to utilize coarse fibrous rations, and strong feet, which make them suitable for free-range low-intensity management systems affordable to the rural poor (Gandin and Oldenbroek, 1999; Lekule and Kyvsgaard, 2003).
The indigenous Tswana pigs are usually black or black with white stripes (Figure 1) and have a body of medium stature (Nsoso et al., 2006). Indigenous Tswana pigs are, however, excluded from commercial production systems due to their inferior growth and reproductive performance and carcass traits relative to exotic breeds (Moreki and Montsho, 2012). Resource-poor farmers in rural areas also view genetic improvement of indigenous Tswana pigs as synonymous with crossbreeding, grading up, and possible breed replacement with exotic breeds (Nsoso et al., 2004).
The extensive production system, coupled with undeveloped markets for indigenous Tswana pigs and the lack of a clear policy on the conservation of indigenous animal genetic resources in the country, is leading to the disappearance of indigenous Tswana pigs. This poses a risk of worsening poverty levels for the rural women who own most of the indigenous Tswana pigs, since the fast-growing exotic pigs require high levels of inputs and management unaffordable to resource-poor and highly marginalized farmers. The population of indigenous Tswana pigs has declined drastically in the last three decades and the indigenous Tswana pig is currently listed as an endangered animal genetic resource (Podisi, 2001). Rege and Lipner (1992) argued that some indigenous animal genetic resources of Africa are endangered and may even be lost before they are described and documented, and the indigenous Tswana pig is one classic example. Research to evaluate the indigenous Tswana pig has been sporadic and inadequate; consequently, the indigenous Tswana pig has not been sufficiently characterized. Information on the phenotypic characteristics and production performance of Tswana pigs is still very scarce and there has been no attempt to date at their genetic characterization. Genetic characterization of Tswana pigs by microsatellite markers is important to assess the degree of genetic diversity in the remaining population and the extent of inbreeding, and will inform future conservation and management practices. The objective of this study was therefore to assess the genetic diversity of the indigenous Tswana pig population using microsatellite markers.
Population sampling
Blood samples were collected from 30 unrelated Tswana pigs in the southern half of the country, in Kgatleng and South-East districts, following the guidelines of the FAO (2011) Measurement of Domestic Animal Diversity programme. Blood samples were collected from the ear vein of the animals in vacutainer tubes containing EDTA as the anticoagulant. Blood samples were then transported to the laboratory at 0-4°C (on ice) and stored overnight at -20°C prior to DNA extraction. Information on sampling locations and the number of samples per sampling location is given in Table 1.
DNA extraction
Genomic DNA was isolated from whole blood using the Zymo Quick-gDNA MiniPrep kit following the manufacturer's protocol. Briefly, 400 µl of Genomic Lysis Buffer was added to 100 µl of whole blood and mixed completely by vortexing for 4-6 s. The mixture was allowed to stand for 5-10 min at room temperature, transferred to a Zymo-Spin Column in a collection tube, and centrifuged at 10,000 ×g for a minute. The collection tube with the flow-through was discarded and the Zymo-Spin Column transferred into a new collection tube. 200 µl of DNA Pre-Wash Buffer was added to the spin column and centrifuged at 10,000 ×g for a minute. 500 µl of g-DNA Wash Buffer was added to the spin column and centrifuged at 10,000 ×g for a minute. The spin column was then transferred to a clean microcentrifuge tube, 60 µl of DNA Elution Buffer was added, and the column was incubated for 2-5 min at room temperature. The spin column was then centrifuged at top speed for 30 s to elute the gDNA. The concentration of eluted gDNA was measured using a NanoDrop 2000 spectrophotometer and the purity of the gDNA was verified by the 260/280 absorbance ratio (Thermo Fisher Scientific Inc., Waltham, MA, USA).
Microsatellite marker amplification and analysis
A panel of 12 microsatellites recommended by the Food and Agriculture Organization (FAO)/ISAG-FAO Advisory Group on Animal Genetic Diversity (FAO, 1995) was used for the genetic characterization of Tswana pigs. The markers used in the study (with chromosome position) were: SW2456 (X/Y), S0165 (3), SW225 (13), SW2008 (11), SW35 (4), SW2406 (6), S0385 (11), S0120 (18), S0073 (4), SW2443 (2), SW949 (X/Y), and SW2410 (17). Selective amplification of the different microsatellites was achieved by polymerase chain reaction using the thermocycler GeneAmp PCR System 9700 (Applied Biosystems, Foster City, CA, USA) and PCR reagents synthesized by Fermentas Life Sciences, Opelstrasse, Germany. Each 25 µl PCR reaction comprised approximately 100 ng gDNA, primers (60 ng each), dNTPs (40 mM each), 10X ammonia-based PCR buffer (2.5 µl), 1.5 mM MgCl2, 1 unit of Taq DNA polymerase, and PCR-grade deionized water. The PCR reaction was accomplished by initial denaturation for 5 min at 94°C, followed by 33 cycles of denaturation at 94°C for 30 s, primer annealing for 45 s at the desired temperature, and DNA replication at 72°C for 1 min. The final extension step was run at 72°C for 10 min. The resulting PCR products were denatured at 98°C for 3 min and rapidly cooled by placing on ice. The PCR products were separated by capillary electrophoresis on an ABI Prism 310 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's recommendations, and allele sizing was achieved using the internal size standard GeneScan-500 LIZ (Applied Biosystems, Foster City, CA, USA). Allele size data were analyzed using GeneScan Analysis software v.3.1.2 and the identification of the different alleles for each marker was performed with Genotyper 2.5 software.
Statistical analysis
The within-breed genetic diversity parameters for Tswana pigs, which included observed heterozygosity (Ho), expected heterozygosity (He), polymorphism information content (PIC), and mean number of alleles (MNA), were calculated using the Microsatellite Toolkit software (Kim et al., 2005). The inbreeding coefficient (Fis) for each locus was computed using the program FSTAT (Goudet, 2001). The probability test approach (Guo and Thompson, 1992) implemented in the GENEPOP software was used to test each locus for Hardy-Weinberg equilibrium.
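For illustration, the core statistics named above can be computed directly from co-dominant genotype data, as sketched below. The genotypes are invented toy values; the study's actual computations were performed with the cited software packages.

```python
# Per-locus diversity statistics from genotype data (toy example).

def locus_stats(genotypes):
    """genotypes: list of (allele1, allele2) tuples for one locus."""
    n = len(genotypes)
    ho = sum(a != b for a, b in genotypes) / n         # observed heterozygosity
    freqs = {}
    for a, b in genotypes:                             # allele counts
        freqs[a] = freqs.get(a, 0) + 1
        freqs[b] = freqs.get(b, 0) + 1
    total = 2 * n
    he = 1 - sum((c / total) ** 2 for c in freqs.values())
    he *= total / (total - 1)                          # unbiased estimate
    fis = 1 - ho / he if he > 0 else 0.0               # Wright's Fis
    return ho, he, fis

toy = [(150, 154), (150, 150), (154, 158), (150, 158), (158, 158)]
print("Ho=%.3f  He=%.3f  Fis=%.3f" % locus_stats(toy))
# Ho=0.600  He=0.711  Fis=0.156
```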
RESULTS AND DISCUSSION
All the microsatellite markers screened in indigenous Tswana pigs were polymorphic (Table 2). A total of 76 alleles were detected in the 12 microsatellite markers screened, and the allele size range varied from 83-107 bp at locus S0073 to 220-234 bp at locus SW2406. The number of observed alleles per marker varied between 3 (SW2406) and 9 (SW225), with a mean number of alleles per marker of 6.33 (Table 2). The range of observed number of alleles per marker and the mean number of alleles per marker observed in this study are comparable to the 3.38-8.71 and 6.25, respectively, found in local Criollo pig breeds from the Americas (Revidatti et al., 2014), but lower than the range of 5-12 alleles per marker and mean number of alleles per marker of 7.04 reported in the indigenous Andaman Desi pig of India (De et al., 2013) and the mean number of alleles per marker of 8.45 found in indigenous pigs of Mozambique (Swart et al., 2010). The range of observed number of alleles per marker found in this study is, however, higher than the range of 3.98-5.54 reported by Swart et al. (2010) in commercial pig breeds of South Africa (Landrace, Large White and Duroc). The mean number of alleles per marker of 6.33 found in this study is comparable to the 6.18 found in the indigenous South African Kolbroek breed (Swart et al., 2010) but higher than the 5.72 found in Uruguayan Pampa Rocha pigs (Montenegro et al., 2015) and the 3.93 and 5.97 in the Namibia and Kune-kune breeds (Swart et al., 2010). The effective number of alleles in Tswana pigs ranged between 1.11 (SW2406) and 5.01 (S0165), with a mean effective number of alleles per marker of 3.31±1.18. Revidatti et al. (2014) reported a mean effective number of alleles per marker of 3.33±1.56 in Criollo pig breeds of the Americas, which is comparable with the present study. The mean effective number of alleles per marker in Tswana pigs is, however, lower than the mean effective number of alleles per marker of 5.09±0.20 found in Andaman Desi pigs of India. According to Pandey et al. (2006), FAO specified a minimum of four alleles per marker for effective screening of genetic differences between breeds; all the markers used in this study, with the exception of SW2406, exhibited sufficient polymorphism for the evaluation of genetic variation within the breed and genetic differences between breeds.
Apart from the number of alleles per locus and mean number of alleles for all loci, other measures of genetic diversity include observed heterozygosity, expected heterozygosity and polymorphic information content (PIC) and those are depicted in Table 3.
The observed heterozygosity for individual markers ranged from 0.16 (SW2405) to 0.875 (SW2465), with an average observed heterozygosity across all 12 loci of 0.647. The expected heterozygosity was lower than the observed heterozygosity and ranged from 0.143 (SW2405) to 0.776 (S0385), with a mean expected heterozygosity across all loci of 0.603. For markers to be useful in measuring genetic variation, they should have an average heterozygosity between 0.3 and 0.8 (Takezaki and Nei, 1996); therefore, all the markers used in this study, with the exception of SW2405, were appropriate for measuring genetic variation in Tswana pigs. According to Nei and Kumar (2000), observed heterozygosity and expected heterozygosity are highly correlated, but expected heterozygosity, also known as Hardy-Weinberg heterozygosity, is considered a better estimator of the genetic variability present in a population. The excess of heterozygous loci over expectation in Tswana pigs is consistent with Setyawan et al. (2015), who observed a similar pattern in most Indonesian native cattle breeds. Unlike in Tswana pigs, most pig genetic characterization studies report heterozygote deficiencies rather than heterozygote excesses (De et al., 2013; Swart et al., 2010). Compared to commercial pig breeds, the average expected heterozygosity of the indigenous Tswana (0.603) is similar to the 0.60 of the Large White (Oh et al., 2014) but slightly higher than the 0.580 and 0.531 of the South African Landrace and Duroc breeds, respectively (Swart et al., 2010). The high level of genetic variation or diversity in Tswana pigs might be attributed to the lack of selective breeding or improvement programs targeted at the breed and the possible existence of population substructure (genetic uniqueness, in terms of alleles, of Tswana pigs coming from different villages). The polymorphic information content (PIC) values of the 12 markers employed in the characterization of Tswana pigs ranged from 0.094 for SW2405 to 0.569 for S0385, with an average PIC value across all markers of 0.428 (Table 3). According to Montenegro et al. (2015), markers with PIC values greater than 0.5 are highly informative, those with PIC values between 0.25 and 0.5 are moderately informative, and those with PIC values less than 0.25 are uninformative. Following the same classification criterion, four markers (SW2008, S0385, S0073 and SW2443) were highly informative, seven (SW2465, S0165, SW225, SW35, S0120, SW949 and SW2410) were moderately informative, and one (SW2405) was uninformative in Tswana pigs. Moderately and highly informative markers are more variable and therefore more suitable for genetic diversity studies in indigenous Tswana pigs.
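The PIC values discussed above follow the standard formula of Botstein et al. (1980); a sketch, together with the informativeness classes used in this section, is given below. The example allele frequencies are illustrative, not the study's data.

```python
# Polymorphism information content from allele frequencies.

def pic(freqs):
    s2 = sum(p ** 2 for p in freqs)
    cross = sum(2 * p1 ** 2 * p2 ** 2
                for i, p1 in enumerate(freqs)
                for p2 in freqs[i + 1:])
    return 1 - s2 - cross

def informativeness(v):
    return ("highly informative" if v > 0.5 else
            "moderately informative" if v >= 0.25 else
            "uninformative")

value = pic([0.4, 0.3, 0.2, 0.1])
print(round(value, 3), informativeness(value))   # 0.645 highly informative
```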
All 12 microsatellite markers used in the current study were in Hardy-Weinberg equilibrium, clearly indicating the high genetic stability of indigenous Tswana pigs kept by farmers under the extensive management system. This high genetic stability confirms that Tswana pigs are largely random-mating under the free-range management system practised by the majority of farmers, that the breed is not undergoing any artificial selection (there is no improvement program for the indigenous Tswana pig), that the effects of random genetic drift, common in small populations like that of the Tswana pig, are negligible, and that Tswana pigs are not subjected to other evolutionary forces, such as mutation and migration, capable of altering gene and genotype frequencies and causing significant departures from Hardy-Weinberg equilibrium.
The within-locus inbreeding coefficient (Fis) ranged between -0.321 (S0120) and 0.234 (SW35), with a multilocus inbreeding coefficient for the entire population of -0.012. The negative inbreeding coefficient of Tswana pigs might be due to the avoidance of mating among closely related animals (Hui-Fang et al., 2010), which resulted in a significant excess of heterozygotes in the population. All the markers, with the exception of SW35 and S0073, contributed to the negative inbreeding coefficient of the Tswana pigs. Markers SW35 and S0073 exhibited a significant deficit of heterozygotes, probably due to genetic drift or linkage disequilibrium of the marker with loci under either natural or artificial selection (Ibeagha and Erhardt, 2005).
Conclusions
Moderate levels of genetic diversity and no inbreeding exist within the Tswana pig population in southern Botswana. This genetic diversity shows that mating is random and that the animals are not undergoing any form of artificial selection. If deliberate conservation efforts are not put in place, this valuable genetic resource, with its hardiness, disease-resistance, and heat-tolerance genes, might become extinct within the next few decades, even before it has been fully characterized. The conservation of indigenous Tswana pigs should be given high priority because the breed contains valuable genes (disease-resistance and heat-tolerance genes) for future breed development and genetic engineering applications to counter the effects of global warming or climate change on pig production and productivity. | 2020-10-28T18:00:07.418Z | 2020-08-31T00:00:00.000 | {
"year": 2020,
"sha1": "74d5c0bc3d90e02729dd589b4a89cca2aa495de9",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/5BE9F8E64573.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "46ca540d37115f1b0a389ef4ec46a2b89affaf61",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
28097356 | pes2o/s2orc | v3-fos-license | Charles Darwin’s Mitochondria
Charles Darwin’s long-term illness has been the subject of much speculation. His numerous symptoms have led to conclusions that his illness was essentially psychogenic in nature. These diagnoses have never been fully convincing, however, particularly in regard to the proposed underlying psychological background causes of the illness. Similarly, two proposed somatic causes of illness, Chagas disease and arsenic poisoning, lack credibility and appear inconsistent with the lifetime history of the illness. Other physical explanations are simply too incomplete to explain the range of symptoms. Here, a very different sort of explanation will be offered. We now know that mitochondrial mutations producing impaired mitochondrial function may result in a wide range of differing symptoms, including symptoms thought to be primarily psychological. Examination of Darwin’s maternal family history supports the contention that his illness was mitochondrial in nature; his mother and one maternal uncle had strange illnesses and the youngest maternal sibling died of an infirmity with symptoms characteristic of mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS syndrome), a condition rooted in mitochondrial dysfunction. Darwin’s own symptoms are described here and are in accord with the hypothesis that he had the mtDNA mutation commonly associated with the MELAS syndrome.
Charles Darwin (1809-1882) suffered a debilitating illness for most of his adult life with many very varied and bizarre symptoms. The nature of this illness has been the subject of much academic industry and more than 40 different diagnoses have been proposed at various times (Colp 2008).
The illness was such that Darwin would be incapacitated for days, even weeks at a time. Despite this, he produced an enormous and impressive volume of work, writing 19 books, numerous papers, and thousands of letters, many of which are preserved today (Darwin et al. 2009). Apart from his most famous work, On the Origin of Species . . ., Darwin's other publications, such as The Variation of Animals and Plants under Domestication, published in two volumes, were works that represented years of study and experimentation.
Darwin never ascertained the genetic basis of heredity, despite his interest in and work on the subject and his awareness that evolution depends on the existence of heritable variability within a species (Charlesworth and Charlesworth 2009). He carried out extensive plant breeding experiments in his garden and hothouse at Down House in Kent and meticulously recorded the numbers of resulting varieties. He failed, however, to interpret these numbers as evidence of heritable units (Howard 2009). Despite this failure to produce a convincing theory of heredity, he made the single most important contribution to our understanding of biology, his theory of evolution.
In this article, it is proposed that Darwin himself may have had an unusual heredity condition, a mitochondrial genetic disease. Several mitochondrial diseases show bizarre and variable symptoms and those of one in particular, the MELAS syndrome (Finsterer 2007), closely mirror those of Darwin's condition.
Charles Darwin's Illness
Charles Darwin suffered illness for most of his adult life with many very differing symptoms. Some of these symptoms were present when he was a university student, both in Edinburgh and Cambridge. In Edinburgh he was known to have a "weak stomach" and was unable to watch surgical operations; at Cambridge he suffered from eczema of his lips and hands. When attending two recitals in 1 day at a Birmingham Music Festival, he experienced extreme fatigue-"most terribly knocked up," as he expressed in his autobiography (Barlow 1958). When he was a resident in Plymouth, before he sailed on HMS Beagle, he experienced an episode of rapid heartbeat with "pain around the heart." During his voyage on the Beagle, Darwin suffered greatly from seasickness. This was not ordinary seasickness but a sickness that became worse throughout the 5-year voyage. When ashore, he also had periods of illness, including attacks of headache and visual disturbances. These episodes were severe enough for him to be incapacitated for days on end.
Before the voyage, apart from these unusual but mostly sporadic episodes of illness, Darwin was a fit young man. After the voyage, however, his illness progressed and he had attacks of sickness during which he was incapacitated for weeks, even months at a time. He suffered with nausea, retching, vomiting, flatulence, episodes of abdominal pain, "lumbago" or backache, and symptoms of asthma. His "eczema," diagnosed as atopic dermatitis (Sauer 2000), was at times severe and was complicated by frequent boils. He complained of numbness in his fingers (peripheral neuropathy), together with shivering, sweating, and giddy turns (dysautonomia). He had psychological symptoms, waking at night with intense, irrational fear and other episodes of hysterical crying. He continued to have periods of severe lethargy, with times when he could only lie on a sofa and do nothing. Darwin's main symptoms are listed in Table 1; symptoms that may be considered secondary in nature are listed in Table 2 together with their proposed relationship to his primary symptoms.
Interestingly these symptoms improved in later life. His attacks were characteristically brought on by any forms of stress, even by pleasurable events. Darwin learned to prevent these events by restricting visitors and avoiding scientific and social occasions. In older age there was no pressure to publish and Darwin could work at a pace that suited him and, evidently, his frail condition.
Diagnoses of Darwin's Illness
Many of the different diagnoses that have been proposed for Darwin's illness are psychogenic or psychological in nature.
These include repressed hatred for a dominating father or, alternatively, illness as a means of bonding with his father by establishing a patient-doctor relationship (Colp 2008).
The late Dr. John Bowlby, an English psychiatrist, propounded psychogenic causes for many illnesses, in particular mother-child separation. He suggested that unresolved grief over the death of his mother when he was 8 years old was the cause of Darwin's psychosomatic illness (Bowlby 1965). There is, however, no evidence that Darwin suffered any unusual grieving process. Dr. Bowlby suggested that his hypothesis could be tested by showing that Darwin's symptoms were worse at "anniversary dates," such as the date of his mother's death, and that his symptoms were similar to those of his mother's. Colp diligently examined the state of Darwin's illness on these dates and found no such association (Colp 1977). It will be proposed here, however, that there was an important maternal link to Darwin's illness but that it was genetic, not psychological. Indeed, Darwin and his mother seem to have shared a number of symptoms as would be expected for such a connection.
Other psychogenic causes that have been proposed include inner emotional conflict (repressed hatred) toward his loving and devoted wife Emma and guilt over conflict with religious beliefs and his ideas of evolution (Colp 2008). Darwin certainly had psychological symptoms, including symptoms of a panic disorder with periods of irrational fear (Barloon and Noyes 1997), episodes of hysterical sobbing, and other symptoms that may be psychological such as sweating, tremors, and palpitations. Darwin's illness, however, was not primarily psychogenic in nature.
Other diagnoses relate to possible acquired infection during the voyage of the HMS Beagle; the most persistent of these is that Darwin had Chagas disease (American trypanosomiasis) (Adler 1959). Despite a comprehensive rebuttal by Woodruff and others this diagnosis persists (Woodruff 1965). Woodruff pointed out that although Darwin was bitten by a known vector, and certainly bitten several times, the insect in that place and at that time would be unlikely to have been carrying the infectious agent. Darwin suffered from severe incapacity but he lived to the age of 73; he almost certainly would not have survived to this age with advanced trypanosomal heart and intestinal disease. In addition, Darwin consulted the best physicians of his time and no physical abnormality in their examinations is recorded. Woodruff concluded: ". . . it is beyond credibility that severe incapacity could have been produced (by Chagas disease) for 40-50 years without the development of physical signs . . . ." Intestinal disorders have also been proposed including peptic ulceration, biliary disease, Crohn's disease, and the irritable bowel syndrome (IBS) (Shanahan 2012). Darwin certainly had symptoms of IBS; he also had symptoms of another suggested diagnosis, that of paroxysmal tachycardia (Dent 1965). The symptoms of IBS, panic disorder, atopic dermatitis, and paroxysmal tachycardia may all occur with Darwin's proposed diagnosis. Other diagnoses may be dismissed simply on the grounds that Darwin had some early symptoms of his disorder before he sailed on the Beagle, before he developed any ideas about evolution, and before his proposal and marriage to Emma. In fairness, it should be remembered that many of these more imaginative diagnoses were put forward before there was much knowledge of mitochondrial genetic diseases.
Proposed Diagnosis: a Mitochondrial DNA Disorder
Most of Darwin's symptoms are similar to those seen in patients with cyclic vomiting syndrome (CVS) (Hayman 2009), including some of the more unusual features of this rather poorly defined disorder. Patients with CVS suffer from motion sickness, attacks may be brought on by pleasurable events ("positive stress"), and patients often have relief from water exposure (Fleisher et al. 2005). Only one of the many treatments that Darwin tried seemed to bring him any relief and that was "hydropathy" or the "water cure." Patients with CVS today will spend hours in a bath or under a shower (Cyclic Vomiting Association 2010). Some of Darwin's symptoms, however, were not symptoms experienced by patients with CVS. In his 50's, Darwin experienced episodes of transient partial paralysis, inability to speak, and memory loss (Jones 1867). These symptoms, together with many of his other symptoms such as headache and visual disturbances may occur in MELAS syndrome (Pavlakis et al. 1984). This syndrome, usually regarded as a fatal childhood disorder, may have less severe manifestations and symptoms may first appear in adult life (Higashikata et al. 2001). Lactic acidosis, which is a biochemical feature of the disorder (and part of the acronym) may be associated with feelings of panic (Ehlers et al. 1986), one of Darwin's symptoms. In the original paper defining the syndrome, 7 of the 11 patients listed experienced "episodic vomiting" (Pavlakis et al. 1984), again a key element of Darwin's condition. Eighty percent of patients with this disorder have been shown to have a particular mitochondrial DNA mutation, an A-to-G transition at nucleotide 3243 in the gene for leucine transfer RNA in the mitochondrial ring chromosome (Goto et al. 1990). This same mutation may be associated with cardiac and vestibular symptoms, atopy (atopic dermatitis, asthma), dysautonomia, and peripheral neuropathy (Finsterer 2007). It is a reasonable contention that Darwin had this mutation. Darwin's symptoms and their similarities to the effects created by the mtDNA mutation associated with MELAS syndrome are summarized in Table 1.
Although MELAS syndrome as initially described was progressive and fatal in early life, patients carrying the mutation commonly associated with the condition may have lesser symptoms and a normal lifespan (Manwaring et al. 2007).
Darwin's Family Illnesses
Mitochondria and mitochondrial disorders are maternally inherited. An examination of Darwin's family history provides supporting evidence that Charles Darwin had such an inherited mitochondrial disorder.
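The logic of maternal transmission can be made concrete with a small sketch: an individual carries a given mtDNA variant only if it descends through an unbroken line of mothers. The pedigree below is a deliberately simplified child-to-mother map (Susannah, Tom, and Mary Anne were children of Sarah Wedgwood); the code is illustrative, not a genetic analysis.

```python
# Tracing maternal lines in a simplified child -> mother pedigree.

mother = {
    "Charles": "Susannah", "Erasmus": "Susannah",
    "Susannah": "Sarah", "Tom": "Sarah", "Mary Anne": "Sarah",
}

def maternal_line(person):
    line = [person]
    while person in mother:
        person = mother[person]
        line.append(person)
    return line

def shares_mitochondria(a, b):
    """True if a and b share an ancestor on the strict maternal line."""
    return bool(set(maternal_line(a)) & set(maternal_line(b)))

print(maternal_line("Charles"))               # ['Charles', 'Susannah', 'Sarah']
print(shares_mitochondria("Charles", "Tom"))  # True: same maternal lineage
```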
Erasmus Alvey Darwin (1804-1881), Charles's elder brother, graduated in medicine from the University of Edinburgh but never practiced. Instead, he spent a life in London partly as a socialite and partly as a chronic invalid (Healey 2001). He suffered from abdominal pain and lethargy; today he would probably be diagnosed as having chronic fatigue syndrome. His symptoms are consistent with his having had the same mtDNA mutation as his younger brother, but with a lower level of heteroplasmy.

Table 2. Symptoms that may be regarded as secondary in nature, with an explanation as to how these symptoms may be seen as complications of Darwin's primary disorder:
Recurrent boils: recognized complication of atopic dermatitis
Hematemesis: complication of forceful vomiting
Dental caries: complication of recurrent vomiting
Skin pigmentation: Addisonian pigmentation, due to increased ACTH/MSH secretion following excessive salt and fluid loss with repeated vomiting (Hayman 2011)
ACTH, adrenocorticotrophic hormone; MSH, melanocyte-stimulating hormone. The two hormones are released together from the pituitary, molecule for molecule, by splitting of a large parent molecule.

Susannah ("Sukey") Darwin (Wedgwood) (1765-1817), the mother of Charles and Erasmus, suffered chronic ill health and was "never quite well and never very ill" (Wedgwood and Wedgwood 1980). As a child she suffered from vomiting and boils, and she had "difficulties" with her pregnancies, spending much time in bed and suffering from what was most likely hyperemesis. She also experienced motion sickness and preferred to ride in a phaeton rather than a carriage. She complained that "Everyone seems young but me." Her symptoms are those that may occur in CVS; female patients frequently have hyperemesis during pregnancy (Fleisher et al. 2005).
Tom Wedgwood (1771-1805), Susannah's youngest brother, was unwell for most of his short life. As a student he suffered from headaches and later severe abdominal pains and "would roll around the floor in agony." On his trip to the West Indies he also suffered from seasickness and was confined to his cabin for the entire voyage with both vomiting and abdominal pain (Wedgwood and Wedgwood 1980). He died of an opium overdose at the age of 34. His symptoms are consistent with what today would be called abdominal migraine, a disorder that may be associated with CVS and with the same mtDNA mutation (Pronicki et al. 2002).
Mary Anne Wedgwood (1778-1786), the youngest child in the family, had short stature and was physically and mentally retarded. She suffered from recurrent fits followed by partial paralysis and episodic blindness. She died with progressive dementia at the age of eight (Wedgwood and Wedgwood 1980). Her symptoms are typical of the severer cases of MELAS syndrome, as associated with the A3243G mtDNA mutation (Goto et al. 1990).
Other siblings in the family suffered more ambiguous symptoms, such as social and cognitive decline, while daughters of Susannah's sisters also had similar problems. Although less specific as a symptom, psychosocial abnormality may also occur with the A3243G mutation (Finsterer 2007). Two brothers had tremors; one had a lifetime tremor and the other developed classical Parkinson's disease in later life. A diagram giving the symptoms of Charles Darwin's siblings and his maternal ancestors is shown in Figure 1.
Charles Darwin's 10 children were in general a sickly lot; one died in infancy, one in childhood, and their first daughter died at the age of 10. Their illnesses do not seem to be related to one another and not related to the illness of their father. As well as other symptoms, the children suffered from various infections. Their sicknesses may have at least in part been due to the consanguinity of their parents (Charles and his wife Emma were first cousins) as there may be increased susceptibility to infection in the children of such partnerships (Berra et al. 2010).
Conclusion
Darwin's illness, the illnesses of his brother, their mother, his maternal uncle Tom, and a child belonging to the maternal generation as well as other family members show a pattern of maternal inheritance that is the hallmark of mitochondrial mutations, while the particular symptoms point to one specific well-characterized mitochondrial disorder, MELAS syndrome. The evidence is circumstantial, of course, but it is considerable and consistent. As Darwin said of evolution and natural selection: "Let me add that there are many difficulties not satisfactorily explained by my theory of descent with modification, but I cannot possibly believe that a false theory would explain so many classes of facts as I think it certainly does explain." Much the same may be said of this explanation for Darwin's illness.
If the conclusion that Darwin's illness was due to a mtDNA mutation is accepted, then the detailed, lifetime history of his illness and those of family members shows us the range of symptoms that may occur with the one mtDNA abnormality. Further study of diseases associated with mtDNA mutations may lead us to a better understanding of Darwin's illness, in particular of his very diverse symptoms and the manner in which his attacks of illness were precipitated. | 2016-05-12T22:15:10.714Z | 2013-05-01T00:00:00.000 | {
"year": 2013,
"sha1": "09b840a79fd500812a1230e5bba8cfb180925a33",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc3632469?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "09b840a79fd500812a1230e5bba8cfb180925a33",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
2362319 | pes2o/s2orc | v3-fos-license | One-dimensional parabolic-beam photonic crystal laser
We report one-dimensional (1-D) parabolic-beam photonic crystal (PhC) lasers in which the width of the PhC slab waveguide is parabolically tapered. A few high-Q resonant modes are confirmed in the vicinity of the tapered region, where a Gaussian-shaped photonic well is formed. These resonant modes originate from the dielectric PhC guided mode and overlap with the gain medium efficiently. It is also shown that the far-field radiation profile is closely associated with the symmetry of the structural perturbation. ©2010 Optical Society of America. OCIS codes: (203.5298) Photonic crystals; (140.3945) Microcavities; (250.5300) Photonic integrated circuits; (140.5960) Semiconductor lasers.
References and links
1. O. Painter, R. K. Lee, A. Scherer, A. Yariv, J. D. O'Brien, P. D. Dapkus, and I. Kim, "Two-dimensional photonic band-gap defect mode laser," Science 284(5421), 1819–1821 (1999).
2. H.-G. Park, S.-H. Kim, S.-H. Kwon, Y.-G. Ju, J.-K. Yang, J.-H. Baek, S.-B. Kim, and Y.-H. Lee, "Electrically driven single-cell photonic crystal laser," Science 305(5689), 1444–1447 (2004).
3. M.-K. Seo, K.-Y. Jeong, J.-K. Yang, Y.-H. Lee, H.-G. Park, and S.-B. Kim, "Low threshold current single-cell hexapole mode photonic crystal laser," Appl. Phys. Lett. 90(17), 171122 (2007).
4. H. Altug, D. Englund, and J. Vučković, "Ultrafast photonic crystal nanocavity laser," Nat. Phys. 2(7), 484–488 (2006).
5. T. Baba, D. Sano, K. Nozaki, K. Inoshita, Y. Kuroki, and F. Koyama, "Observation of fast spontaneous emission decay in GaInAsP photonic crystal point defect nanocavity at room temperature," Appl. Phys. Lett. 85(18), 3989–3991 (2004).
6. T. Tanabe, M. Notomi, S. Mitsugi, A. Shinya, and E. Kuramochi, "All-optical switches on a silicon chip realized using photonic crystal nanocavities," Appl. Phys. Lett. 87(15), 151112 (2005).
7. M.-K. Kim, I.-K. Hwang, S.-H. Kim, H.-J. Chang, and Y.-H. Lee, "All-optical bistable switching in curved microfiber-coupled photonic crystal resonators," Appl. Phys. Lett. 90(16), 161118 (2007).
8. A. J. Shields, "Semiconductor quantum light sources," Nat. Photonics 1(4), 215–223 (2007).
9. W.-H. Chang, W.-Y. Chen, H.-S. Chang, T.-P. Hsieh, J.-I. Chyi, and T.-M. Hsu, "Efficient single-photon sources based on low-density quantum dots in photonic-crystal nanocavities," Phys. Rev. Lett. 96(11), 117401 (2006).
10. C. Santori, D. Fattal, J. Vučković, G. S. Solomon, and Y. Yamamoto, "Indistinguishable photons from a single-photon device," Nature 419(6907), 594–597 (2002).
11. T. Yoshie, A. Scherer, J. Hendrickson, G. Khitrova, H. M. Gibbs, G. Rupper, C. Ell, O. B. Shchekin, and D. G. Deppe, "Vacuum Rabi splitting with a single quantum dot in a photonic crystal nanocavity," Nature 432(7014), 200–203 (2004).
12. K. Hennessy, A. Badolato, M. Winger, D. Gerace, M. Atatüre, S. Gulde, S. Fält, E. L. Hu, and A. Imamoğlu, "Quantum nature of a strongly coupled single quantum dot-cavity system," Nature 445(7130), 896–899 (2007).
13. D. Englund, A. Faraon, I. Fushman, N. Stoltz, P. Petroff, and J. Vučković, "Controlling cavity reflectivity with a single quantum dot," Nature 450(7171), 857–861 (2007).
14. H. Mabuchi and A. C. Doherty, "Cavity quantum electrodynamics: coherence in context," Science 298(5597), 1372–1377 (2002).
15. G. Khitrova, H. M. Gibbs, M. Kira, S. W. Koch, and A. Scherer, "Vacuum Rabi splitting in semiconductors," Nat. Phys. 2(2), 81–90 (2006).
16. B. S. Song, S. Noda, T. Asano, and Y. Akahane, "Ultra-high-Q photonic double-heterostructure nanocavity," Nat. Mater. 4(3), 207–210 (2005).
17. T. Tanabe, M. Notomi, E. Kuramochi, A. Shinya, and H. Taniyama, "Trapping and delaying photons for one nanosecond in an ultrasmall high-Q photonic-crystal nanocavity," Nat. Photonics 1(1), 49–52 (2007).
18. B. Schmidt, Q. Xu, J. Shakya, S. Manipatruni, and M. Lipson, "Compact electro-optic modulator on silicon-on-insulator substrates using cavities with ultra-small modal volumes," Opt. Express 15(6), 3140–3148 (2007).
19. M. Notomi, E. Kuramochi, and H. Taniyama, "Ultrahigh-Q nanocavity with 1D photonic gap," Opt. Express 16(15), 11095–11102 (2008).
20. M. Eichenfield, R. Camacho, J. Chan, K. J. Vahala, and O. Painter, "A picogram- and nanometre-scale photonic-crystal optomechanical cavity," Nature 459(7246), 550–555 (2009).
21. J. Chan, M. Eichenfield, R. Camacho, and O. Painter, "Optical and mechanical design of a "zipper" photonic crystal optomechanical cavity," Opt. Express 17(5), 3802–3817 (2009).
22. P. B. Deotare, M. W. McCutcheon, I. W. Frank, M. Khan, and M. Lončar, "High quality factor photonic crystal nanobeam cavities," Appl. Phys. Lett. 94(12), 121106 (2009).
23. L.-D. Haret, T. Tanabe, E. Kuramochi, and M. Notomi, "Extremely low power optical bistability in silicon demonstrated using 1D photonic crystal nanocavity," Opt. Express 17(23), 21108–21117 (2009).
24. M. W. McCutcheon and M. Lončar, "Design of a silicon nitride photonic crystal nanocavity with a quality factor of one million for coupling to a diamond nanocrystal," Opt. Express 16(23), 19136–19145 (2008).
25. C. Sauvan, G. Lecamp, P. Lalanne, and J. P. Hugonin, "Modal-reflectivity enhancement by geometry tuning in photonic crystal microcavities," Opt. Express 13(1), 245–255 (2005).
26. M.-K. Kim, I.-K. Hwang, M.-K. Seo, and Y.-H. Lee, "Reconfigurable microfiber-coupled photonic crystal resonator," Opt. Express 15(25), 17241–17247 (2007).
27. Y.-S. No, H.-S. Ee, S.-H. Kwon, S.-K. Kim, M.-K. Seo, J.-H. Kang, Y.-H. Lee, and H.-G. Park, "Characteristics of dielectric-band modified single-cell photonic crystal lasers," Opt. Express 17(3), 1679–1690 (2009).
28. M.-K. Seo, J.-H. Kang, M.-K. Kim, B.-H. Ahn, J.-Y. Kim, K.-Y. Jeong, H.-G. Park, and Y.-H. Lee, "Wavelength-scale photonic-crystal laser formed by electron-beam-induced nano-block deposition," Opt. Express 17(8), 6790–6798 (2009).
29. S.-H. Kim, S.-K. Kim, and Y.-H. Lee, "Vertical beaming of wavelength-scale photonic crystal resonators," Phys. Rev. B 73(23), 235117 (2006).
30. J.-H. Kang, M.-K. Seo, S.-K. Kim, S.-H. Kim, M.-K. Kim, H.-G. Park, K.-S. Kim, and Y.-H. Lee, "Polarized vertical beaming of an engineered hexapole mode laser," Opt. Express 17(8), 6074–6081 (2009).
31. H.-Y. Ryu, H.-G. Park, and Y.-H. Lee, "Two-dimensional photonic crystal semiconductor lasers: computational design, fabrication, and characterization," IEEE J. Sel. Top. Quantum Electron. 8(4), 891–908 (2002).
Recently, the high-Q/V 1-D PhC beam cavity was proposed and employed [18,19]. The compactness and lightness of the 1-D PhC cavity have attracted researchers working on cavity optomechanics [20,21] and compact optical devices [22–24]. In 2008, Notomi et al. predicted a theoretical maximum Q-factor of 2.0×10^8 and a modal volume of ~1.4(λ/n)^3 after precise tuning of the periodic ladder's size [19]. The predicted Q-factor is very high in spite of its 1-D structure.
In this work, we propose and demonstrate a new 1-D high-Q PhC beam cavity structure. The width of the 1-D PhC waveguide is parabolically tapered in order to create a Gaussian-shaped photonic well, and the formation of high-Q modes is identified near the photonic-well region. The smooth parabolic perturbation minimizes scattering losses [25]. The existence of the newly generated resonant modes is experimentally confirmed through the lasing action of 1-D PhC lasers. We also find that the vertical emission characteristics can be controlled by modifying the symmetry of the cavity structure.
Design of 1-D parabolic-beam PhC cavity
Consider a 1-D periodic PhC waveguide structure in which air holes are drilled periodically along the x direction, as shown in Fig. 1(a). We choose dielectric guided modes, in which the electric fields are concentrated in the dielectric region. Dispersion characteristics of the dielectric guided modes are shown in Fig. 1(b) for 1-D PhC structures of different beam widths and a fixed air-hole size. Note that the normalized cutoff frequency decreases with the beam width (w). Making the width thinner decreases the effective refractive index of the guiding structure, and thus the cutoff wavelength (frequency) becomes smaller (larger) accordingly. So far, tuning of 1-D periodic PhC cavities has been achieved by modulating the air-hole size or the lattice constant [19,22,24]. Here, we design a new type of cavity by tuning the width of the 1-D PhC beam waveguide, as shown in Fig. 2(a). It is well known that a parabolic profile of the cutoff frequency is advantageous for obtaining high Q factors [16,26]. The width of the 1-D parabolic-beam PhC cavity is tuned parabolically from the waist width toward the final width. Assuming that the cutoff frequency of the guided mode faithfully follows the dispersion characteristics predicted in Fig. 1(b), the shift of the cutoff frequency is expected to be proportional to x^2. This parabolic variation creates a Gaussian-shaped optical well in which confined photon modes can reside [26]. Figure 2(b) plots the cutoff frequency as a function of x position. Three-dimensional finite-difference time-domain (3-D FDTD) computations predict that four confined modes exist in this photonic well. These modes originate from the dielectric band, so the photon energy is mostly located in the dielectric region, as shown in Fig. 2(c). We thus expect strong interactions between the dielectric gain medium and the modal field [27]. The refractive index of the material (n), slab thickness (t), hole radius (r), curvature radius of the waist (R), width at the center (W0), and final width (Wf) are 3.4, 0.8a, 0.3a, 500a, 1.6a, and 2.0a, respectively, where a is the lattice constant of the periodic air holes. The normalized frequency of the fundamental mode, its Q factor, and its mode volume are 0.2153, ~7,000,000, and 0.83(λ/n)^3, respectively. We identify two kinds of cavities of different symmetries, as shown in Fig. 3 [28]. The air-center cavity [Fig. 3(a)] is symmetric with respect to the central air hole, and the dielectric-center cavity [Fig. 3(b)] is symmetric with respect to the line passing through the central dielectric region. As shown in Fig. 1(c), the Ey field of the air-center cavity has odd symmetry, with a node at the symmetry plane. Therefore, the cavity loss can be effectively suppressed and the Q factor can be high [29,30]. In comparison, the dielectric-center cavity has even symmetry, with an anti-node at the center. In this case, the vertical emission loss is larger than that of the air-center cavity. These two effects are depicted in the emission profiles of Figs. 3(c) and 3(d).
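The taper-to-well mapping above can be sanity-checked numerically. The sketch below assumes a parabolic width profile of the form W(x) = W0 + x^2/R (the tapering equation itself did not survive extraction, so this functional form, the hole positions, and the linear cutoff-versus-width slope are illustrative assumptions rather than values taken from the paper):

```python
import numpy as np

# Illustrative parameters from the design (in units of the lattice constant a).
a = 1.0
W0, Wf, R = 1.6 * a, 2.0 * a, 500.0 * a   # waist width, final width, curvature radius

# Assumed parabolic width profile, clipped once it reaches the final width.
x = np.arange(-60, 61) * a                 # air-hole positions along the beam
W = np.minimum(W0 + x**2 / R, Wf)

# Assume the normalized cutoff frequency rises linearly as the beam narrows
# (the slope k is a made-up illustrative number, read qualitatively off Fig. 1(b)).
f_cut_Wf = 0.210                           # cutoff at the untapered width Wf
k = -0.05 / a                              # d(f_cut)/dW < 0: thinner beam, higher cutoff
f_cut = f_cut_Wf + k * (W - Wf)

# The waist raises the local cutoff quadratically in x; dielectric-band modes
# with frequencies between f_cut_Wf and f_cut(0) see a band gap away from the
# center and are therefore trapped in an approximately Gaussian photonic well.
print(f"cutoff at waist   : {f_cut[len(x) // 2]:.4f}")
print(f"cutoff far away   : {f_cut_Wf:.4f}")
print(f"confinement window: {f_cut[len(x) // 2] - f_cut_Wf:.4f}")
```

Modes whose normalized frequencies fall inside this window (the reported 0.2153, 0.2125, 0.2099, and 0.2077 in the actual design) are the confined modes of the photonic well described above.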
1-D parabolic-beam PhC laser
The 1-D parabolic-beam PhC laser structure is fabricated on a 280-nm-thick free-standing InGaAsP slab, as shown in Fig. 4(a) [30,31]. Three pairs of InGaAsP quantum wells emitting near 1.5 µm are employed as the gain medium of the laser. The lattice constant (a), the radius of the air holes (r), and the curvature radius of the waist (R) of the fabricated PhC structure are ~350 nm, 100 nm, and 100 µm, respectively. The cavity is pulse-pumped by a 980-nm InGaAs laser diode (10 ns pulses, ~1% duty cycle) through a 50× microscope objective lens with a numerical aperture (N.A.) of 0.85. Threshold behavior is observed, and the lasing threshold (irradiated power) is ~1.34 mW, as shown in Fig. 4(b); the corresponding power absorbed by the slab (effective pump power) is 86 µW. We observe a single lasing peak of the fundamental mode (λ = 1487.5 nm), as shown in the PL spectrum [Fig. 4(c)]. The output field is y-polarized, with a measured polarization extinction ratio (PER) of 6.3:1. Considering that the resonant mode is basically a TE mode, this polarization characteristic is understandable. When the incident power is increased to 1.9 mW, two peaks, the 0th mode (λ0 = 1487.5 nm) and the 1st mode (λ1 = 1519.3 nm), are observed [Fig. 4(c)]. To confirm the modes, we compare the near-field profiles of the measured CCD image and the vertical component of the Poynting vector obtained by 3-D FDTD computation. The computation is performed with the real fabricated structure, obtained directly from the SEM images, as input data; the calculations therefore reflect all fabrication imperfections [2,27,28]. The optical properties of the dielectric-center cavity are measured in the same manner as those of the air-center cavity (Fig. 5). The fundamental mode (λ0 = 1524 nm) and the first mode (λ1 = 1550 nm) are observed, as shown in Fig. 5(a). The mode separation of the two peaks is 26 nm, while the calculated value is 21 nm. The fundamental mode of the dielectric-center cavity shows a central intensity maximum, which favors vertical emission [Fig. 5(b)]. These emission properties agree well with the 3-D FDTD calculation of Fig. 5(c). To see only the emission of the fundamental mode, we employ a 1524 nm band-pass filter. The measured CCD image is blurred by the objective lens and the band-pass filter.
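For readers unfamiliar with how a threshold such as the ~1.34 mW figure is extracted from a light-in/light-out (L-L) curve, the short sketch below fits the above-threshold branch with a line and takes its x-intercept; the pump and output arrays are synthetic placeholders, not the measured data from Fig. 4(b).

```python
import numpy as np

# Synthetic L-L data (pump power in mW vs. collected output, arb. units);
# placeholder numbers only, not measurements from the paper.
pump = np.array([0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0])
out  = np.array([0.02, 0.03, 0.05, 0.35, 1.30, 2.28, 3.25])

# Fit the clearly above-threshold branch with a line: out = s * P + b.
above = pump >= 1.6
s, b = np.polyfit(pump[above], out[above], 1)

# The threshold is the x-intercept of the above-threshold line.
P_th = -b / s
print(f"estimated threshold: {P_th:.2f} mW")   # ~1.33 mW for this toy data
```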
Summary
A new type of 1-D cavity, the parabolic-beam PhC cavity, was proposed and demonstrated through its lasing behavior. This type of laser has a small physical size compared with conventional 2-D PhC lasers, and it also has a high Q/V value. We believe that this 1-D parabolic-beam PhC resonator can be useful for photonic integrated circuits and cavity quantum electrodynamics.
Fig. 2. (a) Schematic of the 1-D parabolic-beam PhC cavity. R is the tapering radius of curvature, W0 is the width of the thinnest central waist, and Wf is the final width of the 1-D structure. (b) The black line plots the expected normalized cutoff frequency as a function of x position along the parabolic beam. The graph also shows the resonant modes found in the photonic well; the respective resonant frequencies are 0.2153, 0.2125, 0.2099, and 0.2077. (c) Electric field intensity (E^2) distribution of the fundamental mode.
Fig. 3. Mode volumes and Q factors of (a) the air-center and (b) the dielectric-center parabolic-beam cavity. Poynting vectors (Sz) of (c) the air-center and (d) the dielectric-center parabolic-beam cavity (side view).
Figure 4(d) is the CCD image of the fundamental mode of the air-center cavity [inset of Fig. 4(c)], in which the symmetry axis passes through the center of an air hole. Observe the central node between the two bright spots in Figs. 4(d) and 4(e). The calculated vertical component of the propagating Poynting vector at a vertical position of 1.0 µm above the slab matches well with the measured near-field profile of the air-center cavity. The spectral separation (Δλ01 = 31.8 nm) between the 0th mode and the 1st mode agrees well with the 3-D FDTD computations (Δλ01 = 33.6 nm).
Fig. 4. (a) Scanning electron microscope (SEM) image of the fabricated sample. (b) Light-in versus light-out curve and polarization characteristics of the fundamental mode of the fabricated sample. (c) Measured PL spectra and the SEM image of the air-center cavity. (d) Measured IR CCD image of the fundamental mode of the air-center cavity. The dotted red line indicates the boundary of the fabricated sample. (e) The vertical component of the Poynting vector obtained with the use of the structural data of the inset of (c).
Fig. 5. (a) Measured PL spectra and the SEM image of the dielectric-center cavity. (b) Measured CCD image of the 0th mode of the dielectric-center cavity. (c) The calculated vertical component of the Poynting vector. | 2018-04-03T03:20:41.853Z | 2010-03-15T00:00:00.000 | {
"year": 2010,
"sha1": "3c362f985ffaec34027a37f2e315cd032dfd31be",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.18.005654",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "80838610965ef5dca678e3806452677059a0fee2",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
149700165 | pes2o/s2orc | v3-fos-license | Family prospects of the Russian youth in conditions of social change
The article presents the results of a study of the family prospects of modern youth for the period from 2011 to 2017. A tendency has been revealed for young people to reorient from traditional family values, including the birth and upbringing of children, toward a partnership that satisfies their needs for support, freedom, and self-development. There is a reduction in the target saturation of young people's family prospects and in the content of their goals related to marital relations, alongside a concentration on the planning of personal development. The family prospects of Russian youth reflect their focus on creating in the future not a traditional patriarchal family but a free union that implements emotional, psychological, and recreational functions and a safety function at the expense of the reproductive one.
Introduction
Modernization of contemporary Russian society affects various areas of its functioning. Intensive changes systematically cover the sphere of family and marriage (S. I. Golod, T. A. Gurko, S. V. Darmodekhin, L. V. Kartseva) [1,2,3]. A new model of the family, reflected in public consciousness and fixed in mass behavior, corresponds today to the changed society. The scale of the transformations characteristic of the modern family allows us to speak of a crisis of this institution of socialization, which manifests itself in changes to the structure and functions of the family, in the changing nature of its relationships with other social institutions, and in the transition of family and marriage models from traditional ("patriarchal") to progressive modern types [4].
The family is at present a dynamic institution, less regulated by social factors than before. Along with this, changes in the political, economic, educational, and other spheres of society exert an increasingly destabilizing effect on it. In addition, the modern family undergoes functional and structural changes at a high rate, within the lifetime of one or two generations. In this situation, the process of transformation of family values and their adaptation to new living conditions often takes place with the active participation of young people [5].
The scale and dynamism of family changes in the contemporary world contribute to the emergence of a special freedom of self-realization of the individual in the family sphere that was not characteristic of previous eras. Among such freedoms, we consider the following: liberalization of sexual norms, the choice of a mate, the timing and number of children, methods of family upbringing, the level of education of children, the distribution of household duties among family members, forms of joint leisure activities, etc. Today, the transformations of the institution of family and marriage are manifested in the variability of the forms of sexual and marital-family relations, in the increased instability of the family structure, and in changes to its functions.
As alternatives to the traditional form of marital relations, trial, guest, and seasonal marriages and free unions, including those concluded between partners of the same sex, are considered. The structure of the family is becoming more mobile, which is associated with an increase in the number of divorces, foster families, extramarital births, and the prevalence of families formed by remarriage [1,6].
Researchers also see the changes in the contemporary family in the fact that it is turning from a socially regulated union into a partnership group, in which relationships are built on accepting the uniqueness and independence of the other person [2,7]. The leading socio-psychological functions of the modern family are emotional, psychological, and recreational ones, whose purpose, in general, is to create conditions for the socio-psychological and mental development of its members [7].
Changes in the institution of the family in the contemporary world make it possible to comprehend the process of family and marriage self-determination of young people as one containing a high degree of uncertainty, proceeding practically without reliance on stable reference points, in the context of an endless variety of development options. This contributes to the emergence of a freedom of individual self-realization in the family sphere that was not characteristic of previous stages of social development: a reduction of social restrictions on the choice of a marriage partner, on decisions about the form of marital relations and their creation or dissolution, and on the birth and upbringing of children. In this situation, individual opportunities for self-realization in the family sphere expand, but people's readiness for independence and for taking responsibility for their choices, as well as their ability to build consistent and realistic family prospects, acquire particular importance.
The reformatting of the institution of family and marriage at the present stage of development of Russian society provides grounds for a renewed scientific interest in the problem of family prospects. This relevance is seen in the need to record the transformation of young people's ideas about their future in the sphere of family and marriage over the last decade, to reveal the new content of family prospects, and to forecast the trajectories along which marital unions in Russia will develop. Particular attention in the study of this problem area is given to youth, as the age cohort most receptive to social changes and simultaneously entering the period of creating one's own family and having children.
At the present stage of the development of psychological science, family prospects are studied as a concretization of a person's generalized concepts of the future, examined in the aspect of self-realization in the family sphere. Relying on the scientific views of K. A. Abulkhanova-Slavskaya, E. I. Golovakha, and A. A. Kronik, family prospects can be defined as a person's ideas about the future of family life, mediating the search for and selection of a mate and the reproduction of a certain style of conjugal and child-parent relationships [8,9]. Family prospects are considered a vector of self-realization of an individual in the sphere of family and marital relations. The construction of the future in this sphere rests on the value-semantic reference points of the individual. Values, personal meanings, and goals set the prototype of the future and organize a person's activities in the present in order to achieve the desired future [10,11].
Materials and methods
We undertook an investigation into the transformation of the value-semantic coordinates and goals of young Russians in the sphere of family and marital relations, surveying comparable groups of respondents in 2011, 2014, and 2017.
Results and discussion
Let us consider the results of the study on the value-semantic content of the family prospects held by modern young people. Analyzing the hierarchy of family values in the groups of respondents interviewed in 2011, 2014, and 2017, it can be noted that the importance of such values as "loyalty," "trust," and "freedom as independence in deeds and actions" increases every year. By contrast, the orientation towards "love" and the "birth and upbringing of children" decreases. The perceived availability of sexual gratification and of a diverse pastime also increases, while the values of "children" and "love" become less accessible compared with the group of respondents in 2011.
As a result of applying variance analysis, reliable differences were established between the three groups of respondents in the rated importance of the family values "freedom as independence in deeds and actions" (p = 0.029) and "children" (p = 0.0001), and in the availability of the values "love" (p = 0.007), "community of interests" (p = 0.016), and "children" (p = 0.0001). Every year, young people increasingly value the preservation of personal freedom and devalue the birth and upbringing of children in a future family. There is a value-and-meaning restructuring that determines the face of family prospects: the building of a family union based on mutual respect for the personal boundaries and interests of the partners, preserving freedom of choice within the family.
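As a concrete illustration of the variance analysis reported above, the sketch below runs a one-way ANOVA on importance ratings of one value across the three survey cohorts; the rating vectors are simulated placeholders, not the survey data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Invented 1-10 importance ratings of "freedom as independence in deeds
# and actions" for three survey cohorts (placeholders, not survey data).
ratings_2011 = rng.normal(6.0, 1.5, 100).clip(1, 10)
ratings_2014 = rng.normal(6.5, 1.5, 100).clip(1, 10)
ratings_2017 = rng.normal(7.1, 1.5, 100).clip(1, 10)

# One-way ANOVA tests whether the cohort means differ.
stat, p = f_oneway(ratings_2011, ratings_2014, ratings_2017)
print(f"F = {stat:.2f}, p = {p:.4f}")  # p < 0.05 indicates reliable cohort differences
```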
Today, it is important for young people to maintain the psychotherapeutic and recreational functions of the family, whose realization presupposes creating conditions for the development and self-expression of each of its members. At the same time, young people probably see the preservation of their freedom in future family life in refusing to have children or in postponing this event to a later period. With regard to the availability of family values, it is important to note that a diversified pastime in a family seems more achievable than love. These results may reflect the specific way modern young people choose a marriage partner, increasingly focusing on finding a person with whom it is interesting to spend time and have fun, while love is probably seen as a temporary and unstable phenomenon compared with a more stable mutual interest between partners. Thus, modern youth is committed to building partnership relations in the family, as opposed to traditional marital relations.
According to the data analyzed, among the youth groups surveyed the most common goal in the marriage and family sphere is the birth of a child, which, however, is postponed for a fairly long time. Respondents in 2017 named achieving financial stability, self-development within the family and abroad, and building a career as important goals; the construction of satisfying family relationships and civil marriage were less important for them. Against this background, the young people surveyed in 2014 listed goals related to having an official marriage, career building, and the acquisition of their own housing as top priorities, while the planning of self-development and being officially married were less important.
It was established that the number of goals lying in the sphere of marital and family relations significantly decreases for young people every year (p = 0.0001). Along with the reduction in the number of such goals, the arsenal of possible means of achieving them narrows (p = 0.0001), and the number of perceived obstacles on the way to achieving the goals is also significantly reduced (p = 0.001). With regard to the orientation of goals in the sphere of family and marital relations, it can be noted that young people are less and less likely to plan an official marriage (p = 0.017) or a civil marriage (p = 0.041). Priority is given to goals associated with the development of their own personality (p = 0.004).
Conclusion
Thus, on the basis of the research conducted, it can be concluded that the family prospects of youth are undergoing an intensive transformation in their value-semantic bases and goal-setting under conditions of social change. Every year, young people increasingly value the preservation of personal freedom, trust in a partner, and loyalty in a future family, while the orientation toward childbearing weakens. In extending the period of childlessness, modern young people see the possibility of maintaining independence within the family; they are not ready to face the "family routine" associated with the standard implementation of conjugal and parental roles. This underlines the growing importance of the recreational function of the family: rest and an active, rich pastime. In a rapidly changing world, youth view marriage as based on the recognition of the importance of their own needs, as opposed to the norms and requirements of society. The family as a social institution remains attractive to young Russians, while reorientation toward new forms of family relations is becoming ever more pronounced.
"year": 2018,
"sha1": "9612d85d5729178a8a5f09f65cdd7004cd4066ee",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2018/16/shsconf_icpse2018_02002.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b15ddd472f930648cd0d433295d3afe2e9dee304",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
250475655 | pes2o/s2orc | v3-fos-license | Inosine Pretreatment Attenuates LPS-Induced Lung Injury through Regulating the TLR4/MyD88/NF-κB Signaling Pathway In Vivo
Inosine is a purine nucleoside that is considered a physiological energy source and exerts a wide range of anti-inflammatory effects. The TLR4/MyD88/NF-κB signaling pathway plays an essential role in host oxidative stress and inflammation, and represents a promising target for host-directed strategies to improve some forms of disease-related inflammation. In the present study, the results showed that inosine pre-intervention significantly suppressed the pulmonary elevation of pro-inflammatory cytokines (including tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β)), malondialdehyde (MDA), nitric oxide (NO), and reactive oxygen species (ROS) levels, and restored the pulmonary catalase (CAT), glutathione peroxidase (GSH-Px), superoxide dismutase (SOD), and myeloperoxidase (MPO) activities (p < 0.05) in lipopolysaccharide (LPS)-treated mice. Simultaneously, inosine pre-intervention shifted the composition of the intestinal microbiota by decreasing the Firmicutes/Bacteroidetes ratio and elevating the relative abundance of Tenericutes and Deferribacteres. Moreover, inosine pretreatment affected the TLR4/MyD88/NF-κB signaling pathway in the pulmonary inflammatory response and thereby regulated the pulmonary expression of iNOS, COX2, Nrf2, HO-1, TNF-α, IL-1β, and IL-6. These findings suggest that oral inosine pretreatment attenuates the LPS-induced pulmonary inflammatory response by regulating the TLR4/MyD88/NF-κB signaling pathway and ameliorates intestinal microbiota disorder.
Introduction
Lung injury is a life-threatening condition that has attracted wide attention due to its high morbidity and mortality [1,2]. Clinical evidence suggests that lung injury is generally accompanied by rapid alveolar injury, pulmonary infiltrates, an uncontrolled inflammatory response, and excessive oxidative stress [3]. Among these, the abnormal inflammatory response plays a vital role in the process of lung injury, and is mainly represented by elevated concentrations of harmful reactive oxygen species (ROS) and insufficient cellular anti-inflammatory defenses [4]. High levels of ROS are frequently found in patients with lung injury, and can affect inflammatory signaling pathways, especially toll-like receptor 4/myeloid differentiation primary response 88/nuclear factor-kappa B (TLR4/MyD88/NF-κB) [5,6]. Activation of the TLR4/MyD88/NF-κB signaling pathway accelerates the recruitment of inflammatory cells, the production of inflammatory molecules, and oxidative stress, reflected in markers including tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), interleukin-6 (IL-6), superoxide dismutase (SOD), and malondialdehyde (MDA) [7]. Thus, reducing ROS accumulation, inhibiting the TLR4/MyD88/NF-κB signaling pathway, and suppressing inflammatory cytokines can effectively prevent the process of lung injury.
Purine nucleosides are small molecules derived from the co-metabolism of intestinal microbiota, and possess an extensive range of physiological properties, such as antiinflammatory, antioxidant and hepatoprotective effects [8,9]. Purine nucleosides are mainly composed of adenosine and its primary metabolite inosine. Recently, inosine has drawn wide attention because of its excellent anti-inflammatory effects. Mager et al. found that microbiome-derived inosine could elevate the effects of checkpoint blockade immunotherapy by activating antitumor T cells [10]. Our previous study showed that inosine pretreatment suppressed the inflammatory responses in the liver by regulating the TLR4/MyD88/NF-κB pathway [11]. In lung injury, regulatory T cells could prevent and/or treat related inflammation by inhibiting the secretion of IL-1β [12]. Therefore, whether inosine pretreatment can protect against acute lung injury, and whether it is related to the TLR4/MyD88/NF-κB pathway and IL-1β, needs to be further studied.
It is widely accepted that the intestinal microbiota plays a vital role in controlling the host energy metabolism and immune system [13], and dysbiosis of the intestinal microbiota can cause hyperglycemia, hyperlipidemia, intestinal barrier injury, and immune deficiency [14]. Liu et al. found that decreases in the Firmicutes/Bacteroidetes ratio exhibited a noteworthy positive association with the levels of serum inflammatory cytokines, including IL-1β, IL-6, TNF-α, and TNF-β [15]. Moreover, some beneficial bacteria are negatively related to inflammatory factors and oxidative stress levels, and positively related to the activity of antioxidant enzymes [16,17]. Therefore, whether the inflammatory state of lung injury can be altered by modulating the gut microbiota requires further investigation. To date, there are few studies concerning the potential mechanisms by which inosine ameliorates the lung injury induced by intraperitoneal injection of LPS. The purpose of this study was to assess the effects of inosine pretreatment on LPS-induced lung injury. More importantly, this study sought to explore whether inosine could change the composition of the intestinal microbiota, and to analyze the possible correlations between the intestinal microbiota and lung-damage-associated parameters. The results should provide important knowledge to promote new strategies for the treatment of lung damage.
Design of Animal Experiments
Forty male C57BL/6 mice (six weeks old, 19 ± 1 g) were provided by the Animal Research Center (Shanghai, China) and housed under controlled conditions. All mice were allowed free access to standard chow and water. After 1 week of adaptive feeding, all mice were randomly divided into 4 groups (n = 10) (Figure 1B). Mice in the control group (NC) and model group (LPS) were orally administered sterile NaCl solution (0.9%, w/v). Mice in the treatment groups, namely the IN-L and IN-H groups, were orally administered different doses of inosine (30 and 100 mg/kg/day, respectively). After inosine treatment for 14 days, mice in the NC group were intraperitoneally injected with sterile NaCl solution (0.9%, w/v), while mice in the other groups were intraperitoneally injected with the same volume of LPS solution (5 mg/kg). At 4 h after LPS injection, feces were collected in sterile centrifuge tubes and stored at −80 °C, and then all mice were euthanized. The experimental protocol was approved by the Ethics Committee of Jiangnan University (No. 20201115c0701230 [309]).
The reactive oxygen species (ROS) levels in the lung tissues were determined as described in a previous report [18]. In brief, the supernatants of the samples were diluted 200-fold in phosphate-buffered saline, and 0.1 mL of the diluted supernatant was mixed with 0.1 mL of DCFH-DA and transferred to 96-well plates. Then, the plates were placed in the control incubator for 5 min, and the fluorescence was quantified.
Lung Histopathology
Lung tissues were analyzed by hematoxylin and eosin (H&E) staining following a standard procedure described in a previous report [19]. In brief, the samples were collected, fixed in paraformaldehyde, and stained with H&E.
Intestinal Microbiota Analysis
The extraction of fecal DNA and amplification of the V3-V4 region of the 16S rRNA gene was implemented as described in a previous report [20]. The PCR products were purified using AMPure magnetic purification beads (Agencourt Brea, CA, USA) and then pooled into equal concentrations. A Qubit 2.0 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA) was applied to assess the quantity of sequencing libraries and sequenced on the Illumina MiSeq platform. The raw data from high-throughput sequencing were demultiplexed and quality-filtered using the QIIME2 platform. The results were assigned to operational taxonomic units (OTUs) by UCLUST, with a threshold of 97%.
The ACE, Chao1, and Shannon indices of the intestinal microbiota were assessed using MicrobiomeAnalyst (https://www.microbiomeanalyst.ca/, accessed on 15 June 2020). Principal coordinates analysis (PCoA) based on Bray-Curtis dissimilarity was implemented using R software (v 4.1.2). In addition, Spearman's correlation analysis was applied to analyze the relationships between the intestinal microbiota and the key parameters related to inflammation and oxidative stress, using R software (v 4.1.2). On the basis of the Spearman analysis, heatmaps and networks were generated using TBtools software (v 1.09861) and Cytoscape (v 3.6.2). Phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt2) was run using Xshell (v 7.0).
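For illustration, the two community analyses named above (per-sample Shannon diversity and a principal coordinates analysis on Bray-Curtis dissimilarities) can be computed directly on an OTU count table. The sketch below uses plain NumPy on invented counts rather than the platforms used by the authors:

```python
import numpy as np

# Toy OTU count table: rows = samples, columns = OTUs (invented numbers).
counts = np.array([[120, 30,  5,  0],
                   [ 80, 60, 10,  2],
                   [ 10, 90, 40, 15],
                   [  5, 70, 60, 30]], dtype=float)

# Shannon index H = -sum(p * ln p) per sample.
p = counts / counts.sum(axis=1, keepdims=True)
shannon = -np.where(p > 0, p * np.log(p), 0.0).sum(axis=1)

# Bray-Curtis dissimilarity between all sample pairs.
n = len(counts)
bc = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        bc[i, j] = np.abs(counts[i] - counts[j]).sum() / (counts[i] + counts[j]).sum()

# Classical PCoA: double-center the squared dissimilarities, then eigendecompose.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (bc ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0))

print("Shannon:", np.round(shannon, 3))
print("PCoA axis 1-2 coordinates:\n", np.round(coords, 3))
```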
Quantitative RT-PCR
Total pulmonary RNA was extracted using a commercial extraction kit (Carry Helix Biotechnologies Co., Ltd., Beijing, China) and reverse-transcribed using commercial kits (Takara, Dalian, China). qPCR was performed on a StepOnePlus Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) with SYBR Premix Ex Taq II (Takara, Dalian, China). The pulmonary mRNA expression levels were normalized to the β-actin levels. The primers used in the present study are presented in Table S1. The transcription levels of the genes related to inflammation and oxidative stress were computed by the 2^−ΔΔCt method.
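The 2^−ΔΔCt calculation is compact enough to show in full. In this sketch the Ct values are invented, with β-actin as the reference gene as in the Methods:

```python
# Invented Ct values for one target gene (e.g., TNF-alpha) and the beta-actin
# reference, in a control (NC) sample and a treated sample.
ct_target_nc,  ct_actin_nc  = 28.0, 17.0
ct_target_trt, ct_actin_trt = 25.5, 17.2

# Delta-Ct normalizes the target to the reference within each sample.
d_ct_nc  = ct_target_nc  - ct_actin_nc    # 11.0
d_ct_trt = ct_target_trt - ct_actin_trt   # 8.3

# Delta-delta-Ct compares treated vs. control; fold change = 2^-ddCt.
dd_ct = d_ct_trt - d_ct_nc                # -2.7
fold_change = 2.0 ** (-dd_ct)
print(f"fold change: {fold_change:.2f}")  # ~6.5-fold up in this toy example
```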
Statistical Analysis
All data in the present study are presented as the means ± SEM, and were analyzed by one-way analysis of variance using GraphPad Prism 7.0 software (GraphPad, San Diego, CA, USA). The significance levels were set at p < 0.05 (significant) and p < 0.01 (extremely significant) relative to the LPS group.
Inosine Pretreatment Alleviated LPS-Stimulated Inflammatory Response in Lung-Injured Mice
The anti-inflammatory role of inosine in LPS-treated mice was explored by detecting the concentrations of the pro-inflammatory cytokines TNF-α, IL-1β, and IL-6 in the lungs. As shown in Figure 2, the levels of pulmonary TNF-α, IL-6, and IL-1β were remarkably elevated in the LPS group compared with the NC group (p < 0.01). Pre-intervention with different concentrations of inosine had different effects on the pro-inflammatory cytokines; in particular, inosine pre-intervention at 100 mg/kg/day remarkably reduced the pulmonary TNF-α and IL-1β levels (p < 0.01). However, there was no statistical difference in the pulmonary IL-6 levels between the LPS and IN-H groups (p > 0.05).
Figure 2. Effects of inosine pretreatment on the levels of TNF-α, IL-1β, and IL-6 in mice treated with LPS. * p < 0.05; ** p < 0.01, relative to the LPS group.
Inosine Pretreatment Regulated LPS-Induced Oxidative Stress in Lung-Injured Mice
Considering that oxidative stress is a major pathological characteristic of the pulmonary inflammatory response in LPS-treated mice, the concentrations of MDA and NO in the lungs were measured to assess the amelioration afforded by inosine pre-intervention (Figure 3A). There were no remarkable discrepancies in pulmonary MDA and NO levels between the NC group and the LPS group (p > 0.05), indicating that the LPS injection did not change the pulmonary MDA and NO levels within 4 h. However, inosine pre-intervention at 100 mg/kg significantly decreased the pulmonary MDA levels (p < 0.05). In addition, the pulmonary ROS levels were strongly associated with the progression of lung inflammation. LPS injection significantly elevated the pulmonary ROS levels relative to the NC group (p < 0.05), suggesting that the model of lung injury was successfully established. Inosine pre-intervention decreased the pulmonary ROS levels in a dose-dependent manner. Thus, pre-intervention with inosine effectively improved LPS-induced oxidative stress in mice.

To understand the anti-inflammatory role of inosine in lung injury induced by LPS, pulmonary CAT, GSH-Px, SOD, MPO, and LDH activities were analyzed (Figure 3A). Relative to the NC group, the activity of pulmonary CAT was suppressed in the LPS group, although the discrepancy was not statistically remarkable (p > 0.05). Interestingly, oral administration of inosine at 100 mg/kg remarkably elevated the activity of pulmonary CAT compared with the LPS group (p < 0.05). In addition, the activities of pulmonary GSH-Px and SOD in the LPS group were remarkably inhibited relative to those of the NC group (p < 0.05), whereas inosine pre-intervention at 100 mg/kg recovered the activities of pulmonary GSH-Px and SOD (p < 0.05). LPS injection resulted in the elevation of the activities of pulmonary MPO and LDH. Nevertheless, inosine pre-intervention inhibited the activities of pulmonary MPO and LDH, especially at the dose of 100 mg/kg (p < 0.05). Thus, inosine ameliorated the pulmonary anti-inflammatory status in mice treated with LPS.
To further assess the amelioration effects of inosine in mice treated with LPS, the morphology of lung tissues was observed using H&E staining (Figure 3B). Complete structural integrity was observed in the mice of the NC group. In contrast, LPS treatment destroyed the pulmonary structure, thickened the alveolar septa, accelerated the accumulation of red blood cells, and induced inflammatory cell infiltration and edema of the alveolar wall. Notably, inosine pre-intervention dramatically alleviated the inflammation and alveolar septal changes in both the IN-L and IN-H groups, especially the IN-H group.
Inosine Pretreatment Modulates the Intestinal Microbiota's Structure
To investigate the effects of inosine on the intestinal microbiota, alpha diversity analysis, including the ACE, Chao1, and Shannon indices, was applied to evaluate the richness and diversity of the intestinal microbiota. There were no remarkable discrepancies in the ACE, Chao1, or Shannon indices of the intestinal microbiota between the NC group and the LPS group (p > 0.05) (Figure 4A). However, the Chao1 and Shannon indices were remarkably elevated after pre-intervention with inosine. Subsequently, trends in the intestinal microbial communities between the different groups were analyzed by PCoA (Figure 4B). Of the four groups, the IN-L group had the microbial community distribution most similar to the control and model groups, whereas the IN-H group showed the greatest discrepancy in microbial community compared with the other groups.

Subsequently, the microbial community composition at the phylum level was assessed (Figure 4C). The relative abundance of Verrucomicrobia and Tenericutes was significantly elevated in the LPS group relative to the NC group, while the relative abundance of Patescibacteria was decreased (Figure 4C). Interestingly, inosine pre-intervention significantly reduced the relative abundance of Actinobacteria, Bacteroidetes, Cyanobacteria, Deferribacteres, Firmicutes, Patescibacteria, Proteobacteria, Tenericutes, and Verrucomicrobia relative to the LPS group. Nevertheless, no differences in the composition of the intestinal microbiota were observed between the IN-L and IN-H groups.
At the genus level, there was a higher proportion of GCA-900066225, Romboutsia, Tyzzerella 3, Rumen bacterium, Clostridia bacterium, Barnesiella sp., Streptococcus, GCA-900066575, Anaeroplasma, Lachnospiraceae UCG-001, Coprococcus 2, Family XIII UCG-001, Enterorhabdus, and Clostridiales bacteria in the LPS group relative to the NC group, whereas inosine treatment clearly reversed those changes (Figure S1). Furthermore, the relative abundances of Clostridium sensu stricto 1, Peptococcus, Bacillus, Family XIII AD3011 group, and Ruminococcaceae UCG-005 in the LPS group were lower than in the NC group, but inosine treatment did not significantly affect the trend of those genera. Thus, our data suggest that inosine pre-intervention attenuated the intestinal microbial disorders in LPS-treated mice.
PICRUSt2 Analysis
PICRUSt2 was applied to investigate the predicted functions of the bacterial community members among the different groups. A total of 22 potential functional profiles of the intestinal microbiota were significantly altered between the NC and LPS groups (Figure S2), of which 5 were remarkably upregulated and 17 remarkably downregulated in the LPS group relative to the NC group. Notably, 24 potential functional profiles were significantly increased and 5 remarkably reduced in the IN-L group compared with the LPS group (Figure 5A), whereas 6 were significantly increased and 13 remarkably decreased in the IN-H group compared with the LPS group (Figure 5B). There were remarkable discrepancies in the potential functional profiles of the intestinal microbiota between the IN-L and IN-H groups, which may have been associated with the inosine-induced beneficial effects.
Associations between the Intestinal Microbiota and the Biochemical Indices Related to Lung Injury
Pearson's correlation analysis was applied to analyze the potential associations between the intestinal microbiota and the inflammatory indices related to lung injury (Figures 6 and S3). Interestingly, the pulmonary ROS levels were positively associated with the abundance of Rikenella, Lachnoclostridium, Lachnoclostridium 2, Lachnospiraceae FCS020 group, and Ruminiclostridium 6. The pulmonary SOD activities were positively associated with the proportions of Peptococcus, Family XIII AD3011 group, Eubacterium brachy group, and Negativibacillus, and were negatively associated with the abundance of Coprococcus 2, Eubacterium nodatum group, Romboutsia, and Ruminiclostridium 6. Furthermore, the activity of pulmonary GSH-Px was positively related to the proportion of Peptococcus and Family XIII AD3011 group, but was negatively related to the abundance of Coprococcus 2 and Enterorhabdus.
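A minimal version of the genus-by-marker correlation screen described above is shown below, using Spearman's rank correlation as specified in the Methods; the abundance and marker vectors are simulated placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 20  # number of mice

# Toy data: relative abundance of one genus and one pulmonary marker,
# with a positive association built in for illustration.
peptococcus = rng.random(n)
sod_activity = 0.8 * peptococcus + 0.2 * rng.random(n)

rho, p = spearmanr(peptococcus, sod_activity)
print(f"Peptococcus vs SOD: rho = {rho:.2f}, p = {p:.3g}")
# In the real analysis this is repeated for every genus x marker pair,
# and the resulting rho matrix is drawn as a heatmap / correlation network.
```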
Inosine Pretreatment Regulated the mRNA Expression of Genes Associated with Inflammation in Lung-Injured Mice
To explore the underlying mechanism of inosine in LPS-induced lung injury, the mRNA expression of genes related to inflammation was measured using RT-qPCR (Figure 7). Relative to the NC group, the transcription levels of MyD88, NF-κB, COX2, TNF-α, IL-1β, and IL-6 (but not TLR4) in the LPS group were dramatically upregulated (p < 0.05), but the transcription levels of Sirt1, Nrf2, HO-1, and IκBα in the LPS group were dramatically downregulated (p < 0.05). By contrast, pre-intervention with inosine dramatically reversed the changes in mRNA expression induced by LPS. In particular, inosine pre-intervention at 100 mg/kg remarkably downregulated the transcription levels of TLR4, MyD88, NF-κB, COX2, TNF-α, and IL-1β relative to the LPS group (p < 0.01), and significantly upregulated the transcription levels of Akt, Sirt1, Nrf2, HO-1, and IκBα (p < 0.05). These data suggest that inosine pre-intervention could improve LPS-induced lung injury by shifting the TLR4/MyD88/NF-κB signaling pathways.
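Relative transcription levels of this kind are conventionally derived with the 2^(-ΔΔCt) method; the Python sketch below shows that calculation only. The use of this exact method and of GAPDH as the reference gene are assumptions not stated in the text, and the Ct values are invented.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA expression of a target gene versus the control (NC) group."""
    d_ct = ct_target - ct_ref                 # delta-Ct in the treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # delta-Ct in the control sample
    return 2.0 ** -(d_ct - d_ct_ctrl)         # 2^(-delta-delta-Ct)
# e.g. a target amplifying ~3 cycles earlier (relative to GAPDH) than in NC mice:
print(fold_change(22.0, 18.0, 25.0, 18.0))    # ~8-fold upregulation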
Figure 7. Impact of inosine pre-intervention on the transcription of genes related to the TLR4/MyD88/NF-κB signaling pathway in mice treated with LPS. * p < 0.05; ** p < 0.01, relative to the LPS group.
Discussion
Lung injury is a complicated and lethal condition, which is universally characterized by higher levels of inflammation and abnormal oxidative stress status [21]. Therefore, the inhibition of inflammation accumulation is an essential strategy for preventing and/or treating lung injury. In the present study, inosine pre-intervention remarkably reduced the oxidative stress levels and elevated the activities of anti-inflammatory enzymes in LPS-treated mice. We also found that inosine pre-intervention significantly shifted the composition of the intestinal microbiota, and regulated the TLR4/MyD88/NF-κB signaling pathway.
Lung injury is regarded as being closely associated with high concentrations of pro-inflammatory cytokines, especially TNF-α, IL-6, and IL-1β [22]. TNF-α is mainly secreted by macrophages and cell debris, and is a vital player in many inflammatory diseases, such as liver inflammation [23], arthritis [24], and colitis [25]. Higher levels of TNF-α were also discovered in mice with early pulmonary fibrosis, which is consistent with the results of this study [22]. In addition, a previous study found that the secretion of TNF-α is regulated by IL-1β [26]. IL-1β is a central mediator in the initiation of inflammation that is produced by infiltrating myeloid cells. Abnormal levels of IL-1β can cause serious local or systemic inflammatory reactions [27]. Furthermore, IL-6 is another common inflammatory cytokine that is produced by a variety of different cell types, and has frequently been implicated in metabolic inflammation and glycolipid metabolism disorders [28]. The above results indicate that inosine pre-intervention at 100 mg/kg·day can effectively attenuate LPS-induced lung injury through suppressing the pulmonary pro-inflammatory cytokines (i.e., TNF-α, IL-1β, and IL-6).
Furthermore, oxidative stress has been recognized as a pivotal pathophysiological characteristic of lung injury. An abnormal oxidative stress status can accelerate the development of lung injury, and can even initiate lung cancer. It is well known that LPS-treated mice show abnormal oxidative stress status, such as increases in the levels of MDA and NO in the lungs, which is consistent with our results [29]. MDA serves as the end product of polyunsaturated fatty acid peroxidation, and can reflect the level of host oxidative stress to a certain extent. High concentrations of MDA damage the integrity of cells through the crosslinking and polymerization of proteins or DNA. In addition, NO is produced from arginine by nitric oxide synthase, and is widely distributed in the organism. NO is generally considered to be a marker of inflammation in the lungs because it can facilitate the secretion of pro-inflammatory cytokines [30]. Therefore, reducing the production of MDA and NO can be an effective measure to ameliorate lung inflammation. Interestingly, inosine pre-intervention minimized the increase in MDA and NO caused by LPS. In addition, the activities of MPO, LDH, CAT, GSH-Px, and SOD were measured in this study, in order to estimate the influence of inosine on antioxidant and anti-inflammatory properties in LPS-treated mice. MPO serves as a biomarker of neutrophils, as its concentration is positively associated with the amount of neutrophils in tissues. LDH mainly exists in the cytoplasm and is released upon cell death and local inflammation, and high activity of LDH is regarded as a biomarker of lung injury. Our results showed that inosine pre-intervention reduced the pulmonary MPO and LDH levels, indicating that inosine directly protected the lungs against LPS toxicity, and prevented the development of lung injury. In addition, LPS treatment can partially induce lung injury by inhibiting the antioxidant enzymes (such as SOD, GSH-Px, and CAT) [31]. Superoxide radicals are transformed by SOD into hydrogen peroxide, which is then degraded to water and oxygen by GSH-Px and CAT [31]. Therefore, increasing SOD, GSH-Px, and CAT activities is an important strategy to protect cells from oxidative stress. We found that inosine pre-intervention elevated the activities of pulmonary SOD, GSH-Px, and CAT, suggesting that the protective effects of inosine on LPS-induced lung injury may be attributed to the elevated levels of antioxidant enzymes.
ROS represent a wide class of molecules that play an essential role in the signaling pathways associated with inflammatory response. In this study, intraperitoneal injection of LPS induced increases in pulmonary ROS levels, which is consistent with previous findings [32]. ROS can easily destroy the structure of cells and the function of organs through changing the mitochondrial respiratory chain enzymes, lipid peroxidation, and modifications of membrane transport proteins [33]. In addition, overproduction of ROS subjects a biosystem to oxidative stress, resulting in the synthesis, secretion, and accumulation of pro-inflammatory cytokines [34]. Thus, the reduction of ROS is a vital measure for effectively maintaining cellular homeostasis. Our data showed that inosine pre-intervention remarkably reduced the pulmonary ROS levels, indicating that inosine may be a promising anti-inflammatory substance for relieving/treating lung damage.
The intestinal microbiota has been attracting a lot of attention because its composition is strongly associated with the host's physiology. Furthermore, increasing research has suggested that the increasing usage of active substances induces alterations of the intestinal microbiota, which can influence the host immune response and energy metabolism [35]. Firmicutes and Bacteroidetes are the two dominant phyla in the intestinal tract, accounting for more than 80% of the community. Thus, the balance of Firmicutes and Bacteroidetes is extremely important to maintain human health. Jia et al. found that a reduction in the Firmicutes/Bacteroidetes ratio may contribute to mitigating the host's systemic low-grade inflammation [36]. Moreover, patients with inflammatory bowel disease also exhibited increases in the Firmicutes/Bacteroidetes ratio, supporting the emerging view that the Firmicutes/Bacteroidetes ratio is closely associated with host inflammation [37]. Our results showed that inosine pre-intervention slightly reduced the Firmicutes/Bacteroidetes ratio, which may be beneficial for relieving the development of lung inflammation. In addition, patients with inflammatory bowel disease presented a higher abundance of Tenericutes when compared with healthy people [38]. Deferribacteres play a vital role in maintaining the bowel's iron balance, and their abundance is positively correlated with the risk of diseases [39]. In this study, we also found that inosine pre-intervention reduced the relative abundance of Tenericutes and Deferribacteres. Therefore, we speculate that inosine improved host immunity partially by altering the intestinal microbiota structure. At the genus level, Candidatus arthromitus plays a vital role in promoting immune system maturation [40]. Anaerostipes acts as one of the most important probiotics, is positively associated with L-glutamine, riboflavin B2, and IL-10 levels, and may confer beneficial effects on the cardiac function of the host [41]. Anaeroplasma, belonging to the class Mollicutes, is an opportunistic pathogen that can stimulate a series of immune responses [42]. Turicibacter has been described as being responsible for the process of intestinal inflammation [43]. As expected, inosine pre-intervention remarkably elevated the proportions of Candidatus arthromitus and Anaerostipes, and reduced the proportions of Anaeroplasma and Turicibacter. In addition, a previous report suggested that the abundance of Akkermansia and Bifidobacterium is positively associated with the intestinal inosine levels [10]. In the present study, inosine pretreatment slightly reduced the relative abundance of Akkermansia and Bifidobacterium in LPS-treated mice.
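As a small illustration of the ratio discussed above, the Python snippet below computes phylum-level relative abundances and the Firmicutes/Bacteroidetes ratio per sample. The counts and row labels are invented placeholders, not the study's sequencing data.

import pandas as pd
counts = pd.DataFrame(
    {"Firmicutes": [5200, 4800], "Bacteroidetes": [3100, 4000],
     "Tenericutes": [120, 60], "Deferribacteres": [80, 40]},
    index=["LPS_mouse", "IN_mouse"],
)
rel = counts.div(counts.sum(axis=1), axis=0)        # relative abundance per sample
fb_ratio = rel["Firmicutes"] / rel["Bacteroidetes"]
print(fb_ratio)  # a lower ratio in inosine-treated mice would match the text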
To deeply explore the underlying mechanisms of the ameliorating influence of inosine pre-intervention on lung injury induced by LPS, the transcription levels of genes related to the inflammatory response were analyzed. The TLR4/NF-κB/IκBα signaling pathway is generally considered to be the typical signaling pathway related to the production of immunomodulators by activating macrophages [44]. TLR4 serves as the specific receptor of LPS originating from Gram-negative bacteria, and is mainly distributed in monocytes, dendritic cells, and macrophages. Overexpression of TLR4 stimulates the intracellular signaling pathways associated with lung injury, including the upregulation of MyD88 and NF-κB transcription [45]. However, the activation of MyD88 could inhibit the transcription levels of IκBα, and then upregulate the transcription levels of NF-κB. Some reports showed that NF-κB contributes to impairing the structure of cells and tissues by elevating the secretion of some inflammatory factors [46]. Our data showed that the expression of TNF-α, IL-1β, and IL-6 was dramatically upregulated in the LPS group as compared with the NC group. These inflammatory cytokines play a vital role in the onset and development of inflammation in LPS-induced lung injury [47]. TNF-α is an outstanding regulator that induces pneumocyte apoptosis. A previous study showed that TNF-α production is one of the early stages of various types of inflammatory diseases, especially lung injury [48]. IL-1β is involved in the inflammatory and immune responses, promoting neutrophil recruitment and activating the site of inflammation [49]. In addition, IL-6 has been strongly correlated with inflammatory disease, and elevates NF-κB activation in the histiocytes [50]. Moreover, our results showed that LPS treatment could inhibit the transcription of Sirt1, Nrf2, and HO-1, and elevated the transcription of COX2, which is consistent with the findings of Han et al. [51]. Sirt1 serves as an immunity regulator involved in many inflammation-associated diseases. Sirt1 activation can control inflammatory responses by suppressing the transcription of NF-κB [52]. In addition, overexpression of Sirt1 can elevate the HO-1 expression and the activity of GSH-Px and CAT by stimulating Nrf2 expression [53]. HO-1 is an endogenous antioxidant enzyme whose transcription is regulated by Nrf2 [54]. HO-1 plays a vital role in maintaining the homeostasis of organelles in cells, and prevents the cell damage induced by inflammation and oxidative stress by ameliorating mitochondrial dynamics [55]. Therefore, inosine pre-intervention can ameliorate lung injury by altering the TLR4/MyD88/NF-κB signaling pathway.
Conclusions
In the present study, we found that inosine pre-intervention ameliorated LPS-induced lung injury by suppressing the secretion of pro-inflammatory cytokines, inhibiting oxidative stress and pulmonary ROS levels, and regulating TLR4/MyD88/NF-κB signaling. In addition, inosine pre-intervention shifted the structure of the intestinal microbiota, in particular by reducing the proportions of Tenericutes and Deferribacteres. These results are conducive to a further understanding of the mechanisms by which inosine pre-intervention alleviates lung injury by altering the TLR4/MyD88/NF-κB signaling pathway and the composition of the gut microbiota, and can hence guide the further development of inosine-related commodities.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/xxx/s1, Figure S1: Heatmap of relative abundance at the genus level; Figure S2: The remarkable differences in metabolic pathways between the LPS group and the IN-L group; Figure S3: Heatmap of Spearman's correlations between the key intestinal microbial phylotypes and the parameters related to acute liver injury and inflammation; Table S1: Primer sequences for quantitative real-time PCR of hepatic genes. | 2022-07-13T16:33:06.130Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "a1fab9c407a17c2b5a13753298855643382e8237",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/14/14/2830/pdf?version=1657618059",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8f706713cf5e609cbe1d6a988f2a8bc2b0ec1f4",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
78402852 | pes2o/s2orc | v3-fos-license | TOOTH SENSITIVITY AMONG RESIDENTIAL UNIVERSITY STUDENTS IN CHENNAI
Objective: To assess the experience of TS of residential university students from different universities in Chennai. Methods: Students of different colleges were given questionnaires on TS. The answered questionnaires were then analyzed in SPSS online software and the results were found. Results: From the data, it is inferred that knowledge of treatment for TS is not well known among students, and that improper brushing techniques and soft drink consumption are the main reasons for TS. Conclusion: From the data, one can infer that TS has become quite prevalent in today's society, with little awareness of its treatment. People should be taught to maintain their oral hygiene, proper brushing techniques should be taught, and diet should be adjusted according to health. Soft drinks should be avoided to the greatest extent possible, and treatment for sensitivity should be sought promptly.
INTRODUCTION
Tooth sensitivity (TS) or dentine hypersensitivity has been defined as the sudden or transient pain arising from exposed dentine on contact with chemical, thermal, tactile, or osmotic stimuli, which cannot be ascribed to any other dental defect or pathology [1]. Cold and air stimulations are known to be the most common triggers [2,3], and have been shown to have a significant potential in evoking dentine sensitivity [4]. This sensitivity can be explained by the hydrodynamic theory, which states that the flow of fluid in dentinal tubules triggers receptors within the tooth; it is the most widely accepted of the stated theories explaining TS [5]. Dentine hypersensitivity affects eating, drinking, and breathing. Increased sensitivity hampers the ability to control dental plaque effectively and can thereby disturb the maintenance of one's oral health. Severe hypersensitivity may even result in emotional changes that can alter lifestyle [6]. In general, a slightly higher incidence of dentine hypersensitivity is reported in females [7,8], which was said to reflect their overall health care and better oral hygiene awareness [9]. Patients often fail to seek proper care for dentinal sensitivity because the condition is felt only when stimulated, so they develop the adaptive behavior of restricting themselves from stimulants and avoiding use of the affected side of the mouth [10,11]. This survey was done to determine the prevalence of TS and the habits and lifestyles of the people affected by dentine hypersensitivity among residential students, and thereby provide an insight into the leading causes of TS, inform treatment, and help raise awareness.
METHODS
The survey was conducted on the residential students of five universities situated in Chennai. The students were randomly asked if they suffered from TS and, with their consent, those who did were given questionnaires to fill in. The number of students without TS was also noted. These data were then analyzed using SPSS software and presented in the form of tables.
RESULTS
The survey mainly focused on the students suffering from TS; of the 217 students questioned, 110 were found to suffer from TS. The following data show the habits and potential causes of dentin hypersensitivity.
DISCUSSION
From the data, it was evident that almost half of the students who responded to the questionnaire were affected by dentine hypersensitivity. Males were found to be more prone to TS when compared with females. The major stimulus for sensitivity was cold [7,[12][13][14]. In the case of food at sensitive temperatures, people often wait for the food to come to normal temperature, which may lead to social discomfort in the presence of other people. It was also found that medium-bristle toothbrushes were more commonly in use, and the force of brushing was found to be vigorous in nature, which may lead to many gum problems such as gum bleeding and gum recession that can expose the underlying dentin [15]. Hard brushing may also lead to enamel erosion, which also exposes the dentin [16]. Unorthodox tooth brushing, such as using hard brushes, excessive force, or excessive scrubbing at the cervical areas, or even a lack of brushing, can lead to accumulation of plaque and gingival recession [17,18]. It was found that many students had been suffering from TS for more than 2 years without any treatment; this may be explained by scientists who have postulated that many patients assume that their condition is a natural occurrence developing with age or that it is untreatable [11]. It was interesting to note that more than half of the students had soft drinks regularly, and this may also play a considerable role in TS due to enamel erosion compounded by bad brushing habits. Erosive agents also play an important role in the progression of TS as they tend to remove or erode the enamel or open up the dentinal tubules, thereby exposing them [19,20]. It was also found that almost 80% of the students brush their teeth only once per day. Among the few students who had sensitivity arising from dental procedures, scaling was one of the main causes of TS [21]. Gum problems and gastric problems, along with grinding of teeth, were found to be low among the students, which is consistent with a previous study [22]. Smoking was found in less than half of the students, while alcohol consumption was very low.
CONCLUSION
From the data, one can infer that TS has become quite prevalent in today's society, with little awareness of its treatment. People should be taught to maintain their oral hygiene, proper brushing techniques should be taught, and diet should be adjusted according to health. Soft drinks should be avoided to the greatest extent possible, and treatment for sensitivity should be sought promptly.
"year": 2016,
"sha1": "073029536777b965646c91db7a85d37d9684b3c3",
"oa_license": "CCBYNC",
"oa_url": "https://innovareacademics.in/journals/index.php/ajpcr/article/download/13228/8124",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3292c93575f119072494931a056473feb70d148b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268288571 | pes2o/s2orc | v3-fos-license | Genetic correlations and causal relationships between cardio-metabolic traits and sepsis
Cardio-metabolic traits have been reported to be associated with the development of sepsis. It is, however, unclear whether these co-morbidities reflect causal associations, shared genetic heritability, or are confounded by environmental factors. We performed three analyses to explore the relationships between cardio-metabolic traits and sepsis: (1) a Mendelian randomization (MR) study to evaluate the causal effects of multiple cardio-metabolic traits on sepsis; (2) a global genetic correlation analysis to explore the correlations between cardio-metabolic traits and sepsis; and (3) a local genetic correlation (GC) analysis to explore shared genetic heritability between cardio-metabolic traits and sepsis. Some loci were further examined for related genes responsible for the causal relationships. Genetic associations were obtained from the UK Biobank data or published large-scale genome-wide association studies with sample sizes ranging from 200,000 to 750,000. In MR, we found a causal effect of body mass index (BMI) on sepsis (OR: 1.53 [1.40-1.67]; p < 0.001), which was confirmed by sensitivity analyses and by multivariable MR adjusting for confounding factors. Global GC analysis showed a significant correlation between BMI and sepsis (rg = 0.55, p < 0.001). Further cardio-metabolic traits were identified as correlated with sepsis onset, such as CRP (rg = 0.37, p = 0.035), type 2 diabetes (rg = 0.33, p < 0.001), HDL (rg = -0.41, p < 0.001), and coronary artery disease (rg = 0.43, p < 0.001). Local GC revealed some shared genetic loci responsible for the causality. The top locus, 1126, was located on chromosome 7 and comprised the genes HIBADH, JAZF1, and CREB5. The present study provides evidence for an independent causal effect of BMI on sepsis. Further detailed analysis of the shared genetic heritability between cardio-metabolic traits and sepsis provides the opportunity to improve the preventive strategies for sepsis.
Genome-wide significant SNPs for each exposure were selected as the instrumental variables. To do so, we selected GWAS-significant SNPs with p < 5 × 10⁻⁸ and then performed LD clumping with LD r² < 0.001 within a 10,000 kb window. The secondary clumping threshold was p = 5 × 10⁻⁸. The extracted SNPs were then queried against the requested outcome of sepsis/sepsis (under 75). If a particular SNP is not present in the outcome dataset, it is possible to use SNPs that are LD 'proxies' instead. Proxies (LD tags) with a minimum LD r² value of 0.8 were looked for, and the tag alleles were aligned to the target alleles. The effects of an SNP on the outcome and the exposure were then harmonized to be relative to the same allele. The heterogeneity statistics were reported to assess the robustness of the causal relationships. The result from each SNP was treated as an independent RCT, and the results from all SNPs were pooled with a meta-analytic approach to obtain an overall causal estimate 17,18. The effect size for each meta-analysis is reported in the main results as the effect of a one-standard-deviation (1-SD) change in continuous traits (log transformation was applied if necessary). To examine whether the effect of BMI was independently associated with sepsis, we performed multivariable MR analysis. For each exposure, the instruments are selected, and then all exposures for those SNPs are regressed against the outcome together, weighting for the inverse variance of the outcome.
Pleiotropy is the phenomenon of a single genetic variant influencing multiple traits, which can lead to false-positive conclusions. We therefore used multiple MR methods for the causal effect estimations, namely MR-Egger, weighted median, inverse variance weighted, simple mode, and weighted mode. We evaluated directional pleiotropy based on the intercept obtained from the MR-Egger analysis 19. We also performed a leave-one-out analysis in which we sequentially omitted one SNP at a time, to evaluate whether the MR estimate was driven or biased by a single SNP. The TwoSampleMR (v0.5.6) package was employed for this analysis. We followed the reporting guideline Strengthening the Reporting of Observational Studies in Epidemiology using Mendelian Randomization (STROBE-MR) 20.
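To make the meta-analytic pooling concrete, the Python sketch below implements the basic fixed-effect inverse-variance-weighted estimator from per-SNP summary statistics. This is an illustrative re-implementation, not the TwoSampleMR code actually used, and the three SNP effect values are invented.

import numpy as np
def ivw_estimate(beta_exp, beta_out, se_out):
    """Pool per-SNP Wald ratios (beta_out / beta_exp) with 1/SE^2 weights."""
    wald = beta_out / beta_exp
    se_wald = se_out / np.abs(beta_exp)  # first-order delta-method approximation
    w = 1.0 / se_wald**2
    beta_ivw = np.sum(w * wald) / np.sum(w)
    se_ivw = np.sqrt(1.0 / np.sum(w))
    return beta_ivw, se_ivw
beta_exp = np.array([0.05, 0.08, 0.03])    # SNP effects on the exposure (per SD)
beta_out = np.array([0.02, 0.035, 0.012])  # SNP effects on sepsis (log-odds)
se_out = np.array([0.004, 0.006, 0.005])
b, se = ivw_estimate(beta_exp, beta_out, se_out)
print(np.exp(b))  # causal odds ratio per 1-SD increase in the exposure

Each Wald ratio plays the role of one independent 'RCT', and the IVW estimate is their precision-weighted average, matching the description above.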
Global genetic correlation analysis
The above-mentioned Mendelian randomization uses significantly associated SNPs as instrumental variables to quantify causal relationships between the exposure and outcome. This is effective for traits where many significant associations account for a substantial fraction of heritability. However, for many complex traits, heritability is distributed over thousands of variants with small effects; thus, genetic correlation was estimated using genome-wide data rather than data for only significantly associated variants, to obtain more accurate results. Global genetic correlation (rg) analysis was performed using cross-trait LD Score regression 10. The method relies on the fact that the GWAS effect size estimate for a given SNP incorporates the effects of all SNPs in linkage disequilibrium (LD) with that SNP. For a polygenic trait, SNPs with high LD will have higher χ² statistics on average than SNPs with low LD. A similar relationship holds if we replace the χ² statistics for a single study with the product of the z scores from two studies of traits with non-zero genetic correlation. The Python package LDSC (LD Score; v1.0.1) was employed for the analysis.
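The core regression idea can be shown in a few lines of Python: per-SNP z-score products are regressed on LD scores, and a positive slope indicates positive genetic covariance. The simulation below is purely conceptual; it is not the LDSC software, and all numbers are made up.

import numpy as np
rng = np.random.default_rng(1)
m = 100_000                                   # number of SNPs
ld_scores = rng.gamma(shape=2.0, scale=50.0, size=m)
# Simulate z-score products whose mean increases with the LD score
z_product = 5e-5 * ld_scores + rng.normal(0.0, 1.0, size=m)
slope, intercept = np.polyfit(ld_scores, z_product, 1)
print(slope, intercept)  # slope > 0 suggests positive genetic covariance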
Local genetic correlation analysis
A global rg represents an average of the shared association across the genome; local rgs in opposing directions could result in a nonsignificant global rg, and local rgs in the absence of any global relation may be undetected. Thus, we performed local genetic correlation analysis using LAVA (Local Analysis of [co]Variant Association) 21. Sample overlap was estimated using the intercepts from bivariate LDSC. The European panel of phase 3 of 1000 Genomes (MAF > 0.5%) was employed as an LD reference 22. The genomic loci were created by partitioning the genome into blocks of approximately equal size (~1 Mb) while minimizing the LD between them. For each phenotype pair (traits versus sepsis), the loci were first filtered by the univariate test so that both phenotypes exhibited a univariate signal at Holm-corrected p < 0.05. Multivariate genetic association analysis can be performed via either partial correlation or multiple regression. The analysis was performed with the R package LAVA (v0.1.0) 21.
Ethics approval and consent to participate
The study was conducted by secondary analysis of data from other studies, and informed consent was obtained from participants or their family members in the original studies.
The causal association between cardio-metabolic traits and sepsis
Genetically predicted larger BMI (each 1-SD increase) was associated with a significantly higher risk of sepsis (OR: 1.53 [1.4-1.67]; p < 0.001 by the IVW method). As expected, the associations were consistent in sensitivity analyses using the MR-Egger method (OR: 1.49 [1.18-1.88]; p < 0.001) and the weighted median method (OR: 1.5 [1.29-1.74]; p < 0.001, Fig. 1), although the latter two methods provided less precise estimates than the conventional IVW method. In a leave-one-out sensitivity analysis, we found that no single SNP was strongly driving the overall effect of BMI on sepsis (Fig. 2A,C). The MR regression slopes are illustrated in Fig. 2B. There was no evidence for the presence of directional pleiotropy in the MR-Egger regression analysis; the p-values for the intercepts were large, and the estimates adjusted for pleiotropy suggested null effects (Egger intercept = 0.00047, p = 0.81; SDC Table S1). These results were in line with the hypothesis that genetic pleiotropy was not driving the result. No significant heterogeneity was identified for the causal effect of BMI on sepsis (Q = 511 for MR-Egger, p = 0.123; Q = 511 for the IVW method, p = 0.129; SDC Table S2).
To examine whether the effect of BMI was independently associated with sepsis, we performed multivariable MR analysis. The results showed that BMI was independently associated with sepsis risk (adjusted OR: 1.29; 95% CI: 1.09-1.52), while other cardio-metabolic traits were no longer associated with sepsis risk (Fig. 3). Similar results were reproduced by restricting to sepsis under 75 years old (adjusted OR: 1.21; 95% CI: 1.04-1.41), although the magnitude was lower. This result indicated that the causal effects of type 2 diabetes, LDL, and HDL could be explained by BMI.
Global genetic correlation analysis
Since sepsis is a complex trait whose development is driven by thousands of genetic variants, each with small effects, the genetic correlation was estimated using genome-wide data rather than data for only significantly associated variants, to obtain more accurate results (SDC Table S4). Compared with the MR analysis, more cardio-metabolic traits were identified as correlated with sepsis onset, such as CRP (rg = 0.37, p = 0.035), type 2 diabetes (rg = 0.33, p < 0.001), HDL (rg = -0.41, p < 0.001), coronary artery disease (rg = 0.43, p < 0.001), and BMI (rg = 0.55, p < 0.001). The results were consistent for sepsis under 75 (Fig. 4A). There were other cross-trait correlation pairs, such as type 2 diabetes and HDL cholesterol, and CRP and BMI (Fig. 4B).
Local genetic correlation analysis
We applied LAVA to the sepsis outcome and cardio-metabolic traits (Table 1), testing the pairwise local rgs within 2495 genomic loci (genome-wide). The genomic loci were created by partitioning the genome into blocks of approximately equal size (~1 Mb) while minimizing the LD between them, and the genomic coordinates are in reference to human genome build 37. Sample overlap was estimated using the intercepts from bivariate LDSC obtained in the above section. With a Holm-corrected p < 0.05, we detected 572 significant bivariate local rgs across 318 loci, of which 140 loci were associated with more than one phenotype pair. Figure 5A shows the correlation between cardio-metabolic traits and the sepsis outcome. The correlation strength, as measured by the number of significant local rgs, was consistent for sepsis and sepsis under 75. BMI showed the largest number of significant rgs, followed by HDL, CRP, and CAD. For most significant correlations, 95% confidence intervals (CIs) for the explained variance included 1, consistent with the scenario that the local genetic signal of those phenotypes is completely shared (Fig. 5B).
We further examined the three top loci that had the largest number of significant correlations for possible genes driving these traits (Fig. 6A-C). Locus 1126 had the greatest number of significant rgs, showing positive rgs for BMI and CAD, and negative rgs for HDL and eosinophil cell count (Fig. 6B).

For complex traits such as sepsis, there can be thousands of SNPs with small effects responsible for the heritability; thus, global GC can help to address this issue. Cardio-metabolic traits have been explored in other epidemiological studies for their associations with the risk of sepsis development and/or sepsis severity. For example, in a large multi-center cohort study, lower BMI (< 20 kg/m²) was associated with reduced mortality in patients with bloodstream infection 25. A compelling body of evidence from MR studies has significantly contributed to our understanding of the relationship between obesity and sepsis 26,27. The pathogenetic pathways connecting BMI or obesity to sepsis risk are multifaceted. Chronic low-grade inflammation, altered immune responses, and metabolic dysregulation have emerged as key contributors [28][29][30]. Studies have elucidated the impact of adipose tissue-derived inflammatory mediators on immune function, potentially predisposing obese individuals to an exaggerated inflammatory response during infections 31,32. However, studies conducted in critical care settings showed that greater BMI was associated with improved survival, which is known as the obesity paradox in the intensive care unit (ICU) [33][34][35]. Probably, the pathophysiology of critical illness differs from that in the non-critical care setting. Critically ill patients are more likely to benefit from a greater BMI and long-term exposure to low-grade metabolic inflammation. Possible pathological mechanisms underlying the obesity paradox include higher energy reserves, inflammatory preconditioning, an anti-inflammatory immune profile, and endotoxin neutralization 36. Furthermore, our study focused on sepsis predisposition rather than the mortality risk after the development of sepsis. It should be emphasized that susceptibility to sepsis is not equivalent to sepsis severity. Epidemiological studies of sepsis predisposition are usually performed in patient populations that are not critically ill, and long-term exposure to metabolic inflammation increases the risk of sepsis 37,38.
Although the MR technique employs genetic variants as the IVs, which are less likely to be affected by environmental confounding factors, violations of the other IV criteria are still great threats to causal inference, such as the pleiotropic effects of genetic variants. To account for this bias, we first employed Egger's method, which failed to identify statistically significant pleiotropic effects. The results were robust in the sensitivity analysis restricting to sepsis under 75. Then, we performed multivariable MR analysis using covariates known to be associated with sepsis, such as CRP, type 2 diabetes, and neutrophil counts. After covariate adjustment, BMI remained independently associated with sepsis. Furthermore, we also performed a leave-one-out analysis to test whether any SNPs significantly drove the results. The results revealed that no single SNP strongly drove the overall effect of BMI on sepsis.
Although MR analysis consistently showed causal effects of BMI on sepsis predisposition, it was not able to reveal the underlying mechanisms responsible for the association. Local GC analysis may help to reveal some potential pathways mediating the linkage. By examining genes residing within the top loci, we identified some potential pathways related to inflammatory responses. For example, in the top locus 1126, we found several genes that play key roles in inflammatory responses, including HIBADH, JAZF1, and CREB5. JAZF1 encodes a nuclear protein with three C2H2-type zinc fingers and functions as a transcriptional repressor. Genetic variations in this gene are correlated with decreased body mass index (BMI) and waist circumference 39,40. Further experimental studies confirmed its important role in adipocyte differentiation, obesity, insulin resistance, and inflammation 41,42.
In conclusion, our MR study establishes the causal effects of increased BMI on sepsis development.While more work is needed to understand the pathophysiology explaining these associations, an underlying derangement in inflammation should be suspected.
Figure 1. Forest plots showing the causal effects of cardio-metabolic traits on the risk of sepsis. Inverse variance weighted estimates were performed. Sensitivity analysis was performed by restricting to sepsis under 75 years old.
Figure 3. Multivariable MR analysis to adjust for possible confounding factors. The error bar indicates a 95% confidence interval.
Figure 4. Global genetic correlations across sepsis and cardio-metabolic traits. (A) Genetic correlation for top pairs of cardio-metabolic traits and sepsis; (B) heatmap plot showing the genetic correlation across each pair of traits.
Figure 6. Sample loci with the top number of significant traits. (A) The top 3 loci with the largest number of significant traits; genetic correlation network between traits for locus 1126 (B) and 2036 (C). The red color indicates a negative correlation, and the blue color indicates a positive correlation. The number on the line indicates the genetic correlation (rg). Each green node represents a trait.
Table 1. Data used for the Mendelian randomization analysis. For categorical outcome data, participant numbers were split into cases and controls. | 2024-03-10T06:17:33.103Z | 2024-03-08T00:00:00.000 | {
"year": 2024,
"sha1": "f13af19fe447c54b6316a6015c45cead0c0c91e7",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b08a229dd6562790db6a2bb131b6f37914334376",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260947174 | pes2o/s2orc | v3-fos-license | The Use of Multidimensional Nomial Logistic Model and Structural Equation Model in the Validation of the 14-Item Health-Literacy Scale in Chinese Patients Living with Type 2 Diabetes
Objective: To evaluate the psychometric properties of the 14-item health literacy scale (HL-14) in patients living with type 2 diabetes mellitus (T2DM) in a clinical setting. Methods: A cross-sectional design using item response theory and structural equation modeling (SEM) was adopted to test the item difficulty and the three-dimensional HL configuration. Chinese patients living with T2DM admitted to the endocrinology department of Huadong Hospital were evaluated with the HL-14, covering communicative, functional, and critical health literacy, from August to December 2021. Results: The multidimensional random coefficients multinomial logistic model indicated that the difficulty settings of the scale are appropriate for the study population, and differential item functioning was not observed for sex in the study. SEM demonstrated that the three-dimensional configuration of the scale is good in the study population (χ²/df = 2.698, Comparative Fit Index = 0.965, Root Mean Square Error of Approximation = 0.076, standardized root mean square residual = 0.042). Conclusion: The HL-14 scale is a reliable and valid measurement, which can perform equitably across sex in evaluating health literacy in Chinese patients living with T2DM. Moreover, the scale may help fill the gaps in multidimensional health literacy assessment and rapid screening of health literacy ability for clinical practice.
health optimally, and it has been shown that good HL can predict better diabetes-related knowledge, self-care behavior, and glycemic control, and lower odds of micro- and macrovascular complications. [5][6][7] A meta-analysis indicated that HL was positively associated with medication adherence and adherence outcomes, and the impact of HL was even greater in samples of lower-income and racial/ethnic-minority patients. 8 Moreover, HL may also have a crucial effect on patients' self-efficacy for medication use in the context of self-management behaviors. [9][10][11] However, poor HL is prevalent among T2DM patients and is associated with increased health care costs. [12][13][14][15] This may be because inadequate health literacy is significantly associated with an increased prevalence of chronic conditions (cerebral, vascular, cancer, diabetes, and arrhythmias), higher hospitalization rates, and poor access to medical care. 16,17 Several instruments have been developed to assess HL in clinical practice. The Rapid Estimate of Adult Literacy in Medicine (REALM) and the Test of Functional Health Literacy (TOFHLA) are the most commonly used measurements in clinical studies. 18,19 In the former, HL is evaluated with word-recognition tests; the latter combines HL and numeracy tests. Although there are Chinese editions of these HL tests, such as the simplified Chinese version of the S-TOFHLA, 20 these measurements are not commonly used in mainland China, probably owing to the different morphological typology of English and Chinese. Moreover, although multiple dimensions are indicated in the model of health literacy, most measurements, such as the TOFHLA and REALM, pay close attention only to functional HL. 21 Functional HL, defined as the basic skills in reading and writing needed to function effectively in daily situations, is only the narrow definition of "health literacy"; together with the more advanced cognitive and literacy skills, including communicative literacy (the skills used to actively participate in daily activities, extract and derive information from different forms of communication, and apply new information to changing circumstances) and critical literacy (the skills applied to critically analyze information so as to exert greater control over life events and situations), Nutbeam indicated that the different levels of HL progressively allow for greater autonomy and personal empowerment. 22 Hence, measurements for the more dimensional concepts of health literacy are warranted. The 14-item health literacy scale (HL-14) is a brief measurement used for measuring HL in clinical and public health contexts. 23 The biggest advantage of the HL-14 is that it evaluates three dimensions of HL, namely patients' functional, communicational, and critical ability when facing healthcare information. The measurement is a Likert scale with good reliability and validity, and has been translated into a Brazilian Portuguese version, which also showed good internal consistency and three-dimensional model fit. 24 Using the HL-14, Shirooka reported that non-frail community-dwelling older adults showed higher HL ability, indicating the importance of HL in maintaining good status in these people. 25 So far, measurements such as the HL-14 that are suitable for rapid clinical screening are still unavailable in clinical practice in China.
Therefore, the purpose of this study was to translate the HL-14 into Chinese and validate the measurement in a sample of patients living with T2DM. To avoid the limitations of Classical Test Theory (CTT), the study adopted Item Response Theory (IRT) to investigate the psychometric properties of the HL-14 scale; the appropriateness of the item difficulty settings and differential item functioning could also be checked within this framework. Moreover, for further interrogation of the three-dimensional configuration of the HL-14, structural equation modeling (SEM) was used.
Study Population
Participants were eligible for the study if they had T2DM and could communicate with the investigators fluently without help from others. All participants were approached during their hospitalization in the endocrinology department of Huadong Hospital from August to December 2021. Exclusion criteria were the following: type 1 diabetes and other specific types of diabetes; diabetic ketoacidosis; diabetic coma; and inability to communicate independently or refusal of the assessment.
The sample size was computed based on the number of free parameters of the measurement, as suggested by experts. As the original Japanese edition of the HL-14 contains fourteen free parameters, the ideal sample size would be 280 patients according to the 1:20 ratio. 26
Instruments
The Chinese edition of the HL-14 is a five-point Likert scale on which participants indicate how much they agree or disagree with each item. The scores on the items were summed to give the total HL score, as well as the HL score of each dimension. Higher scores indicate greater HL. 23 The other two measurements used in the study were the Adherence to Refills and Medication Scale (ARMS) and the Charlson comorbidity index (CCI), which were used to evaluate the medication adherence and the comorbidity state of the study group. 27,28 The ARMS is a four-point Likert scale with a 12-item set, a valid and reliable measurement that has been widely used among patients with chronic disease and low literacy. 27 The total score of the ARMS ranges from 12 (best) to 48 (worst); lower ARMS scores indicate better adherence to the medical regimen. The CCI is the most commonly used comorbidity index and was developed to predict death from comorbid disease. 28 It contains 19 items, such as diabetes with diabetic complications, congestive heart failure, and peripheral vascular disease, each of which is weighted according to its potential influence on mortality. The instrument has been widely used in patients living with diabetes in clinical studies. [29][30][31][32] A minimal scoring sketch for these instruments is given after the following subsection.

Translation and Adaptation of the HL-14 Scale

The translation and adaptation process of the HL-14 scale was conducted according to Brislin's model for cross-cultural research as follows: 33 (1) Forward translation: two bilingual clinical pharmacists translated the HL-14 from English to Chinese independently to form the first two Chinese versions of the HL-14; (2) Back translation: another two bilingual clinical pharmacists who were blind to the original English version back-translated the two translated versions from Chinese to English; (3) First group discussion: the four translators discussed the differences among the above-mentioned translations and adapted the Chinese version of the HL-14 to achieve the most accurate meaning and the most suitable expression for patients living with T2DM; (4) Second group discussion: two new bilingual clinical pharmacists back-translated the adapted Chinese version of the HL-14 to English, and then a second discussion among these six translators was held to review the two new back-translations. Any discrepancies were discussed until consensus was achieved. The professor of clinical pharmacy, Chen Z.L., a doctoral supervisor of pharmacy, confirmed the accuracy between the original and back-translated versions of the HL-14, and the Chinese version of the HL-14 was considered the final version for psychometric testing, Table 1.
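The scoring sketch promised above follows, in Python. It is illustrative only: the assignment of items 1-5, 6-10, and 11-14 to the functional, communicative, and critical dimensions is an assumption based on the scale's description, and the example responses are invented.

def score_hl14(responses):
    """Score the HL-14: responses is a list of 14 integers, each 1-5."""
    assert len(responses) == 14 and all(1 <= r <= 5 for r in responses)
    return {
        "total": sum(responses),
        "functional": sum(responses[0:5]),      # items 1-5 (assumed split)
        "communicative": sum(responses[5:10]),  # items 6-10 (assumed split)
        "critical": sum(responses[10:14]),      # items 11-14 (assumed split)
    }
def score_arms(responses):
    """Score the ARMS: 12 integers, each 1-4; lower totals mean better adherence."""
    assert len(responses) == 12 and all(1 <= r <= 4 for r in responses)
    return sum(responses)  # ranges from 12 (best) to 48 (worst)
print(score_hl14([4] * 14))  # a uniformly agreeable respondent
print(score_arms([1] * 12))  # perfect adherence -> 12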
Data Collection
This is a cross-sectional study. All participants were approached during their hospitalization in the endocrinology department. The HL-14 scale was given to the participants by the clinical pharmacist and completed by the participants themselves. All questionnaires were collected on site and checked for completeness. Participants' information, including age, sex, duration of diabetes, education level, and laboratory test results, was retrieved from the Huadong Hospital Information System.
Statistical Methods
The IRT and SEM were used to explore the reliability and validity of the HL-14 scale. The strength of IRT in analyzing Likert scales has been reported by Bond and Fox. 34 The principal advantage of IRT over CTT lies in the invariance of its parameters and in estimating each item difficulty and person ability on a common logit scale, which can be displayed vividly by an item-person map (Wright map). SEM is a statistical method for analyzing the relationships between variables based on the covariance matrix, enabling researchers to test a set of regression equations simultaneously.
This study adopted the multidimensional random coefficients multinomial logistic model (MRCMLM) to examine the psychometric properties of the HL-14 scale. MRCMLM is a development of the Rasch models of IRT (eg, Dichotomous Model, Rating Scale Model, Partial Credit Model) that incorporates multiple dimensions of the measurement procedure. The adjacent-category form of the model, reconstructed here from the definitions that follow, can be written as:

log(P_nij / P_ni(j−1)) = θ_n − d′_ij ξ

where P_nij and P_ni(j−1) refer to the probability of a person of ability θ_n being observed as responding to category j or to the lower category j−1, respectively, of a rating scale on a particular item i of difficulty ξ, with d_ij and d_i(j−1) held as design vectors expressing the linear combinations within ξ. ACER ConQuest 5.12.3: Generalised Item Response Modeling Software was used to perform the MRCMLM analysis in this study. The Wright map was checked for the items and participants in terms of their difficulty or location on the same logit scale. Moreover, the existence of differential item functioning (DIF) with respect to sex in the HL test was also explored using ACER ConQuest 5.12.3.
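For intuition, the Python snippet below computes the response-category probabilities of the simpler unidimensional partial credit model that the MRCMLM generalizes, for a single 5-point item. The ability and threshold values are illustrative, not the fitted ConQuest estimates.

import numpy as np
def category_probs(theta, deltas):
    """P(response = j | ability theta) for step difficulties delta_1..delta_J."""
    # Cumulative sums of (theta - delta_k); the empty sum for category 0 is 0
    cum = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    num = np.exp(cum)
    return num / num.sum()  # probabilities over categories 0..J
print(category_probs(theta=0.5, deltas=[-1.2, -0.4, 0.3, 1.1]))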
In the SEM, fit indices including χ²/df, the Comparative Fit Index (CFI), the Normed Fit Index (NFI), the Root Mean Square Error of Approximation (RMSEA), the Tucker-Lewis Index (TLI), and the standardized root mean square residual (SRMR) were computed in the process of confirmatory factor analysis (CFA), and the cut-offs for these model fit indices were set based on Hu and Bentler. 35 The factor model is acceptable and appropriate if the results of CFA demonstrate that TLI ≥ 0.95, CFI ≥ 0.95, NFI ≥ 0.95, SRMR < 0.06, and RMSEA < 0.08. Moreover, modification indices (MIs) were checked to interrogate the independence of the residuals. Once correlated residuals were identified, all authors reviewed and discussed the related observed variables that may share conceptually overlapping meanings. AMOS 21.0 (SPSS, Chicago, IL, USA) was used for the CFA in SEM.
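For reference, the Python sketch below shows how several of these fit indices are derived from the model and baseline chi-square statistics. All numeric inputs are illustrative placeholders; in particular, the degrees of freedom, the baseline chi-square, and the sample size are assumptions, chosen only so the output lands near plausible values.

import math
def fit_indices(chi2, df, chi2_null, df_null, n):
    """chi2/df, CFI, TLI, and RMSEA from model and baseline (null) statistics."""
    excess = max(chi2 - df, 0.0)
    cfi = 1.0 - excess / max(chi2_null - df_null, excess, 1e-12)
    tli = ((chi2_null / df_null) - (chi2 / df)) / ((chi2_null / df_null) - 1.0)
    rmsea = math.sqrt(excess / (df * (n - 1)))
    return {"chi2/df": chi2 / df, "CFI": cfi, "TLI": tli, "RMSEA": rmsea}
print(fit_indices(chi2=199.7, df=74, chi2_null=3800.0, df_null=91, n=295))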
The other analyses including demographic information, Pearson correlation (numerical variables) and Spearman correlation (categorical variables) were also performed in this study using SPSS Statistics 23.0.
MRCMLM Summary Statistics
The MRCMLM indicated that the Coefficient Alpha is 0.94, which shows high internal consistency of the scale items. The average dimension estimates of functional HL, communicational HL, and critical HL were 0.694, 0.552, and 0.600, respectively. The correlation was moderate between functional HL and communicational HL (0.684) or critical HL (0.672), and high between communicational HL and critical HL (0.949). Table 3 indicates the internal consistency and the floor and ceiling effects of the scale.
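For reference, Coefficient (Cronbach's) Alpha can be computed from a respondents-by-items response matrix as in the Python sketch below. The simulated responses are placeholders for the real HL-14 data, and the sample size of 300 is an arbitrary choice; correlated items push alpha toward 1.

import numpy as np
def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
rng = np.random.default_rng(2)
base = rng.integers(1, 6, size=(300, 1))  # a shared "ability" per respondent
data = np.clip(base + rng.integers(-1, 2, size=(300, 14)), 1, 5)
print(cronbach_alpha(data))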
Item Analysis
The results of the response model parameter estimates and item difficulties calibrated by the MRCMLM analysis are presented in Table 4. Item estimates range from −0.597 to 0.635, with item 1 being the easiest and item 2 being the
hardest. In the present MRCMLM analyses, the infit and outfit range of 0.5 to 1.7 was adopted to investigate the reliability and validity of the HL-14 scale, following Yu Minning. 36 The outcome of the response model showed that all items fit in this range except item 9, suggesting that the items in the HL-14 scale fit the MRCMLM sufficiently well to define the three HL constructs, ie, patients' functional HL, communicational HL, and critical HL. An examination of the T values (ideally within the range of −2 to +2) showed that five items (items 5, 8, 9, 11, and 14) may produce less or more variation than modeled. Of these five items, items 5, 9, and 14 indicated underfit to the model, which represents a guessing risk or unpredictable responses produced by these items, whereas items 8 and 11 are overfit, which is closer to a Guttman-style response string.
Wright Map
The distinctive advantage of IRT analysis is that it can graphically illustrate the location of persons and items on the same interval-level measurement scale using a Wright map. The map is presented in Figure 1. The histogram on the left side of the map shows the distribution of patients in order of their HL ability, which appears to be distributed normally along the logit scale. Those located at the upper end were patients with higher ability, whereas those located at the lower end were patients with lower ability. Similarly, the right side of the figure depicts the items plotted by their difficulty level. Given that person abilities and item difficulties were estimated on a common logit scale, the difficulty level of the HL-14 items seems to be appropriate for most of the study patients, as the items cluster around the region between −1 and 1 logits, whereas the abilities of the study patients are distributed from approximately −5 to 7 logits. Table 5 shows the sex differences in ability estimates. The parameter estimates indicated that the male participants performed more poorly than their female counterparts. However, the estimated difference of 0.04 is small, at just 5.1% of the participant standard deviation (0.785), and the chi-square value of 0.26 on one degree of freedom does not indicate DIF in terms of sex.
Assessment of CFA Model
All observed variables showed adequate factor loadings, from 0.51 (item 09) to 0.95 (item 11), in the configuration of the three-dimensional HL model, Figure 2. However, the result of CFA revealed that the model fit did not reach a good level (Table 6), and the modification indices suggested non-independence of the residuals of the corresponding items. After reviewing the results of the MRCMLM model and the discussion within the study group, all the authors believe that there is a conceptually overlapping effect between item 10 (Apply the obtained information to my daily life) and item 14 (Collect information to make my healthcare decisions), and between item 12 (Consider whether the information is credible) and item 13 (Check whether the information is valid and reliable). Hence, the covariances between these two pairs of measurement errors were drawn and model fit was tested again. The reconstructed CFA model indicated a good fit (χ²/df = 2.698, CFI = 0.965, GFI = 0.917, NFI = 0.947, TLI = 0.956, RMSEA = 0.076, SRMR = 0.042) (Figure 3), and all observed variables also showed adequate factor loadings in this three-construct model, Figure 3. Table 7 shows the correlations between HL ability and the study variables. As higher ARMS scores indicate poorer medication adherence while higher HL scores indicate higher HL ability, the correlation coefficient between these two variables was negative.
Discussion
The present cross-sectional study interrogates the reliability and validity of the HL-14 scale in patients living with T2DM in Shanghai, China. The results showed that the HL-14 scale is a psychometrically sound measurement with adequate reliability and validity. To the best of our knowledge, this is the first study to validate the HL scale based on both the MRCMLM and SEM in clinical research, which provides a robust theoretical foundation for the use of the HL-14 scale in clinical practice. Moreover, the test of DIF indicates that there is no statistically significant sex effect among the HL-14 items.
The results of the MRCMLM show that the difficulty settings of the HL-14 scale are appropriate for most of the participants in this study, as the scale items clustered around the middle of the logit scale where the participants' latent estimates were distributed, as shown by the Wright map. The average dimension estimates showed that the functional HL items are more difficult than the communicative HL and critical HL items. The correlation between communicative HL and critical HL is very high (r = 0.949), even higher than in the original Japanese scale (r = 0.66), 23 and this finding can also be identified in the CFA model. We can speculate that there are differences in the perception of communicative and critical HL between Chinese and Japanese respondents, and that these two HL abilities can also be regarded as a unidimensional construct. In recent years, some experts have suggested creating a new term, health literacy fluency, defined as the effective use of health information by those who need it, to shift the focus of current research. 37 Shirooka et al also proposed a concept of comprehensive health literacy, which incorporates the concepts of communicative and critical HL in addition to functional HL. 25 Therefore, more research interrogating the dimensionality of this kind of information-handling health literacy is warranted in the future. In the IRT model, some items (items 5, 8, 9, 11 and 14) show less or more variation than modeled. Among these items, only item 9 failed to meet both the infit/outfit and T value criteria. We found that many participants had trouble answering item 9 during the investigation. Some participants insisted that they would only talk to their attending physician, while others insisted that they would talk to their family members instead of healthcare professionals. Moreover, many participants wondered whether they should talk about their illness at all. Hence, we believe that a high guessing risk could be the main reason for the underfit of item 9 in the MRCMLM. Therefore, Wu emailed the author of the HL-14, Machi Suka, about this result and asked for her suggestions and permission to rephrase item 9 in the Chinese edition as the following sentence: "I tell my opinion about my illness to my doctor or other healthcare professionals". We believe it is a crucial trait of health literacy for patients to communicate with and acquire healthcare-related information from healthcare professionals, in that a patient may not always get sound information from non-professional healthcare workers, and Machi Suka agreed with this modification of the item 9 wording in the HL-14 Chinese edition. As for the other items showing misfit, we think logical correlations exist among the HL-14 items when participants' functional, communicative and critical HL are evaluated (conceptually overlapping items), which may cause Guttman-style responses or correlated measurement errors in the analyses. 23 Although China has made great progress in improving health literacy in recent decades, geographic disparities are still evident, with the East outperforming the Central and Western regions, and cities doing better than rural areas. 38 The Chinese Health Literacy Scale is the official instrument to measure health literacy in China, but this 80-item scale is time-consuming and not suitable for rapid screening, especially in clinical practice. 39
The HL-14 validated in this study is a unique tool that evaluates functional, communicative and critical HL, providing clinicians with a new option for rapid assessment with adequate reliability and validity. Moreover, together with medication-management tools such as the ARMS and the Self-Efficacy for Appropriate Medication Use Scale (SEAMS), 27,40 the HL-14 can provide clinical research with more evidence for analyzing the sociodemographic determinants of patients' self-management abilities. 41 The correlation analyses of this study indicated that patients with high health literacy tended to have a milder comorbidity status and fewer medications, as well as a better educational level, monthly income and medication adherence. Although a recent meta-analysis indicated a heterogeneous role of health literacy in self-care and glycemic control, 6 the paucity of evidence on multi-dimensional HL gives rise to the need for multi-dimensional measurements like the HL-14 in clinical research.
However, the limitations of this study should also be noted. Due to the worldwide Covid-19 pandemic, this cross-sectional study was conducted in the endocrinology department of a single tertiary hospital in Shanghai, China. The findings would be more generalizable if a more varied population of patients from more hospitals and districts of China were recruited in the near future. Beyond the methodology used in this study, other IRT models could also be explored for validating the scale.
In conclusion, the HL-14 Chinese edition validated in this study showed adequate reliability and validity. The difficulty setting of this measurement is appropriate for most patients living with diabetes. Unlike most measurements that focus on functional HL, the HL-14 can be used to evaluate additional information-related HL dimensions (ie, communicative and critical HL) in healthcare practice. The use of the Chinese HL-14 may provide an entirely new view and understanding of health literacy research in clinical practice in China.
Data Sharing Statement
All data generated or analyzed during this study are included in this published article.
Ethics Approval and Consent to Participate
Ethics approval was obtained from the ethics committees of Huadong Hospital affiliated with Fudan University, China (2021k143). This study was conducted in accordance with the Declaration of Helsinki and local laws and regulations. All of the participants gave their written informed consent before taking part in the study. | 2023-08-17T15:13:48.494Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "c4c4b666dd080b374819e2aa8968bddb23a4f300",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=92008",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2b3e0bf2fbb1f9ad549e3c641f8badb6009ddbf1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119327054 | pes2o/s2orc | v3-fos-license | Supercurrent in Long SFFS Junctions with Antiparallel Domain Configuration
We calculate the current-phase relation of a long Josephson junction consisting of two ferromagnetic domains with equal, but opposite magnetization $h$, sandwiched between two superconductors. In the clean limit, the current-phase relation is obtained with the help of Eilenberger equation. In general, the supercurrent oscillations are non-sinusoidal and their amplitude decays algebraically when the exchange field is increased. If the two domains have the same size, the amplitude is independent of $h$, due to an exact cancellation of the phases acquired in each ferromagnetic domain. These results change drastically in the presence of disorder. We explicitly study two cases: Fluctuations of the domain size (in the framework of the Eilenberger equation) and impurity scattering (using the Usadel equation). In both cases, the current-phase relation becomes sinusoidal and the amplitude of the supercurrent oscillations is exponentially suppressed with $h$, even if the domains are identical on average.
I. INTRODUCTION
Hybrid systems containing superconducting and ferromagnetic elements have recently gained a lot of attention due to experimental progress as well as possible applications in magnetoelectronics and quantum information. Theoretical studies are revealing a variety of new features, making these systems generators of novel theoretical concepts.
It is common knowledge that the current in hybrid normal metal - superconductor (NS) systems flows by means of Andreev reflections: an electron in N is reflected from the NS interface as a hole with the opposite charge and velocity. Imagine first that the piece of normal metal is ballistic. An electron at the Fermi surface is reflected as a hole at the Fermi surface, and they propagate in the normal metal with the same phase. If the electron is taken at a finite energy E (counted from the Fermi surface), a momentum mismatch δp = 2E/v_F between this electron and the reflected hole appears, v_F being the Fermi velocity.
Consider now an interface between an (s-wave) superconductor and a ferromagnet. Electron and hole have opposite spin directions, and the exchange field h in the ferromagnet leads to a Zeeman splitting of the energies of the two different spin projections. Thus, even an electron and a hole at the Fermi surface acquire the momentum mismatch 2h/v_F; hence their relative phase grows as δϕ = 2hx/(ℏv_F), where x is the distance from the interface. This affects phase-sensitive physical quantities like the supercurrent in superconductor - ferromagnet - superconductor (SFS) junctions: it becomes an oscillating function of the thickness d of the ferromagnetic layer, with a period ℏv_F/2h. If, furthermore, the ferromagnet is diffusive, the oscillating behavior is accompanied by an exponential decay ∝ exp(−(h/ℏD)^{1/2} d), where D is the diffusion coefficient. Typically, h is much larger than the superconducting gap Δ, and thus the length scales related to the magnetic field are much shorter than the superconducting coherence length ξ, which equals ℏv_F/Δ and (ℏD/Δ)^{1/2} in the clean and diffusive cases, respectively. In other words, the proximity effect is suppressed in the ferromagnet.
This qualitative discussion suggests that the main effect observed in SFS contacts is oscillations of the supercurrent with the thickness of the ferromagnetic layer - the transition from a so-called 0-state (the energy of the contact is minimal for zero phase difference between the superconductors) to a π-state (the energy is minimal for a phase difference π). This topic has been at the focus of attention since the early exploration of the field [1]. Theoretically, the π-state was predicted in a variety of SFS junctions: ballistic [2,3,4,5], short diffusive [6,7], long diffusive [5,6,8], junctions with a ferromagnetic insulating barrier [1,9,10], ballistic [11,12,13] and diffusive [5,14,15,16,17,18] junctions with a barrier separating two ferromagnetic layers, and ballistic [19] and diffusive [18] junctions with two tunnel barriers. The transition to the π-state was recently observed experimentally in SFS junctions [20,21,22,23]. All these observations are limited to small thicknesses of the ferromagnetic layer(s), d ≲ (ℏD/h)^{1/2}. For thicker layers, the supercurrent does not exist.
In this situation, it is useful to understand how one can enhance the proximity effect. Several options have been recently discussed in the literature. First, the above qualitative argument assumes that the pairing between an electron and a hole participating in the Andreev reflection is singlet - they have opposite spin projections. Obviously, if the superconductor allows for a non-trivial symmetry of the order parameter, this need not be the case, and triplet pairing between an electron and a hole with the same spin projection can arise. Since a triplet-paired electron and hole at the Fermi surface have no momentum difference, they can propagate with the same phase and enhance the proximity effect. Coupling of two d-wave superconductors via a ferromagnetic layer has been considered in Ref. 24. Moreover, triplet pairing can even appear in a contact of an s-wave superconductor and a ferromagnet, provided the magnetization in the latter is non-uniform [25,26,27]. In this case, the proximity effect survives at the same distance ξ from the interface as in non-magnetic metals. Indeed, the supercurrent in SFS junctions with non-uniform magnetization is considerably enhanced [5]. We also mention that the supercurrent in a long diffusive SFS junction is exponentially suppressed only on average; phenomena related to the proximity effect still occur in such a junction as a result of mesoscopic fluctuations around average quantities [28]. Finally, if the ferromagnetic layer is split into domains, the coherence can be preserved if an electron and a hole propagate between the superconducting electrodes along the two sides of a domain wall [29].
In this Article, we explore a different way to enhance the supercurrent in SFS junctions. Imagine first that the junction is ballistic and the ferromagnetic layer consists of two domains with opposite directions of the magnetization, as shown in Fig. 1. Triplet pairing is not generated in this geometry. Consider an electron and an Andreev-reflected hole propagating from left to right between the superconducting electrodes. They first acquire the relative phase δϕ_1 = 2hx_1/(ℏv_F), x_1 being the distance traversed in the first ferromagnetic layer. However, in the second layer the exchange field has the opposite sign, and the phase gain δϕ_2 = −2hx_2/(ℏv_F) partially compensates δϕ_1. For x_1 = x_2 we have full compensation: the ferromagnetic bilayer behaves as a piece of normal (non-ferromagnetic) metal, and the proximity effect is fully restored. Indeed, previous studies of SFS contacts where the two ferromagnetic domains were separated by a barrier found that the supercurrent in the antiparallel domain configuration is enhanced with respect to the parallel one [11,13,15,18]. If the domains are identical, there is no transition to the π-state in the antiparallel configuration.
Below, we consider such a situation quantitatively. Section II treats a ballistic SFFS junction with two ferromagnetic domains parallel to the superconducting interfaces. We show that this system behaves as a ballistic SFS junction with an effective exchange field. If the widths of the two domains are the same, this effective field vanishes. In the next two Sections, we study the effect of disorder in the same system and show that the supercurrent in diffusive SFFS junctions decays exponentially with their width, similarly to SFS contacts without domains. We consider long junctions, d ≫ ξ, and assume that the superconducting electrodes do not influence the magnetic structure of the contact.
II. CLEAN SFFS CONTACT
We consider first a system of two clean ferromagnetic strips [30] with the thicknesses d_1 and d_2 and antiparallel orientations, located between two superconductors (Fig. 1). The dynamics of quasiparticles in this system are described by the Eilenberger equation, Eq. (1). Here the semi-classical Green's function ĝ_σ is a matrix in Nambu space which describes the singlet pairing (the triplet component is not generated in our geometry), and the spin index is σ = ±1. The exchange field h is zero in the superconducting banks and has antiparallel orientations in the ferromagnets: the upper/lower signs in Eq. (1) correspond to the left/right ferromagnet (h > 0). To stay in the framework of the semi-classical consideration, we have assumed that the Zeeman splitting h is much weaker than the Fermi energy, but it can be arbitrary in comparison with the superconducting gap Δ. We put ℏ = 1; it will be restored in the final results. In this Article, we consider the case of a long contact: the thicknesses of both ferromagnetic layers are much larger than the superconducting coherence length, d_{1,2} ≫ v_F/Δ. Then the matrix Δ̂ can be taken in a piecewise approximation: it is zero in both ferromagnets and takes its bulk value in the superconductors, with χ = −ϕ/2 and χ = ϕ/2 in the left and the right superconducting bank, respectively. In the bulk superconductor far from the contacts the Green's function is isotropic for |ω| < Δ. In addition, the Green's function and its derivative must be continuous at each interface. We introduce the coordinate x parallel to n and directed from left to right. Let us choose x = 0 at the boundary of the left superconductor; then x = d_1/cos θ at the interface of the two ferromagnets, and x = (d_1 + d_2)/cos θ at the boundary of the right superconductor. The quasiparticles in the clean system move along a straight line (Fig. 1). It follows from Eq. (1) that the normal component g_σ(r, n) is constant along the trajectory inside the ferromagnets. The calculation then expresses the Green's function through the phase α accumulated along the trajectory. The supercurrent density is given by Eq. (5), where ν is the density of states. For h = 0, Eq. (5) gives the supercurrent of a long clean SNS (non-ferromagnetic) junction, as considered in Ref. 31, which we follow in the general case. The expression is even in ω; for zero temperature (the case of interest here) the summation can be replaced by an integration over frequencies.
We subsequently introduce a new integration variable ω = Δ sinh u and arrive at an intermediate expression, Eq. (6). For long contacts, Δd_{1,2} ≫ v_F, the first term in the argument of the hyperbolic tangent can be disregarded. Using a standard identity, we obtain the final expression for the supercurrent, Eq. (7). For h = 0, we return to the clean long SNS contact; this reproduces the well-known sawtooth current-phase relation, Eq. (8), found earlier in Ref. 32.
For strong magnetic fields, h ≫ v_F/|d_1 − d_2|, the integral over dx in Eq. (7), which corresponds to summing over all possible trajectories in the ferromagnets, can be calculated in the saddle-point approximation. As a result, we find the current-phase relation, Eq. (9). We note, first of all, that the amplitude of the supercurrent oscillations as a function of ϕ decreases algebraically with the exchange field, as v_F/(h|d_1 − d_2|). This is a direct consequence of the fact that we summed over all possible trajectories, and hence averaged over the different phases acquired during propagation in the ferromagnetic domains along these trajectories. Secondly, as far as the phase dependence of the supercurrent is concerned, it is in general neither sinusoidal nor sawtooth-like. In Fig. 2, we plot j(ϕ) for various values of h|d_1 − d_2|/v_F ∼ 10, such that the saddle-point approximation is reasonable. We see that, as a function of the exchange field, the supercurrent changes sign at a given phase difference. Thus, depending on the parameter h|d_1 − d_2|/v_F, the junction either favors a 0-state or a π-state. We finally note that for d_2 = 0, Eqs. (7) and (9) give the supercurrent for a (single-domain) clean long SFS junction. This is, to our knowledge, a new result as well. It implies in particular that a clean SFS junction can also be a π-junction, in accordance with previous results for different types of SFS hybrid structures.
The important conclusion for the general case is that for a two-domain contact the result is exactly the same as for an SFS junction with the thickness d_1 + d_2 and the effective exchange field h_ef = h|d_1 − d_2|/(d_1 + d_2). In particular, if the thicknesses are the same, d_1 = d_2, the magnetic field drops out - we obtain the sawtooth current-phase relation (8), as for an SNS contact. In the language of the Eilenberger equations, this statement is obvious: indeed, the only quantity sensitive to the magnetic field is the phase α accumulated along the trajectory. Since each trajectory is a straight line, each layer contributes with a weight proportional to its thickness and with the sign depending on the direction of the exchange field. This result is readily generalized to the case of many ferromagnetic layers in the antiparallel configuration [33].
III. DISORDER AVERAGING
Now we discuss how our two main observations for the supercurrent - the power-law decay with magnetic field and the independence of the magnetic field in the symmetric case d_1 = d_2 - react to the presence of disorder. Before performing this difficult task in Section IV by solving the Usadel equations, we first use an easier route to understand the effect of impurities in this Section: we introduce randomness in the thicknesses of the layers (surface randomness). This simple and transparent calculation provides results which serve as clear qualitative predictions, to be compared with the conclusions extracted from the more complicated analysis of the Usadel equations.
We start from Eq. (7) and imagine that the interfaces as presented in Fig. 1 are not straight, but exhibit small fluctuations in position. Since there is no scattering at the interfaces, the only effect of such fluctuations is that the thicknesses of the layers become random variables, and the supercurrent (7) must be averaged with respect to this randomness. Let us take a Gaussian distribution, Eq. (10), for the difference d_1 − d_2, where a ≪ d̄_1, d̄_2 has the meaning of a typical scale of the interface fluctuations, and d̄_i are the averaged values of the two thicknesses. Averaging Eq. (7), we obtain Eq. (11). For strong fields, the integral is calculated in the saddle-point approximation, and only the term with k = 1 survives, yielding Eq. (12). Thus, the averaging procedure brings out two qualitatively new features: (i) at high fields, the current-phase relation becomes sinusoidal; (ii) the amplitude of the supercurrent oscillations decays exponentially, rather than algebraically, with h. In addition, the exchange field still modulates the phase of the oscillations and can drive the contact to a π-state. Property (i) stems from phase averaging over diffusive trajectories and is a common feature of all long disordered SNS junctions (cf. Ref. 34). Eq. (12) does not apply to the symmetric case d̄_1 = d̄_2. In this situation, for h ≫ v_F/a, we obtain Eq. (13). We see that even in the symmetric case the exponential dependence on magnetic field persists. It reflects the fact that a quasiparticle moving along a single trajectory spends, in general, unequal times in the two layers, and thus the contribution of each trajectory is magnetic-field dependent. However, there is no additional oscillating factor due to the magnetic field: a symmetric junction is never in the π-state. These features are confirmed qualitatively in the next Section, where we analyze the behavior of a symmetric diffusive SFFS junction using the Usadel equations.
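The exponential suppression obtained here is just the characteristic function of the Gaussian thickness distribution: averaging e^{iδϕ} with δϕ = 2h(d_1 − d_2)/ℏv_F over a Gaussian of width a gives a factor exp(−2(ha/ℏv_F)²). A quick Monte Carlo check (units ℏ = v_F = 1; the parameter values are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
h, a = 2.0, 0.3                    # exchange field and fluctuation scale
dd = rng.normal(0.0, a, 10**6)     # d1 - d2 for a symmetric junction, mean 0

k = 2.0 * h                        # phase per unit thickness difference
mc = np.abs(np.mean(np.exp(1j * k * dd)))
exact = np.exp(-(k * a) ** 2 / 2)  # Gaussian characteristic function
print(mc, exact)                   # both ~ exp(-2 h^2 a^2): exponential in h^2
```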
IV. DISORDERED SFFS CONTACT FROM USADEL EQUATIONS
We now consider a diffusive SFFS junction in the symmetric case d_1 = d_2 = d/2. The junction is again assumed to be long, d ≫ (ℏD/Δ)^{1/2}, with D being the diffusion coefficient.
If the exchange magnetic energy does not exceed the inverse elastic scattering time, h ≪ ℏ/τ, the Green's function is almost isotropic, and the system can be described by the Usadel equations, Eqs. (14) and (15). Here, as usual [34], the Green's function actually depends only on the distance z from the ferromagnet-ferromagnet interface (Fig. 1). The upper/lower signs describe the regions −d/2 < z < 0 and 0 < z < d/2, respectively, and Δ̂ = iΔ exp(iχ) in the superconductors. In the following, we suppress the spin index σ where it does not lead to ambiguities. Following Ref. 31, we solve the normalization constraint by introducing two complex-valued fields θ and η: G = cos θ, F = sin θ e^{iη}, F⁺ = sin θ e^{−iη}.
The equation for η in the ferromagnets becomes Eq. (16), with the boundary conditions η(±d/2) = ∓ϕ/2. Its first integral yields Eq. (17), where I is an unknown constant. The current is expressed via this constant in Eq. (18). To ensure current conservation, I must be the same in both ferromagnetic layers. It is important, however, that we do not assume that the current is conserved - it follows naturally from the consistency of our solution. Using Eq. (16), we also write the equation for θ in the ferromagnets, with its first integral, Eq. (20). Now, for long junctions, the boundary conditions for θ at z = ±d/2 are essentially the same as they would be at the interface between a semi-infinite superconductor and a semi-infinite ferromagnet. To find these boundary conditions, we write the corresponding equation for the superconductors. Taking into account that θ = π/2 in the bulk superconductor and θ = 0 in the bulk ferromagnet, and requiring the continuity of θ and θ′ at the interface, we obtain the boundary conditions ω ± ihσ(1 − cos θ) + Δ(1 − sin θ) + I²/sin²θ = 0 at z = ∓d/2. Although our equations describe the behavior of an SFFS junction for an arbitrary relation between h and Δ, we concentrate in the following on the case T, h ≪ Δ. As we show below, in this situation the current I is exponentially small, and the boundary conditions for θ reduce to θ(z = ±d/2) = π/2. Since the Usadel equations possess the obvious symmetries θ_σ(ω) = θ_{−σ}(ω) + π and η_σ(ω) = η_{−σ}(ω) + π, in the sequel we only consider ω > 0.
The field θ must rapidly decay away from the superconductors and stay exponentially small within the ferromagnets. We start by solving Eq. (20) at z ≪ d, where θ ≪ 1 and the trigonometric functions can be expanded. Then Eq. (20) can be integrated. The solution is too cumbersome to be written down here; its asymptotics for |z| → ∞ are given by Eq. (21), with the notations θ_0 = θ(z = 0) and γ = θ′(z = 0). Next, we solve Eq. (20) close to the interfaces, |z − d/2| ≪ d. We assume that I/θ_0 and γ are both exponentially small (to be checked later) and obtain Eq. (22). Far from the interface, θ ≪ 1, the solution becomes exponential. Matching the exponential asymptotics of Eqs. (21) and (22), we find the condition (23). We now integrate Eq. (17). Since θ(x) grows exponentially away from x = 0, the sine in the denominator can be replaced by its argument. We then find Eq. (24), with η_0 = η(0). We proceed by calculating the four quantities I, θ_0, η_0, and γ; the result is Eq. (25). Note that I does not depend on σ. It can be easily checked that I/θ_0 and γ are exponentially small, which justifies the approximations we have made to arrive at Eq. (25). Now we calculate the supercurrent according to Eq. (18). For high temperatures, T ≫ D/d², h, only the term with ω = πT is important, and we obtain Eq. (26), where we introduced j_{0,diff} = 128(√2 − 1)² eνD²/d³. In high magnetic fields, h ≫ T, D/d², the terms with ω < h contribute, yielding Eq. (27). We note the two main features of the solution in the diffusive case. First, the current-phase relation is sinusoidal, which corresponds to the result for the long diffusive SNS contact [34]. Second, the supercurrent decays exponentially with magnetic field, in contrast to the power-law decay in the clean case. Similarly, we can treat a single-layer SFS junction of thickness d. The result for h ≫ D/d², T is Eq. (28). Thus, comparing Eq. (28) with Eq. (27), we see that a long diffusive SFS contact can be a π-junction, depending on the thickness of the ferromagnet, whereas a similar symmetric SFFS contact with an antiparallel configuration of the domains is not a π-junction.
V. DISCUSSION
We considered the behavior of the supercurrent in long SFS junctions. We obtained new expressions for single-domain ballistic and diffusive contacts and confirmed that the 0 to π transition can be induced in these systems. However, our main focus is on the situation when the ferromagnetic region is split into two ferromagnetic domains with equal but opposite magnetization. In the ballistic case, this system behaves as a single-domain SFS junction with the effective exchange field h_ef = h|d_1 − d_2|/(d_1 + d_2). Such a system exhibits a non-sinusoidal current-phase relation and a power-law decay of the supercurrent with thickness and exchange field. If the thicknesses of both domains are the same, the effective field vanishes. Disorder, considered either as geometrical fluctuations of the thickness or as randomly positioned impurities, restores the exponential decay and the sinusoidal phase dependence of the supercurrent. A system with two domains of the same width is never in the π-state.
To obtain these results, we made a number of simplifying assumptions. The superconductor-ferromagnet interfaces, as well as the boundary between the two ferromagnetic domains, are assumed to be ideal (no scattering) and sharp. This can be realized in multilayered structures, where the ferromagnetic layers can be artificially constructed and kept very clean. Another, more attractive option is real ferromagnetic domains. A domain wall has a finite width, typically of the order of the mean free path, or wider. This induces reflection of electrons from the domain wall and additionally generates triplet pairing between electrons and holes. These factors need to be taken into account for a quantitative comparison between theory and experiment. However, we do not expect them to add qualitatively new features to the picture we presented.
"year": 2003,
"sha1": "f40e23bc76146466b000611743e14cb9cb51f076",
"oa_license": null,
"oa_url": "https://repository.tudelft.nl/islandora/object/uuid:2a73a43e-cda6-4293-a05b-e26bc4dd5999/datastream/OBJ/download",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f40e23bc76146466b000611743e14cb9cb51f076",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
52095773 | pes2o/s2orc | v3-fos-license | On the precise value of the strong chromatic-index of a planar graph with a large girth
A strong $k$-edge-coloring of a graph $G$ is a mapping from $E(G)$ to $\{1,2,\ldots,k\}$ such that every pair of distinct edges at distance at most two receive different colors. The strong chromatic index $\chi'_s(G)$ of a graph $G$ is the minimum $k$ for which $G$ has a strong $k$-edge-coloring. Denote $\sigma(G)=\max_{xy\in E(G)}\{\operatorname{deg}(x)+\operatorname{deg}(y)-1\}$. It is easy to see that $\sigma(G) \le \chi'_s(G)$ for any graph $G$, and the equality holds when $G$ is a tree. For a planar graph $G$ of maximum degree $\Delta$, it was proved that $\chi'_s(G) \le 4 \Delta +4$ by using the Four Color Theorem. The upper bound was then reduced to $4\Delta$, $3\Delta+5$, $3\Delta+1$, $3\Delta$, $2\Delta-1$ under different conditions for $\Delta$ and the girth. In this paper, we prove that if the girth of a planar graph $G$ is large enough and $\sigma(G)\geq \Delta(G)+2$, then the strong chromatic index of $G$ is precisely $\sigma(G)$. This result reflects the intuition that a planar graph with a large girth locally looks like a tree.
Introduction
A strong k-edge-coloring of a graph G is a mapping from E(G) to {1, 2, . . . , k} such that every pair of distinct edges at distance at most two receive different colors. It induces a proper vertex coloring of L(G)^2, the square of the line graph of G. The strong chromatic index χ'_s(G) of G is the minimum k for which G has a strong k-edge-coloring. This concept was introduced by Fouquet and Jolivet [19,20] to model the channel assignment in some radio networks. For more applications, see [4,29,32,31,24,36].
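Since a strong edge-coloring of G is exactly a proper vertex coloring of L(G)^2, a crude upper bound on χ'_s(G) can be computed mechanically; a sketch with networkx (greedy coloring, so it certifies only an upper bound, not χ'_s itself):

```python
import networkx as nx

def greedy_strong_edge_coloring(G):
    # Two edges of G must differ iff they are at distance <= 2,
    # i.e. iff they are adjacent in the square of the line graph.
    L2 = nx.power(nx.line_graph(G), 2)
    return nx.coloring.greedy_color(L2, strategy="largest_first")

G = nx.petersen_graph()
c = greedy_strong_edge_coloring(G)   # maps each edge of G to a color index
print(max(c.values()) + 1, "colors used")  # an upper bound on chi'_s(G)
```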
As demonstrated in [18], there are indeed some graphs that attain the given upper bounds.
According to the examples in [18], the bound is tight for ∆ = 3, and the best we may expect for ∆ = 4 is 20.
The following results are obtained by using a discharging method:

Theorem 2 (Hudák et al '14 [23]). If G is a planar graph with girth at least 6 and maximum degree at least 4, then χ'_s(G) ≤ 3Δ(G) + 5.
These bounds were then improved by Bensmail et al.
It is also interesting to see the asymptotic behavior of strong chromatic index when the girth is large enough.
The concept of maximum average degree is also an indicator of the sparsity of a graph. Graphs with small maximum average degree are closely related to planar graphs with large girth, as pointed out by a folklore lemma that can be proved via Euler's formula.
Many results concerning planar graphs with large girths can be extended to general graphs with small maximum average degrees and large girths. Strong chromatic index is no exception.
In terms of the maximum degree Δ, the bound 2Δ − 1 is best possible. We seek a better parameter as a refinement. Define σ(G) = max_{xy∈E(G)} {deg(x) + deg(y) − 1}. An antimatching is an edge set S ⊆ E(G) in which any two edges are at distance at most 2; thus any strong edge-coloring assigns distinct colors to S. Notice that each color class of a strong edge-coloring is an induced matching, and the intersection of an induced matching and an antimatching contains at most one edge. This fact suggests a dual problem to strong edge-coloring: finding a maximum antimatching of G, whose size is denoted by am(G). For any edge xy ∈ E(G), the edges incident with xy form an antimatching of size deg(x) + deg(y) − 1. Together with the weak duality, this gives the inequality σ(G) ≤ am(G) ≤ χ'_s(G). By induction, we see that for any nontrivial tree T, χ'_s(T) = σ(T) attains the lower bound [18]. Based on the intuition that a planar graph with a large girth locally looks like a tree, in this paper we focus on this class of graphs. More precisely, we prove the following main theorem:

Theorem 11. If G is a planar graph with σ = σ(G) ≥ 5, σ ≥ Δ(G) + 2 and girth at least 5σ + 16, then χ'_s(G) = σ.
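The lower-bound parameter σ(G) is a one-line computation; a small helper with networkx, illustrated on a tree, where the bound is attained:

```python
import networkx as nx

def sigma(G):
    """sigma(G) = max over edges xy of deg(x)+deg(y)-1, a lower bound on chi'_s(G)."""
    return max(G.degree[x] + G.degree[y] - 1 for x, y in G.edges)

T = nx.balanced_tree(r=3, h=2)   # a nontrivial tree
print(sigma(T))                  # for trees chi'_s(T) = sigma(T); here 3 + 4 - 1 = 6
```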
We also refine the girth constraint and obtain a stronger result in Section 4.
The proof of the main theorem
To prove the main theorem, we need two lemmas and a key lemma (Lemma 18) to be verified in the next section.
The first lemma can be used to prove that any tree T has strong chromatic index σ(T ) by induction.
Lemma 12. Suppose x_1x_2 is a cut edge of a graph G, and G_i is the component of G − x_1x_2 containing x_i, joined with the edge x_1x_2, for i = 1, 2. If, for some integer k, deg(x_1) + deg(x_2) − 1 ≤ k and each G_i admits a strong k-edge-coloring ϕ_i, then ϕ_1 and ϕ_2 can be combined (after permuting colors so that they agree around x_1x_2) into a strong k-edge-coloring of G.
The following lemma from [30] about planar graphs is also useful in the proof of the main theorem. An ℓ-thread is an induced path of ℓ + 2 vertices, all of whose internal vertices are of degree 2 in the full graph.
Lemma 13. Any planar graph G with minimum degree at least 2 and with girth at least 5ℓ + 1 contains an ℓ-thread.
Proof. Contract all the vertices of degree 2 to obtain G′. Notice that G′ is a planar graph which may have multi-edges and may be disconnected. Embed G′ = (V, E) in the plane as P. Then Euler's Theorem says that |V| − |E| + |F| ≥ 2, where F is the set of faces of P. If G′ had girth larger than 5, every face of P would have at least 6 boundary edges, so 6|F| ≤ 2|E|; moreover, every vertex of G′ has degree at least 3, so 3|V| ≤ 2|E|. Combining all these produces a contradiction: 2 ≤ |V| − |E| + |F| ≤ (2/3)|E| − |E| + (1/3)|E| = 0. Hence G′ has a cycle of length at most 5. The corresponding cycle in G has length at least 5ℓ + 1. Thus one of these at most five edges of G′ is contracted from at least ℓ degree-2 vertices of G, and so G has the required path.
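The ℓ-thread guaranteed by Lemma 13 can be located by scanning for runs of degree-2 vertices; a sketch with networkx, returning the internal vertices of such a thread:

```python
import networkx as nx

def find_thread(G, ell):
    """Return ell consecutive degree-2 vertices of G (a thread's interior), or None."""
    deg2 = {v for v in G if G.degree[v] == 2}
    H = G.subgraph(deg2)  # each component of H is a path or a cycle in G
    for comp in nx.connected_components(H):
        if len(comp) < ell:
            continue
        ends = [v for v in comp if H.degree[v] <= 1]
        start = ends[0] if ends else next(iter(comp))  # a path end, or any cycle vertex
        run = list(nx.dfs_preorder_nodes(H, start))    # vertices in path order
        return run[:ell]
    return None
```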
These two lemmas, together with a key lemma to be verified in the next section, lead to the following proof of the main theorem.

Proof of Theorem 11. Since the inequality χ'_s(G) ≥ σ(G) is trivial, it suffices to show that χ'_s(G) ≤ σ(G); that is, that G admits a strong σ-edge-coloring ϕ. Suppose to the contrary that there is a counterexample G with the minimum number of vertices. Then there is no vertex x adjacent to deg(x) − 1 vertices of degree 1. For otherwise, there is a cut edge xy, where y is not a leaf; by applying Lemma 12 to G with the cut edge xy and using the minimality of G, we get a contradiction.
Let H be the graph obtained from G by deleting all vertices of degree 1, which clearly has the same girth as G since the deletion does not break any cycle. We have δ(H) ≥ 2, for otherwise G has a vertex x adjacent to deg(x) − 1 vertices of degree 1, which is impossible. Lemma 13 claims that there is a path x_0x_1...x_{ℓ+1} with ℓ = σ + 3 and deg_H(x_i) = 2 for i = 1, 2, ..., ℓ. Now let G′ be the subgraph obtained from G by deleting the leaf-neighbors of x_2, x_3, ..., x_{ℓ−1} and the vertices x_3, x_4, ..., x_{ℓ−2}. By the minimality of G, G′ admits a strong σ-edge-coloring ϕ. Consider the subgraph T of G induced by x_1, x_2, ..., x_ℓ and their neighbors, which is a caterpillar tree. By Lemma 18, to be proved in the next section, T admits a strong σ-edge-coloring ϕ′ such that ϕ and ϕ′ coincide on the edges incident to x_1 and x_ℓ. Gluing these two edge-colorings, we construct a strong σ-edge-coloring of G.

3 The key lemma: caterpillar with edge pre-coloring

All the graphs in this section are caterpillar trees. Let d_i ≥ 2 for i = 1, 2, ..., ℓ. By T = Cat(d_1, d_2, ..., d_ℓ) we mean a caterpillar tree with spine x_0, x_1, ..., x_{ℓ+1}, whose spine degrees are deg_T(x_i) = d_i for i = 1, 2, ..., ℓ. Call ℓ the length of T and let E_i be the edges incident with x_i. See Figure 2 for Cat(5,3,2,4,5).

Figure 2: The caterpillar tree Cat(5,3,2,4,5).
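The caterpillar Cat(d_1, ..., d_ℓ) is easy to build explicitly; a sketch with networkx, reproducing the tree of Figure 2 and its σ value:

```python
import networkx as nx

def caterpillar(degrees):
    """Build Cat(d_1, ..., d_l): spine 0, 1, ..., l+1 with deg(x_i) = d_i inside."""
    l = len(degrees)
    T = nx.path_graph(l + 2)              # spine vertices 0, 1, ..., l+1
    leaf = l + 2
    for i, d in enumerate(degrees, start=1):
        for _ in range(d - T.degree[i]):  # pad x_i with pendant leaves up to d_i
            T.add_edge(i, leaf)
            leaf += 1
    return T

T = caterpillar([5, 3, 2, 4, 5])          # the tree of Figure 2
print(max(T.degree[u] + T.degree[v] - 1 for u, v in T.edges))  # sigma(T) = 8
```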
Collect all the admissible tuples (C; α_0, C_1, C_ℓ, α_ℓ) as P_κ(T), where the color sets C_1, C_ℓ ⊆ C and the colors α_0, α_ℓ prescribe the pre-coloring at the two ends of T. For P = (C; α_0, C_1, C_ℓ, α_ℓ), the set of all strong edge-colorings ϕ using the colors in C and satisfying the corresponding boundary criteria is denoted by C_T(P).

Proof. For any P = (C; α_0, C_1, C_ℓ, α_ℓ) ∈ P_κ(T), we have to find a strong edge-coloring in C_T(P).
Case |C_1 ∪ C_ℓ| > κ. Choose a κ-set C′ so that C_1 ∪ {α_ℓ} ⊆ C′ ⊆ C_1 ∪ C_ℓ, and a d_ℓ-set C′_ℓ so that C′ ∩ C_ℓ ⊆ C′_ℓ ⊆ C′. By assumption, there is a strong edge-coloring ϕ in C_T(C′; α_0, C_1, C′_ℓ, α_ℓ). Let E′ be the edges in E_ℓ with colors in C′_ℓ − C_ℓ. Notice that C_ℓ − C′ and C′ are disjoint, so the colors in C_ℓ − C′ do not appear in ϕ. Hence we can change the colors of E′ to colors in C_ℓ − C′ and obtain a strong edge-coloring in C_T(P).
We now derive a series of properties regarding the two-sided strong edge-pre-colorability of a caterpillar tree and certain of its subtrees.
Lemma 15. Suppose a caterpillar tree T′ contains T as a subgraph, and both have the same length. If T′ is κ-two-sided strong edge-pre-colorable, then T is also κ-two-sided strong edge-pre-colorable.
For a caterpillar tree T, we define T′ and I_T as follows. Let d* be a threshold degree and set I_T = {i : d_i ≥ d*} (the choice d* = ⌈(σ + 1)/2⌉ satisfies both properties below). The value d* is critical in the sense that:
1. If d_i + d_j ≤ σ + 1, then either d_i or d_j must be at most d*.
2. If d_i + d_j ≥ σ + 1, then either d_i or d_j must be at least d*.
Then T′ = Cat(d′_1, d′_2, ..., d′_ℓ) is a caterpillar tree isomorphic to a subgraph of T, with σ(T′) = σ(T) − 1, due to the criticality of d* and the method of choosing S.
It is straightforward to see that T_{ℓ−1} = Cat(d_1, d_2, ..., d_{ℓ−1}) by the method of choosing S.
Proof. For any P = (C; α_0, C_1, C_ℓ, α_ℓ) ∈ P_κ(T), we have to show that C_T(P) is nonempty. Let I = I_T. Our strategy is to search for a color β such that β ∈ C_1 if and only if 1 ∈ I, and β ∈ C_ℓ if and only if ℓ ∈ I.
Suppose such a color β exists and β ≠ α_ℓ. By Lemma 16, T′ admits a strong (κ − 1)-edge-coloring in C_{T′}(C − β; α_0, C_1 − β, C_ℓ − β, α_ℓ). Coloring the remaining edges with β then yields the required strong κ-edge-coloring in C_T(P). Notice that S being an independent set guarantees that the edges with color β form an induced matching. If it happens that β coincides with α_ℓ, then we seek instead a strong edge-coloring in C_{T′}(C − β; α_0, C_1 − β, C_ℓ − β, α′_ℓ) for an arbitrary α′_ℓ ∈ C_ℓ − α_ℓ. We make use of the symmetry of the pendant edges incident with x_ℓ and still achieve the goal. Sometimes there is no suitable β. We alternatively consider T_{ℓ−1}. By finding an appropriate d_{ℓ−1}-subset C_{ℓ−1} ⊆ C and a color α_{ℓ−1} with C_{ℓ−1} ∩ C_ℓ = {α_{ℓ−1}}, there will be a β such that β ∈ C_1 if and only if 1 ∈ I, and β ∈ C_{ℓ−1} if and only if ℓ − 1 ∈ I.
We now prove the existence of β according to the following four cases.
Case 1. 1, ℓ ∈ I. In this case, C_1 ∩ C_ℓ is nonempty. Pick β to be any color in the intersection.
Case 2. 1 ∈ I but ℓ ∉ I. If C_1 − C_ℓ is nonempty, then pick β to be any color in the difference. Otherwise, 1 ∈ I and ℓ ∉ I imply d_1 ≥ d* ≥ d_ℓ; on the other hand, C_1 − C_ℓ = ∅ implies d_1 ≤ d_ℓ. Thus the situation that C_1 − C_ℓ is empty occurs only when d_1 = d_ℓ = d* and C_1 = C_ℓ. We consider the subtree T_{ℓ−1}. Choose α_{ℓ−1} to be any color in C_ℓ − α_ℓ and proceed as above.

Case 3. ℓ ∈ I but 1 ∉ I. If C_ℓ − C_1 is nonempty, then let β be any color in the difference. Otherwise, d_1 = d_ℓ = d* and C_1 = C_ℓ. But d_1 = d* implies 1 ∈ I, a contradiction.
Case 4. 1 ∉ I and ℓ ∉ I. If C − (C_1 ∪ C_ℓ) is nonempty, then pick β to be any color in the difference set. Now, suppose C = C_1 ∪ C_ℓ. We consider the subtree T_{ℓ−1}.
Refinement of Lemma 18
We now discuss the optimality of Lemma 18. If we take more care with the base cases, we obtain a refinement (Theorem 20): under its hypotheses, T is κ-two-sided strong edge-pre-colorable for any κ ≥ σ.
If we drop the condition σ ≥ Δ + 2 in Theorem 20, a weaker result can be obtained by using the following corollary of Lemma 19 in the proof of the main Theorem 11.
Corollary 21. Suppose T is a caterpillar tree of length ℓ satisfying the conditions of Lemma 19, except possibly the requirement σ ≥ Δ + 2. Then T is κ-two-sided strong edge-pre-colorable for any κ ≥ σ + 1.
Proof. Add pendant edges at some vertices of T with degree δ(T) so that the resulting graph T̃ has σ(T̃) = σ(T) + 1 and σ(T̃) ≥ Δ(T̃) + 2. Then T̃ satisfies the requirements of Lemma 19, and hence it is κ-two-sided strong edge-pre-colorable for any κ ≥ σ(T̃) = σ(T) + 1. The corollary then follows from Lemma 15.
Consequences concerning the maximum average degree
The following lemma is a direct consequence of Proposition 2.2 in [14].
Lemma 23. Suppose the connected graph G is not a cycle. If G has minimum degree at least 2 and average degree 2|E|/|V| < 2 + 2/(3ℓ − 1), then G contains an ℓ-thread.
A C_n-jellyfish is a graph obtained by adding pendant edges at the vertices of C_n. In [9], it is shown that:

Proposition 24. If G is a C_n-jellyfish with m edges and σ(G) ≥ 4, then χ'_s(G) is determined exactly in terms of m, n and σ(G).

Adopting these results leads to a strengthening of Theorem 10.
Theorem 25. If G is a graph with σ = σ(G) ≥ 5, σ ≥ Δ(G) + 2, odd girth at least g_σ, even girth at least 6, and mad(G) < 2 + 2/(3ℓ_σ − 1), then χ'_s(G) = σ.

Proof. In the proof of Theorem 20, alternatively use Lemma 23 to find an ℓ_σ-thread in H. It should be noticed that the girth constraints exist merely to address the problem that H may be a cycle. In this case, by Proposition 24, G still has strong chromatic index σ. Indeed, suppose H = C_n and G is a C_n-jellyfish. The case where n is even is trivial. If σ ≥ σ(H) ≥ 5, n is odd and n ≥ g_σ ≥ σ, then |E(G)| is large enough for Proposition 24 to give χ'_s(G) = σ.

Similarly, Theorem 22 can be modified correspondingly.
Similarly, Theorem 22 can be modified correspondingly. | 2015-09-25T15:28:35.000Z | 2015-08-12T00:00:00.000 | {
"year": 2018,
"sha1": "8609f1eead6a98a9ce3e91f39b8524769928fc2e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1508.03052",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8609f1eead6a98a9ce3e91f39b8524769928fc2e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
216217967 | pes2o/s2orc | v3-fos-license | Mathematical Aspects of Krätzel Integral and Krätzel Transform
: A real scalar variable integral is known in the literature by different names in different disciplines. It is basically a Bessel integral called specifically Krätzel integral. An integral transform with this Krätzel function as kernel is known as Krätzel transform. This article examines some mathematical properties of Krätzel integral, its connection to Mellin convolutions and statistical distributions, its computable representations, and its extensions to multivariate and matrix-variate cases, in both the real and complex domains. An extension in the pathway family of functions is also explored.
Introduction
In this paper, real scalar mathematical or random variables are denoted by small letters x, y, z, ... and the corresponding vector/matrix variables are denoted by capital letters X, Y, .... Variables in the complex domain are denoted with a tilde, such as x̃, ỹ, X̃, Ỹ, .... Constant vectors/matrices are denoted by capital letters A, B, ..., whether in the real or complex domain. Scalar constants are denoted by a, b, .... If X = (x_ij) is a p × q matrix where the x_ij's are distinct real scalar variables, then the wedge product of the differentials is denoted by dX = ∧_{i=1}^p ∧_{j=1}^q dx_ij. If x and y are real scalar variables, then the wedge product of their differentials is defined as dx ∧ dy = −dy ∧ dx, so that dx ∧ dx = 0 and dy ∧ dy = 0. If X̃ is in the complex domain, then X̃ = X_1 + iX_2, where X_1, X_2 are real and i = √(−1). Then, dX̃ = dX_1 ∧ dX_2. The determinant of a p × p real matrix X is denoted by |X| or det(X), and when in the complex domain the absolute value of the determinant is denoted by |det(X̃)|. The trace of a square matrix A is denoted by tr(A).

The integral of interest is the Krätzel integral

K_1 = ∫_0^∞ x^{γ−1} e^{−ax − b/x} dx, a > 0, b > 0, (1)

and its generalized form

K_2 = ∫_0^∞ x^{γ−1} e^{−a x^δ − b x^{−ρ}} dx, a > 0, b > 0, δ > 0, ρ > 0. (2)

One can evaluate Equation (2) by using different approaches. One can interpret Equation (2) as the Mellin convolution of a product and then take the inverse Mellin transform to evaluate the integral. One can draw a parallel to the statistical density of a product of two positive real scalar random variables and then evaluate the density to obtain the value of Equation (2). One can also treat Equation (2) as a function g(b) of b. Then, the Mellin transform of g(b) with Mellin parameter s, for γ > 0, δ > 0, ρ > 0, a > 0, b > 0, is obtained by integrating out b first and then x:

∫_0^∞ b^{s−1} g(b) db = (1/δ) Γ(s) Γ((γ + ρs)/δ) a^{−(γ+ρs)/δ}, for ℜ(s) > 0, ℜ(γ + ρs) > 0, (3)

where ℜ(·) means the real part of (·). Taking the inverse Mellin transform of Equation (3), we have g(b), or the integral in Equation (2), as the following:

g(b) = (a^{−γ/δ}/δ) (1/(2πi)) ∫_{c−i∞}^{c+i∞} Γ(s) Γ(γ/δ + (ρ/δ)s) (b a^{ρ/δ})^{−s} ds, (4)

where the c in the contour is > 0. Note that Equation (4) can be written as an H-function, Equation (5).
For the theory and applications of the H-function, see [5]. When ρ = δ, Equation (5) reduces to a Meijer's G-function, Equation (6). For the theory and applications of the G-function, see [6].
Computable Series form for Equation (2)
Consider the Mellin-Barnes integral representation in Equation (4). This integral can be evaluated as the sum of the residues at the poles of the gammas Γ(s) and Γ(γ/δ + (ρ/δ)s). The poles of Γ(s) are at s = 0, −1, −2, .... When the poles of the integrand are simple, the sum of the residues at the poles of Γ(s) is the following:

(A) (a^{−γ/δ}/δ) Σ_{ν=0}^∞ ((−1)^ν/ν!) Γ((γ − ρν)/δ) (b a^{ρ/δ})^ν.

The poles of Γ(γ/δ + (ρ/δ)s) are at γ/δ + (ρ/δ)s = −ν, ν = 0, 1, 2, ..., that is, at s = −γ/ρ − (δ/ρ)ν, and in the simple poles case the sum of the residues is the following:

(B) (b^{γ/ρ}/ρ) Σ_{ν=0}^∞ ((−1)^ν/ν!) Γ(−(γ + δν)/ρ) a^ν b^{δν/ρ}.

Hence, in the simple poles case, K_2 is the sum of the residues from (A) and (B):

K_2 = (A) + (B). (7)
G-function in the Simple Poles Case
Let ρ = δ, so that the H-function in Equation (5) becomes the G-function in Equation (6); when γ/δ is not an integer, the G-function has simple poles. Consider this case; it is available from Equation (7) by putting δ = ρ. Then, the gammas reduce to the following:

Γ(γ/δ − ν) = (−1)^ν Γ(γ/δ)/(1 − γ/δ)_ν, Γ(−γ/δ − ν) = (−1)^ν Γ(−γ/δ)/(1 + γ/δ)_ν,

where, in general, the notation (a)_m = a(a + 1)···(a + m − 1), a ≠ 0, (a)_0 = 1, is the Pochhammer symbol. Hence, K_2 in Equation (2) for this simple poles case and for δ = ρ is the following:

K_2 = (a^{−γ/δ}/δ) [Γ(γ/δ) 0F1( ; 1 − γ/δ; ab) + Γ(−γ/δ) (ab)^{γ/δ} 0F1( ; 1 + γ/δ; ab)], (8)

where 0F1 is a hypergeometric series with no upper and one lower parameter. Observe that, in this simple poles case, Equation (2) or K_2 of Equation (8) is a linear function of Bessel series, and hence it is appropriate to call Equation (1) a Bessel integral and Equation (2) a generalized Bessel integral, rather than calling them an ultra gamma integral or generalized gamma integral or anything connected with the gamma integral.
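The Bessel connection can be verified numerically. For δ = ρ = 1, Equation (2) is the classical integral ∫_0^∞ x^{γ−1} e^{−ax−b/x} dx = 2(b/a)^{γ/2} K_γ(2√(ab)), and Equation (8) specializes to the same value; a check with mpmath (parameter values arbitrary, with γ non-integer so that the poles are simple):

```python
import mpmath as mp

a, b, g = 1.3, 0.7, 2.4   # gamma = 2.4 is not an integer: simple poles case
quad = mp.quad(lambda x: x**(g - 1) * mp.exp(-a * x - b / x), [0, mp.inf])
bessel = 2 * (mp.mpf(b) / a) ** (mp.mpf(g) / 2) * mp.besselk(g, 2 * mp.sqrt(a * b))
series = (mp.gamma(g) * mp.hyp0f1(1 - g, a * b)
          + mp.gamma(-g) * (a * b) ** g * mp.hyp0f1(1 + g, a * b)) / a**g
print(quad, bessel, series)   # all three agree
```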
1.5. Poles of Order Two, ρ = δ, γ/δ = m, m = 1, 2, ...

In this case, the poles at s = 0, −1, −2, ..., −(m − 1) are simple and the poles at s = −m, −m − 1, ... are of order two each. In this case, we may write (2) as a sum of residues, Equation (9). The sum of the residues at the simple poles s = 0, −1, ..., −(m − 1), coming from (9), is obtained as in the simple poles case above. For s = −m − ν, ν = 0, 1, ..., that is, s = −ν with ν = m, m + 1, ..., the poles are of order two, and the residue, denoted by R_ν, is

R_ν = lim_{s→−m−ν} (d/ds)[(s + m + ν)² h(s)], where h(s) = Γ(s)Γ(m + s)(ab)^{−s}.

Observe that (d/ds)h(s) = h(s)(d/ds) ln h(s) and (ab)^{−s} = e^{−s ln(ab)}. Evaluating these derivatives and summing, (2) reduces to an expression involving ψ(·), the psi function or logarithmic derivative of the gamma function, ψ(z) = (d/dz) ln Γ(z). The most general case is to consider Γ(s)Γ(γ/δ + (ρ/δ)s) having some poles of order one and the remaining ones of order two. After writing this situation in a convenient way, one can use the procedure of this section to obtain the final result. Since the expressions would take up too much space, they are not discussed here.
Krätzel Integral from Mellin Convolution
Let x_1 > 0 and x_2 > 0 be real scalar variables. Let f_1(x_1) and f_2(x_2) be real-valued scalar functions associated with x_1 and x_2, respectively. Then, the Mellin transforms of f_1 and f_2, with Mellin parameter s, are the following, whenever they exist:

M_{f_j}(s) = ∫_0^∞ x_j^{s−1} f_j(x_j) dx_j, j = 1, 2.

Then, the product M_{f_1}(s)M_{f_2}(s) is the Mellin transform of

g(u) = ∫_0^∞ (1/v) f_1(v) f_2(u/v) dv, (11)

that is,

M_{f_1}(s)M_{f_2}(s) = ∫_0^∞ u^{s−1} g(u) du. (12)

This Equation (12) is the Mellin convolution of a product involving two functions, and Equation (11) is the corresponding integral representation. Let f_1 and f_2 be generalized exponential functions; then Equation (11) takes an explicit form, and (E) and (F) provide equivalent representations for g(u).
With these choices, the integral becomes the Krätzel integral of (2) in Section 1. Hence, the Krätzel integral is also available as a Mellin convolution of a product involving two functions; see [7].
If f_2 = f is an arbitrary function and f_1 is chosen suitably, then Equation (11) becomes Equation (13), where K^{−α}_{2,γ} f in (13) is the Erdélyi-Kober fractional integral of the second kind of order α and parameter γ; see [8]. Thus, the Mellin convolution of a product is also associated with a fractional integral of the second kind. A general definition of all versions of fractional integrals in terms of Mellin convolutions of products and ratios is given in [8].
Krätzel Integral as the Density of a Product
Let x_1 > 0 and x_2 > 0 be two real scalar positive random variables, independently distributed with density functions f_1(x_1) and f_2(x_2), respectively. Due to statistical independence, their joint density, denoted by f(x_1, x_2), is the product f_1(x_1)f_2(x_2). Let u = x_1x_2 and v = x_2, and let g(u, v) be the joint density of u and v. Then g(u, v) = (1/v) f_1(u/v) f_2(v), and the marginal density of u, denoted by g_1(u), is the following:

g_1(u) = ∫_v (1/v) f_1(u/v) f_2(v) dv. (14)

Let f_j(x_j) be a generalized gamma density of the form

f_j(x_j) = c_j x_j^{γ_j−1} e^{−a_j x_j^{δ_j}}, x_j ≥ 0, a_j > 0, δ_j > 0, γ_j > 0, (15)

where c_j is the normalizing constant. For the f_j(x_j) in Equation (15), Equation (14) takes the form of a generalized Krätzel integral, Equation (16). Observe that the two expressions for g_1(u) in Equation (16) are not only generalized Krätzel integrals but also statistical densities of a product. We can evaluate the explicit form of the density by using arbitrary moments and then inverting the expression. Consider the (s − 1)th moments of x_1 and x_2. Then, for the density in Equation (15), we have the following:

E[x_j^{s−1}] = Γ((γ_j + s − 1)/δ_j) / [Γ(γ_j/δ_j) a_j^{(s−1)/δ_j}], ℜ(γ_j + s − 1) > 0. (17)

Observe that in Equation (17) the explicit form of the normalizing constant c_j is used; c_j is such that E[x_j^{s−1}] = 1 when s = 1. Then, taking the product,

E[u^{s−1}] = E[x_1^{s−1}] E[x_2^{s−1}], (18)

for ℜ(γ_j + s − 1) > 0, j = 1, 2. Then, the density g_1(u) is available from the inverse Mellin transform, that is, by inverting Equation (18); the result is Equation (19). Note that Equation (19) is the explicit form of the Krätzel integral as well as of the statistical density g_1(u). Instead of a generalized gamma density for f_j(x_j), suppose that the density of x_1 is a type-1 beta density with the parameters (γ + 1, α) and f_2(x_2) is an arbitrary density; then f_1 is of the form

f_1(x_1) = [Γ(γ + 1 + α)/(Γ(γ + 1)Γ(α))] x_1^γ (1 − x_1)^{α−1}, 0 ≤ x_1 ≤ 1, α > 0.

Usually, the parameters in a statistical density are real. Then, g_1(u) becomes Equation (20), where K^{−α}_{2,γ} f is the Erdélyi-Kober fractional integral of the second kind of order α and parameter γ. From Equation (20), note that this fractional integral is also a constant multiple of a statistical density of a product of positive random variables. For generalizations of this result to the matrix-variate case, in the real and complex domains, see [8]. By taking the density of a ratio of real scalar positive random variables, where the variables are independently distributed, with x_1 having a type-1 beta density with the parameters (γ, α) and x_2 having an arbitrary density, we can show that the density of the ratio u = x_2/x_1 produces a constant multiple of the Erdélyi-Kober fractional integral of the first kind of order α and parameter γ; details and generalizations of this result may be seen in [8].
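The moment identity behind Equations (17)-(18) is easy to check by simulation. A sketch with scipy's generalized gamma family: scipy's gengamma(a, c, scale) has density proportional to x^{ca−1}e^{−(x/scale)^c}, so the mapping a = γ/δ, c = δ, scale = a_j^{−1/δ} matches Equation (15) (this mapping is our own computation, worth re-deriving before reuse):

```python
import numpy as np
from scipy.stats import gengamma
from scipy.special import gamma as G

def gen_gamma_rvs(gam, a, delta, size, rng):
    # density  c x^{gam-1} exp(-a x^delta)  via scipy's gengamma parametrization
    return gengamma.rvs(gam / delta, delta, scale=a ** (-1.0 / delta),
                        size=size, random_state=rng)

def moment(gam, a, delta, s):
    # E[x^{s-1}] from Equation (17)
    return G((gam + s - 1) / delta) / (G(gam / delta) * a ** ((s - 1) / delta))

rng = np.random.default_rng(7)
x1 = gen_gamma_rvs(2.5, 1.2, 1.0, 10**6, rng)
x2 = gen_gamma_rvs(3.0, 0.8, 2.0, 10**6, rng)
s = 1.7
print(np.mean((x1 * x2) ** (s - 1)))                        # simulated E[u^{s-1}]
print(moment(2.5, 1.2, 1.0, s) * moment(3.0, 0.8, 2.0, s))  # Equation (18)
```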
Krätzel Integral and Bayesian Structures
In a simple Bayesian structure in Bayesian statistical analysis, we have a conditional density of a random variable x, conditioned on a parameter θ, written as f_1(x|θ), the density of x given θ. Then, θ has its own marginal density, denoted by f_2(θ). Then, the joint density of x and θ is f_1(x|θ)f_2(θ). When both x and θ are continuous variables, we call this situation a continuous mixture. When one variable is discrete and the other continuous, we call it simply a mixture density. Then, the unconditional density of x, denoted by f(x), is given by

f(x) = ∫_θ f_1(x|θ) f_2(θ) dθ. (21)

A general format of the structure in Equation (21) is a hierarchical chain of k conditional densities, Equation (22).
For an application of this type of unconditional density for k = 3, see [9]. When all the densities involved in Equations (21) and (22) are continuous, we also call Equations (21) and (22) continuous mixtures. Taking the conditional and marginal densities from the generalized gamma family, denoting θ = v in the integral, and denoting the unconditional density of x again by f(x), the unconditional density takes the form of Equation (23). Observe that Equation (23) is of the same structure as the Krätzel integral of Equation (2) of Section 1. Note that, if we use the general structure in Equation (22) and consider all the densities as generalized gamma densities, then we obtain a generalization and extension of the Krätzel integral to a multivariate situation. Such generalizations are considered below in this paper.
Pathway Extension of Krätzel Integral
The author of [10] introduced a pathway model for the rectangular matrix-variate case. By using a pathway parameter there, one can move among three different families of functions. When a model is fitted to given data, one member of the pathway family is sure to fit the data if the data fall into one of the three wide families of functions or into the transitional stages of going from one family to another. The pathway model for the real positive scalar variable situation is the following:

f_3(x) = c x^{γ−1} [1 + a(α − 1)x^δ]^{−η/(α−1)}, x > 0, a > 0, δ > 0, η > 0, α > 1. (24)

When α < 1, we can write α − 1 = −(1 − α), so that the model in (24) switches to the model

f_4(x) = c x^{γ−1} [1 − a(1 − α)x^δ]^{η/(1−α)}, (25)

and, further, 1 − a(1 − α)x^δ > 0 in order to create a statistical density out of f_4(x). Its support is finite, or it is a finite-range density, whereas (24) is of infinite range, with x > 0 there. When α → 1, both Equations (24) and (25) go to the model

f_5(x) = c x^{γ−1} e^{−aηx^δ}. (26)

Thus, through the pathway parameter α, one can move among the three families of functions f_j(x), j = 3, 4, 5. Both Equations (24) and (25) can be taken as extensions of Equation (26). If Equation (26) is the ideal or stable situation in a physical system, then the unstable neighborhoods are given by Equations (24) and (25). The movement of α also describes the transitional stages. For the properties, generalizations and extensions of the pathway model, see [11]. The model in Equation (25) for γ = 1, a = 1, η = 1 and for α < 1, α > 1, α → 1 is Tsallis' statistics in non-extensive statistical mechanics [12]. Some properties and other aspects of the pathway model can be seen in [11,13]. The model in Equation (24) for a = 1, η = 1, α > 1, α → 1 is superstatistics (see [14]). Superstatistics considerations come from the unconditional density described in Section 4 when the conditional and marginal densities belong to the exponential and gamma families of densities. Consider the model in Equation (24) with different parameters, take f_1 and f_2 of Section 1, and consider Mellin convolutions. Let f_31 and f_32 be two densities belonging to Equation (24) with different parameters, that is,

f_{3j}(x_j) = c_j x_j^{γ_j−1} [1 + a_j(α_j − 1)x_j^{δ_j}]^{−η_j/(α_j−1)}, x_j > 0, j = 1, 2. (27)

Consider the Mellin convolution of a product; that is, let x_j > 0, j = 1, 2 be independently distributed real scalar positive random variables with the densities f_31 and f_32 of (27), respectively. Then, the density of u = x_1x_2, denoted by g_p(u), where p stands for the pathway model, is given by Equation (28) for α_j > 1, a_j > 0, δ_j > 0, η_j > 0, j = 1, 2. See also the versatile integral discussed in [15]. Various types of extensions of Krätzel integrals are involved in Equation (28). When α_1 → 1, the first factor, the density f_31 in (27), goes to the exponential form, whereas the second part in Equation (28) remains in the type-2 beta family form. This is one extension. In addition, when α_2 → 1, the second-part density in Equation (28) goes to the exponential form, whereas the first part remains in the type-2 beta family of functions. When α_1 → 1 and α_2 → 1, Equation (28) goes to the format of the Krätzel integral in Equation (2) of Section 1. A model of the form in Equation (28) for the cases α_j < 1, α_j > 1, α_j → 1, individually, is studied in detail in [15].
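The pathway limit α → 1 in (24)-(26) is the familiar limit (1 + z/n)^{−n} → e^{−z}; a quick numerical illustration (normalizing constants omitted, parameter values arbitrary):

```python
import numpy as np

x, a, eta, delta = 1.7, 1.0, 2.0, 1.5
target = np.exp(-a * eta * x**delta)            # kernel of Equation (26)
for alpha in [1.5, 1.1, 1.01, 1.001]:
    kernel = (1 + a * (alpha - 1) * x**delta) ** (-eta / (alpha - 1))
    print(alpha, kernel)                        # -> target as alpha -> 1
print("limit:", target)
```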
Connection to Kobayashi Integrals
In Equation (28), let α_1 → 1 and let α_2 remain the same. Then, Equation (28) reduces to the form of Equation (29). Observe that Equation (29) is a more general form of the ultra gamma integral and the Kobayashi integral. The Kobayashi form is available from the Mellin convolution of a ratio. Let u_1 = x_2/x_1 with x_1 = v, and let x_1 and x_2 be independently distributed pathway random variables as described in Section 5.
Then, x_1 = v, x_2 = u_1v and dx_1 ∧ dx_2 = v du_1 ∧ dv. Then, the pathway density of u_1, denoted by g_{p1}(u_1), for α_1 → 1 is given by Equation (30) (cf. [16,17]). Some people call the Kobayashi form the ultra gamma integral. Observe that Equation (30) is a much more general and flexible format: for varying α_2 we have three families of functions in Equation (30), including the Kobayashi format. The Mellin transform of g_{p1}(u_1), with Mellin parameter s, is available from E[u_1^{s−1}] = E[x_2^{s−1}] E[x_1^{1−s}], and these moments are available from the pathway densities of x_1 and x_2 with α_1 → 1.
Multivariate Extensions of Krätzel Integrals
Let us start with the case of three variables. Let x_j > 0, j = 1, 2, 3 be three real scalar variables and let the associated functions be f_j(x_j), j = 1, 2, 3, respectively. If x_j > 0, j = 1, 2, 3 are real scalar random variables, independently distributed, then f_j(x_j), j = 1, 2, 3 may be the corresponding densities. Let u = x_1x_2x_3 be the product, and let v = x_2x_3, w = x_3. Then, dx_1 ∧ dx_2 ∧ dx_3 = (1/(vw)) du ∧ dv ∧ dw. The Mellin convolution of a product involving three real scalar variables is considered in [18]. Let

f_j(x_j) = c_j x_j^{γ_j−1} e^{−a_j x_j^{δ_j}}, x_j > 0, (31)

where c_j is a constant; it may be a normalizing constant if f_j in Equation (31) is a density. Then, the density of u, or the Mellin convolution of the product, again denoted by g(u), is the following:

g(u) = ∫_v ∫_w (1/(vw)) f_1(u/v) f_2(v/w) f_3(w) dw dv, (32)

where Equation (32) is the general structure whatever the f_j's are, and Equation (33) is the case when the f_j's belong to Equation (31). Then, Equation (33) can be taken as a bivariate version of the Krätzel integral. Observe that in the exponent we have v and w with positive and negative exponents. If we take u = x_1x_2x_3, v = x_2, w = x_3, then the exponential part in g(u) is of the form −a_1(u/(vw))^{δ_1} − a_2 v^{δ_2} − a_3 w^{δ_3}. In the format of Equation (33), we can also take v = x_1x_2, w = x_2, or v = x_2x_3, w = x_1. These produce two more different forms corresponding to Equation (33). We can also take u = x_1x_2x_3 = u_12 x_3, u_12 = x_1x_2. We can get the density of u_12 first by using f_1 and f_2. Let the density of u_12 be denoted by g_12(u_12). Then, by using g_12 and f_3, we can get the density of u. This produces another bivariate extension of the Krätzel integral. Following the same procedure, one can take u = u_23 x_1 or u = u_13 x_2, where u_23 = x_2x_3 and u_13 = x_1x_3; in these cases, obtain the densities of u_13 and u_23 first and then proceed. These produce other different bivariate extensions of Krätzel integrals. For example, let u = x_1x_2x_3 = u_12 x_3, u_12 = x_1x_2, and let the density of u_12 be g_12(u_12). Let the density of u be g(u). Then,

g(u) = ∫_w (1/w) g_12(u/w) f_3(w) dw. (H)

However, from the two-variables case, we also have

g_12(u_12) = ∫_v (1/v) f_1(u_12/v) f_2(v) dv. (J)

Substituting for g_12 from (J) into (H), we obtain a representation (K), and other forms from the symmetry also. A few such forms, as in (K), are described in [7] and hence are not repeated here. From products of four or more variables, x_j > 0, j = 1, 2, ..., k, k ≥ 4, we can obtain several different extensions of the Krätzel integral for the bivariate, trivariate and general multivariate cases. The method is similar to what is explained above, and hence further discussion is omitted. Even though hundreds of different integral representations are available for the density of u = x_1···x_k, the explicit evaluation of the density g(u) of u is possible by inverting the corresponding Mellin transform, namely M_g(s) = ∏_{j=1}^k M_{f_j}(s), and taking the inverse Mellin transform of ∏_{j=1}^k M_{f_j}(s) to obtain the density g of u = x_1x_2···x_k.
Connections to Fractional Integrals
Let x_j > 0, j = 1, 2, 3 be real scalar random variables, independently distributed with densities f_j(x_j), j = 1, 2, 3, respectively, and let u, v, w be as defined in Equation (34). Let f_1 be a real scalar type-1 beta density with the parameters (γ + 1, α), that is, with the density f_1(x_1) = [Γ(γ + 1 + α)/(Γ(γ + 1)Γ(α))] x_1^γ (1 − x_1)^{α−1}, 0 ≤ x_1 ≤ 1, α > 0, and let f_2 and f_3 be arbitrary densities. Then, the density of u coming from Equation (34), f_2 and f_3, denoted again by g(u), is given by Equation (35). If f_3 and the corresponding w are absent, then K^{−α}_{2,γ}(f_2, f_3) = K^{−α}_{2,γ} f_2, which is the Erdélyi-Kober fractional integral of the second kind of order α and parameter γ, where the arbitrary function is f_2.
Similarly, when f_2 and v are absent, we get the Erdélyi-Kober fractional integral of the second kind of order α and parameter γ with the arbitrary function f_3. Hence, Equation (35) is a bivariate generalization of the Erdélyi-Kober fractional integral of the second kind. This generalization in Equation (35) is different from the multivariate case of Mathai [8] and the multi-index case of Kiryakova [19]. Other extensions of fractional integrals to the bivariate case are available from the various representations in (K) of Section 6, by taking one or two of the three functions there as real scalar type-1 beta densities. Now let f_1(x_1) be an arbitrary density and let f_2(x_2) be a real scalar type-1 beta density with the parameters (γ, α). Then, from Equation (36), the density of u_1, denoted by g_1(u_1), follows as in Equation (37), where K^{−α}_{1,γ} f is the Erdélyi-Kober fractional integral of the first kind of order α and parameter γ. Consider the generalization to three variables. With the corresponding wedge product du_1 ∧ dv ∧ dw, the marginal density of u_1, again denoted by g_1(u_1), is given in Equation (38), where K^{−α}_{1,γ}(f_2, f_3) of Equation (38) may be called the Erdélyi-Kober fractional integral of the first kind of order α and parameter γ in the bivariate case, or with two arbitrary functions. Here, the integrals are over 0 ≤ v ≤ 1, 0 ≤ w ≤ 1, 0 ≤ vw ≤ u_1. This type of generalization is different from the ones available in the literature. Various definitions of fractional integrals, fractional derivatives and fractional differential equations, with their properties, may be seen in [20-22].
Krätzel Integral in the Real Matrix-variate Case
It is easier to interpret the Krätzel integral in terms of statistical distributions. Let X_1 and X_2 be two p × p real positive definite matrix random variables with the densities f_1(X_1) and f_2(X_2), respectively. Density here means a real-valued scalar function f(X) of the positive definite matrix X; the gamma form is given in Equation (39), where A_j > O is a p × p real positive definite constant matrix for j = 1, 2. When p = 1, we have the corresponding scalar-variable gamma density. The real matrix-variate gamma function Γ_p(γ_j) is explained below. In the scalar case we have taken exponents δ_j > 0, j = 1, 2, but if we take exponents in the matrix-variate case then the transformations will not produce nice forms for further derivations (see the types of difficulties in [23]), and hence we have taken δ_1 = δ_2 = 1 in the matrix-variate case. Let us consider the symmetric product U = X_2^{1/2} X_1 X_2^{1/2}, where X_2^{1/2} > O is the positive definite square root of the positive definite matrix X_2 > O. We have taken the symmetric product because the transformations are on symmetric matrices. Let V = X_2. Then, from Mathai [23], we can derive dX_1 ∧ dX_2 = |V|^{−(p+1)/2} dU ∧ dV and, proceeding as in the scalar-variable case, the density of U, denoted again by g(U), is given by Equation (40), where f_1 and f_2 are some general densities. Consider the case when f_j(X_j) is a real matrix-variate gamma density, given by Equation (41), where Γ_p(γ_j) is the real matrix-variate gamma function given in Equation (42), namely Γ_p(γ) = π^{p(p−1)/4} Γ(γ) Γ(γ − 1/2) ⋯ Γ(γ − (p−1)/2), Re(γ) > (p−1)/2. For the densities in Equation (41), with Γ_p(γ_j) defined in Equation (42), the density of U is given by Equation (43). This Equation (43) is the Krätzel integral in the real matrix-variate case. Note that, if A_1 is a positive scalar quantity, then it can be taken outside, and V^{−1} is obtained, corresponding to the real scalar case.
The model in Equation (41) is also connected to the Maxwell-Boltzmann and Rayleigh densities in physics. Their matrix-variate, multivariate and rectangular matrix-variate extensions, and some applications in reliability analysis, are given in [24]. Their complex matrix-variate analogs can be worked out, but they do not seem to have appeared in the literature yet.
Extension to Rectangular Matrix-variate Case
Let X = (x_{ij}) be a p × q, q ≥ p, matrix of full rank p, where the elements x_{ij} are distinct real scalar variables. Let A > O be a p × p and B > O a q × q constant real positive definite matrix. Let a prime denote the transpose and tr(·) the trace of (·), and consider the models f(X), f_1(X), f_2(X) and f_3(X) defined through such trace forms in Equations (46)-(49). Then, f_3(X), coming from Equations (46) and (47), is the real rectangular matrix-variate version of the Krätzel integral. In a physical model-building situation, if Equation (50) is the stable or ideal situation, then Equations (46), (48) and (49) describe the unstable neighborhoods. From the discussion in Sections 2 and 3, we can see that the models in Equations (46) and (48)-(50) can also be generated by the M-convolution of a product, or the density of a product, in the real matrix-variate case. In Equation (50), for simplicity, we have taken the coefficient parameters as scalar quantities. We can evaluate the normalizing constants C, C_1, C_2, C_3 by using the following steps: let Y = A^{1/2} X B^{1/2}, the general linear transformation labeled (L) (see [23] for the Jacobian in (L) and the other Jacobians to follow).
Let the corresponding function f(X) be denoted by f_{01}(Y), and let the corresponding functions f_1(X), f_2(X), f_3(X) be denoted by f_{11}(Y), f_{21}(Y), f_{31}(Y), respectively. Note that Y has pq real scalar variables, whereas S = YY′, which is a p × p real positive definite matrix, has only p(p + 1)/2 distinct elements. However, we can obtain a relationship between dY and dS (see [23]). Using this relationship, let the corresponding functions of S be denoted by f_{02}(S), f_{12}(S), f_{22}(S), f_{32}(S), respectively; then, for example, f_{02}(S) takes the corresponding form.
Multivariate Situation
In Equation (46) and Equations (48)-(50), let p = 1 and q > 1; then, Y is 1 × q and of the form Y = (y_1, ..., y_q), so that YY′ = y_1² + ⋯ + y_q². For p = 1, the constant matrix A is 1 × 1; let it be a_3 > 0. Then, from Equation (51), f_{31} becomes the form in Equation (52), for −∞ < y_j < ∞, j = 1, ..., q. We may call Equation (52) the multivariate version of the basic Krätzel integral, and f_{01} for p = 1 the pathway-extended form of f_{31} in Equation (52). Note that for a general p > 1 we do not take exponents for (A^{1/2} X B X′ A^{1/2}), because in the general case matrix transformations create problems while computing the Jacobians; the types of problems are described in [23]. However, for the scalar cases in f_{02}, f_{12}, f_{22}, f_{32}, we can take arbitrary exponents. Hence, we have the general Krätzel integrals in the multivariate case, containing the factor s^{γ+q/2−1} e^{−a_1 s^δ − a_2 s^{−ρ}}.
Since s is a real scalar variable here, one can use the scalar version of the Mellin convolution of a product, or the density of a product, from Sections 2 and 3, and go to the Mellin transforms to evaluate the normalizing constant. The same procedure works for all of the models f_{04}, f_{14}, f_{24} as well.
Let M_g(t) be the Mellin transform of g(b) with Mellin parameter t. Then, M_g(t) = ∫_0^∞ b^{t−1} { ∫_0^∞ s^{γ+q/2−1} e^{−a s^δ − b s^{−ρ}} ds } db.
Evaluating the b-integral, and then the remaining s-integral, gives M_g(t) in closed form. By taking the inverse Mellin transform, we obtain g(b) in terms of H(·), the H-function, see [5]. The normalizing constant then follows from this representation.
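As a numeric sanity check, not taken from the paper, the sketch below verifies the classical special case that underlies the ρ = δ reduction discussed next: the basic Krätzel-type integral ∫_0^∞ x^{ν−1} e^{−ax−b/x} dx equals 2(b/a)^{ν/2} K_ν(2√(ab)), which is the Bessel form to which the G^{2,0}_{0,2} function collapses. The parameter values a, b and ν are arbitrary hypothetical choices, with ν playing the role of (γ + q/2)/δ.

```python
import mpmath as mp

a, b, nu = mp.mpf("1.3"), mp.mpf("0.7"), mp.mpf("2.25")  # hypothetical parameters

# Direct quadrature of the reduced (rho = delta) Kraetzel integral
lhs = mp.quad(lambda x: x**(nu - 1) * mp.exp(-a*x - b/x), [0, mp.inf])

# Classical Bessel-K closed form of the same integral
bessel = 2 * (b/a)**(nu/2) * mp.besselk(nu, 2*mp.sqrt(a*b))

# Same value via the Meijer G-function G^{2,0}_{0,2}(ab | nu, 0), rescaled by a**(-nu)
meijer = mp.meijerg([[], []], [[nu, 0], []], a*b) / a**nu

print(lhs, bessel, meijer)   # all three values agree
```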
Note that, when ρ = δ, the H-function reduces to a G-function of the form G^{2,0}_{0,2}[ab | 0, (γ+q/2)/δ]; then, the H-function is simply replaced by the G-function. Observe that, when p = 1, A is 1 × 1; let it be a_3 > 0. This is the a_3 appearing above. | 2020-04-09T09:26:42.331Z | 2020-04-03T00:00:00.000 | {
"year": 2020,
"sha1": "32a4739a0a3889bf058bf9ecef95b2cf89856abf",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/mathematics/mathematics-08-00526/article_deploy/mathematics-08-00526-v2.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1a7f3eb7763740d95c47f31f37e8f199b8b944b6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
149488175 | pes2o/s2orc | v3-fos-license | Paraphrasing Strategy in EFL Ecuadorian B1 Students and Implications on Reading Comprehension
Reading comprehension in Ecuadorian students has mostly been managed at the literal comprehension level, leaving out inferential and critical comprehension. This is because most of the articles students read require a high level of literacy and a good command of comprehension strategies. One of these strategies is paraphrasing; therefore, the purpose of this research was to analyze the effects of paraphrasing and its implications for reading comprehension skills in English as a foreign language. The study was conducted with B1 students enrolled in the 6th level of English at the Linguistic Competence Department at Universidad Nacional de Chimborazo (UNACH), with a sample of 50 students. A baseline pre-test and a post-test were applied to an experimental group and a control group. The project implementation took ten sessions, in which students learned the techniques for paraphrasing effectively and the pitfalls they should avoid when applying this strategy. Analysis with Student's t-test showed that the experimental group outperformed the control group. The main results showed that once students learned the techniques and applied them correctly, they were able to go beyond the literal level and achieve authentic reading comprehension of the text. Pedagogical implications regarding paraphrasing and reading comprehension are presented in the discussion.
Introduction
Reading is a receptive skill and one of the four linguistic skills taught and learned in English as a Foreign Language. It implies not only the capacity to understand words but also the ability to understand the context of those words in the text, process them, relate them to what the student already knows and, finally, construct meaning. Once students have constructed their own knowledge from what they have read, real comprehension can be shown: the output can be clear and meaningful, and questions about the passage can be answered easily. Paraphrasing, in turn, is examined here as a strategy whose basic elements (lexis, semantics and syntax) work together to display real comprehension.
The Current Situation of EFL Students in Regard to Reading Comprehension and Paraphrasing
Hirano (2015), as cited in Alghail and Mahfoodh (2016), determined that the nature of reading activities in higher education was one of the difficulties encountered by students at a Malaysian university. In addition, academic readings are much longer and more complex than the readings students were used to in high school.
The foreign-language factor compounds the problem, since it is more difficult to read texts in an L2 due to the wide range of specialized and academic vocabulary (Clarke, 2018). Finally, students' poor level of literacy also contributes to the drawbacks of academic reading. This study is not disconnected from the Ecuadorian reality. A student who starts college education may face many challenges related to reading due to the lack of a reading culture, which has been evident in Ecuador since early childhood and throughout basic education and high school. For example, a study carried out by the University of Cuenca with 5th graders showed that multi-grade teaching practices in public schools, the few hours assigned to studying the L1 and the lack of interesting books caused poor development of reading skills (Pedagogía, 2016). Additionally, findings related to high school students pointed out that adolescent girls showed a greater interest in reading than young boys, since the latter preferred to spend their free time on video games and social networks (Education, 2017). It is obvious that this lack of scaffolding has a negative impact on the performance of undergraduate students in an academic setting (Anabelle, Cisneros, Shaw, & Maloney, 2010).
As a result of poor reading skills, Ecuadorian college students' papers make palpable the inconsistency between what they read and what they intend to communicate about the text, revealing a lack of reading comprehension skills, measured as the capacity to understand main ideas, point out specific details, comprehend the lexicon and make inferences for meaningful contributions (Tierney & Cunningham, 1980).
These relevant responses expected from students imply a whole process, which starts with reading, continues with paraphrasing and ends with output. Nevertheless, most students in Ecuadorian higher education settings admit having developed a copy-and-paste culture, which has affected the quality of their papers. They attribute this situation to academic grounds: firstly, their poor reading skills, and secondly, the fact that they have never been trained in how to use the paraphrasing strategy effectively. They have heard about paraphrasing and have thought it consisted basically of changing words, producing, most of the time, a plagiarized version of the text. This background has negatively affected their reading comprehension skills, since they are somewhat able to answer questions about specific details, whereas they find it difficult to answer those which imply inferential and critical comprehension.
Research Background and Gaps
After reviewing some literature regarding the effects of paraphrasing on reading comprehension, some aspects related to paraphrasing have been considered, for instance the RAP strategy (Read, Ask, Put) and Neural Network Classification, among others (Hagaman & Casey, 2017; "Improving the reading comprehension of primary-school students at frustration-level reading through the paraphrasing strategy training: A multiple-probe design study," 2017; Rajkumar, 2010). These studies worked with paraphrasing, but with other types of populations, such as young learners and learners with reading difficulties. There are also studies whose populations' first language is English, with good results, since those participants do not face the L2 difficulties presented by the population of this study (Khrismawan & Widiati, 2013). The results of the studies mentioned, even though valid and generalizable to native English readers at any level, still leave a gap to be researched. That is why this research pursues findings in students whose first language is not English and who are learning it in a non-English-speaking environment, since only a limited number of studies have been found relating the paraphrasing strategy to reading comprehension in EFL students.
Reading Comprehension and Paraphrasing
Reading comprehension is a metacognitive skill that has been studied for years and discussed by many authors. For a student to comprehend a text effectively, many elements play a role, for example vocabulary, cognitive strategy instruction, metacognitive processes, motivation and self-regulation. Furthermore, it is necessary to realize that, most of the time, what is read differs from what is understood (Shi, 2012). For this reason, reading comprehension must take its first steps at an early age, and it is the role of class tutors to provide students with strategies that, with time and dedication, will become skills performed unconsciously as students become more experienced.
For a reader to be proficient, he/she should manage the three basic aspects of reading comprehension: literal, inferential and critical comprehension. Literal comprehension implies an understanding of exactly what is read in the text: main ideas, supporting details and lexis. Thus, the reader can categorize, outline and summarize. Similarly, inferential comprehension implies an understanding of what is meant or said between the lines, again in main ideas, supporting details and lexis, so that the reader is able to draw conclusions, predict, and determine the author's attitude and possible bias. Finally, critical comprehension involves a judgment of what the author says and an evaluation according to the reader's prior experience (Mistar, Zuhairi, & Yanti, 2016). Silva and Cain (2015) mention that when readers comprehend a text, it is easy for them to decode words and convey meaning, which is expressed using semantics, syntax and background knowledge in order to construct a pattern of knowledge.
Based on what has been mentioned, and on experience with what is tested in the reading sections of international tests such as TOEFL, PTE Academic and First Certificate, the reader needs to understand the basic areas of a text: main ideas and details, lexical comprehension (literal comprehension), and inferential and critical comprehension. It is also based on experience that students and teachers need to develop strategies that allow them to increase their lexis, improve their use of grammar and activate their inferential and critical reading comprehension.
One of these strategies is paraphrasing, which is not only "putting the text in your own words", as it was described by the students under intervention; or at least, it is not as easy as it sounds. Two important contributions to this concept are added by Shi (2012): the reference to the source and the preservation of meaning, the latter contributed by D'Angelo (1979) and cited by the same author (Shi, 2012). It is important to mention that paraphrasing intends to develop students' ability to process a given text and create an output that is more significant for them. In addition, La Trobe University, in "Referencing and Paraphrasing Writing", states that "Poor paraphrasing is often the result of poor understanding of the text. Some students try to paraphrase at the sentence level rather than the ideas level". McNamara (2007) also finds that there is "a positive correlation between inaccurate paraphrases and poor comprehension of a text" (p. 477). This approach can be used by students to monitor their level of comprehension of the text (Kletzien, 2009) and, according to it, go back to the original source, check, and increase comprehension if necessary. Some authors, as mentioned by Shi (2012), describe levels of paraphrasing that are closely related to the level of comprehension. For instance, if a student has not understood the text properly, superficial paraphrasing will be carried out, consisting only of a variation of words, re-ordered sentences, deletions or the inclusion of too much of the original. In contrast, as the student's level of comprehension increases, he/she will be able to provide substantial modifications, a reference to the author, and new contributions through inferential and critical thinking.
The Meta-cognitive Process of Paraphrasing
According to Kirkland and Saunders, cited by Lee, Yee et al. (2012), paraphrasing uses a step-by-step metacognitive structure. A similar concept is introduced by Choy and Cheah, who mention that students are now changing their thinking paradigms based on Bloom's taxonomy, i.e. analysis, synthesis and evaluation, which play a very important role in today's learning processes (Chee et al., 2012). Paraphrasing implies a complete cognitive and metacognitive process in order to restate a sentence or paragraph in such a way that the new text changes the lexical and syntactic aspects but keeps the semantic one (Khrismawan & Widiati, 2013). In addition, paraphrasing includes key thinking skills such as comparing and contrasting, noticing similarities and differences, and drawing conclusions. Besides, a good level of vocabulary knowledge is a key component of comprehension and a very important platform for paraphrasing, as it is for reading comprehension (Faramarzi, Elekaei, & Tabrizi, 2016).
The metacognitive objectives of this strategy are to produce a reflection on the thinking process during paraphrasing and to provide an evaluation of how effective the strategy is. Some of the questions for this step may be: Did my paraphrased version work? What can I change, improve or avoid next time? Does the new version of the text influence me in the same way the original does? Am I using the paraphrasing strategy effectively? When students answer these questions, they explore a new way of comprehension.
Techniques and Pitfalls of Paraphrasing
Contrary to how it is often understood, paraphrasing is more than only changing words; the complete definition, therefore, includes techniques that take the strategy one step further. Some authors, for instance, include techniques such as changing the sentence structure (making sentences shorter or longer and changing the order of sentences and clauses), providing a reference, avoiding leaving out information, replacing numbers with fractions, using synonyms and changing the word class (Karapetyan, 2005). In contrast, Kennedy and Smith, cited by Hayuningrum and Herdiansari (2012) in the article Students' Problems in Writing Paraphrases in Research Paper Writing Class, mention eight pitfalls that have to be sidestepped: (1) misreading the original, which changes the semantics of the text; (2) including too much of the original; (3) leaving out important information; (4) adding opinion; (5) summarizing rather than paraphrasing; (6) expanding or narrowing the meaning; (7) substituting inappropriate synonyms; and (8) forgetting to document (Hayuningrum, n.d.). These authors also mention that, in order to avoid plagiarism, so-called word-for-word copying and the patchwork paraphrase, which results from the cut-and-paste practice, must be avoided.
Hypotheses and Research Questions
The hypothesis proposed for this study was that the paraphrasing strategy affects the process of reading comprehension in EFL students.
The research questions were: (1) What role does paraphrasing play in EFL students' reading comprehension? Are there any changes in reading skills after students learn to paraphrase?
(2) What are the students' common problems during the process of paraphrasing?
(3) What aspects of reading comprehension are most affected by paraphrasing?
Methodology
The study was quasi-experimental, as there was no randomization in subject selection. Groups were kept intact as students enrolled in the English course of the Linguistic Competence Department at UNACH. Given the need to solve problems related to paraphrasing and reading comprehension, the paraphrasing strategy was the independent variable and reading comprehension the dependent variable.
This study emerged from the research project "Paraphrasing: a methodology strategy and implications on reading comprehension", presented to the ICIT Institute of Sciences and Technology at Universidad Nacional de Chimborazo (UNACH), which underwent a rigorous evaluation process, met all the requirements and scored 90/100 against the mastery criterion; thus, it was approved for implementation.
Participant Characteristics
The sample consisted of two groups selected from the regular English courses at the Linguistic Competence Department at Universidad Nacional de Chimborazo. Health science students were chosen as the experimental group, and engineering students formed the control group. Even though students came from a wide variety of social, academic and demographic backgrounds, no exclusion criteria or restrictions were used, so that all students registered in each group were taken as research subjects, i.e. intact samples. They first signed the informed consent, agreeing to be considered subjects of the study. The total population was 50 students, 25 in each group, ranging between 20 and 23 years old. In the experimental group, males accounted for 48% and females for 52%, whereas in the control group females accounted for 35% and males for 65%. Although they were all registered in the 6th level (B1), their English language skills were heterogeneous and did not match the entry profile required for B1, as determined by the diagnostic test taken at the beginning of the course.
In addition, the researchers were experienced English professors who carried out extensive research to determine the paraphrasing techniques to be implemented in the study. Furthermore, they underwent a self-training process, as well as planning and discussion sessions, before and during the intervention.
The Instruments to Collect Data
A pre-test was applied as a baseline, and a post-test was applied to test the hypothesis. The baseline test covered the four English language skills (reading, writing, listening and speaking) and the two subskills (grammar and vocabulary), as well as a sample of paraphrasing. This test determined the entry level of the two groups and their ability to paraphrase a text. It is necessary to mention that this test also exposed the need to address the three levels of reading comprehension (literal, inferential and critical). The post-test included a reading text and 20 reading comprehension items. The test was based on a reading text from the Top Notch 3 (B1+) Pearson textbook, which is the basis of the syllabus of the regular English courses at UNACH. This test included the three basic sections tested in reading comprehension tests (main ideas, supporting details and lexis) to assess literal comprehension; in addition, it incorporated inferential and critical thinking questions. The test was validated by language and linguistics experts from other universities in the region using the expert judgement technique, in order to assess test difficulty, effectiveness of distracters, discriminating power and time required, all according to the objectives set for the study. Also, to support this validation statistically, the instrument was evaluated using Cronbach's alpha, which yielded a value of 0.85, showing that it is reliable for research purposes. Finally, during the intervention, samples of paraphrasing were collected to analyze the most common problems faced by students while paraphrasing, and the researchers also recorded students' behaviors on observation cards.
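For readers who want to reproduce the reliability computation, the sketch below applies the standard Cronbach's alpha formula to simulated data; the 50 × 20 score matrix is hypothetical, whereas the paper reports α = 0.85 for the real instrument.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 0/1 scores of 50 students on 20 reading items, driven by a
# shared latent ability so that the items are positively correlated
rng = np.random.default_rng(1)
ability = rng.normal(size=(50, 1))
scores = (rng.normal(size=(50, 20)) < ability).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```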
The Proposal Implementation Process
For the experimental group, the first step was a discussion of the reasons why paraphrasing is important in order to avoid summarizing and intellectual plagiarism. Students came to understand how important the strategy is for their major and how it applies to their future reading and writing work.
Five reading texts from the same textbook, Top Notch 3 (B1+, Pearson, 3rd edition), were used in order to progressively apply the different paraphrasing techniques. These five readings covered general topics such as the five most effective work habits, holidays around the world, world problems, job qualifications, medical discoveries and global warming. The vocabulary and grammar structures found in each reading had been learned previously during the unit development, according to the syllabus for the 6th level.
In the class sessions, the teacher used the Presentation, Practice and Production method to introduce topics related to paraphrasing. The presentation stage involved modeling a memorized statement related to the reading text worked on in that session. The teacher then asked students to state what they had understood in their own words, wrote both sentences on the board, guided students in analyzing the key elements and highlighting them in the original text, and then compared them with the students' statement, checking accuracy and completeness. After that, there was time for questions to monitor understanding. Lastly, depending on the session, the teacher would introduce, review or recall the concepts of paraphrasing and metacognition.
The practice stage used the clickable audio version of the reading from the textbook for students to review pronunciation, and the teacher provided examples and exercises taken from the reading for students to identify and apply the specific technique taught at that time. When applying the technique, students were asked to write their version directly on the handout.
During the production stage, students listened to and read the text simultaneously, twice. Then, students scanned for specific information and subdivided the text into smaller sections. Next, students were asked to provide an oral paraphrased version of the main idea and details, and the teacher evaluated comprehension. Students recorded their versions on their mobile phones, then listened to them and compared them with the text to see how accurately they had kept the main idea and details (semantics). After checking, students were asked to provide a written paraphrased version of the subdivided sections, aiming to test vocabulary usage (lexis) and grammar structures (syntax).
The class session topics covered the specific techniques used in the paraphrasing strategy: using synonyms, changing the word class, switching the order of phrases and clauses, making long sentences shorter, using concrete ideas instead of abstract ones, changing the sentence structure (for example, using passive voice instead of active voice), using fractions in place of numbers and, finally, providing a reference. Besides that, students were trained in how to avoid pitfalls such as forgetting to acknowledge the source or author, misreading the original, including too much of the original, leaving out important information, adding personal opinion, summarizing rather than paraphrasing, substituting inappropriate synonyms, and expanding or narrowing the meaning. Students worked on teacher-made exercises related to each topic. Each session lasted 2 hours, and there were 10 regular sessions and 1 review session.
Before ending the implementation process, in the review session, a series of one hundred extra sentences with different grammar structures was developed by the teacher and paraphrased by the students in class and at home to consolidate knowledge.
For the control group, on the other hand, a talk about why reading strategies are important, and how fruitful managing them effectively can be for their majors, was conducted before starting the experiment. Among the benefits stated were that students would be able to understand texts better and make use of their content as required. This group worked on the same reading texts for the same length of time, but on different days and with a different method. The teacher conducted pre-reading and post-reading strategies as proposed by the textbook. These included describing pictures or visual aids, vocabulary preview, brainstorming and group discussion to elicit prior knowledge of the topic, with graphic organizers, questioning and exit slips as post-reading strategies. The teacher also included some Spanish translation and some explanation of the culture behind the context of the reading when necessary.
Results
In order to test the general hypothesis, Student's t-test was applied, based on two factors: the normal behavior of the data, which was established by the Kolmogorov-Smirnov test in SPSS, and the sample size, which is greater than 30.
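A minimal sketch of this two-step procedure, assuming simulated posttest scores rather than the study's actual data, could look as follows; note that SPSS applies a Lilliefors correction to the one-sample Kolmogorov-Smirnov test, which the plain call below omits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical posttest scores (out of 20) for the two intact groups of 25
experimental = rng.normal(15.5, 2.0, 25)
control = rng.normal(13.0, 2.2, 25)

# Step 1: Kolmogorov-Smirnov check of normality for each group
for name, g in (("experimental", experimental), ("control", control)):
    _, p = stats.kstest((g - g.mean()) / g.std(ddof=1), "norm")
    print(f"{name}: KS p = {p:.3f}")  # p > 0.05 -> no evidence against normality

# Step 2: independent-samples t-test at the 0.05 significance level
t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")   # p < 0.05 -> reject the null hypothesis
```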
Recruitment
The research was carried out over a period of twelve months, comprising four stages: planning (three months), researcher training (two months), implementation (four months) and follow-up (three months). The implementation stage was applied in the same term for both the experimental and control groups.
Analysis of the First Research Question
What role does paraphrasing play in EFL students' reading comprehension? Are there any changes in reading skills after students learn to paraphrase?
The analysis of the posttest results of the two groups was carried out and is presented in Table 1. The results show that the p-value is lower than the selected level of significance (0.05), which leads to rejecting the null hypothesis, since, according to Table 1, there is an improvement in the dependent variable for the experimental group.
Analysis of the Second Research Question
What are the students' common problems during the process of paraphrasing?
Data regarding this research question were collected only from the experimental group, which was under intervention. Three samples of the same paraphrasing practice were gathered; the number of errors in each technique was tabulated and is shown in Figure 1, where the techniques are presented in order.
Analysis of the Third Research Question
(3) What aspects of reading comprehension are most affected by paraphrasing? The table shows that, in all three aspects tested in reading comprehension, the experimental group outperformed the control group in the posttest, with statistically significant p-values.
Drawbacks of the Experiment
Even though the results were positive and the hypothesis was proven, it is necessary to identify some of the downsides detected during the study and recorded in the teacher's observation cards. First, participants dedicated limited time to reading, as mentioned in the statement of the current situation of Ecuadorian students. During informal conversations recorded in the observation cards, they admitted to reading no more than an hour a day, and only because they were required to. Moreover, they showed limited self-motivation for reading for pleasure in the L2: they basically read only their notes, or books for tests and lessons, or because they had to memorize some process or procedure. Therefore, the group presented low ability in inferential and critical comprehension.
Secondly, another problem that affected the study was the lack of commitment to self-improvement in the use of the paraphrasing strategy. Students felt that paraphrasing in English was difficult and refused to practice on their own; they wanted to count on the teacher's support and guidance in every lesson. Only half of the students did the assigned homework, and the rest copied. This was evident because they could not apply the studied techniques efficiently and needed more monitoring and feedback; as a result, this fact negatively affected the study. Furthermore, students' preconceptions about paraphrasing compounded the problem, since they thought it consisted only of changing words by using synonyms, and at times they were unwilling to take a step forward. In fact, the teacher had to push them by assigning grades to every practice they did in class, and had to ask them to come to extra hours of practice in order to carry out the intervention as planned.
Thirdly, the level of English worked against those whose entry level was lower than expected. They struggled even with substituting synonyms and depended on their mobile devices to carry out the activities. They found it really difficult to apply techniques such as changing the word class, for example from verb to noun.
Finally, as stated before, the paraphrasing strategy seemed very difficult for poor readers to apply, so the intervention time was short compared to the high academic level required for developing the strategy. In addition, the academic workload demanded by their majors prevented them from developing other skills, including the ability to read in English.
Discussion
Results of the baseline pretest showed that students in both the experimental and control groups performed similarly regarding the application of paraphrasing strategies; this finding is similar to the results of the University of Iowa study on the effects of the paraphrasing strategy on expository reading comprehension. In addition, those findings showed that learners benefit from the paraphrasing strategy under an explicit instruction model (Hua, Woods-Groves, Ford, & Nobles, 2014). These findings coincide with the results of the study conducted with university students at UNACH, since, after the intervention, the experimental group exceeded the control group's mean posttest result by 2.5 points. Therefore, the findings of the study suggest that providing explicit paraphrasing instruction can improve the reading comprehension of university students.
In the study "Paraphrasing in English Academic Writing by Thai Graduate Students", findings revealed that one of the most common errors in students' paraphrasing practices is plagiarism, because they do not cite properly (Ideas, Paper, & Louis, 2017). This finding is also largely consistent with the study at the National Kaohsiung Normal University, conducted in an EFL academic setting, which found that the most common error was inappropriate referencing of source texts. Both studies are consistent with a difficulty found in this research, namely not providing a reference, which ranked as the third most common paraphrasing difficulty, even though it was the first in the first sample taken.
Another study, concerning students' behaviors and views of paraphrasing and inappropriate textual borrowing in an EFL context, suggested that the most common error was patch-writing, i.e. students misused synonyms and syntactic adjustments, producing a distortion of the semantics of the original text (Liao & Tseng, 2010). This finding parallels the results of this study concerning the most common difficulties, related to substituting inappropriate synonyms and changing the sentence structure, both of which affect the semantic completeness of the text.
Durkin et al. (1995) state that effective reading involves a purposeful process aimed at activating cognitive and metacognitive procedures, that is, critical thinking, and that it comprises understanding the meaning of a text, organizing information in a logical mode, and restructuring it, as cited by Hua et al. (2014). In the study on the components of paraphrase evaluation conducted by McNamara at the University of Memphis, Tennessee, results indicated that semantic completeness and syntactic similarity showed a stronger relationship to paraphrase quality than lexical similarity (Systems, 2009). Those findings are consistent with this study, in that main ideas and critical-inferential thinking, included in semantics, revealed the highest mean in relation to lexical comprehension.
The implication of the results of this research for the teaching of paraphrasing is that the techniques the strategy involves are well worth teaching at the higher education level, for EFL students as well as first-language students, because paraphrasing fosters reading at all of its comprehension levels. Teachers are encouraged to implement this strategy in their syllabi as part of developing reading and writing skills. Furthermore, paraphrasing could be applied at every level of education, as reading is; for example, paraphrasing can start at the first levels, during elementary instruction, by using sentences or short quotes. As stated by Rachel Lynette (2014) in her teachers' blog, there are some approaches that can be used to progressively enable students to paraphrase. However, instruction does not have to stop there; it has to continue until students' comprehension meets the acceptable or desired level (Hagaman, Casey, & Reid, 2016). Therefore, this strategy should not be diminished or forgotten during college training, since college students are the most likely to commit plagiarism due to the nature of their assignments.
Regarding reading comprehension, this study suggests that teachers should pay special attention to this language skill, particularly in the Ecuadorian context, where it has been demonstrated that mostly literal comprehension is developed. Every teacher at every level should look for strategies that stimulate learners to activate the three levels of reading comprehension (literal, inferential and critical). Paraphrasing can be one of them, as this study has shown that it develops reading comprehension.
Conclusions
Explicit instruction in paraphrasing techniques and awareness of common errors lead to effective reading comprehension. Future researchers should develop this line of research with a focus on simplifying strategies and determining criteria for achieving quality paraphrases of source texts.
The most common errors during the process of paraphrasing concern syntactic similarity, plagiarism and semantic completeness. The implementation of effective paraphrasing strategies might contribute to decreasing plagiarism and raising standards of academic integrity. In addition, it is important to establish a simple and accurate assessment process for paraphrasing in order to facilitate appropriate feedback.
The aspects of reading comprehension most affected by paraphrasing are semantic completeness and syntactic similarity; that is, learners become able to step into inferential and critical comprehension.
Figure 1. Students' most common errors in paraphrasing
Table 1. Posttest results - experimental and control group
Table 3. Group statistics
Table 4. Independent samples test - comparison between the experimental and control group posttests | 2019-05-12T14:24:33.215Z | 2018-12-07T00:00:00.000 | {
"year": 2018,
"sha1": "92ec208450bffacfae21ff0337d0d4c06a8b97da",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/elt/article/download/0/0/37784/38180",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "92ec208450bffacfae21ff0337d0d4c06a8b97da",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
146807994 | pes2o/s2orc | v3-fos-license | A Polarization-insensitive and High-speed Electro-optic Switch Based on a Hybrid Silicon and Lithium Niobate Platform
We propose and demonstrate a polarization-insensitive and high-speed optical switch unit based on a silicon and lithium niobate hybrid integration platform. The presented device exhibits a sub-nanosecond switching time, a low drive voltage of 4.97 V, and low power dissipation due to electrostatic operation. The measured polarization-dependent loss was lower than 0.8 dB. The demonstrated optical switch could serve as a building block for polarization-insensitive and high-speed optical matrix switches.
Fast optical switches (FOS), with switching times in the micro- to nanosecond region, are among the key enabling optical components for optical packet switching (OPS) [1] and optical burst switching [2]. They bring the benefits of bit-rate and format transparency, which provide greater agility and flexibility to optical networks. The rise of high-performance computing and Data Center Networks (DCNs) in recent years has created a need for FOS that can enable high-bandwidth, low-latency, energy-efficient optical interconnects among servers and racks [3][4][5]. Leveraging advanced complementary metal-oxide-semiconductor (CMOS) manufacturing processes, silicon photonics has emerged as a powerful platform for high-density photonic integrated circuits (PICs), opening the possibility of their low-cost and high-volume production [6][7][8][9][10]. In the past few years, silicon photonic switches have been reported exploiting the thermo-optic (TO) effect [11][12][13][14][15], the free-carrier dispersion effect [16][17][18][19][20][21], and micro-electro-mechanical-systems (MEMS) technology [22][23][24]. TO switches suffer from slow switching speeds on the order of tens of microseconds or even milliseconds. To achieve nanosecond-scale switching times, the free-carrier dispersion effect, through carrier injection or depletion, is widely exploited for high-speed electro-optic (EO) silicon switch fabrics. Unfortunately, free-carrier dispersion is intrinsically absorptive, degrading not only the insertion loss but also the extinction ratio of the switches. MEMS-actuated optical switch fabrics exhibit low insertion loss, excellent extinction ratio and large port counts, but require a high turn-on voltage of >50 V, which complicates the driver design and limits their application. To date, a silicon photonic switch with high switching speed, high extinction ratio and low drive voltage remains a challenging research objective.
Previously, we demonstrated ultra-high-speed and low-loss Mach-Zehnder (MZ) modulators based on hybrid integration of lithium niobate (LN) phase shifters with passive silicon circuitry [25]. Those devices exhibited a low insertion loss of <2.5 dB, a modulation efficiency of 2.2 V·cm, and an EO bandwidth of more than 70 GHz. In this paper, we demonstrate a polarization-insensitive MZ switch based on hybrid integration of LN phase shifters with silicon photonic circuits. The presented devices show sub-nanosecond switching speed, a low drive voltage of around 4.97 V and a low polarization dependence of <0.8 dB. Moreover, the present devices feature energy-efficient electrostatic operation, with no power dissipation when holding the switch in the cross or bar state. A schematic diagram of the polarization-insensitive FOS unit is shown in Fig. 1(a). The device consists of a bottom silicon waveguide layer, a top LN waveguide layer and vertical adiabatic couplers (VACs) which transfer the optical power between the two layers. The top waveguides, formed by dry etching of an X-cut LN thin film, serve as high-speed EO phase shifters in which the ultra-fast Pockels effect occurs. The bottom silicon circuit supports all of the passive functions, consisting of 3 dB multimode interference (MMI) couplers that split and combine the optical power, and two-dimensional grating couplers (2D-GCs) for polarization-insensitive off-chip coupling. The VACs, formed by silicon inverse tapers and superimposed LN waveguides, serve as interfaces to couple light up and down between the silicon waveguides and the LN waveguides. A mode calculation (using the finite difference eigenmode solver of Lumerical Mode Solutions [26]) indicates that nearly 100% of the optical power can be transferred from the silicon waveguide to the LN waveguide, and vice versa [25].
The input signal, coupled in through the 2D-GC, is decomposed into two orthogonal polarization components, X-polarization (X-pol) and Y-polarization (Y-pol), which are coupled into a pair of orthogonal waveguides, both in the TE mode (see Fig. 1(a)). Sharing the same polarization and dispersion, they are switched by the corresponding X- and Y-polarization MZ switches, designed for the TE mode only, and then sent to the two output 2D-GCs (Output-1 and Output-2). The cross-section of the device is shown in Fig. 1(b). The LN waveguides have a top width of w = 1 μm, a slab thickness of s = 420 nm and a rib height of h = 180 nm. The thickness of the electrodes was set to t = 600 nm, and the gap between the waveguides and the electrodes was set to 2.75 μm. The electrodes are designed in a single-drive push-pull configuration, so that an applied voltage induces a positive phase shift in one arm and a negative phase shift in the other. The arms of the MZ switches are designed to be 4 mm long. The device fabrication process is shown in Fig. 2. The device was fabricated on a silicon-on-insulator (SOI) wafer with a 3-μm-thick buried oxide (BOX) and a 220-nm-thick silicon layer. Firstly, the shallow-etched 70-nm 2D-GCs and the 220-nm Si waveguides were defined successively by e-beam lithography (EBL) and inductively coupled plasma (ICP) etching using hydrogen bromide (HBr). Then, an X-cut LN-on-insulator (LNOI) wafer with a silicon substrate, commercially available from NANOLN, was flip-bonded to the patterned SOI wafer through an adhesive bonding process using benzocyclobutene (BCB). After that, the substrate of the LNOI was removed by mechanical grinding and ICP, and the BOX layer was removed by a dry etching process. Hydrogen silsesquioxane (HSQ, FOX-16 by Dow Chemical) was then spin-coated on the 600-nm-thick LN membrane, followed by EBL patterning. Through plasma etching in an ICP etching system, the waveguide patterns were transferred into the LN. Finally, a lift-off process was performed to produce the Au electrodes. A scanning electron microscope (SEM) image of the fabricated electrode and LN waveguide is shown in Fig. 3(a). Fig. 3(b) shows the cross-section of the fabricated LN waveguides, with a sidewall angle of 60°. The total footprint of the device is about 6.0 mm × 1 mm. The 2D-GCs are the key components for realizing the polarization-insensitive operation [27,28]. An optical micrograph of the 2D-GCs used in the present device is shown in Fig. 4(a). To measure the PDL, one of the most important performance metrics, two identical 2D-GCs were connected in a back-to-back configuration, as shown in Fig. 4(b). The measured coupling spectra for P- and S-polarization, illustrated in Fig. 4(c), indicate that the PDL is less than 0.8 dB over the C-band. The P- or S-polarized input light was calibrated by measuring the transmission of a TE grating coupler for S-polarization, which was co-fabricated with the present device. As shown in Fig. 5, the measured coupling efficiency is -6.9 dB at the central wavelength of 1547 nm, and the 1-dB and 3-dB bandwidths are measured to be 27 nm and 43 nm, respectively. The coupling efficiency of the present 2D-GCs is relatively low due to the unoptimized thickness of the BOX layer in the current SOI wafer (3 μm); it can be significantly improved by using a substrate transfer technique, as demonstrated in ref. [28].
The measured transmission of the X- and Y-polarization MZ switches at different driving voltages and a fixed wavelength of 1550 nm is shown in Fig. 5. Light from a wavelength-tunable laser was coupled into the waveguides of the device via a polarization controller (PC) and a single-mode fiber. Several TE grating couplers were co-fabricated on the chip in order to calibrate the input polarization state to be X- or Y-polarization. The transmittances at the Output-1 and Output-2 ports, when X- or Y-polarization was introduced to the Input-1 port, are plotted in Fig. 5 as functions of the voltage applied to the electrode. The X-polarization switch takes the cross/bar state at a voltage of 8.59 V/3.62 V, while the Y-polarization switch takes the cross/bar state at a voltage of 9.52 V/4.55 V. The measured switching voltage, i.e. the voltage difference between the cross and bar states, is 4.97 V. Thus, the circuit gives a polarization-insensitive cross or bar state when both the X- and Y-polarization switches take their cross or bar state. The extinction ratios of the switches for both X- and Y-polarization were measured to be >40 dB, as shown in Fig. 5. The on-chip insertion loss of the polarization-diversity switch was estimated to be around 2 dB by subtracting the coupling loss of the 2D-GCs. It should be noted that the circuit is very energy efficient, because it consumes power only when the state changes; no power is consumed when holding the switch in the cross/bar state.
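As a rough illustration of how the reported voltages fit an ideal Mach-Zehnder transfer function, the sketch below models the two output ports with an arm phase difference that varies linearly with drive voltage (π per 4.97 V). This is a lossless, idealized model constructed here from the reported numbers; it ignores the device's finite extinction ratio and any wavelength-dependent phase offset.

```python
import numpy as np

V_PI = 4.97       # measured cross-bar switching voltage (V)
V_CROSS = 8.59    # cross-state bias of the X-polarization switch (V)

def mz_outputs(v: float, v_cross: float = V_CROSS, v_pi: float = V_PI):
    """Ideal push-pull MZ switch: arm phase difference is pi*(v - v_cross)/v_pi."""
    phase = np.pi * (v - v_cross) / v_pi
    return np.cos(phase / 2) ** 2, np.sin(phase / 2) ** 2  # (cross, bar) powers

for v in (3.62, 8.59):  # reported bar- and cross-state voltages of the X-pol switch
    t_cross, t_bar = mz_outputs(v)
    print(f"V = {v:.2f} V: cross = {t_cross:.3f}, bar = {t_bar:.3f}")
```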
To examine the spectral response of the circuit, the transmittance spectra of the cross and bar ports, in the cross and bar states, for X-, Y- and a mixed polarization are shown in Fig. 6. Extinction ratios of 26 dB and 28 dB were achieved across the C-band at both the cross and bar output ports for X- and Y-polarization, respectively. In addition, Fig. 6(c) shows the transmission spectra for S-polarization, which forms an angle of approximately 45 degrees to the X- and Y-polarization directions. An extinction ratio of at least 28 dB was obtained for S-polarization, which further confirms the broadband polarization-insensitive operation of the circuit. Finally, we characterized the dynamic switching properties of the EO switches. A square-wave electrical signal with a repetition rate of 500 MHz and a duty cycle of 50%, generated by an arbitrary signal generator (MICRAM), was applied to the electrode of the phase shifter through an RF probe. A 50 GHz broadband amplifier (SHF 807) was used to amplify the driving signal to the switch, together with a DC bias. The optical output intensity was recorded using an oscilloscope (Tektronix DSA8300). As shown in Fig. 7, the rise and fall times were measured to be 100 ps and 312 ps, respectively, indicating an ultra-fast switching speed.
In conclusion, we have designed and demonstrated a polarization-insensitive and high-speed optical switch circuit based on the hybrid silicon and LN platform. The polarization-insensitive operation was achieved with a polarization-diversity technique using 2D-GCs with a low PDL of less than 0.8 dB in the C-band. The demonstrated device exhibits a switching speed of less than 1 ns, an insertion loss of 2 dB and a low drive voltage of around 4.97 V. The switch demonstrated here could serve as a building block for polarization-insensitive, high-speed and large-scale silicon photonic matrix switches. | 2019-05-07T01:27:39.000Z | 2019-05-07T00:00:00.000 | {
"year": 2019,
"sha1": "925222b583d402755a9dd457aa41814728ddc601",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1905.02315",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "925222b583d402755a9dd457aa41814728ddc601",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
201829295 | pes2o/s2orc | v3-fos-license | A Review of Cardiovascular Toxicity of Microcystins
The mortality rate of cardiovascular diseases (CVD) in China is on the rise, and the increasing burden of CVD in China has become a major public health problem. Cyanobacterial blooms have recently been considered a global environmental concern. Microcystins (MCs) are secondary products of cyanobacterial metabolism and the most harmful cyanotoxins found in water bodies. Recent studies provide strong evidence of positive associations between MC exposure and cardiotoxicity, representing a threat to human cardiovascular health. This review focuses on the effects of MCs on the cardiovascular system and provides some evidence that CVD could be induced by MCs. We summarize the current knowledge of the cardiovascular toxicity of MCs, with regard to both direct and indirect cardiovascular toxicity. The toxicity of MCs is mainly governed by increasing levels of reactive oxygen species (ROS), oxidative stress in mitochondria and the endoplasmic reticulum, inhibition of the activities of serine/threonine protein phosphatases 1 (PP1) and 2A (PP2A), and destruction of the cytoskeleton, which finally induce the occurrence of CVD. To protect human health from the threat of MCs, this paper also puts forward some directions for further research.
Introduction
The cardiovascular system is considered a dynamic system. As the first organ in embryonic development, the heart provides nutrients and oxygen to all of the various organs and tissues. The normal functions of the cardiovascular system are mainly affected by genetic factors, environmental factors and the interaction between the two, which is considered a principal cause of cardiovascular disease (CVD) [1]. It is estimated that about 290 million patients suffer from CVD [2]. Additionally, a recent epidemiological survey indicated that the prevalence and mortality of CVD in China are still on the rise, and that the mortality rate of CVD is the highest, in comparison to tumors and other diseases, accounting for more than 40% of the total number of deaths resulting from disease in China [2]. The burden of CVD is increasing and has become a major public health problem. Thus, it is important to take measures towards preventing and curing CVD.
The increasing eutrophication and frequent cyanobacterial blooms in water bodies have become a severe public health concern globally. Cyanobacterial blooms are usually characterized by the mass formation of cyanobacteria, green algae and diatoms, leading to the death of aquatic organisms due to hypoxia. In addition, some algae (blue-green algae), including Microcystis, Anabaena, Anabaenopsis, Oscillatoria and Nostoc [3][4][5][6], can produce various potent toxins, such as cyclic peptides, alkaloids and lipopolysaccharides, in the process of metabolism [7]. Subsequently, negative human and animal health effects may be induced on exposure to these toxins [8,9]. Among the cyanotoxins produced, MCs are the most harmful found in water bodies.
Figure 1. General chemical structure of MCs (microcystins). (A) 1-7 represent seven amino acid residues, respectively. X and Z in positions two and four are highly variable L-amino acids that determine the suffix in the nomenclature of MCs. (B) represents some of the most frequent MC congeners (adapted from Chen et al. [19]).
Studies have shown that human beings are exposed to MCs mainly through drinking water, body contact and long-term consumption of seafood and algal dietary supplements, with drinking water being the principal route [8]. To minimize the health hazard caused by MCs, the World Health Organization (WHO) specifies that the maximum allowable content of MCs in drinking water should not exceed 1 µg/L [20]. Research indicates that MCs negatively affect various human organs upon exposure. Several organs or tissues, including the liver [21][22][23][24][25][26], kidney [25][26][27], nervous system [28][29][30], gastrointestinal tract [31][32][33], reproductive system [34,35] and cardiovascular system [36], have been reported to be targets of MC toxicity. This review will focus on the cardiovascular toxicity of MCs and provide some evidence of cardiovascular toxicity caused by MCs. To protect human health from the threat of MCs, this paper also puts forward some directions for further research.
Cardiovascular Toxicity of MCs
Direct Cardiovascular Toxicity
Direct cardiovascular toxicity means that MCs act directly on the cardiovascular system, including the tissues, cells, blood and blood vessels of the heart, resulting in abnormal structure and/or function of the cardiovascular system.
Transportation of MCs
Researchers have demonstrated that, after exposure, MCs can be transported into various cells through organic anion transporting polypeptides (OATP) [63]. Almost every organ is capable of expressing OATP family genes [8,64], though some OATP genes are expressed preferentially, or even selectively, in specific tissues [65]. Heart tissue is no exception [66]. Several OATP family genes, including OATP4A1, OATP2A1, OATP2B1 and OATP3A1, have been reported to be expressed in heart tissue [67,68]. This means that MCs can be transported to and accumulate in cardiac tissue cells and ultimately damage them, though the mechanisms by which MCs enter cardiomyocytes await further exploration [69]. Moreover, cellular uptake of MCs has been confirmed to depend on the degree of blood perfusion and on the types and expression levels of OATP carriers [70].
Cytoskeleton Disruption and Mitochondrial Dysfunction
Cardiomyocytes, fibroblasts, telocytes, mast cells, endothelial cells, white blood cells and other immune-related cells (as well as smooth muscle cells, adipocytes and pericytes) are documented cell types in cardiac tissue [71]. Among these, cardiomyocytes are reported to be rich in mitochondria, which account for about 40% of myocardial cell volume [71]. Kowaltowski et al. demonstrated that the large number of unsaturated fatty acids in the mitochondrial membrane makes cardiomyocytes vulnerable to free radicals and prone to oxidative stress [72]. Oxidative stress is related to the pathophysiology of many cardiomyopathies, such as anthracycline-mediated cardiomyopathy [73] and alcoholic cardiomyopathy [74]. Goffart et al. declared that mitochondrial defects contribute to cardiomyopathy and heart failure (HF) [75]. An increasing number of studies have also confirmed that MCs can cause oxidative stress imbalance in mitochondria, further resulting in infiltration of neutrophils into tissues, increased protease secretion and the production of substantial oxidative intermediates, thereby contributing to cell aging and even death [76][77][78][79][80].
In 2001, Zhang et al. [37] proved for the first time that MC-LR could induce cardiotoxicity. The short-term toxic effects of MC-LR on SD (Sprague-Dawley) rats given intraperitoneal (I.P.) injections of different doses of MC-LR were investigated, and the results indicated that MC-LR could damage the physical structure of cardiomyocytes and alter biochemical parameters, including lactic dehydrogenase, aspartate aminotransferase and creatine kinase (CK) [37]. A follow-up study subsequently confirmed that MC-LR is involved in pericardial edema and tubular heart formation in embryos of the loach Misgurnus mizolepis Günther [38]. In a chronic study [39], Wistar rats were injected I.P. with 10 µg/kg MC-LR every two days for eight months, and the treated animals exhibited no remarkable change in the appearance or morphology of the heart. Although the results of the TUNEL (terminal deoxynucleotide transferase-mediated deoxy-UTP nick end labelling) assay demonstrated no alteration in apoptosis, the cytoskeletons of cardiomyocytes were disrupted, as characterized by a loss of cell cross-striations, lower myofibril volume fraction, enlarged cardiomyocyte volume, decreased myofibrillar volume and even fibrosis, and infiltration of mononuclear cells into the interstitial tissue [39]. In 2010, a similar chronic study was carried out using MC-YR by Suput et al. [40]. The results showed that the volume and density of the myocardium decreased with fibrous proliferation, and a few areas were infiltrated by lymphocytes. In addition, enlarged cardiomyocytes and abnormal nuclear structure were found, but the TUNEL assay again revealed no increase in apoptosis. Taken together, these results suggest that long-term exposure to relatively low doses of MC-LR and MC-YR can induce myocardial atrophy and fibrosis.
Acute exposure to MCs has also been demonstrated to induce cytoskeleton disruption and mitochondrial dysfunction. Investigating the cardiac toxicity of MC-LR injected I.P. into rats at doses of 0.16 LD50 (14 µg/kg) and 1 LD50 (87 µg/kg), Qiu et al. [41] reported myocardial infarction in almost all dead rats, while the surviving rats displayed decreases in heart rate and blood pressure that turned out to be associated with myocardial mitochondrial dysfunction. Further microscopic examination of the pathological ultrastructure showed loss of adhesion between cardiomyocytes and swelling or rupture of mitochondria, and biochemical tests showed elevated levels of CK and troponin I, suggesting cardiomyocyte damage. In addition, the level of lipid peroxide was notably increased, indicating the occurrence of severe mitochondrial oxidative stress. The respiratory chain enzyme complexes I and III were also found to be inhibited, indicating that the mitochondrial electron transfer chain was blocked. Zhao et al. [36] injected MC extracts (mainly containing MC-LR and MC-RR) into the abdominal cavity of rabbits at doses of 12.5 and 50 µg MC-LR eq/kg bw (body weight) and examined mitochondrial ultrastructure and enzyme activity in the hours after injection. Morphological changes of cardiac mitochondria, increased concentrations of lipid peroxide and increased activity of succinate dehydrogenase were observed. Nicotinamide adenine dinucleotide (NADH) dehydrogenase was also found to be inhibited, which further affected the mitochondrial electron transfer chain. Moreover, MCs could alter the activities of Ca²⁺-Mg²⁺-ATPase in mitochondria, thus destroying ion homeostasis, which may contribute to the loss of mitochondrial membrane potential (MMP) and ultimately damage the mitochondria of myocardial cells.
Wang et al., in a recent publication, provided a new interpretation of the possible role of MCs in the toxicological mechanisms of vascular dysplasia in vivo and in vitro [42]. Zebrafish juveniles and human umbilical vein endothelial cells (HUVECs) were exposed to MC-LR at doses of 0.1 µM and 1 µM, respectively. In vivo, MC-LR resulted in angiodysplasia, damaged vascular structures, reduced lumen size and blood flow, and vascular dysfunction, while in vitro, apoptosis, caspase-3/9 activity, mitochondrial ROS and p53 were increased, whereas MMP and proliferating cell nuclear antigen were inhibited. To explore the effect of MC-LR on the cardiopulmonary system, Martins et al. [43] injected 100 µg MC-LR/kg into trahira (Hoplias malabaricus) and reported that MC-LR induced cytotoxicity by enhancing oxidative stress and the activity of related enzymes, consequently affecting cardiopulmonary function. This was similar to what Qiu et al. demonstrated [41]. In addition, MC-LR has recently been confirmed to reduce the heart rate of Japanese medaka (Oryzias latipes) [44]. Recently, Xu et al. [49] provided an explanation of the possible role of antioxidant enzymes in the toxicological mechanisms of MCs at the transcriptional level by treating H9C2 cells (a rat cardiomyocyte cell line) with 10 µM MC-LR. The expression levels of cardiac rhythm and antioxidant genes were examined, and the results suggested that the expression of rhythmic genes (bmal1, cry1, cry2, per1, per2) was suppressed, while antioxidant genes (catalase, ho-1, sod1, sod2) were upregulated. These results indicate that alteration of cardiomyocyte rhythm is one possible cardiac toxicological mechanism of MCs.
The above evidence suggests that MCs can directly cause cardiac malformation in larvae and induce myocardial atrophy and fibrosis in adults, although the underlying mechanisms need further exploration. MCs may also induce cardiovascular toxicity by altering the morphology of cardiomyocytes, cell proliferation and apoptosis, the cytoskeleton and cell rhythm, as well as the ultrastructure, oxidative stress, membrane potential and respiratory chain enzyme activity of cardiomyocyte mitochondria.
Endoplasmic Reticulum Dysfunction
MCs are also known to give rise to CVD by damaging the endoplasmic reticulum (ER). The ER is a membranous network in eukaryotic cells and an important organelle for protein synthesis, folding and secretion [81,82]. A stable ER environment is an essential precondition for ER function; when the internal or external microenvironment of the ER changes, homeostasis can be disrupted, triggering the endoplasmic reticulum stress response (ERS) [83,84]. Ischemia-reperfusion injury, treatment with homocysteine and other chemicals, abnormal protein synthesis, dysregulated protein-folding capacity, ER calcium metabolism disorder, and physicochemical or genetic factors such as disturbed lecithin synthesis are all capable of triggering ERS [85,86]. Moderate stress can protect cells through the unfolded protein response, while prolonged or excessive stress can trigger the ER CHOP, JNK, caspase-1/2 and Ca²⁺ pathways to induce apoptosis [87]. This suggests that cardiomyocyte damage can be produced by interfering with ER stress-related pathways. Previous data indicated that ERS evoked by lipid overload, changes in redox state, free radicals and other physical and/or chemical factors can cause apoptosis of endothelial cells, with monocytes entering the vascular endothelium to engulf lipids and form foam cells, thus giving rise to CVD [88]. In summary, the occurrence and development of CVD such as atherosclerosis [89,90], diabetic heart disease [91], hypertension [92,93], myocardial hypertrophy and HF [94,95] are closely related to ERS.
In recent years, a considerable number of studies have demonstrated that MC-LR can induce ERS. Qi et al. [45] found deformed morphology, stunted growth, suppressed heart rate and increased apoptosis when zebrafish juveniles were treated with 4.0 mM MC-LR for 96 h. Further mechanistic studies showed that these phenotypes could be partially rescued by tauroursodeoxycholic acid (TUDCA, 20 mM, an inhibitor of ERS), suggesting that the developmental toxicity of MC-LR was possibly produced by activating ERS; that is, MC-LR could induce developmental toxicity and apoptosis by increasing ER oxidative stress. In addition, Cai et al. [96,97] demonstrated that increased ER stress and apoptosis were involved in the neurotoxicity caused by MC-LR in rats. A study conducted by Zhao et al. [98] provided an explanation of the possible roles of oxidative stress and ER stress in the toxicological mechanisms of MCs at the proteomic level, revealing that MC-LR remarkably altered the abundance of 49 proteins involved in oxidative phosphorylation, the cytoskeleton, metabolism, and protein folding and degradation. Although there is no direct evidence that MC-LR can induce ERS in the cardiovascular system, it is reasonable to infer that MC-LR is capable of affecting cardiovascular function by activating ERS.
Inhibition of PP1 and PP2A
Previous research has shown that the main mechanism by which MCs produce their toxic effects is inhibition of serine/threonine protein phosphatase 1 (PP1) and 2A (PP2A) through interaction with the subunits of serine/threonine protein phosphatase (PP). This disrupts the dynamic equilibrium of protein phosphorylation as well as the expression and activation of downstream proteins, further leading to cytoskeletal reorganization [8,[99][100][101][102][103][104]. PP regulates a series of processes in mammalian cells, including cell proliferation, division, signal transduction and gene expression [105]. In the cardiovascular system, reversible protein phosphorylation is central to a variety of cardiac processes such as excitation-contraction coupling, Ca²⁺ handling, cell metabolism, myofilament regulation and intercellular communication [100]. PP1 and PP2A have been reported to regulate cardiac function by dephosphorylating a disparate collection of target proteins, including Cav1.2 and ATP-sensitive Na⁺/K⁺ channels [101,102,[106][107][108][109]. Overexpression of the PP2A-A subunit or its active substitute PP2B was found to induce myocardial hypertrophy or HF [110][111][112]. Similarly, Meyer-Roxlau et al. [103] recently reported that increased PP1 activity contributes to cardiac hypertrophy, HF and atrial fibrillation. Furthermore, PP may also directly dephosphorylate some signaling molecules or transcription factors to give rise to CVD [113,114]. Since considerable data indicate that MCs inhibit the activities of PP1 and PP2A [8,19,22,29], it is reasonable to infer that inhibition of PP1 and PP2A activities is one possible cardiac toxicological mechanism of MCs.
Hemodynamic Alterations and Vascular Lesions
The heart is rich in blood vessels and blood, and vessels are the carriers of blood flow. The heart supplies blood to the body, providing power for normal metabolism [115,116]. Alterations in blood flow and vascular lesions can therefore jeopardize cardiac function. LeClaire [46] noted continuous decreases in heart rate, cardiac output (CO), stroke volume, oxygen consumption, carbon dioxide production and metabolic rate, accompanied by progressive hypothermia and disrupted acid-base equilibrium, when male Fischer 344 rats were given an I.P. injection of MC-LR at a dose of 100 µg/kg bw. Huang et al. [47] investigated the response indices in the blood of mice given I.P. injections of different MC-LR concentrations and showed that the phagocytic index, ROS, the hematology of the majority of blood cells and erythrocyte volume were influenced by the toxin. In addition, alterations in some cytokines and in leukocyte ROS were observed [47]. Another in vivo study showed significant differences in red blood cell parameters (red blood cell counts, haematocrit values, mean corpuscular hemoglobin, mean corpuscular volume and mean corpuscular hemoglobin concentration) when rats were fed fish meat with and without MCs for 28 days [48]. In addition, an in vitro investigation indicated that blood concentrations of antioxidants and antioxidant enzymes, including glutathione, superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPx) and glutathione s-transferase, were altered, and increased hemolysis and pathological alterations in agglomerated and jagged erythrocytes were observed [53]. All of these results demonstrate that MC-LR possesses blood toxicity.
Vascular endothelial cells form a single cell layer between the blood and the vascular wall tissue. Endothelial cells are key regulators of vascular dynamic equilibrium and are involved in cell mitosis, vascular regeneration, vascular osmotic pressure, the inflammatory response and platelet activity [117,118]. Therefore, any change in the structure and/or function of vascular endothelial cells may damage the vascular system [119]. When HUVECs were treated with MC-LR at a dose of 40 µM for 24 hours, the proliferation, migration ability and capillary-like structure formation of vascular endothelial cells were reported to be inhibited [50][51][52], whereas endothelial cell apoptosis, ROS production, oxidative stress, and the expression levels of inflammatory factors and endothelial cell adhesion factors were found to be increased [50][51][52]. In another in vivo study, vascular dysplasia and dysfunction, destroyed vascular structure, narrowed vessel size and slower blood flow were found when zebrafish juveniles were exposed to MC-LR at a dose of 4.0 mM for 96 h [45]. The evidence thus suggests that MC-LR can induce vascular toxic effects.
Liver Diseases Induced by MCs and CVD
The liver is one of the most important target organs of MCs. MCs are transported into hepatocytes through OATP1B1 and OATP1B3 and exert hepatotoxicity [120]. A study carried out by Falconer et al. [121] showed that, under the induction of MC-LR, cytoskeletal damage in isolated hepatocytes resulted in the loss of cell morphology, loss of cell adhesion and cell death. The potential mechanism might be phosphatase inhibition, invoking hyperphosphorylation of a large number of hepatocyte proteins, including keratin, which is responsible for microfilament orientation and intermediate filament integrity. As mentioned earlier, MC-LR can irreversibly inhibit the activities of PP1 and PP2A in hepatocytes, which may account for hepatocyte disintegration, apoptosis, necrosis, nonalcoholic steatohepatitis (NASH), intrahepatic hemorrhage, liver cancer and even animal death [122,123]. MC-LR may also induce hepatotoxicity by reducing hepatocyte survival, increasing hepatocyte apoptosis and ROS levels [124,125], and increasing the activities of SOD, CAT, GPx and glutathione reductase in hepatocytes [126,127]. Several studies have shown that liver diseases are closely related to CVD. In a recent population survey, Mellinger et al. [128] reported that NASH may contribute to CVD, including hypertension and atherosclerosis. Clinical data show that NASH patients have higher carotid intima-media thickness [129,130] and often exhibit altered cardiac structure [131][132][133]. In addition, Boddi et al. [134] found that patients with ST-elevation myocardial infarction frequently had NASH. These data suggest that MC-LR may induce NASH, which in turn has the potential to contribute to CVD.
Krag et al. investigated 24 patients with liver cirrhosis and reported increases in cardiac volume load and end-diastolic volume, as well as decreases in left ventricular wall motion, CO and ejection fraction; interestingly, myocardial perfusion was preserved [54]. Glenn et al. also demonstrated a significant decline in diastolic reflux velocity, prolonged diastolic time and increased passive tension when assessing the relationship between liver diseases and CVD in cirrhotic animals [135]. Thus, liver disease can disrupt the cardiovascular system and cause diastolic dysfunction, leading to CVD development. In a serum survey, Henriksen et al. [55] examined 51 patients with liver cirrhosis and found higher serum pro-BNP (pro-brain natriuretic peptide) and BNP (brain natriuretic peptide) than in normal subjects; pro-BNP and BNP are general indicators of abnormalities in QT interval, heart rate and plasma volume in cardiac function [55]. In a recent study, Schimmel et al. [136] found that serum BNP levels were related to cardiac function and the severity of HF in patients with chronic heart failure (CHF). BNP levels increased with either systolic or diastolic dysfunction, and the increase was more pronounced when both coexisted [136]. Dong et al. [137] likewise demonstrated that serum BNP levels in cirrhosis were associated with ventricular wall thickness, diastolic dysfunction, stress-induced systolic dysfunction, hyperdynamic circulation and cardiac structural changes. Additionally, liver inflammation, hepatocyte oxidative stress, and hepatocyte apoptosis and proliferation can increase the risk of CVD [138,139]. The evidence thus suggests that liver diseases caused by MCs can alter cardiac contractility, diastolic function and electrocardiograms through a variety of mechanisms, including autonomic nerve regulation, inflammation and changes in membrane channels, thereby giving rise to CVD.
Intestinal Diseases Induced by MCs and CVD
The gastrointestinal tract is another target organ of MCs, and oral ingestion is the principal route of exposure [10]. After ingestion, MCs are absorbed in the gastrointestinal tract, transported into gastrointestinal epithelial cells mainly through OATP3A1 and OATP4A1 [31], and finally exert their multi-organ and multi-tissue toxicity through the blood circulation. Studies have revealed that the unabsorbed fraction of MCs that accumulates in the gastrointestinal tract can exert local toxic effects [32,33]. The earliest investigation of the intestinal toxicity of MC-LR was conducted by Falconer et al. [140] in 1992 by treating intestinal cells isolated from chickens with MCs. The intestinal cells exhibited time- and dose-dependent deformation or even death, and one or more blisters grew on the surface of deformed cells after MC exposure. The study also pointed out that the gastroenteritis associated with ingestion of MCs may reflect injury to intestinal epithelial cells caused by MCs. Botha et al., in another in vitro study, showed that MC-LR could decrease the viability of CaCo2 cells (an intestinal cell line) and induce time-dependent apoptosis after exposure to 50 µmol/L MC-LR [141]. Furthermore, an acute in vivo study indicated that MC-LR could induce time-dependent apoptosis of duodenal, jejunal and ileal cells [78]. In view of this, MCs appear able to induce time- and dose-dependent toxic effects on the gastrointestinal tract.
Immunohistochemistry results from in vivo studies showed that MC-LR accumulated mainly in the intestinal microvilli [78]; in the mucous layer, villous epithelium, lamina propria cells and the mucus secreted by goblet cells of the small intestine [32]; and in the cytoplasm and around the nucleus [142]. A comparative study by Gaudin et al. [143] administered MC-LR to mice by both feeding and I.P. injection. Comet assays suggested that both exposure routes could markedly increase intestinal DNA damage in mice, with I.P. injection producing the more severe toxicity. In summary, current work has shown that MCs induce gastrointestinal toxicity mainly by destroying the physical structure of the intestine [143], the immune system [144] and the balance of water and electrolytes in cells [143], by altering the activity of digestive enzymes in the chorion of the intestine [145], by inducing oxidative stress and apoptosis in intestinal cells [146][147][148], or even by altering the intestinal microbes [149,150]. Meanwhile, there is considerable evidence that patients with inflammatory intestinal diseases have a higher risk of CVD [56,57]. In the early 1990s, Levine et al. [151] reported that HF involves a state of chronic inflammation and that the expression levels of pro-inflammatory factors in serum are closely related to the morbidity rate of HF. Anker et al. [152] also indicated that a large number of inflammatory factors originate first from the intestinal tract. This suggests that destruction of the intestinal physical structure and immune system may alter the permeability of intestinal epithelial cells, allowing uncontrolled passage of substances into and out of cells, which then flow to the various organs through the blood circulation, leading to CVD. Data from recent years show that animals displaying intestinal microbial alterations, bacterial translocation and the presence of bacteria in the circulation after the destruction of intestinal barrier function developed CVD such as vasculitis, hypertension, atherosclerosis and CHF [153][154][155].
Kidney Diseases Induced by MCs and CVD
Although MCs mainly accumulate in the liver and are discharged through the bile duct, a small proportion of these toxins (about 9%) is filtered in the kidney and excreted in urine [156], which makes the kidney a potential target of MC toxicity. MCs can enter the kidney through OATP1 [157]. Menezes et al. [158] and Jia et al. [159] showed that MC-LR can accumulate in the kidney and manifest nephrotoxic effects. In a recent in vivo study, Wang et al. [160] confirmed the nephrotoxicity induced by MC-LR. Zebrafish were treated with different doses of MC-LR, and pathological alterations in kidney tissue included eosinophilic casts in renal tubules, abnormal renal tubules, decreased intertubular space and blood infiltration in renal cells. RNA-Seq analysis indicated disrupted expression of renal genes involved in various pathways associated with apoptosis, such as oxidative phosphorylation, the cell cycle and protein processing in the ER. A TUNEL assay confirmed the presence of renal cell apoptosis. Additionally, changes in ROS levels, apoptosis-related gene and protein expression, and enzyme activities revealed that MC-LR could induce ROS production, subsequently triggering apoptosis via the p53-bcl-2 and caspase-dependent pathways in the kidney. These data signify that apoptosis may be a primary cause of MC-LR-induced nephrotoxicity. Similarly, Nicole et al., in a recent review of the toxicological mechanisms of MCs, argued that chronic exposure to low doses of MCs may pose a great nephrotoxic risk in mammals; MC-LR induced renal dysfunction, vascular and glomerular lesions and alterations in kidney tissues mainly by disrupting mitochondria and increasing ROS levels [161].
An in vitro study conducted by Dias et al. [162] demonstrated that MC-LR could induce the proliferation of renal cells through the p38, JNK and Erk1/2 signaling pathways, potentially contributing to the emergence of renal tumors, after the monkey kidney cell line Vero-E6 was treated with different concentrations of MC-LR. Similarly, when the human embryonic kidney 293 cell line (HEK293) and the human kidney adenocarcinoma cell line (ACHN) were treated with different doses of MC-LR for 24 hours, decreased viability and increased apoptosis were observed in both cell lines [27]. This suggests that MC-LR may contribute to chronic kidney disease (CKD) by damaging renal cells.
Previous studies have shown that CVD is the most serious complication and the main cause of death in patients with CKD, and its incidence is increasing year by year [58,59]. A cohort survey of patients with CKD indicated that they had higher blood pressure [60], higher expression of inflammation-related biomarkers such as C-reactive protein and interleukin-6, and higher serum endotoxin levels, which can increase hemodynamic load and result in volume overload [61]. Moreover, in long-term CKD, disordered iron metabolism hinders the differentiation of red blood cells and renal erythropoietin production decreases, leading to severe anemia in CKD patients [62]. Long-term increases in blood pressure, high expression of inflammatory factors, high volume load and anemia can dysregulate xanthine oxidase, NADPH oxidase and uncoupled nitric oxide synthase, inducing oxidative stress and raising ROS levels while decreasing antioxidant defense capacity and nitric oxide levels and activity. This gives rise to left ventricular hypertrophy, vascular endothelial dysfunction, atherosclerosis and other CVD such as cardiac remodeling/fibrosis and HF [62,134].
Conclusions and Outlook
In this paper, the effects of MCs on the cardiovascular system were reviewed. The studies discussed show that cardiovascular toxicity is closely associated with MC exposure at various doses, via various pathways, and in different species. Figure 3 outlines the possible mechanisms of the cardiovascular toxicity of MCs. After exposure, MCs are transported into cells via OATP and accumulate in the main organs and tissues, concentrating preferentially in the liver, gastrointestinal tract, kidneys, cardiovascular system and other organs. Having entered the cardiovascular system, MCs can induce cardiovascular toxicity directly by altering the morphology of cardiomyocytes, the state of cellular apoptosis and proliferation, the cytoskeleton, cell rhythm, the differential expression/activity of transcription factors, and the ultrastructure, MMP and respiratory chain enzyme activity of cardiomyocyte mitochondria. MC exposure can also raise ROS production and ER oxidative stress, resulting in cytoskeleton destruction, mitochondrial dysfunction and ER dysfunction. In addition, MCs can inhibit the activities of PP1 and PP2A, leading to hyperphosphorylation of regulatory proteins and thereby affecting cytoskeletal organization, cell proliferation, apoptosis and CVD. MC exposure might also be associated with damage to myocardial intercellular connections and the vascular endothelial cytoskeleton, or with reduced cell vitality and increased apoptosis, oxidative stress and ROS levels in the mitochondria and ER of blood cells and vascular endothelial cells. In addition to these direct cardiovascular effects, MCs can indirectly induce CVD by destroying the structure and/or function of other organs, including the liver, gastrointestinal tract and kidneys. This review shows that exposure to MCs has substantial toxic effects on the cardiovascular system of animals. Although there are no human data on the cardiovascular toxicity of MCs, it is believed that cardiovascular toxicity caused by MCs may pose a great threat to human health, especially in view of the wide distribution and spread of MCs in the environment.
Several gaps in understanding the cardiovascular toxicity of MCs need to be addressed through further research:
1. Direct exposure data and the identification of MCs in human serum are necessary for human epidemiological studies of cardiovascular effects, so as to understand the toxicological mechanism.
2. Whether maternal exposure to MCs during pregnancy affects the development of the heart of the offspring needs further exploration.
3. Do other congeners of MCs, in addition to MC-LR, MC-YR and MC-RR, also have the potential to induce cardiovascular diseases?
4. Whether other environmental factors, including heavy metals, trace elements and organic pollutants, intensify or attenuate the toxicity of MCs should be examined.
5. Advanced and accessible technologies are essential to degrade and remove MCs from water.
6. States should be encouraged to establish acceptable maximum concentrations of MCs in drinking water, recreational water and irrigation water, especially in remote areas where rivers, lakes and streams are the main sources of water. Public education on the toxic effects of MCs and their related diseases should be strengthened.
7. Effective drugs that inhibit the binding of MCs to PP should be developed.
8. Because the mechanism of the cardiovascular toxicity of MCs remains elusive, its molecular toxicology should be explored in order to develop targeted drugs. In addition, the toxicity of MCs to cardiac development needs further study to prevent the occurrence of congenital heart disease.
Conflicts of Interest:
The authors declare that there are no conflicts of interest. | 2019-09-05T13:17:27.980Z | 2019-08-30T00:00:00.000 | {
"year": 2019,
"sha1": "28ecebea836f0f11f73ae2013dfc02a8f4e46835",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6651/11/9/507/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a167a75d6c243223406df79fd902b11739c7191",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3872457 | pes2o/s2orc | v3-fos-license | Trophic Cascades Induced by Lobster Fishing Are Not Ubiquitous in Southern California Kelp Forests
Fishing can trigger trophic cascades that alter community structure and dynamics and thus modify ecosystem attributes. We combined ecological data of sea urchin and macroalgal abundance with fishery data of spiny lobster (Panulirus interruptus) landings to evaluate whether: (1) patterns in the abundance and biomass among lobster (predator), sea urchins (grazer), and macroalgae (primary producer) in giant kelp forest communities indicated the presence of top-down control on urchins and macroalgae, and (2) lobster fishing triggers a trophic cascade leading to increased sea urchin densities and decreased macroalgal biomass. Eight years of data from eight rocky subtidal reefs known to support giant kelp forests near Santa Barbara, CA, USA, were analyzed in three-tiered least-squares regression models to evaluate the relationships between: (1) lobster abundance and sea urchin density, and (2) sea urchin density and macroalgal biomass. The models included reef physical structure and water depth. Results revealed a trend towards decreasing urchin density with increasing lobster abundance but little evidence that urchins control the biomass of macroalgae. Urchin density was highly correlated with habitat structure, although not water depth. To evaluate whether fishing triggered a trophic cascade we pooled data across all treatments to examine the extent to which sea urchin density and macroalgal biomass were related to the intensity of lobster fishing (as indicated by the density of traps pulled). We found that, with one exception, sea urchins remained more abundant at heavily fished sites, supporting the idea that fishing for lobsters releases top-down control on urchin grazers. Macroalgal biomass, however, was positively correlated with lobster fishing intensity, which contradicts the trophic cascade model. Collectively, our results suggest that factors other than urchin grazing play a major role in controlling macroalgal biomass in southern California kelp forests, and that lobster fishing does not always catalyze a top-down trophic cascade.
Introduction
Trophic cascades, in which predator-prey interactions control the composition and structure of ecological communities across two or more trophic levels in a food web, have been reported in terrestrial, aquatic, and marine ecosystems [1,2]. In a top-down cascade, changes in the abundances of predators act to alter the abundances of grazers, which in turn affect the biomass of primary producers [3]. The degree to which predators indirectly influence primary producers depends upon biotic and abiotic conditions that vary in space and time in response to physical disturbance, the availability of resources to primary producers, and the behavior of individual consumers [4]. As such, our understanding of how and why trophic cascades vary spatially and temporally is far from complete, which limits our ability to successfully manage and protect natural ecosystems in the face of increasing threats from anthropogenic disturbances and socio-economic pressures.
In coastal marine ecosystems top-down trophic cascades have been linked to the removal of top predators through fishing [5][6][7][8][9][10][11][12]. Frequently cited examples of marine trophic cascades come from kelp forests, in which top predators, such as sea otters [10,11], fishes [6,7,12], and lobsters [5,7,[13][14][15], are reduced in abundance by humans, leading to a relaxation in top-down control on sea urchin grazers and a decline in macroalgal abundance due to enhanced herbivory. The trophic cascade triggered by fishing in kelp forests includes a fourth trophic level occupied by humans, and depends on strong top-down interactions involving: (1) humans capturing predators of sea urchins (e.g., lobsters, fishes, and sea otters), (2) predators consuming urchins, and (3) urchins grazing macroalgae. The importance of trophic cascades as the primary determinant of community structure in kelp forest systems has been challenged because macroalgal abundance can vary greatly across space and time for many reasons other than grazing intensity [16,17]. Therefore, the underlying cascade involving fishing, lobsters, urchins, and macroalgae may not be ubiquitous.
Weak top-down control implies that macroalgal abundance is unrelated to the abundance of urchins and their predators, and to fishing pressure on them. Nutrient availability, wave disturbance, sedimentation, and interactions among these factors are widely recognized as other drivers of macroalgal population dynamics [18,19]. When nutrient supply is sufficiently high, kelp production can overwhelm the capacity of grazers to control kelp abundance [20]. Populations of grazers can similarly be affected by factors other than predation and fishing, as recruitment variability [21,22], disease [23], storm disturbance [24], and hydrodynamic conditions [25,26] have all been shown to influence the local abundance of sea urchins. Larger-scale processes such as El Niño-Southern Oscillation events (ENSO) can have regional effects that permeate throughout the food web by altering species abundances and the interactions among species in different trophic levels [27].
Correlative evidence for the cascading effects of fishing in marine ecosystems [5,28,29] has fueled calls for more intensive conservation, including the establishment of marine protected areas that prohibit fishing [7,30,31]. Most studies examining the effects of marine reserves have shown increased biomass and diversity in no-fishing areas compared with fished areas, which has further validated pleas for increased conservation [30]. The vast majority of this work, although highly informative, did not explore the direct effects of fishing intensity on trophic cascades, but rather assumed that spatial variation in predator and grazer abundance, and therefore predation and grazing intensity, was due to the presence versus absence of fishing [23,26,32]. Assuming that comparisons of predator density inside versus outside of reserves provide a good estimate of fishing impacts can be problematic because unprotected areas often have large differences in fishing intensity, especially for lobster [33,34]. In addition, inherent differences in site-specific conditions may confound reserve-based assessments because factors such as depth, exposure, and sedimentation rates may help drive differences in the distribution and abundance of lobsters, urchins, and macroalgae between reserves and nearby fished areas [26,35]. Finally, the process of siting marine reserves tends to select areas of relatively high biodiversity, predator densities, and habitat quality for protection [36], which limit the ability to distinguish between the effects of fishing on community structure versus those caused by other factors. Because much of the ocean's nearshore habitats remain open to fishing, a more thorough understanding of the extent to which fishing triggers trophic cascades is warranted. Identifying the conditions that promote cascades, and determining whether or not they are ubiquitous, may usefully inform the design of marine reserve networks, especially those established to protect kelp forest communities [30].
The California spiny lobster (Panulirus interruptus) is the target of one of the oldest commercial fisheries in southern California. Data on commercial landings date back to the early 1900s and have averaged approximately 325 MT in recent years [37]. Spiny lobster populations are considered relatively heavily fished [38], although a recent stock assessment estimates that both total abundance and size structure have stabilized over the last decade [39]. Nevertheless, some believe that over the past century fishing has led to a decrease in overall abundance and individual size of spiny lobsters [38]. Such decreases have the potential to diminish the role of lobsters as effective sea urchin predators. The two main objectives of our study were to: (1) examine the patterns of abundance among lobster, sea urchin (Strongylocentrotus spp.), and macroalgae in southern California giant kelp forests to evaluate whether they are consistent with the hypothesis that lobsters control urchins through predation, and urchins control macroalgae through grazing; and (2) determine whether the biomass of macroalgae was inversely related to the intensity of commercial lobster fishing as predicted by a top-down trophic cascade involving lobsters, sea urchins, and macroalgae. We used a correlative statistical approach to compare the abundance of organisms within three trophic levels, specifically California spiny lobsters (predator), red and purple sea urchins (grazers), and giant kelp and understory macroalgae (primary producers). As such, we did not directly test for the presence of the trophic cascade nor for the impact of fishing on the cascade, which would have required a large-scale, long-term field experiment. However, unlike most studies involving marine reserves, our analyses used sites that were explicitly selected to represent the range of natural variation in the region's kelp forests [19,40,41], which were subjected to varying levels of fishing intensity over an eight-year period. The results from our study provide a reasonable assessment of the strengths of the trophic relationships among lobsters, urchins, and macroalgae in southern California's giant kelp forests, as well as the extent to which lobster fishing triggers a top-down trophic cascade. Exploring whether ecological paradigms operate generally across space and time is necessary to advance ecology [16,[42][43][44], especially when conceptual models provide the framework for innovative marine resource management, including marine reserve and other spatial-based approaches [45].
Materials and Methods
Commercial fishery data on the number of lobsters caught and fishing effort, and ecological data on the abundance of sea urchins and macroalgae were used to address our two objectives. We examined whether patterns of abundance indicated the presence of a trophic cascade by evaluating whether the density of sea urchins was inversely related to lobster abundance, and simultaneously whether the biomass of macroalgae was inversely related to the density of sea urchins. We then evaluated whether lobster fishing influenced the trophic cascade by examining whether: (1) the mean abundance of sea urchins at a site averaged over the eight-year study was positively related to the mean intensity of lobster fishing, and (2) the mean biomass of macroalgae at a site was inversely related to the mean intensity of lobster fishing.
The data used in our analyses were collected from eight kelp forest sites located within a 50 km stretch along the mainland coast of the Santa Barbara Channel from 2001-2008 (Figure 1). The kelp forest communities at these sites are monitored annually by the Santa Barbara Coastal Long Term Ecological Research (SBC LTER) project and were selected for long-term study to represent the natural range of variability in giant kelp forests in the region [19,46]. All eight sites were subjected to fishing during the eight-year study period. Oceanographic conditions during this time were generally representative of the region and did not include any major El Niño events [19].
No specific permits were required for fishery data or ecological sampling, or for access to sampling areas. We applied for, and were granted, a Human Subjects exemption from the University of California, Santa Barbara Institutional Review Board (IRB) for our interviews with fishermen. We satisfied all requirements for an exemption and obtained in-person verbal informed consent from all fisherman participants. We documented this by their participation and willingness to proceed with the interview process. All data have only been reported in aggregate and no personally identifying information is presented.
Ecological data
Data on the abundance of macroalgae and density of sea urchins were collected along fixed 40 m transects at the eight SBC LTER kelp forest sites in the summers (July-August) of 2001 through 2008 (n = 2 to 7 transects per site at water depths ranging from 4 to 14 m). The number of giant kelp (Macrocystis pyrifera) and understory kelp (e.g., Pterygophora californica and Laminaria farlowii) were counted in a 2 m wide area centered around each 40 m transect and their abundance was estimated as density (number m⁻²). The abundance of low-lying understory species of brown, red, and green algae (which are difficult to count as individuals) was estimated as percent cover based on their presence superimposed upon a uniform grid of 80 points placed in a 1 m wide swath centered along each 40 m transect [46]. We converted data of macroalgal density and percent cover to biomass (g dry mass m⁻²) to obtain a single metric for macroalgal abundance for use in our analyses. This was done for giant kelp using the relationship between frond density and biomass derived by Reed et al. [47]. Percent cover and density data for understory species were converted to biomass using the species-specific relationships derived by Harrer [46]. Calcareous species such as upright and encrusting coralline algae were not included in estimates of macroalgal biomass because these species do not form an important part of the diet of sea urchins when non-calcified algae are present [20].
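As a minimal sketch of this conversion step (in Python), the snippet below turns counts and percent cover into a single biomass value; the coefficients are hypothetical placeholders, not the empirical species-specific fits of Reed et al. [47] and Harrer [46].

# Sketch: convert survey metrics to biomass (g dry mass m^-2).
# The conversion coefficients below are illustrative assumptions only.
DENSITY_TO_BIOMASS = {"Macrocystis pyrifera": 9.5}   # g dry mass per frond, assumed
COVER_TO_BIOMASS = {"Chondracanthus spp.": 1.8}      # g dry mass per % cover, assumed

def biomass_from_density(species: str, density_per_m2: float) -> float:
    """Counted taxa: density (no. m^-2) times an assumed per-individual mass."""
    return density_per_m2 * DENSITY_TO_BIOMASS[species]

def biomass_from_cover(species: str, percent_cover: float) -> float:
    """Point-contact taxa: percent cover times an assumed coefficient."""
    return percent_cover * COVER_TO_BIOMASS[species]

# Example transect: 12 giant kelp fronds m^-2 plus 20% understory cover
total = (biomass_from_density("Macrocystis pyrifera", 12.0)
         + biomass_from_cover("Chondracanthus spp.", 20.0))
print(total)  # single macroalgal abundance metric, g dry mass m^-2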
The red and purple sea urchins (Strongylocentrotus franciscanus and S. purpuratus) are the most abundant sea urchins in kelp forests off California, and their extensive grazing on macroalgae and its effects on kelp forest community structure have been well documented [48][49][50][51][52]. The densities of red and purple urchins were measured in fixed 1 m² quadrats distributed uniformly along each transect (n = 6 quadrats per transect). Purple urchins comprised more than 88% of all urchins counted at the eight sites during the eight-year study period. Two size categories of urchins were recorded: those ≤25 mm in test diameter (representing individuals <1 year old) and those >25 mm in diameter.
Shears et al. [26] examined trophic cascades associated with fishing and marine reserves on New Zealand rocky reefs, and hypothesized that small-scale topographic complexity of reefs influenced urchin abundance by providing refuge from wave disturbance and predators, including spiny lobsters. We evaluated the effects of small-scale topographic complexity in our study by examining whether urchin density was related to the level of reef rugosity. Rugosity was measured as the length of 1 cm-linked chain required to contour the bottom along a 10 m distance perpendicular to the transect (n = four 10 m distances per transect).
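For concreteness, the standard chain-and-tape convention expresses rugosity as contoured chain length divided by linear distance; the sketch below assumes that convention (the paper reports chain length directly) and averages the four replicate measurements per transect.

def rugosity_index(chain_length_m: float, linear_distance_m: float = 10.0) -> float:
    """Chain-and-tape rugosity: contoured chain length over straight-line
    distance. A perfectly flat bottom gives 1.0; larger values indicate
    greater small-scale topographic complexity."""
    return chain_length_m / linear_distance_m

def transect_rugosity(chain_lengths_m: list) -> float:
    """Average the n = 4 replicate 10 m chain measurements per transect."""
    return sum(rugosity_index(c) for c in chain_lengths_m) / len(chain_lengths_m)

# Example: 12.4 m of chain contoured over a 10 m span gives an index of 1.24
print(transect_rugosity([12.4, 11.8, 13.1, 12.0]))  # about 1.23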
Lobster fishing data
Spiny lobsters forage actively at night and occupy cryptic habitats during the day, rendering the daytime visual survey data collected by SBC LTER inadequate for estimating the abundance of lobster. Consequently, we used lobster fishing data to estimate the abundance of legal-sized (>83 mm carapace length) lobsters at our study sites. We did not collect data on sub-legal sized lobsters. The commercial and recreational spiny lobster fisheries in southern California have greatly reduced the relative abundance of large lobsters, and most of those caught are considered medium-sized (i.e., relatively close to 83 mm in carapace length) [7].
We worked with fishermen to identify lobster trapping areas that spatially overlapped with the eight kelp forest sites sampled by the SBC LTER (Figure 1). This included overlaying maps of trapping areas obtained from fishermen's interviews with bathymetric data within a Geographic Information System to identify trapping areas (i.e., polygons) within the bounds of the kelp forests that were sampled by the SBC LTER. The specific methods and detailed results of the surveys with fishermen are reported elsewhere [53]. We then summarized catch data derived from logbooks that reported daily fishing effort and catch by trapping area for the eight fishing seasons. Logbook data were provided by the California Department of Fish and Game (CDFG), and permission for their use in this study was granted by individual fishermen. Our total of 2,484 individual trap "samples" (i.e., a trap pulled aboard by fishermen) across all SBC LTER sites accounted for 38% of the total fishing activity along the ~50 km section of the Santa Barbara Channel's mainland coast spanned by the SBC LTER study reefs.
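The overlay step can be sketched with the shapely library; the rectangles below are invented stand-ins for the georeferenced trapping and reef polygons used in the actual GIS analysis.

from shapely.geometry import Polygon

# Invented coordinates standing in for a digitized trapping area and an
# SBC LTER reef boundary (real polygons came from interviews + bathymetry).
trap_area = Polygon([(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)])
reef_site = Polygon([(1.0, 0.0), (3.0, 0.0), (3.0, 2.0), (1.0, 2.0)])

if trap_area.intersects(reef_site):
    overlap = trap_area.intersection(reef_site)
    # Fraction of the trapping polygon lying within the sampled kelp forest
    print(overlap.area / trap_area.area)  # 0.5 for these rectangles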
We calculated the number of lobster caught each year within the bounded trapping area of each site as a proxy for annual lobster abundance at a site, and the cumulative number of traps set within the trapping area of each site in each year as a proxy for annual fishing effort. The number of lobster caught in traps is a useful means of estimating lobster abundance [54]. We constrained our estimate of lobster abundance to the number of legal-sized lobsters because we had no data on the number of sub-legal sized lobster. We think this is reasonable because most of the predation of urchins by lobsters is probably done by legal-sized lobsters [52].
The lobster fishery in southern California runs from October to March, yet in most seasons >80% of the annual catch is taken in the first six weeks of the fishing season. As such, we reasoned that the number of lobsters caught during each fishing season represented a reasonable estimate of lobster abundance during the summer (i.e., the previous July-August) when data on sea urchins and macroalgae were collected. Based on our interviews with 21 fishermen, we were able to match the reported location of the catch to a polygon that contained the LTER sampling sites. The larger sampling area used to characterize the abundance of lobster at each site (mean polygon area ± SD = 1.23 ± 0.79 km² vs. two to seven 80 m² transects per site for urchins and macroalgae) was needed because lobsters are highly mobile foragers and occur at much lower densities than urchins and macroalgae.
Fishing intensity (i.e., trap density) at a site was estimated as the average number of traps deployed and pulled (i.e., "sampled") per day within each fishing polygon during the first two months of each season. We scaled this to the area of the fishing polygons (traps km⁻²), and refer to the metric as trap density. We constrained trap density to the first eight weeks of the season, when fishing was most intense and most of the lobsters were caught. We reasoned that average trap density was a good proxy for fishing intensity because the number of lobsters caught per trap [i.e., catch-per-unit-effort (CPUE)] was similar at all eight sites (1-way ANOVA; mean square = 0.09, F7,56 = 0.312, P = 0.946). Rather than varying among sites, CPUE scaled well with the total catch (r² = 0.969), suggesting that fishermen effectively target areas where lobsters are abundant.
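A hedged sketch of computing these two metrics from logbook-style records follows; the column names, values and polygon areas are assumptions, not the CDFG schema.

import pandas as pd

# One row per trap pulled; columns are assumed stand-ins for logbook fields.
logs = pd.DataFrame({
    "site": ["Mohawk"] * 4 + ["Naples"] * 2,
    "day": ["2004-10-02", "2004-10-02", "2004-10-03", "2004-10-03",
            "2004-10-02", "2004-10-03"],
    "lobsters_kept": [3, 1, 2, 4, 0, 2],
})
polygon_km2 = pd.Series({"Mohawk": 1.1, "Naples": 0.9})  # hypothetical areas

# Fishing intensity: mean traps pulled per day, scaled to polygon area
traps_per_day = logs.groupby(["site", "day"]).size().groupby("site").mean()
trap_density = traps_per_day / polygon_km2          # traps day^-1 km^-2
# CPUE: lobsters caught per trap pull
cpue = logs.groupby("site")["lobsters_kept"].mean()
print(trap_density)
print(cpue)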
Statistical analysis
To address our question as to whether there was evidence for trophic relationships among lobster, urchins, and macroalgae that were consistent with a top-down trophic cascade, we used a regression approach that implements a three-stage joint iterated Generalized Least Squares (GLS) model. Our approach evaluated the degree to which urchin density was related in time and space to lobsters caught (i.e., our proxy for abundance), and simultaneously the degree to which macroalgal biomass was related to urchin density. We used three versions of the model, each of which was fully factorial because it considered the response among the three trophic levels at each site (n = 8) during each time step (n = 8 years).
In Model 1, we used site as a fixed effect to account for any unmeasured differences among locations (e.g., exposure to ocean swell, sedimentation, numbers of additional top predators) that may have influenced the relationships among lobster, urchin, and macroalgae. Fixing the effect of site accounted for potential underlying differences by subtracting, for each location, the mean of the dependent variable over all sites from the mean value at each site. Model 1 also used year (2001-2008) as a fixed effect, thereby accounting for any underlying differences due solely to temporal trends occurring at all locations. Thus, the regression Models 1-3 examined evidence for top-down control of urchins by lobsters, and of macroalgae by urchin grazing, in all sites and years simultaneously, accounting for correlation between sites and years.
The form of Model 1 was as follows:

Urchin(s,t) = site(s) + year(t) + b1 × Lobster(s,t) + e(s,t)
Algae(s,t) = site(s) + year(t) + b2 × Urchin(s,t) + e(s,t)

where s indexes site, t indexes year, site(s) and year(t) are the fixed effects, e = the error term, and b = the correlation coefficient. The two dependent variables (urchins and macroalgae) were run simultaneously on independent regressors (urchins were regressed against lobster and macroalgae were regressed against urchins). The simultaneity of this analysis allows correlation across years and sites to be used in estimation, which is needed to evaluate the existence of a trophic cascade in which a change in one trophic level affects other trophic levels. If lobsters indirectly increase macroalgal biomass by consuming sea urchins as predicted by the trophic cascade, then we would expect a significant negative relationship between lobster and urchin abundances, and a significant negative relationship between urchin and macroalgal abundances in our regression model. Model 2 was similar to Model 1, but instead of fixing site, we used substrate rugosity as a covariate for each site. We ran this model because we hypothesized, based on the results of Shears et al. [27], that the physical complexity of a reef influences urchin abundance by providing them with physical refuge from lobster predation and physical disturbance. Therefore, Model 2 had site as part of the random (i.e., pooled) error and year as a fixed factor, thus producing a test of the degree to which urchin density was related to lobster abundance, and macroalgal biomass was related to urchin density and substrate rugosity.
The form of Model 2 was as follows:

Urchin(s,t) = year(t) + b1 × Lobster(s,t) + b3 × Rugosity(s) + e(s,t)
Algae(s,t) = year(t) + b2 × Urchin(s,t) + b4 × Rugosity(s) + e(s,t)

where Rugosity(s) is the mean substrate rugosity at site s and site-level variation is pooled into the error term. Model 3 was similar to Model 2 except that an additional covariate, water depth, was added. This model was constructed because water depth can influence the composition of rocky subtidal reefs in many ways, including modulating physical wave disturbance and light availability.
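The joint iterated GLS estimation is not reproduced here; as a simplified, hedged sketch, each stage of the three models can be approximated by separate OLS fits with the same fixed effects and covariates using statsmodels (the data file and column names are assumptions).

import pandas as pd
import statsmodels.formula.api as smf

# One row per site-year with lobster catch, urchin density, macroalgal
# biomass, mean rugosity and depth; the file and columns are hypothetical.
df = pd.read_csv("site_year_data.csv")

# Model 1: site and year fixed effects; the two stages are fit separately
# here, whereas the paper estimated both equations jointly.
m1_urchin = smf.ols("urchin ~ lobster + C(site) + C(year)", data=df).fit()
m1_algae = smf.ols("algae ~ urchin + C(site) + C(year)", data=df).fit()

# Model 2: site fixed effect replaced by the rugosity covariate
m2_urchin = smf.ols("urchin ~ lobster + rugosity + C(year)", data=df).fit()

# Model 3: water depth added as a further covariate
m3_urchin = smf.ols("urchin ~ lobster + rugosity + depth + C(year)", data=df).fit()

# A trophic cascade predicts negative coefficients in both stages
print(m1_urchin.params["lobster"], m1_algae.params["urchin"])

Fitting the equations separately discards the cross-equation error correlation that the joint GLS exploits, so this sketch illustrates the model structure rather than reproducing the published estimates.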
To address the question of whether lobster fishing intensity influences macroalgal biomass by altering the abundances of sea urchins, as predicted by the trophic cascade hypothesis, we compared the relationships between urchin density and lobster fishing intensity (i.e., the number of lobster traps deployed and pulled per km² of fishing area that overlapped with the kelp forests/rocky reefs sampled by the SBC LTER), between macroalgal biomass and urchin density, and between macroalgal biomass and lobster fishing intensity across all eight sites. For each site, we averaged data from all eight years for each of the three variables (lobster fishing intensity, urchin density and macroalgal biomass) and compared the relationships with simple linear regressions. We reasoned that mean trap density averaged across all years was a good indicator of the intensity of fishing at a site, and that the time-averaged means of urchin density and macroalgal biomass adequately characterized the abundances of primary producers and consumers at each site during the study period. If fishing triggered a trophic cascade, then we predicted that lobster trap density would be positively related to urchin density and inversely related to macroalgal biomass.
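These site-level comparisons reduce to simple linear regressions on eight time-averaged means per variable, sketched below with invented values.

from scipy.stats import linregress

# Time-averaged site means for the eight reefs (values invented for
# illustration; units: traps day^-1 km^-2 and urchins m^-2).
trap_density = [2.1, 3.4, 0.8, 4.0, 1.5, 2.9, 3.7, 0.5]
urchin_density = [6.2, 12.1, 3.0, 18.4, 4.8, 9.7, 15.2, 22.0]

fit = linregress(trap_density, urchin_density)
# A fishing-triggered cascade predicts a positive slope here, and a
# negative slope when macroalgal biomass is regressed on trap density.
print(fit.slope, fit.rvalue ** 2, fit.pvalue)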
Results
Commercial lobster catch, which we used as a proxy for lobster abundance, as well as the density of urchins and biomass of macroalgae varied substantially across the eight study sites during 2001-2008. Urchin density was consistently low (<5 m^-2) at five sites, Arroyo Hondo, Goleta Bay, Arroyo Quemado, Arroyo Burro, and Isla Vista (Figure 2). It was difficult to detect meaningful relationships visually between lobster and urchin data, except perhaps at Mohawk, Naples, and Carpinteria Reefs, sites that supported relatively high densities of urchins (10-45 m^-2). The five sites with low urchin abundances had relatively high and stable lobster abundances. Across all sites and years, there were 5 to 24 times more purple urchins (S. purpuratus) than red urchins (S. franciscanus), and for both species, there were 7 to 73 times more large urchins (>25 mm in diameter) than small urchins (≤25 mm in diameter).
Macroalgal biomass varied independently of urchin density across sites and years except at Naples and Carpinteria Reefs, where large declines in urchin density coincided with increases in macroalgae (Figure 2). Macroalgae consisted primarily of giant kelp and non-calcareous understory algae (e.g., Pterygophora californica, Desmarestia ligulata, Chondracanthus spp., Rhodymenia spp.). Giant kelp accounted for 43-99% of the macroalgal biomass in a third of all samples (i.e., individual site-year combinations), 10-40% of macroalgal biomass in half of the samples, and <10% of the biomass in 21% of the samples. Calcareous algae (which were not used in our analyses) comprised <2% of the total algal biomass on average. The biomass of non-calcareous algae (kelp + non-calcareous understory) was unrelated to urchin density when examined over all sites and years (r^2 = 0.01, F(1,63) = 1.62, P = 0.207).
The three-stage least squares regression analysis was designed to detect relationships between the three focal trophic levels graphed in Figure 2. Results of the Model 1 regression indicated that urchin densities did not vary significantly with lobster abundance (Table 1A). However, there was a negative relationship between urchins and lobsters at several of the sites (Figure 2), although it was not statistically discernible from zero. Model 1 also found no significant relationship between urchin density and macroalgal biomass (Table 1A).
Results of the Model 2 regression (which accounted for variation among years and site-specific variation in reef habitat complexity) indicated that much of the variation in urchin density among sites was due to differences in reef rugosity (Table 1B): urchin density increased dramatically with this measure of substrate complexity. Model 2 also found no significant relationship between urchins and macroalgae, and revealed a weak negative relationship between macroalgal biomass and reef rugosity. Model 3, which incorporated water depth as an additional covariate, revealed nearly identical relationships to those observed in Model 2: depth failed to explain any significant among-site variation in urchin density and macroalgal biomass (Table 1C).
We found little evidence that the intensity of lobster fishing, as measured by fishing effort, induced a trophic cascade leading to low macroalgal biomass. However, results do suggest that lobster fishing released top-down control on urchin abundance. Specifically, the relationship between the daily mean density of traps fished and mean urchin density at each site over the eight-year period remained statistically insignificant (Figure 3A; r^2 = 0.2046, F(1,7) = 1.544, P = 0.26), but when Naples Reef, which had high urchin densities but the lowest trap density, was removed from the analysis, the relationship between fishing intensity and urchin density was statistically significant and strongly positive (r^2 = 0.719, F(1,6) = 12.797, P = 0.016). This relationship is consistent with the negative relationship between lobster abundance and urchin density found with the GLS regression, and suggests that fishing may reduce top-down control of urchin populations by lobsters at most of the study sites. Despite higher urchin densities in more heavily fished sites, no evidence emerged linking lobster fishing to declines in macroalgal biomass; indeed, the relationship between lobster fishing intensity and macroalgal biomass remained positive (Figure 3C), although not statistically significant (r^2 = 0.3866, F(1,7) = 3.782, P = 0.100). A positive relationship in this case contradicts a trophic cascade triggered by lobster fishing. Finally, there was no significant relationship between mean urchin density and macroalgal biomass (Fig. 3B; r^2 = 0.1258, F(1,7) = 0.8632, P = 0.389), again implying that urchin grazing did not generally control macroalgal abundance at our study sites during the eight-year study period.
Discussion
Our results suggest that a trophic cascade caused by lobster fishing, in which lobster abundance is reduced leading to increases in urchins and subsequent decreases in macroalgae, is not ubiquitous in the Santa Barbara Channel marine ecosystem. While the density of urchins varied slightly with lobster abundance (as measured by lobsters caught), non-calcareous macroalgal biomass (which included giant kelp) remained largely unrelated to red and purple sea urchin density. Thus, the observed relationship between grazer and primary producer remained inconsistent with that expected in a trophic cascade. Sea urchin grazing was clearly evident at some of our sites but it accounted for relatively little of the observed spatial and temporal variability in macroalgal biomass. Variability in macroalgal biomass has been shown to be independent of urchin grazing in other temperate reef systems as well [55,56]. The residual variability between urchin abundance and macroalgal biomass in our data was undoubtedly driven by other, unmeasured factors. Reed et al. [19] concluded that physical disturbance from waves was the major factor influencing the biomass of giant kelp, the dominant macroalgal species, at the same sites used in our study. Nutrient limitation and urchin grazing also have important influences on macroalgal abundance under some circumstances, including during ENSO events when nitrogen availability is low, and under conditions of severe urchin grazing, such as those experienced in urchin-dominated "barrens" [47]. The development of urchin barrens in southern California appears to be driven by complex interactions among several factors, including urchin density (as influenced by recruitment, predation, and disease), kelp detritus production, and oceanographic conditions that influence kelp recruitment, growth, and persistence [20,23,24].
Our analyses failed to detect strong evidence for the control of urchins by lobsters. However, urchin abundance tended to decline with lobster abundance across many sites (Figure 2), although the relationship was not statistically significant in our regression model. In contrast, urchin density increased across all but one site with increasing fishing intensity (Figure 3). Top-down control of urchins by lobsters has been reported in studies that compared communities inside versus outside marine reserves in New Zealand [52] and the Santa Barbara Channel Islands [23], and from patterns observed in relatively long-term ecological data collected in Maine [5] and southern California [7]. Work in Alaska [29] also indicated that sea otters can control sea urchins. Results of our Models 2 and 3 indicated that if top-down control of urchins by lobster occurred, it was probably a context-dependent relationship, a phenomenon first reported by Shears et al. [26]. Specifically, three of our sites, Mohawk, Carpinteria, and Naples Reefs, have topographically complex (or rugose) rock substrata, which is excellent habitat for both urchins [26] and lobsters [56]. We found that urchin density increased by 1164.3 individuals m^-2 for every 10-cm increase in rugosity per m length of substrate. This rather dramatic effect of reef topography implies that predation is probably of relatively minor importance in controlling urchin abundance in habitats with many reef cracks and crevices.
Our results do not include estimates of small, sub-legal lobsters, which may prey preferentially upon small sea urchins. Had we included such data, the addition of small lobsters would have increased the density of lobsters at some sites, likely reducing the negative response of urchins to lobsters. In addition, most of the sites in our study are fished for red sea urchins, which may help explain why there were fewer red than purple urchins. If urchin fishing were not occurring at our sites, the negative relationship between lobsters and urchins may have been weaker, as both urchins and lobsters prefer reef habitats that provide similar types of shelter. Finally, prior studies that have reported strong top-down control of urchins by lobsters also report that urchin populations often display a bi-modal size structure, with many large and small urchins and relatively few medium-sized individuals, which are preferred by spiny lobster [51]. We found relatively few small urchins at all of our sites, which is not consistent with a bimodal size structure caused by lobster predation. Thus, explanations for the negative relationship between lobster and urchins that we observed should be made with caution, in part because like many ecosystems the Santa Barbara Channel is impacted by multiple anthropogenic disturbances. Our finding that lobster fishing did not trigger a cascade that reduced macroalgal abundance reflects our observation that both urchin density and macroalgal biomass increased with lobster fishing intensity (Figure 3A,C). This result is consistent with previous findings that urchin grazing is not the primary factor controlling giant kelp biomass at our study sites [19]. A similar result is found in kelp forests where urchins are not important grazers [54], such as in southern Australia where kelp production is heavily influenced by anthropogenic nitrogen inputs [55]. Increases in macroalgae with increased lobster fishing intensity would not be expected if macroalgae were primarily controlled by sea urchins. Our interviews with Santa Barbara Channel fishermen indicated they usually target kelp forests for lobster fishing, which is supported by a quantitative assessment conducted by Guenther [C. Guenther, unpublished data] indicating that lobster catch increased with the amount of kelp surface canopy. Lobster trap fishermen also assert that they target areas with consistently high kelp cover [57]. This makes ecological sense if macroalgal biomass is predominantly greater in less disturbed areas, because disturbance also negatively impacts lobster populations [53].
Overall, our results support the hypothesis that lobsters exert top-down control on urchins through predation, a trophic interaction that has been reported previously [5,9,13]. However, we found no evidence that lobster fishing indirectly impacts macroalgal populations through increases in the abundance of sea urchins. Instead, our results support the theory that trophic cascades are context dependent [58], and that although humans have profound impacts on the marine environment through fishing [10], those impacts remain heterogeneous across space and time. Our study highlights an opportunity for long-term ecological monitoring programs to incorporate fishing data where appropriate to improve understanding of fishing's role in community ecology. Campbell et al. [59] caution ocean managers and conservationists against continuing down the traditional path of treating human behavior as an external agent in ecological processes. A better understanding of site-specific processes and identification of the critical variables that make a system resilient or vulnerable to certain activities remains necessary for fostering positive progress in area-based ecosystem management. As resource agencies develop spatial ecosystem-based management we may benefit from enhanced knowledge of when and where human activities most influence ecosystem processes. | 2017-04-01T09:31:52.941Z | 2012-11-29T00:00:00.000 | {
"year": 2012,
"sha1": "e39ddadd8dfea48261b744994faa8aea4c0b7b50",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0049396&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e5f6898212a7d11028c196beaa6d6cb1707b1776",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
227314435 | pes2o/s2orc | v3-fos-license | Convergent genomic and pharmacological evidence of PI3K/GSK3 signaling alterations in neurons from schizophrenia patients
Human-induced pluripotent stem cells (hiPSCs) allow for the establishment of brain cellular models of psychiatric disorders that account for a patient’s genetic background. Here, we conducted an RNA-sequencing profiling study of hiPSC-derived cell lines from schizophrenia (SCZ) subjects, most of which are from a multiplex family, from the population isolate of the Central Valley of Costa Rica. hiPSCs, neural precursor cells, and cortical neurons derived from six healthy controls and seven SCZ subjects were generated using standard methodology. Transcriptome from these cells was obtained using Illumina HiSeq 2500, and differential expression analyses were performed using DESeq2 (|fold change|>1.5 and false discovery rate < 0.3), in patients compared to controls. We identified 454 differentially expressed genes in hiPSC-derived neurons, enriched in pathways including phosphoinositide 3-kinase/glycogen synthase kinase 3 (PI3K/GSK3) signaling, with serum-glucocorticoid kinase 1 (SGK1), an inhibitor of glycogen synthase kinase 3β, as part of this pathway. We further found that pharmacological inhibition of downstream effectors of the PI3K/GSK3 pathway, SGK1 and GSK3, induced alterations in levels of neurite markers βIII tubulin and fibroblast growth factor 12, with differential effects in patients compared to controls. While demonstrating the utility of hiPSCs derived from multiplex families to identify significant cell-specific gene network alterations in SCZ, these studies support a role for disruption of PI3K/GSK3 signaling as a risk factor for SCZ.
INTRODUCTION
Human-induced pluripotent stem cell (hiPSC) technology, and further differentiation into specific brain cell types, is an emerging approach to study cellular models of schizophrenia (SCZ) and other psychiatric disorders. The functional significance of potential SCZ susceptibility loci [1] can be assessed using hiPSC technology [2,3]. However, current high costs and technical burdens have restricted hiPSC studies to small sample sizes, limiting the ability to identify significant disease-relevant genomic signals. A transcriptome study utilizing neurons derived from a childhood-onset SCZ cohort identified overlap of neuronal transcriptional signatures with postmortem adult brains but failed to identify definitive differentially expressed genes (DEGs) [4]. The authors concluded that they were underpowered to detect such differences.
Studies of cell lines derived from families with multiple affected individuals, in particular families from genetically homogeneous populations, may provide an important strategy to overcome the dilution of genetic effects typical of case-control studies. Persons with genetic diseases in these populations are more likely to share the same genetic mutations, and thus molecular and cellular profiles, than persons from genetically diverse populations, facilitating identification of causative gene networks [5]. Based on this, we hypothesized that generating hiPSCs from subjects carrying common genetic alterations would empower the identification of global transcriptome alterations and gene network pathways in a cell-type- and disease-specific way.
The population of the Central Valley of Costa Rica (CVCR) is considered a genetically homogeneous population. It is well known in studies of complex illnesses due to its centralization of health care, large family sizes, and high rate of patient compliance. For 200 years this population grew in relative isolation, making it ideal for studying the genetic factors in complex illnesses [6]. We have performed in-depth genetic studies of SCZ families from the CVCR and have identified several potential SCZ candidate genes [7][8][9][10][11][12][13][14][15].
We generated hiPSC-derived neuronal precursor cells (NPCs) from ten individuals from a multiplex CVCR SCZ family and from three unrelated CVCR individuals, and further differentiated these cells into neurons, for a transcriptome analysis based on the SCZ phenotype. We identified 454 DEGs in hiPSC-neurons in patients compared to controls, and the phosphoinositide 3-kinase/glycogen synthase kinase 3 (PI3K/GSK3) signaling pathway was among the most significant gene networks identified by enrichment analysis. Among the DEGs in the PI3K/GSK3 signaling pathway was serum-glucocorticoid kinase 1 (SGK1), an inhibitor of glycogen synthase kinase 3β (GSK3β) [16][17][18], which has been implicated in SCZ and in neuronal morphogenesis [19,20]. Based on this, we assessed the effect of inhibitors of SGK1 and GSK3, downstream effectors of PI3K, in hiPSC-neurons, and found differential sensitivity to these inhibitors in patients compared to controls. These findings support the notion that alterations in PI3K/ GSK3 signaling pathways may represent a mechanism of risk for SCZ.
Sample characteristics
De-identified cell lines were generated from a CVCR SCZ multiplex family comprising ten individuals, four females and six males, of which four male siblings are affected with SCZ, and all females are unaffected. Cell lines from three additional unrelated CVCR individuals (two SCZ affected, one male and one female, and one unaffected female) were also generated (Fig. 1A). All subjects were carefully characterized in previous studies, in accordance with the principles of the Declaration of Helsinki, and lymphoblastoid cell lines (LCLs) had already been generated from each subject, as previously described [8].
Reprogramming of human LCLs with episomal vectors

LCLs (n = 13) were reprogrammed into hiPSCs using the Epi5™ Episomal iPSC Reprogramming Kit (Thermo Fisher Scientific, Waltham, MA), containing Oct4, Sox2, Klf4, L-Myc, and Lin28. Nucleofection was conducted using the Cell Line Optimization Nucleofector™ X Kit for the 4D-Nucleofector™ System (program EW113, Lonza, Basel, Switzerland). Briefly, 2 × 10^6 cells from each subject were nucleofected with Epi5™, stabilized in RPMI1640 medium with 10% FBS (Gibco) and 1x Penicillin-Streptomycin (PenStrep, Gibco-Thermo Fisher Scientific, Waltham, MA) for 3 days, and then transferred to 60 mm dishes coated with Membrane Matrix (Corning Matrigel hESC-Qualified Matrix (Corning) or Geltrex™ LDEV-Free Reduced Growth Factor Basement Membrane Matrix (Gibco-Thermo Fisher Scientific, Waltham, MA)). RPMI medium was then replaced by TeSR™-E8™ (StemCell Technologies, Vancouver, Canada) and 1x PenStrep (Gibco) every day until the appearance of colonies (15-30 days), at which time clones were manually picked and expanded. iPSC clones from each of the cell lines were stained for the pluripotency markers SSEA4 and Oct4, and karyotyping analysis by standard G-banding technique was carried out by KaryoLogic, Inc. (Research Triangle Park, NC, USA). Details regarding clinical information for each donor and cell lines used for the experiments described below are found in Supplementary Table S1.
RNA sequencing

Cells from each cell type/subject, matched for passage and mycoplasma-free, were cultured in triplicate, and replicates were combined for RNA extraction. Due to budget limitations, for the LCLs and hiPSCs only, subjects were pooled into four groups based on phenotype, resulting in two unaffected and two affected groups for each cell type. In total, RNA from 32 distinct samples (LCLs (n = 4), hiPSCs (n = 4), hiPSC-NPCs (n = 13), and hiPSC-neurons (n = 11)) was sequenced. Total RNA was extracted and purified using the RNeasy Plus Mini kit (Qiagen, Hilden, Germany). Quality and integrity were assessed by the Agilent Bioanalyzer 2100 system (Agilent Technologies, Santa Clara, CA) and agarose gel electrophoresis. A total of 1 μg RNA per sample was used for mRNA-seq library construction using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (Illumina Inc., San Diego, CA). Paired-end sequencing reads (150 bp) were generated on an Illumina HiSeq2000 platform (Q30 > 80%) (Illumina) at Novogene Bioinformatics Institute (Chula Vista, CA).
RNA data processing

Raw mRNA sequence reads were pre-processed using cutadapt (v. 1.15) to remove bases with quality scores < 20 and adapter sequences [23], followed by alignment of the clean RNA-seq reads to GRCh38.83 with STAR (v2.5.3a) [24]. Uniquely mapped reads overlapping genes were counted by htseq-count with default parameters using annotation from ENSEMBL v83. Only genes with >5 reads in at least one sample were retained. Read counts were normalized to Reads Per Kilobase per Million mapped reads (RPKM) to obtain relative expression levels.
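The filtering and normalization step can be summarized as follows; the counts matrix and gene lengths below are hypothetical stand-ins for htseq-count output and ENSEMBL v83 annotations, and the RPKM arithmetic is simplified (e.g., exact gene-length definitions are ignored).

```python
# Sketch of count filtering and RPKM normalization (hypothetical inputs).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
counts = pd.DataFrame(rng.poisson(30, size=(5, 4)),
                      index=[f"gene{i}" for i in range(5)],
                      columns=[f"sample{i}" for i in range(4)])
gene_len_kb = pd.Series([2.0, 1.5, 3.0, 0.8, 2.2], index=counts.index)

# Keep only genes with >5 reads in at least one sample
counts = counts[(counts > 5).any(axis=1)]

# RPKM = counts / (library size in millions) / (gene length in kb)
per_million = counts.sum(axis=0) / 1e6
rpkm = counts.div(per_million, axis=1).div(gene_len_kb.loc[counts.index], axis=0)
print(rpkm.round(1))
```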
RNA deconvolution analysis

Cell type and developmental stage composition analysis was performed using a recent method [25]. Briefly, we adapted regression calibration matrices, originally created from human single-cell and brain homogenate RNA-seq data sets [26][27][28][29][30], using methods previously established for deconvolution of epigenetic data [31,32]. We then projected normal-transformed TPM of barcode genes for each sample into the design matrix with the "minfi" Bioconductor "projectCellType()" function [33], and calculated RNA fractions by normalizing the fitted model scores to the total scores for each line for all subtypes. Fetal developmental stage ratios were summed after fitting to create a single fetal ratio score for statistical testing and visualization (NCX_Fetal). Thus, this analysis subsets fractions of our data into RNA fractions from five different human ectoderm-derived cell types (NPC, neurons, astrocytes, OPCs, and oligodendrocytes) plus iPSCs, in six different human neocortical (NCX) developmental stages (iPSC, Fetal, Infant, Child, Teens, and Adult). Visualization of cell-type ratios and statistical comparisons was performed in R, and cell-type and maturity fractions were compared between groups using two-sample t-tests with Holm-Sidak correction unless otherwise noted.
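The projection step above was done in R with minfi's projectCellType(). A rough Python analog of the same idea, regressing each sample's barcode-gene profile onto reference signatures with non-negative weights and normalizing to fractions, might look like the following; the signature matrix and sample are entirely synthetic.

```python
# Rough analog of cell-type deconvolution by non-negative projection onto
# reference signatures; all inputs are synthetic and illustrative only.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
signatures = rng.gamma(2.0, 1.0, size=(200, 6))   # 200 barcode genes x 6 types
true_fractions = np.array([0.10, 0.50, 0.20, 0.10, 0.05, 0.05])
sample = signatures @ true_fractions + rng.normal(0, 0.1, 200)

scores, _ = nnls(signatures, sample)              # non-negative fit
fractions = scores / scores.sum()                 # normalize to RNA fractions
print(fractions.round(2))
```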
Differential expression analysis
We excluded Y chromosome genes to avoid bias effects due to the unbalanced sex of samples. Retained genes (raw read counts) were submitted for differential expression analysis of cases compared to controls in each cell type with DESeq2 software [34], which implements a model based on the negative binomial distribution. Resulting p values were adjusted using the Benjamini and Hochberg (BH) approach [35] to control for false discovery rate (FDR). Genes with |fold change| (FC) > 1.5 and FDR < 0.3 were considered significant. Pathway analysis [36][37][38], principal components analysis (PCA), and linear mixed model analysis [39] were performed as described in Supplementary Methods.

Enrichment analysis using publicly available data sets

We assessed enrichment of genome-wide association study (GWAS) signals using Multi-marker Analysis of GenoMic Annotation (Supplementary Methods) and summary statistics for SCZ [1], bipolar disorder (BD) [40], SCZ + BD [41], OCD [42], suicide attempt in SCZ [43], PTSD [44], major depressive disorder [45], autism spectrum disorder [46], and ADHD [47]. We also assessed concordance of our gene set with gene expression data sets from human postmortem brain and hiPSC studies [4,[48][49][50].
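As a concrete illustration of the significance filter used in the differential expression analysis above, the sketch below applies Benjamini-Hochberg adjustment and the |FC| > 1.5, FDR < 0.3 cutoffs to a made-up results table; gene names and statistics are invented.

```python
# BH adjustment plus the |fold change| > 1.5 and FDR < 0.3 filter
# (hypothetical DESeq2-style results table).
import pandas as pd
from statsmodels.stats.multitest import multipletests

res = pd.DataFrame({
    "gene": ["geneA", "geneB", "geneC", "geneD"],
    "log2FoldChange": [1.1, -0.9, 0.7, 0.1],
    "pvalue": [0.0004, 0.002, 0.01, 0.6],
})
res["padj"] = multipletests(res["pvalue"], method="fdr_bh")[1]
degs = res[(2.0 ** res["log2FoldChange"].abs() > 1.5) & (res["padj"] < 0.3)]
print(degs)
```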
Inhibition of SGK1 and GSK3 and neurite imaging

Neuronal cell lines from six siblings in the multiplex family (three controls and three patients) were chosen to achieve the most homogeneous genetic background possible for functional studies. NPC cell lines from these subjects, derived from the same iPSC clones used for the RNA-seq studies, were differentiated into neurons and at day 21 were treated with 0.5% DMSO (control), 20 μM GSK3 inhibitor CHIR99021 (Tocris), 20 μM SGK1 inhibitor GSK650394 (Selleck) for 24 h, or 20 μM CHIR99021 for 14 h followed by 10 h with 20 μM GSK650394. Details of the imaging protocol are described in Supplementary Methods. Average fluorescence intensity was calculated with ImageJ for both βIII tubulin and fibroblast growth factor 12 (FGF12) and analyzed to ensure that each measurement was independent of potential changes in the number or size of βIII tubulin-positive neurites.
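The intensity measurement itself was done in ImageJ; a minimal sketch of the underlying operation, thresholding the βIII tubulin channel to define the ROI and then averaging each channel within that mask, on synthetic images:

```python
# Minimal sketch of ROI-based mean intensity (synthetic two-channel image;
# the real analysis used ImageJ on acquired microscopy images).
import numpy as np

rng = np.random.default_rng(3)
tubulin = rng.uniform(0, 255, size=(64, 64))   # βIII tubulin channel
fgf12 = rng.uniform(0, 255, size=(64, 64))     # FGF12 channel

mask = tubulin > 100                           # βIII tubulin-positive ROI
print(round(tubulin[mask].mean(), 1), round(fgf12[mask].mean(), 1))
```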
Reverse transcription quantitative polymerase chain reaction

RT-qPCR was performed to assess expression of SGK1 in neurons from the same six siblings used for the pharmacological studies (Supplementary Methods).
RESULTS
Cell line characterization

Karyotype analysis of hiPSCs revealed two patients and two controls to have 5% chromosome 1 aneuploidy. All other cell lines were normal (Table S1). Studies have reported widespread somatic mosaicism in the human body, suggesting reprogramming does not necessarily lead to de novo CNVs in iPSCs [51]. Nevertheless, as the aneuploidy is found in an equal number of patients and controls in a very small percentage of cells, we do not expect this to influence the study. hiPSC characterization and differentiation to hiPSC-NPCs and hiPSC-neurons were performed for each subject (Fig. 1 and Supplementary Fig. S1). Neurons exhibited higher expression levels of the neuronal markers MAP2 and SLC17A7 (vGlut1) compared to LCLs, hiPSCs, and hiPSC-NPCs (Fig. 1H, I). No differences were found between HC and SCZ cell lines in the expression of NPC and neuronal markers.
RNA sequencing

RNA-seq data were generated from a total of 32 samples (LCLs (n = 4), hiPSCs (n = 4), hiPSC-NPCs (n = 13), and hiPSC-neurons (n = 11)), representing four cell types derived from 13 individuals. Two NPC cell lines did not differentiate into viable neurons (354 and 414) and were thus excluded from the hiPSC-neuron analysis. In all, 2.25 billion paired-end reads were obtained, and the median number of uniquely mapped read pairs per sample was 24.98 million, with unique mapping rates of 90.4-95.8% and a very small fraction of rRNA reads. A total of 30,500 genes (based on ENSEMBL v83 annotations) were expressed at a level deemed sufficient for analysis. Among them, 17,351 were protein-coding genes, 3211 were lincRNAs, and the remaining were of various biotypes (Supplementary Fig. S2). No expression of episomal vector genes was observed.
Sex validation and observation for contamination or aberrant X-inactivation

Using XIST on chrX and the expression of six genes on chrY (USP9Y, UTY, NLGN4Y, ZFY, RPS4Y1, and TXLNGY), we confirmed that all samples show a correct expression pattern according to their sex (Supplementary Fig. S3). No other indication of contamination or aberrant X-inactivation was detected.
Cell type and developmental stage composition analysis

Given that bulk RNA-seq analysis can reflect multiple constituent cell types across different developmental stages, we performed deconvolution analysis to calculate cell-type RNA fraction scores and NCX human developmental stage for each of our cell lines. We found that as hiPSCs differentiated into NPCs and neurons, the iPSC fraction progressively decreased while the more mature neuron RNA fraction increased (Fig. 2A and Supplementary Fig. S4A). Specifically, hiPSC-neurons had a lower iPSC RNA fraction than hiPSC-NPCs and a higher neuron RNA fraction than hiPSC-NPCs. There was no significant difference in any cell-type RNA fraction between SCZ and HC (Fig. 2B). Developmental stage deconvolution also revealed a rise in RNA fractions representing the fetal and adult stages over differentiation (NCX_Fetal and NCX_Adult, Supplementary Fig. S4).
PCA of the expression data illustrates that hiPSCs, hiPSC-NPCs, and hiPSC-neurons separate along the first principal component (PC), which explains 26.81% of the variance (Fig. 2C).
Differential expression analysis
We compared cases against controls in NPCs and neurons separately. Analyses identified two DEGs (TXLNG and AP1S2) in NPCs and 454 in neurons (Fig. 3 and Supplementary Table S2). Variance partition analysis showed that cell type had the largest effect on expression and explained a median of 12% of the expression variation (Fig. 2D). Of interest, among the DEGs identified in neurons were CACNA1C, C4A, and ZNF804A, previously identified as SCZ candidate genes by GWAS [1,52,53].

Fig. 1 Generation and neural differentiation of hiPSCs. A Family pedigrees from the Central Valley of Costa Rica. Individuals from four different families were included in this study. Squares represent males, and circles represent females. Subjects with a number identification represent those included in this study from whom hiPSC, NPC, and neuronal cell lines were made. In total, 39 cell lines were derived from 13 subjects. Subjects in shaded boxes were diagnosed as having schizophrenia (SCZ), subjects in white boxes represent healthy controls (HC). B Representative images of hiPSCs derived from SCZ patients and HC expressing pluripotency markers SSEA4 (green) and Oct4 (red). Gene expression (TPM) levels of pluripotency cell markers C DNMT3B and D NANOG are increased in hiPSCs when compared to LCLs, hiPSC-NPCs, and hiPSC-neurons. E Representative images of hiPSC-NPCs positive for Nestin (green) and SOX-1 (red). F hiPSC-NPCs have higher expression levels of the NPC cell marker Nestin (NES), with no significant differences between patients and controls. G Representative images of 3-week-old hiPSC-neurons positive for βIII tubulin (green) and MAP2 (red). The hiPSC-neurons have higher expression levels of H MAP2 and I SLC17A7 (vGlut1), with no significant differences between SCZ and HC. Comparison between groups: Mann-Whitney. No statistical test was performed on the pooled samples. Gene expression levels are expressed in TPM (Transcripts Per Kilobase Million). Line on plots represents median.

Pathway analysis of the neuronal DEGs, using DAVID, EnrichR, and Webgestalt, all implicated a coherent set of KEGG networks related to the PI3K/GSK3 signaling pathway and to extracellular matrix (ECM) organization, significant after FDR correction (Fig. 3C), in which many of the same genes found in ECM pathways were found in the PI3K/GSK3 pathway. Among the genes identified as part of the PI3K/GSK3 pathway was SGK1, an inhibitor of GSK3β [16][17][18], overexpressed in patients. Also identified in this pathway were FGF12, a primary constituent of the voltage-gated Na+ (Nav1.2) channel in the brain [54], and several ECM genes known to function via PI3K/GSK3 signaling, including ITGA8 [55] and collagen subunits [56,57] (Fig. 3D). We evaluated concordance of our findings with two SCZ gene expression studies in the dorsolateral prefrontal cortex [48,49]. We identified 20 and 6 concordant genes, respectively, in each comparison (Supplementary Table S3). CREBRF was identified in all three studies, with the same direction of change (increase in patients). We also assessed concordance of our gene set with that from SCZ hiPSC-derived neural cells [4,50]. Of the 454 DEGs in our study, 323 were measured by Hoffman et al. [4], but there was no significant correlation between changes. Evgrafov et al. [50] reported a negative correlation for all genes in their data set and that of the Hoffman et al. [4] study. We found two genes in common with the study by Evgrafov et al., IGFBP5 and AHGAP26, with the same direction of change. Also, Evgrafov et al.
identified WNT5A as one of their top DEGs, whereas we identified WNT5B, with the same direction of change. Both WNT genes are members of the canonical Wnt signaling pathway, which impinges on GSK3β.
Effect of SGK1 and GSK3 inhibition on βIII tubulin and FGF12 levels in neurites

To establish a functional role for the PI3K/GSK3 pathway in SCZ, and specifically for SGK1 as part of this pathway, we assessed the effect of pharmacological inhibitors of SGK1 and GSK3 on levels of FGF12 and the cytoskeleton marker βIII tubulin in neurites. We chose to measure βIII tubulin because GSK3 plays a key role in neurite formation by directly influencing tubulin stability [20], and specifically modulates βIII tubulin expression [58], a beta tubulin isoform selectively affected during neurite formation [59]. FGF12 was chosen as an outcome marker because of the suggestive transcriptome evidence of its involvement in the PI3K/GSK3 pathway, and our previous studies showing GSK3 as a regulator of intracellular FGFs [60]. Furthermore, FGF12 is involved in neurite formation [61], a process known to be altered in SCZ [62], and has high similarity in sequence and functionality to FGF14, which has been implicated in SCZ [63].

Fig. 4 […] ii β-III tubulin (green), iii FGF12 (red), and iv merge of all three channels with v zoom to ROI used for analysis of vi βIII tubulin and vii FGF12. Squares in panels (ii-iv) represent the ROI used for analysis based on the βIII tubulin fluorescent intensity threshold. A HC neurons treated for 24 h with DMSO control. B SCZ neurons treated for 24 h with DMSO control. C HC neurons treated for 24 h with GSK3 inhibitor CHIR99021. D SCZ neurons treated for 24 h with GSK3 inhibitor CHIR99021. E HC neurons treated for 14 h with GSK3 inhibitor CHIR99021 followed by 10-h treatment with SGK1 inhibitor GSK650394. F SCZ neurons treated for 14 h with GSK3 inhibitor CHIR99021 followed by 10-h treatment with SGK1 inhibitor GSK650394. G HC neurons treated for 24 h with SGK1 inhibitor GSK650394. H SCZ neurons treated for 24 h with SGK1 inhibitor GSK650394. Scale bars represent iv 20 μm or vii 10 μm. I βIII tubulin immunofluorescence in neurites is increased in SCZ patients compared to healthy controls when treated for 24 h with DMSO (two-way mixed model ANOVA with Sidak's multiple comparisons test, p = 0.0095). Twenty-four-hour treatment with 20 μM GSK3 inhibitor CHIR99021 increases fluorescent intensity in HC neurons, but not SCZ neurons, while 24 h with SGK1 inhibitor GSK650394 decreases fluorescent intensity in SCZ neurons, but not healthy controls (two-way mixed model ANOVA with Dunnett's multiple comparisons test, p = 0.0215 (HC GSK3) and p = 0.0124 (SCZ SGK1)). J FGF12 immunofluorescence in neurites is decreased in SCZ neurons compared to healthy controls when treated for 24 h with DMSO (two-way mixed model ANOVA with Sidak's multiple comparisons test, p = 0.0107). Treatment with SGK1 inhibitor GSK650394 decreases fluorescent intensity in HC neurons, while 14-h treatment with GSK3 inhibitor CHIR99021 followed by 10-h treatment with SGK1 inhibitor GSK650394 increased fluorescent intensity of FGF12 in SCZ neurons (two-way mixed model ANOVA with Dunnett's multiple comparisons test, p = 0.0009 (HC SGK1) and p = 0.0073 (SCZ GSK3 + SGK1)). Data are mean ± SEM. # indicates p < 0.05 and ## indicates p < 0.01 by two-way ANOVA with Sidak's multiple comparisons test. * indicates p < 0.05 and ** indicates p < 0.01 by two-way ANOVA with Dunnett's multiple comparisons test. Dots represent measurements from single neurons.
We found a significant correlation between RNA-seq and qPCR expression data for SGK1 in the hiPSC-neurons (r = 0.9751) (Supplementary Fig. S5). No differences between HC and SCZ were found in the baseline characterization of these cell lines using NeuN, MAP2, and βIII tubulin (Supplementary Fig. S5). Around 40% of cells were positive for MAP2, indicating a mixture of mature and immature neuronal cells in culture. We found significant differences in sensitivity to the inhibitors in patients compared to controls. Patients treated with DMSO (vehicle) had increased βIII tubulin levels in neurites compared to controls (Fig. 4Avi, Bvi, I). SGK1 inhibition (which effectively abrogates SGK1-induced GSK3 inhibition) caused a decrease in neurite βIII tubulin in patients but not in controls, whereas GSK3 inhibition led to an increase in neurite βIII tubulin levels in controls but not in patients (Fig. 4Cvi, Dvi, Gvi, Hvi, I). Thus, SGK1 inhibition in patients produced the reciprocal of the effect of GSK3 inhibition in controls on neurite βIII tubulin levels. With regard to FGF12, SGK1 inhibition led to decreased levels in control subjects but not in patients (Fig. 4Gvii, Hvii, J). The GSK3 inhibitor CHIR99021 did not cause any effect in either subject group (Fig. 4Cvii, Dvii, J). Overall, these results suggest differential sensitivity to inhibition of PI3K/GSK3 signaling in patients compared to controls, perhaps due to the overexpression of SGK1 found in these patients.
DISCUSSION
By generating hiPSCs derived from individuals from a genetically homogenous population, we detected significant gene expression alterations in pathways that may modulate SCZ risk. Overall, we found a stronger effect in hiPSC-neurons, compared to NPCs, consistent with the hypothesis that neurons are the cell type most relevant to SCZ risk [64].
Initial RNA-seq analyses validated the sex of each cell line and excluded the possibility of sample cross-contamination or aberrant X-inactivation. It is not uncommon for such errors to be introduced during this protocol [65], and only a few studies have reported such an in-depth validation analysis.
We found concordance of some of our DEGs with those identified in other SCZ genetic/transcriptomic studies, including differential expression of the GWAS-associated genes CACNA1C, C4A, GRAMD1B, PLCL1, and ZNF804A [1]. To our knowledge, this is the first report of altered expression of these genes in SCZ-derived neurons.
Pathway analysis is an agnostic way of grouping genes together based on the literature and known gene functions, and the interpretation of these analyses should be guided by relevance to the disease of interest [70]. The PI3K/GSK3 signaling pathway was among the most significant canonical pathways identified by enrichment analysis in our study. This pathway, including GSK3, has been strongly implicated in the pathophysiology of SCZ and as a pathological mediator of genetic and environmental programming during development [19,66]. Among the genes associated with this pathway we found SGK1, a downstream effector of PI3K that inhibits GSK3 activity in an AKT-independent manner [16,17]. Based on the identification of SGK1 as a novel potential mediator of SCZ risk as part of the PI3K/GSK3 pathway, we performed pharmacological studies to validate its role in our cellular model. We found decreased sensitivity to PI3K/GSK3 signaling inhibition in SCZ patients. Specifically, the finding that pharmacological inhibition of GSK3 with CHIR99021, which acts similarly to SGK1-induced GSK3 inactivation by phosphorylation, did not alter FGF12 or βIII tubulin expression in SCZ neurons points toward an already saturated inhibition of GSK3, perhaps due to the increased expression of SGK1 in these SCZ subjects.
GSK3 is a downstream effector of several signaling pathways, including the PI3K pathway, the mTOR pathway, and the canonical Wnt pathway, suggesting that GSK3 may function to coordinate and integrate signaling within these pathways [20]. SCZ is a complex psychiatric disorder in which different affected individuals may carry causative variants in any of a wide number of genes that impinge on common pathways, such as PI3K/GSK3 signaling. In this regard, a recent transcriptomic study of SCZ neural progenitor cells identified the Wnt signaling pathway as the most enriched [50], with WNT5A as one of the top DEGs, whereas we identified WNT5B, with the same direction of change. Taken together with our functional studies, these findings in SCZ-derived neural cell lines strongly support a role for genes impinging on GSK3 function in risk for SCZ. Because of the genetic complexity underlying SCZ, we cannot determine whether the observed alterations are directly caused by any particular mutation(s) that may be carried by the subjects in our study. Further studies with larger sample sizes are needed to clearly determine which causative mutations may lead to PI3K/GSK3 signaling alterations.
Limitations of our study include the differentiation of single iPSC clones per subject and the relatively small sample size. Our functional studies focused on validation of SGK1 in the PI3K/GSK3 pathway. Future investigation is needed to validate and evaluate the potential role of other genes and pathways identified here. Moreover, our pharmacological experiments were limited to assessing the effects of inhibition of PI3K/GSK3 signaling on FGF12 and βIII tubulin. Additional studies are now warranted to determine the role of PI3K/GSK3 signaling on modulation of other genes identified in this study, and the relevance of these changes on cellular features and functions, including neurite morphology, and calcium and sodium channel activity.
To the best of our knowledge, this study is the first to identify PI3K/GSK3 signaling alterations in SCZ neurons. Overall, our results highlight that transcriptomic alterations in SCZ patients are cell type specific and demonstrate the potential of hiPSCs derived from subjects with a common genetic background to identify gene networks and signaling alterations that may underlie the molecular and cellular mechanisms in SCZ.
FUNDING AND DISCLOSURE
This study was supported by a University of Texas System (UT BRAIN) award (CWB) and a Brain and Behavior Research Foundation (NARSAD) Young Investigator Award (LS). JDR was supported by a National Institutes of Health training grant (T32ES007254). FL was supported by 1R01MH124351. ZZ and PJ were supported by National Institutes of Health grant (R01LM012806). The authors declare no competing interests. | 2020-12-07T14:43:22.772Z | 2020-12-07T00:00:00.000 | {
"year": 2020,
"sha1": "b6de7c7a1e68e4580b67fb86a77bb2a198ccec74",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41386-020-00924-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6de7c7a1e68e4580b67fb86a77bb2a198ccec74",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
215541211 | pes2o/s2orc | v3-fos-license | Generative Alignment and Semantic Parsing for Learning from Ambiguous Supervision
We present a probabilistic generative model for learning semantic parsers from ambiguous supervision. Our approach learns from natural language sentences paired with world states consisting of multiple potential logical meaning representations. It disambiguates the meaning of each sentence while simultaneously learning a semantic parser that maps sentences into logical form. Compared to a previous generative model for semantic alignment, it also supports full semantic parsing. Experimental results on the Robocup sportscasting corpora in both English and Korean indicate that our approach produces more accurate semantic alignments than existing methods and also produces competitive semantic parsers and improved language generators.
Introduction
Most approaches to learning semantic parsers that map sentences into complete logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Kate and Mooney, 2006; Wong and Mooney, 2007b; Lu et al., 2008) require fully supervised corpora that provide full formal logical representations for each sentence. Such corpora are expensive and difficult to construct. Several recent projects on "grounded" language learning (Kate and Mooney, 2007; Chen and Mooney, 2008; Chen et al., 2010; Liang et al., 2009) exploit more easily and naturally available training data consisting of sentences paired with world states consisting of multiple potential semantic representations. This setting is partially motivated by a desire to model how children naturally learn language in the context of a rich, ambiguous perceptual environment.
In particular, Chen and Mooney (2008) introduced the problem of learning to sportscast by simply observing natural language commentary on simulated Robocup robot soccer games. The training data consists of natural language (NL) sentences ambiguously paired with logical meaning representations (MRs) describing recent events in the game extracted from the simulator. Most sentences describe one of the extracted recent events; however, the specific event to which it refers is unknown. Therefore, the learner has to figure out the correct matching (alignment) between NL and MR before inducing a semantic parser or language generator. Based on an approach introduced by Kate and Mooney (2007), Chen and Mooney (2008) repeatedly retrain both a supervised semantic parser and language generator using an iterative algorithm analogous to Expectation Maximization (EM). However, this approach is somewhat ad hoc and does not exploit a well-defined probabilistic generative model or real EM training.
On the other hand, Liang et al. (2009) introduced a probabilistic generative model for learning semantic correspondences in ambiguous training data consisting of sentences paired with observed world states. Compared to Chen and Mooney (2008), they demonstrated improved alignment results on Robocup sportscasting data. However, their model only produces an NL-MR alignment and does not learn either an effective semantic parser or language generator. In addition, they use a combination of a simple Markov model and a bag-of-words model when generating natural language for MRs, therefore, they do not model context-free linguistic syntax.
Motivated by the limitations of these previous methods, we propose a new generative alignment model that includes a full semantic parsing model proposed by Lu et al. (2008). Our approach is capable of disambiguating the mapping between language and meanings while also learning a complete semantic parser for mapping sentences to logical form. Experimental results on Robocup sportscasting show that our approach outperforms all previous results on the NL-MR matching (alignment) task and also produces competitive performance on semantic parsing and improved language generation.
Related Work
The conventional approach to learning semantic parsers (Zelle and Mooney, 1996;Ge and Mooney, 2005;Kate and Mooney, 2006;Zettlemoyer and Collins, 2007;Zettlemoyer and Collins, 2005;Wong and Mooney, 2007b;Lu et al., 2008) requires detailed supervision unambiguously pairing each sentence with its logical form. However, developing training corpora for these methods requires expensive expert human labor. Chen and Mooney (2008) presented methods for grounded language learning from ambiguous supervision that address three related tasks: NL-MR alignment, semantic parsing, and natural language generation. They solved the problem of aligning sentences and meanings by iteratively retraining an existing supervised semantic parser, WASP (Wong and Mooney, 2007b) or KRISP (Kate and Mooney, 2006), or an existing supervised natural-language generator, WASP −1 (Wong and Mooney, 2007a). During each iteration, the currently trained parser (generator) is used to produce an improved NL-MR alignment that is used to retrain the parser (generator) in the next iteration. However, this approach does not use the power of a probabilistic correspondence between an NL and MRs during training.
On the other hand, Liang et al. (2009) proposed a probabilistic generative approach to produce a Viterbi alignment between NL and MRs. They use a hierarchical semi-Markov generative model that first determines which facts to discuss and then generates words from the predicates and arguments of the chosen facts. They report improved matching accuracy in the Robocup sportscasting domain. However, they only addressed the alignment problem and are unable to parse new sentences into meaning representations or generate natural language from logical forms. In addition, the model uses a weak bag-of-words assumption when estimating links between NL segments and MR facts. Although it does use a simple Markov model to order the generation of the different fields of an MR record, it does not utilize the full syntax of the NL or MR or their relationship. Chen et al. (2010) recently reported results on utilizing the improved alignment produced by Liang et al. (2009)'s model to initialize their own iterative retraining method. By combining the approaches, they produced more accurate NL-MR alignments and improved semantic parsers.
Motivated by this prior research, our approach combines the generative alignment model of Liang et al. (2009) with the generative semantic parsing model of Lu et al. (2008) in order to fully exploit the NL syntax and its relationship to the MR semantics. Therefore, unlike Liang et al.'s simple Markov + bag-of-words model for generating language, it uses a tree-based model to generate grammatical NL from structured MR facts.
Background
This section describes existing models and algorithms employed in the current research. Our model is built on top of the generative semantic parsing model developed by Lu et al. (2008). After learning a probabilistic alignment and parsing model, we also used the WASP and WASP−1 systems to produce additional parsing and generation results. In particular, since our current system is incapable of effectively generating NL sentences from MR logical forms, in order to demonstrate how our matching results can aid NL generation, we use WASP−1 to learn a generator. This follows the experimental scheme of Chen et al. (2010), which demonstrated that an improved NL-MR matching leads to better learned parsers and generators. We also use the IGSL algorithm of Chen and Mooney (2008) to initially estimate the prior probability of each event-type generating a natural-language comment.
Generative Semantic Parsing
Lu et al. (2008) introduced a generative semantic parsing model using a hybrid-tree framework.
A hybrid tree is defined over a pair, (w, m), of a natural-language sentence and its logical meaning representation. The tree expresses a correspondence between word segments in the NL and the grammatical structure of the MR. In a hybrid tree, MR production rules constitute the internal nodes, while NL words (or phrases) constitute the leaves. A sample hybrid tree from the English Robocup data is given in Figure 1. A generative model based on hybrid trees is defined as follows: starting from a root semantic category, the model generates a production of the MR grammar, and then subsequently generates a mixed hybrid pattern of NL words and child semantic categories. This process is repeated until all leaves in the hybrid tree are NL words (or phrases). Each generation step is only dependent on the parent step; thus, generation is assumed to be a Markov process. Lu et al. (2008)'s generative parsing model estimates the joint probability P(T, w, m), which represents the probability of generating a hybrid tree T with NL w and MR m. This probability is computed as the product of the probabilities of the steps in the generative process. Since there are multiple ways to construct a hybrid tree given a pair of NL and MR, the data likelihood of the pair (w, m) given by the learned model is calculated by summing P(T, w, m) over all the possible hybrid trees for NL w and MR m.
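As a toy illustration of how a hybrid tree's probability decomposes into a product of Markov generation steps, the sketch below scores a tree from production and emission probabilities. All numbers are invented, and this simplified unigram-style scoring omits details of Lu et al.'s actual parameterization (e.g., the bigram dependence on the previously generated component).

```python
# Toy hybrid-tree scoring: P(T, w, m) as a product of production choices
# (internal MR nodes) and NL-word emissions (leaves). Probabilities are
# made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    production: str                                 # MR production at this node
    emissions: list = field(default_factory=list)   # NL words or child Nodes

def tree_prob(node, prod_probs, emit_probs, parent="ROOT"):
    p = prod_probs[(parent, node.production)]
    for e in node.emissions:
        if isinstance(e, Node):
            p *= tree_prob(e, prod_probs, emit_probs, parent=node.production)
        else:
            p *= emit_probs[(node.production, e)]
    return p

prod_probs = {("ROOT", "pass(PLAYER,PLAYER)"): 0.4,
              ("pass(PLAYER,PLAYER)", "PLAYER->pink10"): 0.5,
              ("pass(PLAYER,PLAYER)", "PLAYER->pink11"): 0.5}
emit_probs = {("pass(PLAYER,PLAYER)", "passes"): 0.2,
              ("pass(PLAYER,PLAYER)", "to"): 0.1,
              ("PLAYER->pink10", "pink10"): 0.9,
              ("PLAYER->pink11", "pink11"): 0.9}
tree = Node("pass(PLAYER,PLAYER)",
            [Node("PLAYER->pink10", ["pink10"]), "passes", "to",
             Node("PLAYER->pink11", ["pink11"])])
print(tree_prob(tree, prod_probs, emit_probs))
```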
The model is normally trained in a fully supervised setting using NL-MR pairs. In order to learn from ambiguous supervision, we extend this model to include an additional generative process for selecting the subset of available MRs used to generate NL sentences.
WASP and WASP−1
WASP (Word-Alignment-based Semantic Parsing) is a semantic parsing system that uses syntax-based statistical machine translation techniques. It induces a probabilistic synchronous context-free grammar (PSCFG) for generating corresponding NL-MR pairs. Since a PSCFG is symmetric with respect to the two languages it generates, the same learned model can be used for both semantic parsing (mapping NL to MR) and natural language generation (mapping MR to NL). Since there is no prespecified formal grammar for the NL, the WASP−1 system learns an n-gram language model for the NL side and uses it to choose the most probable NL translation for a given MR using a noisy-channel model.
IGSL
Chen and Mooney (2008) introduced the IGSL method for determining which event types a human commentator is more likely to describe in natural language. This is sometimes called strategic generation or content selection, the process of choosing what to say; as opposed to tactical generation, which determines how to say it. IGSL uses a method analogous to EM to train on ambiguously supervised data and iteratively improve probability estimates for each event type, specifying how likely each MR predicate is to elicit a comment. The algorithm alternates between two processes: calculating the expected probability of an NL-MR matching based on the currently learned estimates, and updating the probability of each event type based on the expected match counts. IGSL was shown to be quite effective at predicting which events in a Robocup game are likely to be described.

Robocup Sportscasting Data

Figure 2 shows a sample trace from the Robocup English data. Each NL commentary sentence normally has several possible MR matches that occurred within the 5-second window, indicated by edges between the NL and MR. Bold edges represent gold-standard matches constructed solely for evaluation purposes. Note that not every NL has a gold matching MR. This occurs because the sentence refers to unrecognized or undetected events or situations or because the matching MR lies outside the 5-second window.
Generative Model
Like Liang et al. (2009)'s generative alignment model, our model is designed to estimate P(w|s), where w is an NL sentence and s is a world state containing a set of possible MR logical forms that can be matched to w. However, our approach is intended to support both determining the most likely match between an NL and its MR in its world state, and semantic parsing, i.e., finding the most probable mapping from a given NL sentence to an MR logical form.
Our generative model consists of two stages:

• Event selection: P(e|s), which chooses the event e in the world state s to be described.
• Natural language generation: P(w|e), which models the probability of generating natural-language sentence w from the MR specified by event e.
Event selection model
The event selection model specifies the probability distribution for picking an event that is likely to be commented upon amongst the multiple MR logical forms in the world state s. The probability of picking an event is assumed to depend only on its event type as given by the predicate of its MR. For example, the MR pass(pink10, pink11) has event type pass and arguments pink10 and pink11.
Our model is similar to Liang et al. (2009)'s record choice model, but we only model their notion of salience, denoting that some event types are more likely to be described than others. We do not model their notion of coherence, which models the order of event types in the commentary. We found that for sportscasting the order of described events depends only on the sequence of events in the game and does not exhibit any additional detectable pattern due to linguistic preferences.
The probability of picking an event e of type t_e is denoted by p(t_e). If there are multiple events of type t in a world state s, then an event of type t is selected uniformly from the set s(t) of events of type t in state s. Therefore, the probability of picking an event is given by:

P(e|s) = ( p(t_e) / Σ_{t' ∈ s} p(t') ) · ( 1 / |s(t_e)| )    (1)

where the sum in the denominator ranges over the event types present in s.
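A worked example of Equation (1), with made-up salience values and a three-event world state; the normalization over event types present in the state reflects our reading of the definition above.

```python
# Worked example of the event-selection distribution: choose an event type
# present in the state in proportion to its salience p(t), then choose one
# event of that type uniformly. Salience values and the state are made up.
p = {"pass": 0.5, "kick": 0.3, "turnover": 0.2}      # learned p(t), illustrative
state = ["pass(pink10,pink11)", "pass(pink7,pink10)", "kick(pink10)"]
etype = lambda e: e.split("(")[0]

norm = sum(p[t] for t in {etype(e) for e in state})  # types present in s

def prob(e):
    n_t = sum(1 for x in state if etype(x) == etype(e))  # |s(t_e)|
    return p[etype(e)] / norm / n_t

for e in state:
    print(e, round(prob(e), 4))
print("total:", sum(prob(e) for e in state))         # sums to 1.0
```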
Natural language generation model
The natural-language generation model defines the probability distribution of NL sentences given an MR specified by the previously selected event.
We use Lu et al. (2008)'s generative model for this step, in which:

P(w|e) = P(w|m) = Σ_T P(T, w|m)    (2)

where m is the MR logical form defined by event e and T is a hybrid tree defined over the NL-MR pair (w, m). The probability P(T, w|m) is calculated using the generative semantic parsing model of Lu et al. (2008) using the joint probability of the NL-MR pair (w, m), i.e. the inside probability of generating (w, m). The likelihood of a sentence w is then the sum over all possible hybrid trees defined by the NL-MR pair (w, m). 1 The natural language generation model covers the roles of both the field choice model and word choice models of Liang et al. (2009). Since our event selection model only chooses an event based on its type, the order of its arguments still needs to be addressed. However, Lu et al.'s generative model includes ordering the MR arguments (as specified by MR production rules) as well as the generation of NL words and phrases to express these arguments. Thus, it is unnecessary to separately model argument ordering in our approach. 2

1 Lu et al. (2008) propose 3 models for generative semantic parsing: unigram, bigram, and mixgram (interpolation between the two). We used the bigram model, where the generation of a hybrid-tree component (NL word or semantic category) depends on the previously generated component as well as the parent MR production. The bigram model always performed the best on all tasks in our experimental evaluation. 2 We also tried using a Markov model to order arguments like Liang et al. (2009), but preliminary experimental results showed that this additional component actually decreased performance rather than improving it.
Learning and Inference
This composite generative model is trained using conventional EM methods. The process is similar to Lu et al. (2008)'s, an inside-outside style algorithm using dynamic programming to generate a hybrid tree from the NL-MR pair (w, m), except our model's estimation process additionally deals with calculating expected counts under the posterior P (e|w, s; θ) in the E-step and normalizing the counts to optimize parameters. The whole process is quite efficient; training time takes about 30 minutes to run on sportscasts of three games in either English or Korean.
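As an illustration of this estimation step, here is a hedged Python sketch of one EM pass over the event-type parameters p(t); the `sentence_likelihood` callable stands in for the hybrid-tree inside probability P(w|m) of Lu et al. (2008), and all names and data layouts are hypothetical:

```python
from collections import Counter, defaultdict

def em_update_type_probs(corpus, type_probs, sentence_likelihood):
    """One EM pass over the event-type salience parameters p(t).

    corpus: list of (sentence, world_state) pairs; a world_state is a
            list of candidate (predicate, args) events.
    sentence_likelihood: callable scoring P(w|e) -- in the full model,
            the hybrid-tree inside probability of Lu et al. (2008).
    """
    expected = defaultdict(float)
    for w, state in corpus:
        counts = Counter(pred for pred, _ in state)
        # E-step: posterior P(e|w,s) is proportional to P(e|s) * P(w|e)
        joint = {e: (type_probs.get(e[0], 1e-9) / counts[e[0]])
                    * sentence_likelihood(w, e)
                 for e in state}
        z = sum(joint.values()) or 1.0
        for e, v in joint.items():
            expected[e[0]] += v / z            # fractional count for t_e
    # M-step: renormalize expected counts into updated p(t)
    total = sum(expected.values()) or 1.0
    return {t: cnt / total for t, cnt in expected.items()}
```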
Unfortunately, we found that EM tended to get stuck at local maxima with respect to learning the event-type selection probabilities, p(t). Therefore, we also tried initializing these parameters with the corresponding strategic generation values learned by the IGSL method of Chen and Mooney (2008). Since IGSL was shown to be quite effective at predicting which event types were likely to be described, the use of IGSL priors provides a good starting point for our event selection model.
Our model is built on top of Lu et al. (2008)'s generative semantic parsing model, which is also trained in several steps in its best-performing version. Thus, the overall model is vulnerable to getting stuck in local optima when running EM across these multiple steps. We also tried using random restarts with different initializations of parameters, but initializing with IGSL priors performed the best in our experimental evaluation.
Experimental Evaluation
We evaluated our proposed model on the Robocup sportscasting data described in Section 4. Our experimental results cover 3 tasks: NL-MR matching, semantic parsing, and tactical generation. Following Chen and Mooney (2008), the experiments were conducted using 4-fold (leave-one-game-out) cross validation. Since the corpus contains data for four separate games, each fold uses 3 games for training and the remaining game for testing for semantic parsing and tactical generation. Matching performance is measured on the training data, since the goal is to disambiguate this data. All results are averaged across these 4 folds.
We also use the same performance metrics as Chen and Mooney (2008). The accuracy of matching and semantic parsing is measured using F-measure, the harmonic mean of precision and recall, where precision is the fraction of the system's annotations that are correct, and recall is the fraction of the annotations from the gold standard that the system correctly produces. Generation is evaluated using the BLEU score (Papineni et al., 2002) between generated sentences and reference NL sentences in the test set. We compare our results to previous results from Chen and Mooney (2008) and Chen et al. (2010) and to matching results on Robocup data from Liang et al. (2009).
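For reference, a small helper (ours, written to match the definitions above) that computes this F-measure over sets of annotations:

```python
def f_measure(system, gold):
    """Harmonic mean of precision and recall over annotation sets."""
    system, gold = set(system), set(gold)
    tp = len(system & gold)                      # correct annotations
    precision = tp / len(system) if system else 0.0
    recall = tp / len(gold) if gold else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```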
NL-MR Matching
The goal of matching is to find the most probable NL-MR alignment for ambiguous examples consisting of an NL sentence and multiple potential MR logical forms. In Robocup sportscasting, the MRs for a given sentence correspond to all game events that occur within a 5-second window prior to the NL comment. Not all NL sentences have a matching MR in this window, but most do. During testing, an NL w is matched to an MR m if and only if the learned semantic parser produces m as the most probable parse of w. Thus, our model does not force every NL to match an MR. If the most probable semantic parse of a sentence does not match any of the possible recent events, it is simply left unmatched. Matching is evaluated against the gold-standard matches supplied with the data, which are used for evaluation purposes only. The gold matching data is never used during training. Table 2 shows the detailed results for both English and Korean data.⁴ Our best approach outperforms all previous methods for both English and Korean by quite large margins. Note that initializing our EM training with IGSL's estimates improves performance significantly, and this approach outperforms Chen et al. (2010)'s best method, which also uses IGSL.

⁴ Since the Korean data was not yet available for use by either Chen and Mooney (2008) or Liang et al. (2009), we present the results reported by Chen et al. (2010) for these methods.
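The matching rule itself can be sketched in a few lines (an illustrative interface; `parse_dist` is a hypothetical stand-in for the learned parser's distribution P(m|w)):

```python
def match_nl_to_mr(sentence, candidate_mrs, parse_dist):
    """Match a sentence to an MR, or return None (unmatched)."""
    dist = parse_dist(sentence)        # P(m|w) over all MR logical forms
    best_mr = max(dist, key=dist.get)  # the most probable parse
    # leave the sentence unmatched if its best parse is not among
    # the events of the recent 5-second window
    return best_mr if best_mr in set(candidate_mrs) else None
```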
In particular, our proposed model outperforms the generative alignment model of Liang et al. (2009), indicating that the extra linguistic information and MR grammatical structure used by Lu et al. (2008)'s generative language model make our overall model more effective than a simple Markov + bag-of-words model for language generation.
Semantic Parsing
Semantic parsing is evaluated by determining how accurately NL sentences in the test set are mapped to their meaning representations. Results are presented in Table 3.⁵ For our model, we report results using the parser learned directly from the ambiguous supervision, as well as results for training a supervised parser (both WASP and Lu et al. (2009)'s) on the NL-MR matching produced by our model. We also present results for training Lu et al.'s parser and WASP on Liang et al.'s NL-MR matchings. Our initial learned semantic parser does not perform better than the best results reported by Chen et al. (2010), but it is clearly better than the initial results of Chen and Mooney (2008). Training WASP and Lu et al.'s supervised parser on our method's highly accurate set of disambiguated NL-MR pairs improved the results. Retraining Lu et al.'s parser gave the best overall results for English, and retraining WASP gave the second highest results for Korean, only failing to beat the very best results of Chen et al. (2010). It is somewhat surprising that simply retraining on the hardened set of most probable NL-MR matches gives better results than the parser trained using EM, which actually exploits the uncertainty in the underlying matches. Further investigations of this phenomenon are indicated.

⁵ The best result of Chen and Mooney (2008) is for WASPER-GEN, and that of Chen et al. (2010) …
Comparing with the corresponding results for training WASP and Lu et al.'s supervised parser on the NL-MR matchings produced by Liang et al.'s alignment method, it is clear that our matchings produce more accurate semantic parsers except when training WASP on English.
Tactical Generation
Tactical generation is evaluated based on how well the learned model generates accurate NL sentences from MR logical forms. Without integrating a language model for the NL, the existing generative model is not very effective for tactical generation. Lu et al. (2009) introduced an effective language generator for the hybrid tree framework using a Tree-CRF model; however, we did not have access to this system. Therefore, for tactical generation, we used the publicly available WASP⁻¹ system (Wong and Mooney, 2007a) trained on disambiguated NL-MR matches. This approach also allows direct comparison with the results of Chen and Mooney (2008) and Chen et al. (2010), who also used WASP⁻¹ for tactical generation. Our objective is to show that the more accurate matchings produced by our generative model can improve tactical generation. The results are shown in Table 4.⁷ Overall, WASP⁻¹ trained on the NL-MR matching from our alignment model performs better than all previous methods. In particular, using the matchings from our method to train WASP⁻¹ produces better tactical generators than using matchings from Liang et al.'s approach.

⁷ The best result of Chen and Mooney (2008) is for WASPER-GEN, and that of Chen et al. (2010) …
Discussion
Overall, our model performs particularly well at matching NLs and MRs under ambiguous supervision, and the difference is larger for English than for Korean. However, improved matching results do not necessarily translate into significantly better semantic parsers. For English, the improvement in matching is almost 10 percentage points in F-measure, but the semantic parser trained with this more accurate matching shows only a 1-point improvement.
Compared to Liang et al. (2009), our more accurate (i.e. higher F-measure) matchings provide a clear improvement in both semantic parsing and tactical generation. The only exception is English parsing using WASP, which seems to be due to some misleading noise in our alignments. WASP seems to be affected more than Lu et al.'s system by such extraneous noise. However, in tactical generation, this extraneous noise does not seem to lead to worse performance, and our approach always gives the best results. As discussed by Chen and Mooney (2008) and Chen et al. (2010), tactical generation is somewhat easier than semantic parsing in that semantic parsing needs to learn to map a variety of synonymous natural-language expressions to the same meaning representation, while tactical generation only needs to learn one way to produce a correct natural language description of an event. This difference in the nature of semantic parsing and tactical generation may be the cause of the different trends in the results.
Conclusions and Future Work
We have presented a novel generative model capable of probabilistically aligning natural-language sentences to their correct meaning representations given the ambiguous supervision provided by a grounded language acquisition scenario. Our model is also capable of simultaneously learning to semantically parse NL sentences into their corresponding meaning representations. Experimental results in Robocup sportscasting show that the NL-MR matchings inferred by our model are significantly more accurate than those produced by all previous methods. Our approach also learns competitive semantic parsers and improved language generators compared to previous methods. In particular, we showed that our alignments provide a better foundation for learning accurate semantic parsers and tactical generators compared to those of Liang et al. (2009), whose generative model is limited by a simple bag-of-words assumption.
In the future, we plan to test our model on more complicated data with higher degrees of ambiguity as well as more complex meaning representations. One immediate direction is evaluating our approach on the datasets of weather forecasts and NFL football articles used by Liang et al. (2009). However, our current model does not support matching multiple meaning representations to the same natural-language sentence, and needs to be extended to allow multiple MRs to generate a single NL sentence. | 2014-07-01T00:00:00.000Z | 2010-08-23T00:00:00.000 | {
"year": 2010,
"sha1": "c058a83962e55411b5615ebdfba470b892420c18",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "3c80e4328becc631ac7016a6d4707620b4b8cec3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
196558958 | pes2o/s2orc | v3-fos-license | A Pilot Study to Evaluate Appropriateness of Empirical Antibiotic Use in Intensive Care Unit of King Saud Medical City, Riyadh, Saudi Arabia
Background: Antibiotics are among the most commonly administered therapies in the ICU. There has been growing concern over antibiotic misuse. The ICU is both a victim of and a contributor to the ongoing antibiotic misuse problem and a cause of emerging resistance among the pathogens commonly acquired in intensive care units. Because of the high mortality associated with sepsis, selecting an appropriate antibiotic, sometimes without any culture and sensitivity data, is a great challenge for intensive care physicians. Similarly, the time to de-escalate also remains a tough call. Selection of appropriate empirical antibiotics has always been a topic of debate among intensive care and infectious disease practitioners. Objective: The aim of our pilot study was not only to assess the appropriateness of antibiotic use in our ICU but also to help guide the design of a larger study and the structure of a stewardship program for the ICU, and to assess the differences between the prescriptions of ICU and infectious disease consultants. Methods: A prospective observational study in the King Saud Medical City ICU, following antibiotics started and stopped from 6th November 2014 to 23rd November 2014. The study included 23 adult patients admitted with different etiologies. All 23 patients' records were shared with two external referees (one an infectious disease consultant and the other an ICU consultant) from another hospital. Prescribers were blinded to the fact that data were being collected for auditing, and the referees were blinded to the prescribers and to each other. Results: A total of 46 antibiotics were used; 40 of them were started empirically and 6 were culture-based. Thirty-one antibiotics were stopped in the ICU, 28 of which were empirical. Most of the included patients responded to combination or monotherapy. Piperacillin-Tazobactam was the most commonly prescribed antibiotic. No major difference was noted between the choices of the intensive care and infectious disease consultants. Conclusion: Empirical antibiotics are vital for patients admitted to the ICU. We need to follow the hospital's antibiogram and stewardship programs, with prompt de-escalation wherever appropriate.
INTRODUCTION
Selection of antibiotics in an era of high resistance and scant new antimicrobial development is crucial in intensive care settings [1,2]. Appropriate administration of antibiotics is a major determinant of outcomes in severe bacterial infections in intensive care unit (ICU) settings [3]. To avoid unnecessary antibiotic administration and increase therapeutic effectiveness, locally accepted or national society-based guidelines or protocols are usually followed. Even well-developed guidelines or protocols may not translate into widely accepted treatment algorithms. Some deviation from guidelines and protocols is expected, as medical decision making is usually guided by individual patients' characteristics and the judgment and experience of the caregivers [4].
Antimicrobials are the major drugs used in intensive care units (ICUs), although their indiscriminate and prolonged use is one of the main factors involved in the emergence of multidrug-resistant bacteria, whose incidence has grown on all continents [5]. Typical clinical signs of infection, such as fever or a raised white blood cell count, are non-specific and can occur in many other conditions in the critically ill population. Similarly, although many biomarkers, e.g., C-reactive protein and procalcitonin (PCT) [6], have been suggested to help diagnose or rule out infection, none is specific for infection and all can be altered in other conditions that commonly affect ICU patients. Diagnosis of infection still relies largely on culture-based techniques, which can take several days to yield a positive result.
Moreover, in patients already receiving antibiotics, cultures may be negative [7]. The ICU is considered among the most important sources of nosocomial infections [8]. The high prevalence of infections entails heavy consumption of antimicrobial agents, roughly 10 times greater than in general wards [9]. In all these circumstances, actual implementation of antimicrobial therapy (AMT) prescription guidelines or antibiotic stewardship is needed. However, implementation alone does not provide insight into the appropriateness of antimicrobial therapy or into the determinants of inappropriate use [10].
We designed this study to determine the appropriateness of empirical antibiotic prescription in an intensive care unit.
PATIENTS AND METHODS
The study was conducted in King Saud Medical City, Riyadh, KSA, from 6th November 2014 to 23rd November 2014. A total of 26 adult patients aged 18-90 years who were started on antibiotics within the first week of admission to the ICU were included in the study. Those below 18 years of age, patients with an ICU stay < 24 h, and those with DNR (do not resuscitate) status were excluded. Data such as age, gender, white cell count, C-reactive protein, serum lactate levels, chest X-rays, cultures and sensitivities, type of antibiotics, start of antibiotics, duration in the ICU, and discontinuation of antibiotics were collected. Three patients did not complete follow-up, so they were excluded from the study as well.
The study started after the ethics committee's approval. Informed consents were taken. All data were also presented to the external referees for detailed comments.
Statistical analysis
We performed a prospective observational study. Statistical analysis was performed using IBM SPSS version 20.0. Types of antibiotics are represented as percentages. A P-value < 0.05 is considered significant. The use of antibiotics is summarized in frequency tables.
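As an illustration only (the reported p-values come from SPSS output, not from this code), a comparable significance test for the empirical-versus-culture-based split (40 of 46 antibiotics) could be computed with an exact binomial test; `scipy.stats.binomtest` (SciPy ≥ 1.7) is assumed available:

```python
from scipy.stats import binomtest

# 40 of 46 antibiotics were started empirically; test against the
# null hypothesis of an even split between empirical and culture-based.
result = binomtest(k=40, n=46, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.2e}")  # well below 0.05
```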
RESULTS
In this study we included 26 patients; 3 were excluded as they did not complete follow-up. The median age was 48 years (range 18-90 years). Twelve males and 11 females were included. A total of 46 antibiotics were started for the 23 patients. Among them, 40 antibiotics were started on an empirical basis, with a significant P-value; only 6 antibiotics were started based on cultures (Table 1). Regarding the discontinuation of these antibiotics, 7 patients died in the ICU and 8 patients were discharged on antibiotics from the ICU to the general ward. We found that 31 of the 46 antibiotics were discontinued in the ICU; the P-value for this group was significant (<0.05). The duration of these 31 antibiotics was 2-15 days, with a median duration of 6 days. This indicates that the antibiotics provided appropriate cover; most of them were based on empirical therapy (28 of the 31), and only 3 were started based on available cultures (Table 1). The most commonly prescribed antibiotic in our study was Piperacillin-Tazobactam (30.4%), used in 14 patients along with other antibiotics. Macrolides were used in 9 (19.6%) individuals. Carbapenems and 3rd-generation cephalosporins were used in 6 (13%) and 5 (10.9%) patients, respectively. Vancomycin and Linezolid were each prescribed 3 times (6.5%), while sulfamethoxazole-trimethoprim was used in 1 (2.2%) patient.
DISCUSSION
Empirical treatment should be based on regularly updated data on trends in the incidence of, and susceptibility to, antimicrobial agents in a particular setting [11]. Through the initiation of active empirical antibiotic therapy based upon local susceptibilities, daily evaluation of signs and symptoms of infection, and narrowing of antibiotic therapy when feasible, providers can streamline the treatment of common intensive care unit (ICU) infections [12]. In our study, 40 empirical antibiotics were started in the ICU, with a significant P-value. Michael and colleagues identified that estimates of the potential benefit of appropriate empirical antibiotic treatment vary widely in the literature [13]. Garnacho and colleagues [14] reported that de-escalation of antibiotics in the ICU ranges from 10% to 60% in critically ill patients; de-escalation refers to stopping an antibiotic or switching to another agent with a narrower spectrum. Among empirical therapy, we stopped 28 of 31 antibiotics in the ICU, with a significant P-value, and only 8 patients were discharged from the ICU on antibiotics. In our study, around 82% of patients (15/23) were started on combination therapy, compared with monotherapy in 17.1%. Similar results were seen in one study. Pierre [15] suggested that combination therapy mainly benefits the most severely ill patients and bacteremic patients.
Jose and colleagues [16] noted that the most commonly prescribed initial antibiotics were Cefoperazone-Sulbactam or Piperacillin-Tazobactam. Our study revealed a similar pattern in the choice of antibiotics: most patients were given Piperacillin-Tazobactam, macrolides, or carbapenems.
There is still no single recognized policy identifying which antibiotic should be used at the proper time; consequently, antibiotic prescribing remains far from the guidelines, probably because intensive care physicians are receptive to different advice [17]. These circumstances urgently call for high-quality evidence in this field and further stress the importance of establishing local and national surveillance systems, as well as the development of multidisciplinary approaches to antibiotic management and guideline production. By adopting such guidelines, a common consensus can be reached on a wide scale in order to streamline antibiotic usage in intensive care settings.
CONCLUSION
Empirical antibiotic selection is a major undertaking on the part of ICU physicians, as it plays an important role in the outcome of critically sick patients. No major difference was noted between the choices of the intensive care and infectious disease consultants. The referees are neither superior nor inferior to the prescribers (the ICU physicians in our study), but they had the privilege of looking at the cases retrospectively, when things had become clearer.
They were also privileged to be away from the heat of the bedside situation, peer pressure, pressure from patients' families, and medicolegal responsibility. We believe that this pilot will be of great help in designing a bigger prospective study. We understand that the sample was not powered enough to detect statistically significant findings. The percentage of overall appropriateness is consistent with previously published, larger studies. It seems that our empirical choices were appropriate most of the time, but our weakest points were the collection of proper cultures and subsequent de-escalation/modification according to clinical and bacteriological data. | 2019-06-14T13:46:38.673Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "268898c76368ef1505fd7e83b424306e5ca152c6",
"oa_license": "CCBY",
"oa_url": "https://www.longdom.org/open-access/a-pilot-study-to-evaluate-appropriateness-of-empirical-antibiotic-use-in-intensive-care-unit-of-king-saud-medical-city-r.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a423f9ad8cdbca63d8101cb1eb7a38fae168eb2d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
24990279 | pes2o/s2orc | v3-fos-license | 3D-2D transition in mode-I fracture microbranching in a perturbed hexagonal close-packed lattice
Mode-I fracture exhibits microbranching in the high-velocity regime where the simple straight crack is unstable. For velocities below the instability, classic modeling using linear elasticity is valid. However, showing the existence of the instability and calculating the dynamics post-instability within the linear elastic framework is difficult and controversial. The experimental results give several indications that the microbranching phenomenon is basically a three-dimensional phenomenon. Nevertheless, the theoretical effort has been focused mostly on two-dimensional modeling. In this work we study the microbranching instability using three-dimensional atomistic simulations, exploring the difference between the 2D and 3D models. We find that the basic 3D fracture pattern shares similar behavior with the 2D case. Nevertheless, we exhibit a clear 3D-2D transition as the crack velocity increases: as long as the microbranches are sufficiently small, the behavior is purely three-dimensional, while at large driving, as the size of the microbranches increases, more 2D-like behavior is exhibited. In addition, in 3D simulations, the quantitative features of the microbranches, separating the regimes of steady-state cracks (mirror) and post-instability (mist-hackle), are reproduced clearly, consistent with the experimental findings.
I. INTRODUCTION
Over the last decades, the dynamic instability in mode-I fracture has been extensively studied [1]. The experimental findings deviate from the two-dimensional classic model for mode-I fracture, in which a single crack propagates in the midline of the sample, based on linear elastic fracture mechanics (LEFM) [2]. This classic theory, which lacked a supplemental criterion for instability, predicts that a single crack will accelerate to a terminal velocity, which for mode-I fracture is the Rayleigh surface wave speed, c_R. In fact, as long as a single crack does exist, the crack obeys LEFM predictions [3,4]. However, the experiments find that at a much lower velocity (≈ 0.36-0.42c_R; for a short review, see for example [5]), a dynamic instability occurs, and small microbranches start to appear near the main crack [6-10]. The additional energy that has to be spent in creating the new surfaces prevents the crack from accelerating to the theoretical terminal velocity. LEFM-based universal criteria for branching [11,12] fail to describe the instability, predicting a much higher critical velocity than in reality. Moreover, when the small microbranches appear at v ≳ v_cr, they present a clear 3D nature. However, when enlarging the driving displacement, the small microbranches reunite, creating 2D patterns (right before macro-branches appear), especially in PMMA [7-10].
Lattice models reproduce the existence of steady-state cracks [16,17], and via a standard linear stability analysis, they predict the existence of a critical velocity at which the steady-state crack becomes linearly unstable [18-22]. This critical velocity is found to be strongly dependent on the details of the inter-atomic potential, such as the degree of smoothness of the potential (as it drops to zero), or the amount of dissipation. Simple simulations that use these same potentials succeed in reproducing the steady-state regime, yielding the exact point of instability, and in reproducing the lattice-model results, but fail to describe the behavior in the post-instability regime [21,23]. The early efforts on using a binary-alloy model for modeling brittle amorphous materials failed to achieve steady-state cracks at all [13], although more recent attempts have succeeded in yielding propagating cracks [14,15].

FIG. 1. (color online) A snapshot of the (same) crack tip in a steady-state crack using a perturbed hexagonal close-packed (hcp) lattice, from different viewing angles. Each atom shares 12 nearest neighbors, defining "bonds" that connect each other by a force law, and is allowed to move in all three coordinates. The crack creates a mirror-like pattern. The left snapshot is a clear XY-plane view while the right has a slight tilt, showing how deep the system is.
Recent studies using Zachariasen's [24] 2D continuous random network (CRN) model of amorphous materials, a model that has also recently received experimental support from direct imaging of 2D silica glasses [25], were used in describing the microbranching instability [26] (using an O(10^4)-particle 2D mesh). The simulations reproduced qualitatively both the regime of steady-state propagating cracks and the fracture patterns of the microbranches. In addition, using perturbed lattice models, generated by adding a small amount of disorder to the bond lengths, supplemented by an additional 3-body force law which penalizes rotation of the bonds away from the natural directions of the lattice, produces similar results [27]. Larger-scale simulations (O(10^6) particles) using GPU computing yield various qualitative and quantitative results on post-instability behavior, such as a sharp transition between the regimes of steady state and microbranching, the increase of the time derivative of the electrical resistance across the crack (which correlates experimentally with the crack velocity), the correct branching angle, and also the power-law behavior of the branch shapes [28]. All of the theoretical models mentioned above employed a 2D description of the problem.
The large-scale simulations allow us for the first time to perform three-dimensional (3D) simulations, attacking the microbranching phenomenon, which is, at its heart, a 3D phenomenon [1,8,29], by taking the O(10^4)-particle mesh and adding a third dimension with N_Z ≈ 100. The two basic questions that we address using our 3D simulations are: (i) checking the reliability of the previous 2D simulations, investigating how well the 2D description reproduces the behavior of the more realistic 3D models; (ii) studying for the first time the direct 3D experimental features of the microbranches, which have not previously been modeled.
We note that several 3D fracture molecular-dynamics simulations, containing large numbers of atoms, have been performed previously using different potentials (for example, see [30-32]), but the features of the 3D instability and of the microbranches have not yet been studied intensively. It is important to note that atomistic simulations cannot reproduce the fracture patterns on the real physical length scales of the experiments. However, they try to reproduce scaled results and scaled structures of the real fracture length scales.
II. MODEL AND GENERAL METHODOLOGY
Our simulations consist of ≈ 3·10^6 atoms, which include 1.7·10^7 bonds (central force laws) and ≈ 3.4·10^7 3-body interactions (see Appendix B for the exact parameters of the 3-body potential that was used). These simulations can be performed in reasonable run times by using parallel GPU computing (see Appendix C). We used a perturbed hexagonal close-packed (hcp) structure, which is a 3D extension of the 2D perturbed hexagonal lattice that was studied in [27,28]. As in our 2D studies, the interactions are taken to be only between nearest neighbors in the unperturbed hcp lattice, with an in-plane lattice constant of a = 4 and c = √(8/3) a (see Fig. 9 in Appendix A). Every atom has 12 nearest neighbors. We add a small amount of disorder to the bond lengths, a_{i,j} = (1 + ε_{i,j})a, where ε_{i,j} ∈ [−b, b] and b is a constant, set in this work to b = 0.1 (for the system shape, see Fig. 10 in Appendix B). In most of our simulations, we employed a piecewise-linear radial force law (in this work, k_r = 1) between the initially neighboring atoms. However, in some of them we used a more physical smooth force law with a smoothness parameter α, which in the limit α → ∞ reproduces the piecewise-linear model (see Appendix B). In addition, we add a 3-body potential and Kelvin-type viscosity, as described in detail in our 2D lattice studies [27,28]. We relax the system, then strain the lattice under mode-I tensile loading with a given constant-strain grip boundary condition corresponding to a given driving displacement ±∆ (normalized relative to the Griffith displacement ∆_G) of the edges, and seed the system with an initial crack. For a detailed discussion regarding the model and the governing equations, see Appendixes A and B. The crack then propagates via the same molecular-dynamics Euler scheme (the simulations were always stable using a reasonable value of dt, so we did not need any more sophisticated numerical schemes). In Fig. 1 we present close-in snapshots of the (same) crack tip in a steady-state crack from different viewing angles. We can see that at low driving displacement the crack is actually 2D in nature.
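For illustration, a minimal sketch (our own code, not the authors') of how a perturbed hcp geometry of this kind can be generated; note that in the paper the disorder enters through the natural bond lengths a_{i,j}, whereas this sketch perturbs the site positions as a visual stand-in:

```python
import numpy as np

def perturbed_hcp(nx, ny, nz, a=4.0, b=0.1, seed=0):
    """Ideal hcp positions (A-B layer stacking) with small disorder.

    In the paper the disorder enters through the natural bond lengths
    a_ij = (1 + eps_ij) * a with eps_ij in [-b, b]; displacing the
    sites, as done here, is only a visual stand-in for that rule.
    """
    rng = np.random.default_rng(seed)
    c = np.sqrt(8.0 / 3.0) * a                # ideal hcp c/a ratio
    pts = []
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                x = a * (i + 0.5 * (j % 2))   # triangular XY layer
                y = a * (np.sqrt(3.0) / 2.0) * j
                z = 0.5 * c * k
                if k % 2 == 1:                # B layer sits over the
                    x += 0.5 * a              # triangle centers of A
                    y += a / (2.0 * np.sqrt(3.0))
                pts.append((x, y, z))
    pts = np.asarray(pts)
    return pts + rng.uniform(-b, b, pts.shape) * a / 2.0

atoms = perturbed_hcp(8, 8, 4)
print(atoms.shape)                            # (256, 3)
```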
III. MICROBRANCHING INSTABILITY IN 3D-PERTURBED LATTICE
The crack velocity v (which we normalize to the Rayleigh wave speed c_R) increases with ∆/∆_G (see Fig. 2). We define the Rayleigh wave speed here as that calculated from c_l and c_t (the longitudinal and transverse wave speeds) in the XY-plane ((0001) in crystallographic notation), which is the major fracture surface in our simulations (there is a symmetry along the Z-axis in steady-state cracks; see Appendix D). We can see that using a perfect non-perturbed lattice (in these simulations we used k_θ = 0 in addition to b = 0, but this result is valid for all values of k_θ), we get a (non-physical) velocity gap (like in 2D [18-21]), in which slow cracks are prohibited. However, adding disorder and the 3-body force law, the velocity gap shrinks, and by using a finite value of α, the velocity gap shrinks dramatically, with steady-state cracks at almost zero velocity, yielding the correct experimental behavior [1].
In Fig. 3 we show several microbranching patterns (top views), both experimental (in PMMA) and from our 3D simulations using k_θ/k_r = 5 (where the color denotes the Z-location). The broken bonds are plotted in the fractured system, and their Z-location is encoded by the color, where dark red represents the top edge and dark blue the bottom edge. We see that below the critical velocity, in the regime of steady-state cracks, the crack has a "mirror" surface. Increasing the driving displacement, small microbranches appear near the main crack, while the size of the microbranches increases dramatically with the driving, yielding at first a "mist" surface and, at large ∆, a "hackle" surface. Despite the noisy results (due to the relatively small size of the simulations), the pictures are qualitatively quite similar to the experimental findings, at least in the sense that the microbranches increase dramatically with the driving displacement, eventually yielding large macro-branches (in the simulations, a "macro-branch" is a branch that reaches the end of the sample, as in the experiments, on a different length scale). A quantitative (scaled) overview is presented in Figs. 4-5. We note that without a 3-body force law we do not get the microbranching pattern, but rather cleavage-like behavior (with or without the presence of disorder). Using too strong a 3-body force law (k_θ/k_r = 6.7) yields microbranches that propagate in straight lines at the natural angle of the lattice (60°), which is again non-physical.
The transition between the regime of steady-state cracks and the post-instability side-branching regime is very sharp in the 3D simulations. In Figs. 4-5 we present two quantitative parameters that demonstrate this sharp transition (in the small box there is a zoomed picture of the transition area). In Fig. 4 the total number of broken bonds as a function of the crack velocity is displayed. In the small velocity regime, only the bonds necessary for yielding a single main crack are broken. Beyond the critical velocity, the number of broken bonds increases linearly, as in the experiments [8], and broadly similar (although much sharper here) to what is seen in the hexagonal perturbed 2D lattice [28].
In Fig. 5 we measure δy, the width of the microbranching region, as a function of the crack velocity (see the definition inside Fig. 5); δy is a second measure of the size of the microbranches. As above, a sharp transition can be seen between the single-crack and microbranching regimes. We note that using the piecewise-linear force law, the critical velocity v_cr seems to be very close to the Yoffe criterion (≈ 0.73c_R). However, as we showed previously in 2D, the quantitative value of v_cr can be controlled via the inter-atomic potential parameters, such as α and η (see Appendix B for explicit definitions of these parameters) [19,21,28]. We can see that using a finite value of α, the critical velocity decreases (see the small boxes in Figs. 4-5 for a given k_θ) to the exact value of the 2D simulations; with α → ∞ we reproduce the 2D critical velocity v_cr ≈ 0.73c_R (see Fig. 7 in [20]), while with α = 5 we also reproduce the 2D value, v_cr ≈ 0.68c_R (see Fig. 4(a) in [21]). This means that the critical velocity is not universal and is potential-dependent. Thus, for example, we can vary the values of α and η to reproduce the exact experimental critical velocity of a given material, very much as we did in 2D [21,22]. In both Figs. 4 and 5, the results appear insensitive to the exact value of k_θ, despite the fact that the microbranches in the two cases appear different.
In addition, we can cut thin horizontal slices from the XY fracture pattern, yielding 2D patterns, and compare them to pure 2D fracture patterns [27]. In Fig. 6 we present two fracture patterns of a 2D perturbed hexagonal lattice and two 2D slices of the 3D perturbed hcp lattice, one for a relatively small driving displacement and one for a large one. We can see, despite the relatively large noise (resulting from the breaking of one or a few bonds) that characterizes the 3D simulations, that the patterns are quite similar to the pure 2D simulations for small driving displacement. This fact is encouraging and supports the assumption that for at least some features (e.g., XY-plane features of the microbranches), the 2D studies are relevant. However, for large driving displacement, the 3D patterns look rather different from the 2D patterns, though the fracture pattern is still much more developed at large driving displacements. Nevertheless, we note that different horizontal slices of the same 3D fracture pattern (for different Z) yield different patterns. This indicates that in the 3D regime, as long as the microbranches are sufficiently small, there is no symmetry along the Z-axis. Note that the driving displacement required to produce a given amount of side-branching is much greater than in 2D, since out-of-plane bonds are being broken as well.

FIG. 7. Top row: experimental XZ-plane pictures of the fracture surface (taken from Ref. [8]), along with simulation results (where the color denotes the Z-location of the broken bonds for presentation reasons), for small driving on the left and for large driving displacement on the right. Bottom row: the simulational XZ-plane view. We see that, very much like the experiments, at small driving displacement the microbranching is "3D" and for large driving displacement the microbranches are "2D" in character.
IV. THE 3D-2D TRANSITION
Moreover, we can compare our 3D simulations to the 3D experimental properties of the microbranches. Experimental post-mortem pictures of the XZ-plane of the fractured surface by Sharon & Fineberg [8] reveal that near the origin of the instability, the microbranches are localized along the Z-axis. At large velocities, the microbranches merge, creating a quasi-symmetric pattern along the Z-axis, yielding a 3D-2D transition [1,8-10]. In PMMA (as opposed to glasses or gels), nice symmetric "2D"-like strips are created in association with the largest microbranches [8].
In Fig. 7 we present two experimental pictures of the XZ-plane of the fracture surface that demonstrate the 3D-2D transition in PMMA, taken from Ref. [8]. Below them, we depict XZ slices taken at a constant distance from the main crack (along the Y-axis) in our 3D simulations (the pictures from the main crack plane itself are too noisy, due to our finite-size simulations). We see that the fracture patterns look surprisingly similar. At small driving displacement (∆/∆_G = 2.5 in the simulations), right beyond the critical velocity, the microbranches are localized in the Z-direction, yielding purely 3D behavior in both the experiments and the simulations. Increasing the driving displacement further (∆/∆_G = 4 in the simulations), the microbranching spreads in the Z-direction from the top to the bottom of the sample, yielding 2D-type behavior. The periodic stripe structure is a result of the periodic microbranches in the XY-plane (Fig. 3) [7]. After the onset of branching, the energy flowing into the crack tip is divided between the main crack and the daughter cracks. The daughter cracks, which compete with the main crack, have a finite (similar) lifetime, because the main crack can outrun them and screen them from the surrounding stress field. The daughter cracks then die, and the energy that had been diverted from the main crack returns. The scenario then repeats itself, causing the branching pattern to be more or less periodic.
As a matter of fact, these large microbranches result from the merging of several small microbranches, as can be seen by looking carefully at Fig. 7 (there is not a perfect symmetry along the Z-axis; for different Z, the microbranch propagates a different distance). This behavior shares similar features with recent experimental work [29].
We can now quantify this 3D-2D transition (in normalized units, of course). Looking carefully at the PMMA experimental results, we can see that the point of instability, v = v_cr ≈ 340 m/s (Fig. 11(a) in [8]), is quite different from the point of the 3D-2D transition, v ≈ 550 m/s (Fig. 19 in [8]), confirming that at first (near v ≈ v_cr) the microbranches are "3D", and only at larger velocities do they become "2D". In Fig. 8, we plot the width of the largest microbranch (in the Z-direction) in the 3D simulations for a given ∆/∆_G, along with the total number of broken bonds (from Fig. 4), both normalized to their largest value. We plot both as a function of ∆/∆_G and not as a function of v/c_R, since the crack velocity is an output parameter (and in our simulations the velocities are much higher than in the PMMA experiments). For the experimental results, we used Fig. 17 in [1] to convert the data from v/c_R to ∆/∆_G.
We can see that the 3D simulation results reproduce the 3D-2D transition almost perfectly. At ∆/∆_G ≈ 1.3, in both the experimental and the simulation results, small microbranches start to appear near the main crack. Those microbranches are localized in the Z-direction, while only at ∆/∆_G ≈ 1.8-1.9 does the width of the microbranches increase dramatically, yielding "2D microbranches": several microbranches reunite, covering the whole Z-direction and producing the 3D-2D transition.
V. SUMMARY AND FUTURE WORK
In conclusion, as long as we look at the XY-plane, the 3D simulations share similar features with the 2D simulations, and quantitative measures such as the total number of microbranches or the size of the opening of the microbranches as a function of crack velocity look the same. On the other hand, our current simulations also reproduce purely 3D features, especially the XZ-plane patterns, where the 3D-2D transition occurs. Thus, we believe that the lattice models and simulations offer a good theoretical framework for studying the microbranching instability, including the 3D effects. We are left with the following question: in 2D [28], enlarging the system allows quantitative study of the branches; how will the 3D system behave on a larger scale? The answers should be attainable within the scope of available supercomputers, using thousands of nodes or tens of GPUs.
Appendix B: The equations of motion
In most of our calculations, between each two atoms there is a piecewise-linear radial force (2-body force law) of the form:

f^r_{i,j} = −k'_{i,j} (|r_{i,j}| − a_{i,j}) r̂_{i,j},   (B1)

where:

k'_{i,j} = k_r θ_H(ε − |r_{i,j}|).   (B2)

The Heaviside step function θ_H guarantees that the force drops immediately to zero when the distance between two atoms |r_{i,j}| reaches a certain value ε > a_{i,j} (the breaking of a "bond"). In this work we set ε = a + 1. Alternatively to Eq. B2, we can use a smoother force law, which instead of a sharp failure at |r_{i,j}| = ε has a more realistic smooth transition wherein the force law drops to zero, of the form [19,21]:

k'_{i,j} = (k_r/2) [1 − tanh(α(|r_{i,j}| − ε))],   (B3)

where α is the smoothness parameter, such that when α → ∞ the force law reverts to the piecewise-linear force law. The results in this paper refer to the piecewise-linear model, unless mentioned otherwise.
In addition there is a 3-body force law that depends on the cosine of the angle between each set of 3 neighboring atoms, defined of course by:

cos θ_{(j,i,k)} = (r_{i,j} · r_{i,k}) / (|r_{i,j}| |r_{i,k}|),   (B4)

that acts on the central atom (atom i) of each angle, and may be expressed as the gradient of a quadratic angular potential U_θ = (k_θ/2)(cos θ_{(j,i,k)} − cos θ_C)²:

f^θ_{i,(j,k)} = −∇_{r_i} U_θ,   (B5)

while the force that is applied on the other two atoms (atoms j, k) may be expressed as:

f^θ_{j,(i,k)} = −∇_{r_j} U_θ,  f^θ_{k,(i,j)} = −∇_{r_k} U_θ.   (B6)

Of course, the forces satisfy the relation: f^θ_{i,(j,k)} = −(f^θ_{j,(i,k)} + f^θ_{k,(i,j)}). The 3-body force law drops immediately to zero when a bond breaks in the piecewise-linear model (Eq. B2), or may be taken to vanish smoothly, using Eq. B3.
We note that in the 3D case, there are many possible angles between each set of 3 bonds. To shorten the run times (the calculation of the 3-body force law is extremely time consuming), in most of our calculations we do not include all the possible angles between triplets, but only 12 of them. We chose the six 60° angles inside the XY-plane (reproducing the 2D hexagonal problem studied before for N_z = 1), plus another six angles: three 60° angles that connect each atom with its two neighbors located in the upper parallel plane, and three angles in the lower parallel plane (for convenience, see Fig. 9). However, in some of our calculations we used all twenty-four 60° angles; the results do not vary qualitatively, and the fracture patterns remain similar. There is a certain preferred angle θ_C for which the 3-body force law vanishes, which is set to θ_C = π/3.
In addition, it is convenient to add a small Kelvin-type viscoelastic force proportional to the relative velocity between the two atoms of the bond, v_{i,j} [19-21,33]:

g^r_{i,j} = −η k'_{i,j} v_{i,j},   (B7)

with η the viscosity parameter. The viscous force vanishes after the bond is broken, governed by k'_{i,j}. The imposition of a small amount of such viscosity acts to stabilize the system and is especially useful in the relatively small systems simulated herein.
The set of equations of motion of each atom is then:

m_i r̈_i = Σ_{j∈12 nn} (f^r_{i,j} + g^r_{i,j}) + Σ_{j,k∈12 nn} f^θ_{i,(j,k)} + Σ_{j∈24 nn} f^θ_{j,(i,k)}.   (B8)

In this work the units are chosen so that the radial spring constant k_r and the atomic mass m_i are unity.
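To make Eqs. B1-B8 concrete, here is a minimal, illustrative Euler-step sketch of the 2-body part of the dynamics (our code, not the authors'; the 3-body and smooth-α terms are omitted, and all array layouts and default parameter values are assumptions):

```python
import numpy as np

def euler_step(pos, vel, bonds, a_nat, broken, dt,
               k_r=1.0, eps_break=5.0, eta=0.05, mass=1.0):
    """One explicit Euler step for the 2-body part of Eqs. B1-B8.

    pos, vel: (N, 3) arrays of positions and velocities
    bonds:    (M, 2) int array of initially neighboring atom pairs
    a_nat:    (M,) natural bond lengths a_ij = (1 + eps_ij) * a
    broken:   (M,) bool mask of failed bonds (bonds never heal)
    """
    forces = np.zeros_like(pos)
    r = pos[bonds[:, 1]] - pos[bonds[:, 0]]
    dist = np.linalg.norm(r, axis=1)
    broken |= dist >= eps_break               # Heaviside cutoff, Eq. B2
    alive = ~broken
    rhat = r[alive] / dist[alive, None]
    # piecewise-linear radial force (Eq. B1), attractive when stretched
    f = k_r * (dist[alive] - a_nat[alive])[:, None] * rhat
    # Kelvin-type viscous force on surviving bonds (Eq. B7)
    f += eta * (vel[bonds[alive, 1]] - vel[bonds[alive, 0]])
    np.add.at(forces, bonds[alive, 0], f)     # action ...
    np.add.at(forces, bonds[alive, 1], -f)    # ... and reaction
    vel += dt * forces / mass                 # Eq. B8 with m_i = 1
    pos += dt * vel
    return pos, vel, broken
```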
After defining the steady-state optimal length of each bond a_{i,j} by Eq. A1, we first relax the system under the equations of motion, Eqs. B1-B8, with a small amount of viscosity, yielding the minimal-energy locations of the atoms in the lattice. In Fig. 10 we show a small-scale 3D perturbed hcp lattice generated with our model.
After relaxing the initial lattice, we strain the lattice under mode-I tensile loading with a given constant strain corresponding to a given driving displacement ±∆ of the edges, and seed the system with an initial crack. The crack then propagates via the same molecular-dynamics Euler scheme using Eqs. B1-B8. | 2017-05-24T20:02:24.000Z | 2015-07-02T00:00:00.000 | {
"year": 2017,
"sha1": "3065ea3b2526d6b956d7b1a6a9a13243271eee0d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1507.00785",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3065ea3b2526d6b956d7b1a6a9a13243271eee0d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
51919655 | pes2o/s2orc | v3-fos-license | Partial Discharge Analysis in High-Frequency Transformer Based on High-Frequency Current Transducer
High-frequency transformers are the core components of power electronic transformers (PET), whose insulation is deeply threatened by high voltage (HV) and high frequency (HF). The partial discharge (PD) test is an effective method to assess an electrical insulation system. A PD measurement platform applying different frequencies was set up in this work. PD signals were acquired with a high-frequency current transducer (HFCT). To improve the signal-to-noise ratio (SNR) of the PD pulses, empirical mode decomposition (EMD) was used, increasing the SNR by 4 dB. PD characteristic parameters such as the partial discharge inception voltage (PDIV) and the PD phase, number, and magnitude were all analyzed as functions of frequency. Higher frequency led to a higher PDIV and a smaller discharge phase region, while the PD number and magnitude first increased and then decreased as the frequency increased. As a result, a suitable frequency for evaluating the insulation of high-frequency transformers is proposed at 8 kHz according to this work.
Introduction
To establish a smart grid, it is necessary to accomplish electronic power conditioning and control of electric power production and distribution [1]. In this trend, electronic-based power devices are migrating from on/off control to modern control, such as the direct current (DC) microgrid (DCMG) [2] and railway traction systems [3]. Power electronic transformers (PET), functioning as energy routers in power grids at various voltage levels, have gained widespread attention. Since PET is a combination of power electronics and high-frequency (HF) transformers, it has a great capacity to convert electrical energy with different electrical characteristics and to provide reactive power compensation for the system. To reduce the space occupied by PET, its operating frequency is designed mainly at a few thousand hertz [4-6], much higher than the conventional power frequency of 50/60 Hz. The more severe working conditions of high power, high voltage, and high frequency threaten the insulation of PET [7]. As a core component of PET, HF transformers play a role in isolating and transmitting power, and their insulation is particularly important. An HF transformer with damaged insulation not only has low energy conversion efficiency but can also lead to power failures of the power and electronic equipment [8], further causing the whole PET to crash. Partial discharge (PD) in high-voltage (HV) equipment is deemed one of the most significant phenomena to be investigated for determining defects and degradation in electrical insulation and an apparatus's lifetime. Similar to the conventional HV power apparatus, scholars have paid attention to PD in high-frequency transformers to determine whether there is a fault and to evaluate the health status.
The frequency dependence of PD sources has been taken into account, and a model has been developed to study the effect of applying higher frequencies (50 to 600 Hz) on the behavior of PD activity [9]. Experimental research has been carried out in the range of 50 to 1000 Hz [10]. However, according to field experience with oscillating waves ranging from 20 Hz to several hundred hertz, the frequency of the power source makes little difference to PD activities [11-13]. More research has been done at higher frequencies: a semi-square voltage of 2 kHz was used in [14], and even repetitive pulse-width-modulation (PWM) HV pulses of tens of kilohertz (kHz), such as those stressing power electronic devices, are considered a significant cause of reduced insulation reliability and shortened life cycles [10,15]. As for the high-frequency transformer in PET, non-sinusoidal waveforms are not suitable for voltage step-up/step-down [7]. As a result, sinusoidal waveforms above 1000 Hz deserve more attention.
As is well known, PD detection and diagnosis in low-frequency power transformers rely on various techniques that have been studied extensively [16-20]. Several PD detection methods have been developed according to the physical phenomena that accompany PD activity, such as the current pulse method [21], ultrasonic detection, ultra-high-frequency (UHF) detection, and optical methods [22,23]. However, the lower frequency limit of conventional pulse-current methods is close to the operating frequency of the high-frequency transformer, ultrasonic detection is not sensitive enough because of the complex acoustic impedance, UHF signals are affected by communication signals, and there is still no known experience with optical measurements in this kind of electrical equipment. In this sense, the wide-band current method is proposed as a good choice for PD detection in high-frequency transformers [24]. This paper is structured as follows. In Section 2, the measurement setup is described. The denoising process based on empirical mode decomposition is depicted in Section 3, where the signal-to-noise ratio (SNR) of the PD signal is increased by 4 dB. In Section 4, PD results and discussion are presented, covering which parameters are frequency-dependent and how the parameters (PD phase region, PD number, PD magnitude, etc.) vary at different frequencies. The conclusions about appropriate conditions for testing HF transformers are presented in Section 5.
Partial Discharge Measurement Setup
Insulation defects of HF transformers are caused by many factors, of which free metal particles often cause a suspended potential or even suspension discharge; suspension discharges account for the greatest number of partial discharges [14,25]. A schematic diagram of a typical HF transformer is shown in Figure 1.

The structure of HF transformers is similar to that of conventional transformers: they consist of a magnetic core, copper windings, and insulation parts. However, the special operating conditions place more demands on the core material, winding distribution, and insulation performance. To achieve high efficiency in energy conversion and high power density, a high frequency through the magnetic core is selected and a compact winding design is adopted [7,26]. A large number of coils in a restricted volume subjects the inter-turn insulation to strong electrical stress, so additional insulation failures should be taken into consideration in HF transformers. Insulating cardboard coordination has been adopted as the insulation for transformers [27], and the hot spots (the most likely points of suspension discharge) lie in the winding of HF transformers [28]. Therefore, a suspension discharge model under insulating cardboard was designed to imitate the suspended discharge of a high-frequency transformer winding [29], as shown in Figure 2.

In the test platform, a voltage output was applied to the suspension discharge model in series with a 10 MΩ resistance by the power supply (CTP2000, Suman, Nanjing, China). A high-precision high-frequency current transformer (HFCT), type iHFCT-54 (Innovit, Xi'an, China), connected to the oscilloscope (DLM2034, with a high sample rate of 2.5 GS/s and a bandwidth of 350 MHz, Yokogawa, Tokyo, Japan), collected all the signals.
One of the core devices in the test platform is the PD model. The large potential difference between the two parallel brass plates with a 35 mm gap provides a strong electric field in the air medium. Thus, the metal suspended in the strong electric field by the insulating paperboard and support carries a floating potential. Suspension discharges are generated due to the large potential difference between the suspended metal and the HV-side brass plate, across a small gap of 2 mm. The other core device is the HFCT, shown in Figure 1. It is a magnetic core surrounded by a multi-turn conductive coil. After a discharge, a large amount of charge moves rapidly toward the defect until it discharges again. This process is cyclic and generates a high-frequency current in the circuit. The magnetic field generated by the rapid current change passes through the magnetic core, resulting in an induced voltage in the coil, which is the signal output of the HFCT. The iHFCT-54 sensor has high accuracy, and its detection frequency range reaches 0.3~100 MHz. There is no electrical connection between the measurement circuit and the measured current. With front fastening, the iHFCT-54, used in a non-intrusive detection method, can realize online monitoring of PD. The output characteristic of this HFCT is shown in Figure 3.
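The induction mechanism described above, in which the fast-changing PD current induces a voltage in the multi-turn coil of the HFCT, can be illustrated with a short numerical sketch. This is a minimal Python illustration, not code from the study; the damped-exponential pulse shape and the mutual inductance value are assumptions chosen only for demonstration.

    # Minimal sketch of the HFCT sensing principle: u(t) = M * di/dt.
    # The pulse shape and mutual inductance M are illustrative assumptions.
    import numpy as np

    def pd_current_pulse(t, q=20e-12, tau=20e-9):
        # Damped-exponential model pulse scaled so its time integral equals q (C).
        i0 = q / tau
        return i0 * np.exp(-t / tau) * (t >= 0)

    def hfct_output(i, t, mutual_inductance=1e-7):
        # The changing flux of the primary PD current induces u = M * di/dt in the
        # secondary coil; there is no galvanic connection to the measured circuit.
        return mutual_inductance * np.gradient(i, t)

    t = np.linspace(0.0, 200e-9, 2001)          # 200 ns window
    i = pd_current_pulse(t)
    u = hfct_output(i, t)
    print(f"peak PD current: {i.max() * 1e3:.2f} mA")
    print(f"peak HFCT output: {u.max() * 1e3:.1f} mV")

Because the coupling is purely magnetic, the same sketch also indicates why the sensor supports the non-intrusive, online monitoring mentioned above.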
Denoise Processing of PD Signal
A typical PD current signal is shown in Figure 4, recorded when the power supply applied a sinusoidal voltage with a peak-to-peak amplitude of 20 kV and a frequency of 4 kHz. The output amplitude of the HFCT is U_HFCT.
PD activity was detected on both the positive and negative half-cycles within a period, according to Figure 4. U_HFCT attained a peak-to-peak value of 1.61 V with noise of 0.232 V, which reduced the accuracy of the PD magnitude. An improved signal-to-noise ratio (SNR) of the PD signal is therefore required. Consequently, empirical mode decomposition (EMD) was used in this manuscript to improve the SNR, because of its merits in processing nonlinear and non-stationary signals.
Process of Empirical Mode Decomposition
EMD has an advantage in dealing with nonlinear, non-stationary signals because of its strong self-adaptability. EMD underlies the Hilbert-Huang transform, which assumes that all data contain different simple internal oscillation modes called intrinsic mode functions (IMFs) [29]. In this way, complex data are a superposition of many different IMFs whose amplitude and frequency vary as functions of time. Based on this assumption, the process by which EMD decomposes a signal is shown in Figure 5.
A variance threshold S is set in accordance with the expected noise reduction. Then, the upper and lower envelopes of the original signal are obtained by calculating the local maxima/minima. The upper envelope is denoted a_i while the lower is b_i, and m is the arithmetic mean of a_i and b_i. The residual signal, from which this information has been extracted, is represented by h. If the criterion SD_i computed over all h obtained so far is less than S, then the first IMF is taken as h_i, and h_{i+1} becomes the new signal to be processed.
The sum of all IMFs matches the original signal perfectly. The IMFs are especially effective at capturing local nonlinear distortions of the waveform, exposing the underlying signal processes and revealing the instantaneous changes of the process as a whole.
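The sifting procedure sketched above is implemented in several open-source packages. The following minimal Python sketch uses the PyEMD package (installed as EMD-signal) and assumes, purely for illustration, that the highest-frequency IMF carries mostly noise and can be dropped before reconstruction; the synthetic two-pulse record below is a stand-in for the measured HFCT waveform, not data from the study.

    # Minimal EMD denoising sketch using PyEMD (pip install EMD-signal).
    # Assumption: the first (highest-frequency) IMF is treated as noise.
    import numpy as np
    from PyEMD import EMD

    def emd_denoise(signal, n_noise_imfs=1):
        # Rows of `imfs`: IMF_1 (highest frequency), ..., last row = residual.
        imfs = EMD().emd(signal)
        return imfs[n_noise_imfs:].sum(axis=0)

    # Synthetic stand-in for the HFCT record: two damped pulses plus noise.
    t = np.linspace(0.0, 250e-6, 5000)
    def pulse(t0, a):
        return a * np.exp(-(t - t0) / 2e-6) * (t >= t0) * np.sin(2 * np.pi * 2e6 * (t - t0))
    u = pulse(40e-6, 0.8) + pulse(170e-6, 0.4) + 0.05 * np.random.randn(t.size)
    u_denoised = emd_denoise(u)

In practice the number of IMFs to discard (or threshold) must be chosen per record; dropping a single IMF is only a demonstration of the reconstruction step.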
Analysis of Noise Reduction on PD Signal
After the PD current signal was processed by EMD, the denoising results (Figure 6) were obtained. As shown in Figure 6, the denoised signal has been corrected to some extent in terms of the SNR defined in Equation (1).
At t = 40 µs, the peak-to-peak amplitude of the original signal was 1.61 V; at t = 170 µs, it was 0.79 V. After denoising, the amplitudes at t = 40 µs and t = 170 µs were 1.32 V and 0.65 V, respectively. Therefore, noise reduction does not change the linear relationship between the current signal and the PD magnitude. The SNR of the original signal is given by Equation (2) and that of the denoised signal by Equation (3). Comparing Equation (2) and Equation (3), the SNR of the PD current signals increased by 4 dB after the EMD noise reduction. After denoising, the relationship between the PD current output by the HFCT and the PD magnitude was obtained (Figure 7). The peak-to-peak U_HFCT was 40 mV when a charge of 20 pC was applied to the PD model. A linear relationship between the PD magnitude (Q) and the response of the HFCT (U_HFCT) can thus be obtained.
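Since the bodies of Equations (1)-(3) are not reproduced here, the following sketch assumes the common amplitude-ratio definition SNR = 20·lg(U_signal/U_noise) in dB; the charge calibration simply inverts the linear Figure 7 relationship using the stated 40 mV response at 20 pC. Both the assumed definition and the derived slope of roughly 2 mV/pC are illustrative.

    # Minimal sketch. Assumptions: SNR = 20*log10(U_signal / U_noise) (taken
    # as Equation (1)); calibration slope from 40 mV peak-to-peak at 20 pC.
    import numpy as np

    def snr_db(u_signal_pp, u_noise_pp):
        return 20.0 * np.log10(u_signal_pp / u_noise_pp)

    print(f"SNR of raw record: {snr_db(1.61, 0.232):.1f} dB")  # ~16.8 dB before EMD

    K_pC_per_V = 20.0 / 40e-3   # 500 pC/V, i.e. about 2 mV/pC

    def apparent_charge_pC(u_hfct_pp):
        # Invert the linear Q-U_HFCT relationship of Figure 7.
        return K_pC_per_V * u_hfct_pp

    print(f"a 1.32 V pulse corresponds to about {apparent_charge_pC(1.32):.0f} pC")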
Results and Discussion
This section describes how the applied frequency influences the PD characteristics. The statistics of the PD results over 100 periods, for applied frequencies of 4 kHz, 6 kHz, 8 kHz, 10 kHz, and 12 kHz, are presented in this section.
Partial Discharge Inception Voltage at Different Frequencies
The partial discharge inception voltage (PDIV) was measured multiple times at each frequency. The amplitude of the PDIV is U_inc, as shown in Figure 8.
The measurement results in Figure 8 show that U_inc fell within a range at each fixed frequency, and this range clearly tended to widen as the frequency increased. At 8 kHz, the distribution range of U_inc was minimal, giving a high measurement accuracy.
Results of PD Spectrum at Different Frequencies
The basic parameters for characterizing PD patterns are the phase angle (Φ) in degrees, the discharge magnitude (Q) in pC, and the number of discharges (N). A 3-D pattern is shown in Figure 9a and the phase-resolved partial discharge (PRPD) pattern is presented in Figure 9b.
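How such Φ/Q/N statistics can be assembled from raw pulse data is sketched below. This is an illustrative Python reconstruction, not the authors' processing chain: it assumes each detected pulse has already been reduced to a time stamp and an apparent charge, and that the phase follows from the known supply frequency.

    # Minimal PRPD sketch: phase-resolve pulses into (Phi, Q, N) statistics.
    # Assumption: pulses are given as (time stamp, apparent charge) pairs.
    import numpy as np

    def prpd(pulse_times_s, charges_pC, f_hz, n_bins=72):
        phase = (pulse_times_s * f_hz % 1.0) * 360.0   # phase angle in degrees
        edges = np.linspace(0.0, 360.0, n_bins + 1)
        idx = np.digitize(phase, edges) - 1
        N = np.bincount(idx, minlength=n_bins)         # discharges per phase bin
        q_sum = np.bincount(idx, weights=charges_pC, minlength=n_bins)
        q_mean = np.divide(q_sum, N, out=np.zeros_like(q_sum), where=N > 0)
        return phase, edges[:-1], N, q_mean

    # Illustrative data: 2000 pulses over 100 periods at 8 kHz.
    rng = np.random.default_rng(0)
    times = rng.uniform(0.0, 100 / 8e3, 2000)
    charges = rng.gamma(2.0, 50.0, 2000)               # pC, arbitrary shape
    phase, phi, N, q_mean = prpd(times, charges, 8e3)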
PD Q-Φ Scatter Plot at Different Frequencies
The main detection parameter of PD is the PD magnitude (Q), which is the basis of the other detection parameters. The PD Q-Φ scatter plot in Figure 10 is the distribution statistic of the discharge magnitude in each phase.
Figure 10. PD Q-Φ scatter plots at different frequencies (panels (a)-(e): f = 4, 6, 8, 10, and 12 kHz).
As shown in Figure 10, the scatter pattern of the PD current in the two half-cycles had an 'hourglass' shape. As the frequency increased, the 'hourglass' waist became thinner, meaning that the PD magnitude at the low frequencies was more polarized.
PD N-Φ Spectrogram at Different Frequencies
The PD N-Φ spectrogram illustrates the phases at which PD properly occurs, displaying the occurrence time of PD. Figure 11 shows that almost no PD occurred near the zero crossings of the supply voltage. The PD current N-Φ spectra appeared as an 'M' shape, often called a "rabbit ear" shape. With increasing frequency, the spectrogram in each semi-period transformed from a right triangle to an acute triangle. From Figure 11, the PD phase distribution information, namely the phase region and phase center, is shown in Table 1.
Figure 11. PD N-Φ spectrograms at different frequencies (panels (a)-(e): f = 4, 6, 8, 10, and 12 kHz).
The PD phase distribution was closely related to the polarity of the supply and to the frequency. The frequency increase led to three phenomena in the PD phase distribution (a sketch for computing these phase statistics follows the list):
• The initial PD phase in the positive semi-period gradually shifted to the right, the end PD phase fluctuated around 150°, and the PD phase region decreased;
• The initial PD phase in the negative semi-period fluctuated around 200° and the end PD phase gradually shifted to the left, giving rise to a decreasing PD phase region;
• The positive discharge center phase shifted to the right with increasing frequency; the negative discharge center phase remained around 230°.
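A minimal sketch of how the Table 1 quantities could be computed from the phase-resolved data is given below. It assumes that the "phase region" is the span of phases in which discharges occur in each half-cycle and that the "phase center" is the mean phase of the discharges there; both readings of the terms are assumptions, and the sketch further assumes discharges occur in both half-cycles.

    # Minimal sketch of per-half-cycle phase region and phase center.
    # Assumptions: region = [min, max] occupied phase; center = mean phase.
    import numpy as np

    def phase_stats(phase_deg):
        return phase_deg.min(), phase_deg.max(), phase_deg.mean()

    def half_cycle_phase_stats(phase_deg):
        pos = phase_deg[phase_deg < 180.0]    # positive semi-period
        neg = phase_deg[phase_deg >= 180.0]   # negative semi-period
        return phase_stats(pos), phase_stats(neg)

    # e.g. with `phase` from the PRPD sketch above:
    # (pos_start, pos_end, pos_center), (neg_start, neg_end, neg_center) = \
    #     half_cycle_phase_stats(phase)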
PD Statistical Data at Different Frequencies
The average magnitude of each discharge (Q_ave) and N for the two polarities of the supply were obtained statistically. The sum of the PD magnitudes over 100 periods is denoted Q_all. The PD parameters are shown in Table 2. According to Table 2, the Q_ave and N values at the two polarities were approximately equal at each frequency; the polarity of the supply had little effect on the PD number and magnitude. N and Q_all over 100 periods are shown in Figure 12.
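A sketch of the corresponding Table 2 aggregation follows. It assumes that polarity is assigned from the half-cycle in which a pulse occurs, with Q_ave the mean charge per discharge, N the discharge count per polarity, and Q_all the summed magnitude over the whole 100-period record.

    # Minimal sketch of the Table 2 statistics per polarity.
    # Assumption: polarity follows from the half-cycle of occurrence.
    import numpy as np

    def pd_statistics(phase_deg, charges_pC):
        stats = {}
        for label, mask in (("positive", phase_deg < 180.0),
                            ("negative", phase_deg >= 180.0)):
            stats[label] = {
                "N": int(mask.sum()),
                "Q_ave_pC": float(charges_pC[mask].mean()) if mask.any() else 0.0,
            }
        stats["Q_all_pC"] = float(charges_pC.sum())
        return stats

    # e.g. pd_statistics(phase, charges) with the arrays from the PRPD sketch.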
The PD number and magnitude rise at lower frequencies and tend to decline at higher frequencies, resulting in a frequency-induced inflection point.
Frequency-Dependent PD Number and Magnitude
The PD number and magnitude are the main parameters describing PD. Therefore, an analysis of how the PD number and magnitude change with frequency is presented in this part.
The space charge distribution directly determines the characteristics of the partial discharge. The frequency mainly affects PD by affecting the polarization degree and the diffusion process of the space charge. Under AC voltage, the movement of charged particles in the air medium between the electrodes of the discharge model causes the air to polarize. The degree of polarization can be represented by g [20], where g is an equivalent parameter of space-charge polarization in air (fS/m); γ is the conductivity of air (fS/m), with γ = 0.0231 fS/m at 25 °C and 110 kPa; ε is the dielectric permittivity of air (F/m), with ε = 8.86 × 10⁻¹² F/m at 25 °C and 110 kPa; and ω is the angular frequency of the applied electric field. g is a frequency-dependent parameter: it tends to increase as the frequency increases. A high g means strengthened air polarization, which intensifies the non-uniformity of the electric field, and PD occurs more easily when the electric field is non-uniform. Therefore, a higher frequency results in a higher PD number and magnitude. However, the polarization process of the space charge takes a certain amount of time. Once the period at high frequency becomes shorter than the polarization time, the polarization effect of the space charge no longer plays the dominant role in the partial discharge.
Discharge is a neutralization process of charged particles. When PD occurs, most of the charged particles are neutralized and release energy; a handful of charged particles are retained on the insulating surfaces, which is called the retention effect [21]. The space charge diffuses along the cut-off surface under the action of the electric field between the electrodes. The dissipation process of the space charges is described by Equation (5), where N_q is the number of space charges; ∆t is the discharge interval of PD (s); and τ_d is the time between the instant the supply voltage amplitude exceeds the PDIV and the first discharge [22]. In the expression for τ_d, U is the amplitude of the source (V); V_voi is the equivalent gas volume exposed to the electric field (m³); C_d is the radiation ionization coefficient; Φ_d is the radiation quantum flux density (Wb); ρ is the gas density (kg/m³); p is the air pressure (Pa); U_inc is the value of the PDIV (V); and β is a positive constant.
The value of C_d × Φ_d is 2 × 10⁶ kg⁻¹·s⁻¹ in air at p = 110 kPa, and the value of ρ/p is 10⁻⁵ kg·m⁻³·Pa⁻¹ at 25 °C. τ_d is a monotonically decreasing function of U/U_inc. A frequency increase causes U_inc to increase, according to Figure 8; thus, U/U_inc decreases and τ_d increases with increasing frequency. Meanwhile, the rapid change of voltage results in a decrease in ∆t. N_q is a monotonically decreasing function of ∆t/τ_d, so N_q gradually increases due to the decrease of ∆t and the increase of τ_d, meaning that more space charge is retained on the insulation. The polarity of the voltage changes after each half-cycle, but the space charge remains on the insulating surface, producing an electric field that opposes the supply and suppresses the partial discharge.
To sum up, as shown in Figure 13, a theoretical frequency f_0 exists at which the impacts of the polarization effect and the retention effect on the PD magnitude are approximately equal but opposite. Therefore, the maximum PD magnitude is reached at f_0.
The theoretical results of frequency-dependent PD are consistent with the test results. When f < f_0, the polarization effect outweighs the retention effect, so the number and magnitude of PD increase with increasing frequency. When f > f_0, the polarization process cannot be completed within the semi-period; the retention effect then dominates, causing the PD number and magnitude to decrease with increasing frequency.
Conclusions
In this manuscript, a broadband HFCT was used to detect partial discharges at different frequencies from 4 kHz to 12 kHz. EMD was used for denoising the pulses; as a result, the SNR of the data was increased by 4 dB. After further statistical analysis, the behaviour of the PD Q-Φ scatter plot and the PD N-Φ spectrogram in terms of phase, number, and magnitude was analyzed. The PD phase region decreased monotonically over the frequency range of 4 kHz to 12 kHz, while the PD number and magnitude first increased and then decreased within this range. The polarization process of the air medium and the retention effect determined the PD number (N) and magnitude (Q) at different frequencies. Finally, a frequency of 8 kHz was selected in this specific case as a suitable working frequency for detecting insulation defects: it provided high precision in the acquisition of the PDIV values and the best amplification effect for insulation defects. For the suspension PD model at this working frequency, the PDIV value was lower and the PD magnitude and number were larger at the same voltage level. Thus, 8 kHz can be used to assess the insulation status of HF transformers, with consideration of frequency-dependent effects. | 2018-08-04T11:30:56.822Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "76d44227d37b076093c19107c4f5b444c10f2389",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/11/8/1997/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "87a0758e2ac576d8c785d01b23cc8d1a5e4131f4",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
245058281 | pes2o/s2orc | v3-fos-license | The transformative role of the media in the formation of virtuous citizens: A contribution to reconciliation in a post-apartheid South Africa
The significant role that the media played during the apartheid years in South Africa is staggering. Abundant evidence suggests that, during those years, the print media served as instruments of propaganda for the apartheid regime, while the Argus group, for instance, exposed the atrocities and human rights violations of the same regime. However, 24 years into democracy, what is the role of the media in a post-apartheid South Africa, where citizens still suffer from the ghosts of apartheid, continued human rights violations, racial discrimination, and related issues that make it seem as if South Africa is "irreconcilable"? This question will be addressed by drawing from a recent study conducted by the author that demonstrated the role of newspapers in moral formation, as a positive perspective on the media in the process of reconciliation. The author argues that the print media play their role through their articulation of a good society and through regular reporting on issues related to reconciliation, which can be regarded as an exercise in vigilance that helps their readers identify and address immoral behaviour that may impede reconciliation. Through such reporting, the audience could become virtuous people who will serve as assets in the process and journey towards reconciliation in South Africa.
INTRODUCTION
Since the dawn of democracy in 1994, there have been too many reported human rights abuses in South Africa. In some instances, this reflects negatively on the country's journey towards reconciliation. Human rights violations, in the form of racial discrimination, ethnic conflict, and inequality, have been commonplace in South Africa since 1994. These problems persist even after the operations of the Truth and Reconciliation Commission (TRC), which, despite some successes, unfortunately did not bring such incidents to an end. This reflects poignantly the long road that still lies ahead for South Africa in terms of reconciliation. Nelson Mandela believed that the human rights atrocities, which, in part, emerged as a result of South Africa's apartheid history, could be resolved. He suggested that, in a post-apartheid South Africa where racial or ethnic conflicts exist, there is a need for an "RDP of the Soul": the "need to build the moral and religious foundations of society" and to "strengthen the moral fabric of society" is imperative. 1 In pursuit of "moral regeneration", the role of parents, role models, schools, and religious communities is of paramount importance. Vosloo (1994) observes that moral formation takes place where role models, examples, heroes, saints, martyrs, and significant adults (all inspiring figures) play a key role in guiding people through such processes, providing direction, motivation, and inspiration. However, as will be the premise of this article, the media, South African newspapers in particular, will be crucial in establishing a human rights culture in South Africa and in serving as an agent of moral formation as part of the process and journey towards reconciliation. The media might not understand their role as such. This article will, therefore, explore the ways in which this could happen, in order to strengthen the role of other role players.
The African National Congress (ANC) released a statement on the complicity of the media with the apartheid regime in terms of human rights violations: we believe the South African media played other (broader) roles during the apartheid era, and we believe these roles need to be examined. This examination is vital if we are to understand our past, to bring about reconciliation, and to broaden our understanding of basic freedoms … South Africa needs a watchdog media, not a lapdog media. The African National Congress believes the Truth and Reconciliation Commission can play a significant role in helping us to understand the role of the media in the past, which in turn can shape our understanding of the role of the media in the future (ANC 1997:2). The Chief Whip of the ANC in parliament again raised this sentiment in August 2017, which The Independent online newspaper presented under the rubric "Mthembu calls for media truth and reconciliation commission". The newspaper report refers to Mr Jackson Mthembu, who called for "the formation of a media truth and reconciliation commission (TRC)" that would give the media the opportunity to confess and apologise "for its role in human rights atrocities committed during the height of apartheid". The Independent online further reports: Mthembu accused mainstream media houses of having been complicit in the acts of the apartheid regime to exploit, and to discriminate against, the black majority. He said that the media did this by turning a blind eye to the atrocities of the apartheid regime, instead of exposing its wrongdoings.
It is apparent, in the view of the ANC and the Chief Whip, that there exist "pockets" of disappointment and discontent in some quarters in South Africa with the media for the negative role they played in contributing to, and at times escalating, the human rights abuses and violations of the apartheid years, which created a vacuum for national reconciliation.
However, to exclusively regard the media as an agent of division, conflict, and human rights abuses and violations is to do an injustice to the immense capacity that the media possess to steer processes of reconciliation. Krabill (2001:1) cites the words of the then Deputy Chairperson of the TRC, Advocate Alex Boraine: Without coverage in newspapers and magazines, without the account of proceedings on TV screens and without the voice of the TRC being beamed through radio across the land, its work would be disadvantaged and immeasurably poorer.
This article is thus based on the premise that the media can play an enriching role in South African society, as asserted by Boraine. It interrogates this role by assessing recent (2018), available data on the concrete contribution the media can make to processes of reconciliation.
The author starts with a brief discussion on the relationship between moral formation and reconciliation. In this section, the author succinctly discusses how moral formation can assist with reconciliation in South Africa. This is followed by a brief discussion on the conditions for moral formation based on the work done by Vosloo (1994), and subsequently a discussion on recent research (2018) that reflects on the concrete ways in which the media can contribute towards moral formation. This information will then be used to reflect on concrete ways in which the media can assist in reconciliation and as an agent of moral formation. It is, therefore, a question not only of how the media play a role in moral formation, but also of how the media contribute to the broader discussion on reconciliation in South Africa. It will become evident in the discussion on the role of the media in moral formation and reconciliation that the media have a very limited role to play among other role players such as the church, the community, the education system, parents, families, and role models. The author will reflect on this, so as not to overly romanticise the role of the media in reconciliation. This will be discussed in the last section of this article, in which more questions will be raised than answers.
THE RELATIONSHIP BETWEEN MORAL FORMATION AND RECONCILIATION
What is reconciliation? Koopman (2007a:97) defines reconciliation in terms of Pauline thought as "hilasmos" which has to do with "the expiation of wrongs, and stumbling blocks to atonement", but also as "katalassoo" which refers to "harmony in the relationship with the other". In concise terms, he states that reconciliation is "a life of embrace of the other, and the expiation of the stumbling-blocks to that embrace, namely sin".
Though Koopman refers to those stumbling blocks as "sin", they might also be described as acts of moral wrongdoing that hinder "embrace" and reconciliation, and create further strife, tension, separation, and division. Such (immoral, sinful) behaviour and attitudes lie at the heart of the need for moral character. What kind of person should I be? This is an appropriate question in a post-apartheid South Africa, where individuals have reason to hate, judge unfairly, discriminate, and exclude. In light of a rift between two parties, whether individuals or groups, there can be various ways to solve the issue, as well as conditions that can lead to the easing of tensions and the removal of enmity. In this regard, the change of character, as proposed by the former statesman in the opening paragraph, is crucial. Ackermann (1996:49) argues that "apartheid was a perfect system for creating apathy by its many mechanisms which prevented contact with people". Therefore, the author argues that one of the virtues the media allow their audience to internalise is respect for the other, and the embrace of the other, through mediated contact.
In his recent work, Reconciliation: A guiding vision for South Africa?, Conradie (2013) refers to the complexity of reconciliation, especially in terms of victim and perpetrator relationships. He argues that, at times, the victim can also be the perpetrator, and vice versa. Besides this complexity, specific to the reconciliation process in South Africa, he also raises the issue of justice as an integral part of reconciliation. He argues that reconciliation has become a contested term because of an understanding that justice is separable from reconciliation, whereas it is inherently part of the process and should be conceptualised as such. He addresses the issue of restitution and ways in which it could be achieved. Baron (2015) subsequently added that the TRC process in South Africa, in its pursuit of reconciliation, should have included "remorse and repentance". Burton (2013:87) responds to Conradie's work on reconciliation and accentuates the issue of "learning to listen to others": Therefore a prerequisite for seeking reconciliation is to develop a process of learning to listen across all sectors, creating opportunities and mechanisms for greater knowledge of self and of others. Being heard and understood, being truly seen and known, is a deep human need. It is the only way to transform toxic relationships.
Her suggestion would find concrete expression in the ways in which the printed media allow their readers to appropriate such a skill through their reports.
In terms of how reconciliation can be achieved, Van der Borght (2018) also argues that we need to seek resources within our own religious traditions. In his attempt to offer some solution for national reconciliation in South Africa, he argues that the Christian churches themselves have not contributed positively to the process of reconciliation in South Africa. He also argues that, instead of retreating from such processes because of their "past track record" and guilt, 2 a category under which the author would also place some newspapers' reporting under apartheid, they should enter into a process of rediscovering their contribution within their own tradition. This is crucial, especially in light of MacIntyre (1981), who argues for the retrieval of the tradition of Aristotle, Thomas Aquinas, and others who argued for the formation of good character in an attempt to address immorality in society. In Van der Borght's suggestion, the churches would do well in this regard. However, the author argues that the media can be another role player in the formation of virtues that would enhance reconciliation in South Africa.
There is a need for the formation of moral character and of virtues that enhance "embrace" and "reconciliation". This can be a change in the moral character of the victim, of the perpetrator, or of those who observe such tensions. The opposite would be to follow the Kantian approach, focused on the categorical imperative and on making the right decisions, which is the outcome of the Enlightenment project, according to MacIntyre (After virtue, 1981). However, South Africans need a moral character that is practised and lived in seemingly "irreconcilable" situations. Through their presence, such people can invariably create an atmosphere that enhances reconciliation. In reference to the above, this article addresses national reconciliation in South Africa through the formation of moral individuals. Koopman (2007b:107) argues, in another contribution, that in building a human rights culture, "we need right humans". This article advocates that such human beings, who "embrace" others, are desperately needed if we envisage a reconciled country. However, it is crucial to understand the work of Koopman in terms of the wells from which he draws. He and other ethicists draw from the work of scholars who argue for the return of an ethics of virtue. This tradition argues that it is important to ask not only what a good society is, or what is right or wrong, but primarily what a good person is. Character and moral formation become an important task. The resurgence of such a conversation began with the work of MacIntyre (1981), who argues that there should be a retrieval of the tradition of Aristotle and Thomas Aquinas that emphasised the training of virtues within communities. Hauerwas followed in this tradition and argued that the basis of these communities should be the story of Jesus. 3 The Dutch theologian Johannes van der Ven is a final example of someone who furthered the tradition of the training and education of good character. He addressed this in his work Formation of the moral self (1998), in which he identified several ways in which moral formation takes place, namely discipline, socialisation, moral transmission, moral education, and moral clarification. These scholars provide a good framework for the training of moral character that was adopted by various theologians in South Africa. For example, Koopman and Vosloo (2002) build on the tradition of virtue ethics to develop their scholarly work on the South African context. 4 Conradie's (2006) work Morality as a way of life identifies the conditions for moral formation that will be used in this article to discuss the role of the media.
RESEARCH METHODOLOGY
For the purposes of this article, the author selected four South African weekly newspapers (The Sunday Independent, Sunday Times, Mail & Guardian, Rapport) and assessed the rhetorical strategies that each used to report on government corruption in 2016. Government corruption served as a case study of the role the media could play in moral formation. The author considered all government cases of corruption and chose the four cases most reported by the four newspapers during that year, namely the building of President Jacob Zuma's homestead at Nkandla; the corrupt relationship between the Gupta family and state officials; the corruption at the South African Broadcasting Corporation (SABC); and the corruption reported at the Passenger Rail Agency of South Africa (PRASA). The author collected a total of 342 reports on corruption, assessed each newspaper's reports, and compared the reports across the four newspapers. 5 The author adapted Lawrie's (2005) rhetorical model of analysis and assessed the rhetorical strategies that each newspaper used. This enabled the author to unearth the role the print media play in moral formation. A moral issue such as corruption was used to assess, in terms of the newspapers' reporting of these cases, in what way their reporting can contribute to the task of moral formation and reconciliation in South Africa.
THE ROLE OF PRINTED MEDIA IN MORAL FORMATION
The author wants to frame the media's role in terms of moral formation. Therefore, it is critical to discuss how moral formation takes place and what conditions should be met for it to do so. In this regard, the author refers to the list provided by Conradie (2006:77), which reflects a degree of consensus among virtue ethicists on what is needed for moral formation to take place. These conditions are as follows:
• Where virtues are rooted in a more comprehensive vision of the good life, of a good society.
• Where virtues are usually embodied and carried through narratives, through paradigmatic stories.
• Where such paradigmatic stories are conveyed by "communities of character", namely groups, traditions and communities of people who live with integrity, honesty, and loyalty.
• Where conversion, transformation, and discipleship are necessary for those who participate in such "communities of character" (this also requires a long, intense, and often painful process of moral formation).
• Where regular exercises, rituals and (spiritual) disciplines are the context within which virtues can be internalised.
• Where role models, examples, heroes, saints, martyrs, significant adults (all inspiring figures) play a key role in guiding people through such processes of moral formation, providing direction, motivation, and inspiration.
• Where friendships (in various ways and forms) are crucial to sustaining people on this road of moral formation.
• Where credibility is born from the concrete practising of central convictions and virtues; such credibility eventually serves as the criterion for whether or not moral formation took place.
The above conditions for moral formation are not all relevant to the media. The author will, therefore, draw from a research study that outlines the role the media play, as a basis for a discussion of the media's role in reconciliation. The results that emerged from a study of four South African weekly newspapers' reports on corruption between 1 January 2016 and 31 December 2016 will be discussed. The study specifically showcases the role of the print media in moral formation, in relation to the conditions outlined by Vosloo (1994).
DISCUSSION OF FINDINGS
The research reveals that the newspapers in the study not only reported frequently on immoral behaviour, but also placed such issues on their front pages, thereby ensuring that their readership was constantly aware of it. 6 They also placed particular emphasis on the effects and impact that such behaviour has on the lives of ordinary citizens. This may discourage and deter their readership from engaging in such behaviour.
6 In the conditions listed by him, Conradie (2006:77) argues that moral formation takes place "where regular exercises, rituals and (spiritual) disciplines are the context within which virtues can be internalised".
The media presented graphic information to make their readers aware of, and to expose, the moral issues in question. They also provided their readers with an opportunity to live vicariously through others 7 and afforded them an opportunity to reflect on issues of morality. 8 Furthermore, the media play a role in ensuring that their audiences become not only aware but also more vigilant and alert, which could strengthen their response(s) to such acts, by developing repugnance towards such behaviour and a change in their attitudes and conduct.
The media not only (in terms of the above) make audiences aware of moral issues, but also position their readers to judge moral issues. Prinsloo (2007:212) argues that this is one of the functions of media texts: the media invite their readers to symbolically enter the inner sanctum of belonging by sharing their ideas and orientation. The text is a product of a range of semiotic decisions that act to position the reader; it invites the reader to adopt one position and, at least implicitly, reject another. Rossi and Soukup (1994:209) argue in a similar vein that the media shape the readers' "early perceptions of good and bad" as well as "constitute a new, separate and powerful dimension of that (moral) formation".
In relation to the above, the media position their audience and allow them to identify and associate with values that are important for moral formation. 9 This was specifically reflected in the way in which some of the newspapers reported on the moral behaviour and lifestyles of public leaders, keeping them morally accountable. 10 They therefore positioned their readers both to show respect for such leaders and to hold those leaders accountable, should they fail in their moral duties and responsibilities. This might instil values of equality, moral accountability, and obligation in their readership, since such leaders are held to be as responsible as the remainder of society and are encouraged, through the media's vigilance, to uphold their moral duties to the society they serve. The media's vigorous reporting on such powerful figures might also inspire other members of society to blow the whistle on people in power and authority, as well as on other public officials in government departments and institutions. This may result in the internalisation of such values in shaping the audience's moral character and motivate readers to act in circumstances that require responsible, ethical leadership.
7 Kendall (2003:119) refers to this in his list of how the media contribute to the change of behaviour. He argues that the media do so by informing their audiences about events; introducing them to a wide variety of people; providing their audience with a wide variety of viewpoints; making them aware of products and services; entertaining; and by letting them live through other people's experiences.
8 Hoekstra et al. (1994:212-233) argue that the media serve as "distinctive sources for moral reflection".
9 Conradie (2006:77) argues that moral formation takes place "where virtues are usually embodied and carried through narratives, through paradigmatic stories".
10 See perhaps the Mail & Guardian's placing of the President continuously on its front pages.
The research reflected on how the media also go to great lengths to demonstrate the strong stance that certain individuals took against immoral behaviour. The reporting on such moral courage might inspire readers to be vigilant and courageous. 11 The media's reporting reflects that they posit a particular vision of how a good society should function, and they therefore report precisely on those instances that would endanger such a vision. 12 The media in South Africa reflect the kind of society and values envisioned in the South African Constitution. In this study, it became clear that the newspapers focus on issues in South Africa such as the unequal distribution of resources, poverty, and slow economic growth as some of the root problems behind immoral behaviour. It should be noted, however, that the research is clear that not all media focus on the same values or elements needed for South Africa to enhance a human rights culture. Nevertheless, all their reports did, in general, reflect the media's commitment to the moral values enshrined in South Africa's Constitution.
It is also evident that citizens have their preferences for media that are aligned with their ideological position. This is evident from the circulation figures of the selected newspapers in the study during 2016. Although the results show that there is a narrow gap between the ideologies of the studied newspapers, the difference in ideological positions could still be spotted in some instances. It is, therefore, also fair to argue that, on issues of immorality, the various media's audiences might think differently about specific moral questions.
It is evident that the newspapers, at times, send mixed messages on issues of moral behaviour. 13 This confirms, to some extent, that the media are at times seeking sensation (see also Verdoolaege 2005:182).
11 Conradie (2006) argues, inter alia, that moral formation takes place because of the presence of role models within communities. He observes that moral formation takes place where role models, examples, heroes, saints, martyrs, significant adults (all inspiring figures) play a key role in guiding people through such processes of moral formation, providing direction, motivation, and inspiration.
12 See also Conradie's (2006:77) condition on "a vision for a good society" for moral formation.
13 For instance, the author noticed that, while one newspaper focuses essentially on the corrupt nature of the individual, another focuses on the circumstances that allowed corruption to take place.
For instance, this research shows that, while a particular newspaper focuses on one prominent public individual and his corrupt behaviour, the same newspaper will, in its next edition, argue that the readership should have sympathy with such an individual in view of his family background. Presenting his family history in such detail evokes feelings of sympathy from the readers. This should not only be viewed as sensation-seeking, but also in terms of the critical issue, namely that the immoral behaviour of that individual will "fade" away in the midst of pertinent and atrocious evidence of immoral conduct. This and other similar cases serve as examples of how the media can easily squander a golden opportunity to take a strong stance in their reporting on such immoral conduct. This subsequently limits their capacity to contribute maximally to the formation of moral persons.
THE NEWSPAPERS' ROLE IN RECONCILIATION
The previous section discussed how the media can be an agent of moral formation. In this section, the author argues that, through this role in moral formation, the media contribute to the reconciliation process in South Africa.
Reconciliation in South Africa addresses the atrocities of the past that flowed from "immoral behaviour" and immoral judgements, which undermined the human rights and dignity of South Africans. If we do not address issues of morality, we will never succeed in reconciling South Africans. However, reconciliation is not only about decision-making processes, nor about South Africa's claim to have one of the best constitutions in the world, one which upholds and respects human rights; it is also about building and forming virtuous citizens. In ethics, the questions are always raised: What are good/right decisions? What is a vision for a good society? Here, we focus on the central question: What is a good person who embodies the values needed for reconciliation? The media have such a role to play.
The media can play the role of building and forming virtuous citizens when they make audiences aware of issues of conflict, racial discrimination, and prejudice that exist in various parts of South Africa. Reporting on these issues can give South Africans an idea of how far the country has journeyed in terms of national reconciliation and of the extent of the journey that remains. The media should not only make readers aware of issues of reconciliation, but also position their readers in such a way that they can judge the issues, events, occurrences, and behaviour of citizens that endanger the project of reconciliation. The media should, therefore, position their audiences in such a way that they will also have to "judge" in a morally responsible way on issues of reconciliation in South Africa, in terms of how they present cases about "race", "class", "identity", "gender", "ethnicity", "culture", and other issues related to the broader discourse on national reconciliation.
Behaviour and actions that have the potential to curtail reconciliation should be reported on regularly, in order to make South Africans vigilant. Through the use of metaphors and various rhetorical strategies, the media can allow their audiences to understand those "stumbling blocks" that prevent South Africans from embracing each other, irrespective of their diverse backgrounds and (racial and ethnic) identities. The study suggests that the media are intentional in terms of their rhetoric. Given this, the media should be intentional in terms of the role they could play in issues of reconciliation. In doing so, the media will be able to urge their audiences to change their attitudes and behaviour and become citizens who embody those values that are needed for reconciliation. The consumers of media will become sensitive to how they treat each other, as they will become the kind of persons who would work tirelessly and intentionally to bring about reconciliation through their embodiment of it. Botman (1996:38) refers to the "metaphorical locking devices" in which those who supported apartheid would refer to certain issues as too "sensitive", too "delicate", and "emotional" to talk about, so that they could close the debate and not allow others to ruminate and ponder on such conversations and stories about the "stumbling blocks" that hinder embrace. In terms of Botman's argument, it is clear that even the horrendous stories need to be told, as such cases, types of material and narratives are important for the healing of a broken, fractured, divided nation. While remaining sensitive in presenting graphic details of events, the media may not only shock their audience, but also provide them with a deep sense of understanding of the state of affairs and journey towards reconciliation in South Africa. In this way, the media will indeed give South Africans sufficient space for discussion on the realities facing the moral fabric of society. This is important so that the media develop a collective moral consciousness in forging national reconciliation. These stories, through media institutions, are necessary for South Africans to understand and reflect on their role, as a collective, in sustaining the vicious system of apartheid. In this way, the media create a space in which individuals will also be challenged to change their behaviour and create a new reality -a reconciled South African nation.
The media could play a role in helping members of society associate themselves with certain values that would also be crucial in the embodiment of reconciliation within a national context. In making people aware of the challenges and issues that hamper reconciliation, the media constantly remind all South Africans of their collective accountability in what transpired in South Africa. Ackermann (1996:50) asserts in this regard: "Accountability requires awareness", which is the opposite of "apathy, the opposite of being uncaring and uninvolved with one's neighbour, being out of the relationship". Through the audience's exposure to what occurred, and is still happening in the country, they become more sensitive towards the "other", 14 reflecting on their prejudices, and they are challenged to move beyond the borders of their perceptions of the reality of the other towards an understanding of what it means to be human in this world. Ackermann (1996:50) argues that, in the process of reconciliation, we cannot be accountable and faithful only to the values and vision of the communities from which we come, but also to the vision of South Africa as a whole. Mangcu (2008:16) argues for "blackness beyond pigmentation". Similarly, Mbembe (2008:147) refers to a new "black solidarity", after the 1994 negotiations, that "will be rooted in a moral commitment to racial reconciliation and equal justice for all". He adds: "Freedom for black South Africans will be meaningless if it does not entail a commitment to freedom for every African, Black or white."
The media are well positioned to present issues and stories that would forge such a new moral identity and character. We need such individuals: people who will look beyond their pigmentation and who will care for justice for all. The media could, through their reporting, fill such a role and build moral individuals.
Through reporting on "paradigmatic stories", which embody those virtues needed for reconciliation, the media would not only inspire but also allow audiences to internalise those virtues that are needed for reconciliation. Paradigmatic stories would include the everyday, meaningful stories to which the readers would relate, in order to make sense of their own context. Conradie (2013:30) raises the issue of storytelling and the establishment of common memories for healing the memories in victim/ survivor relationships.
Victims and perpetrators and those who thought they were innocent bystanders, now realize their complicity, and have an opportunity to participate in each other's humanity in story form (Botman 1996:37).
The media should, therefore, not hesitate to report on public and political leaders who do not further the agenda of reconciliation. 15 If the media vigorously and courageously report such stories, this will surely enact the kind of virtues of respect and tolerance that are needed for reconciliation. Recently, the media reflected their commitment, in their reporting, to issues of reconciliation. They reported on the premier of the Western Cape, Mrs Helen Zille's tweets on colonialism, as well as on the leader of the Economic Freedom Fighters (EFF), Mr Julius Malema and his inflammatory comments that were widely argued to incite racism when he referred to the song "Kill the Boer!". 16 These are some examples of ways in which the media can contribute in terms of exposing the "stumbling blocks" and create an opportunity for their readership not only to reflect on such issues, but also to act in a morally responsible manner.
It is also evident that the media have a choice to decide what they want to place on the front pages, and how many times they will focus on a particular story, and how they will construct the logic of each story and event. 17 The aforementioned forms part of the media's rhetorical strategies. Whether for better or worse, the media should understand their role in the country's journey towards reconciliation. They should, therefore, show their moral commitment in how they report on such stories, events and incidents that are curtailing the national reconciliation project. Newspaper journalists should increase their reporting on such issues, place such reporting on the front page and provide good reasons as to why South Africans should "embrace" one another, against the narrative of "racial discrimination", "strife" and "conflict" among racial and ethnic identities in South Africa. Their regular focus on such issues will be crucial for the development and formation of moral and accountable citizenship.
It is also crucial that the media allow their audience to envisage the negative impact when South Africans ignore or do not take reconciliation seriously. The media's reporting should indicate the political, economic, and social consequences. 18 The audience needs to see, through reporting, the negative impact that "racial conflict" and discrimination, as well as human rights abuses, have on the future of the country.
The media should not simply report on the negative stories and incidents and "blow up" the issues (in their quest for sensation) in such a way that they hamper reconciliation. They should place such stories within the broader context of the vision of a multi-racial, multicultural and nonsexist society as enshrined in the Constitution. 19 This will allow citizens to strive towards such behaviour, become people who embody the values of the Constitution, and act as agents of a human rights culture.
The media's sensation-seeking as a marketing and business strategy can have negative implications for those exposed to such reporting, and derail the process of reconciliation in South Africa. 20 Verdoolaege (2005:182) reflects on such incidents in terms of how the media would focus, during the operation of the TRC, more on the perpetrators than the victims. Verdoolaege refers to the (broadcast) media's focus more on prominent figures such as F.W. de Klerk, and perpetrators such as Eugene de Kock, thus emphasising the tactics of torture, and the way he and others ill-treated the victims. He argues that presenting such graphic detail to the audience (broadcast on the South African TV screens) made the TRC process come across as a "theatrical representation". However, if the media were to report on issues by fairly representing all races, sexes, ethnicities, cultures, and religions in South Africa, it could cultivate, in its readership, a "multi-" perspective and approach towards the South African situation characterised by division, racial and ethnic conflict, prejudices, and discrimination.
IS THE MEDIA'S ROLE SUFFICIENT FOR RECONCILIATION IN SOUTH AFRICA?
This section provides some final remarks in terms of the limited role that the media can play in the process of reconciliation.
It is evident from this research that the media play a significant role in the process of reconciliation as an agent of moral formation. The media play their role by articulating a good society and by regularly reporting on issues related to reconciliation, which can be regarded as an exercise in vigilance that will help their readers identify and address immoral behaviour that may constitute "stumbling blocks" in achieving reconciliation. Through such reporting, the audience will become virtuous people who will serve as assets in the process and journey towards reconciliation in South Africa.
In conclusion, it is important to reflect on some of the limitations of the media's contribution in terms of moral formation. It is crucial to make the point that the media cannot act as role models and as communities of character, although they reflect on such communities and individuals in their reporting. 21 The media can only act as a role model of vigilance in exposing immoral behaviour. Therefore, it should be noted that the media are one of many players and agents of moral formation.
The author has argued that each media outlet/entity and its management have their own ideological positioning. Their role in reconciliation should always be viewed with caution. In a previous study, the author showed that newspaper editors and management have their own biases and that they report and focus on perspectives that they want their readers to know. One such contrast in the newspapers' reporting is the way in which The Sunday Times appeals to the readers to have sympathy with the then president, Jacob Zuma, regarding the allegations of corruption at Nkandla, by focusing on his relatively poor family background, whereas the Sunday Independent would take a strong stance, and even place a rubric on its front page, in big, bold letters, "APOLOGY NOT ENOUGH". This is one of many examples showing that not one of the four newspapers takes the same stand on issues. It should also be mentioned that this is not always obvious, but a closer, analytic reading of the rhetoric employed enables one to unearth the different ideological positionings. The findings of this study support such a view -that the newspapers might also be involved in aligning themselves politically, and in supporting certain factions within a specific political party.
With this bias in mind, it is appropriate to argue that it is not possible that one media institution's reporting on an event or case relating to the discourse of reconciliation is comprehensive and the "only" truth to be told on the matter. This is reflected in the results of the recent study on four newspapers' rhetoric and reporting, which show that the media have a role to fulfil in moral formation. 22 It indeed showcases how the media did not report with the same depth, breadth, and emphasis on a particular issue or person -which is also often noted in reporting on issues related to reconciliation. Therefore, the author argues that the media should be intentional in their reporting on issues of reconciliation and not leave such issues to "chance". This will play a role and influence how South Africans will perceive and internalise issues of reconciliation, including racism, classism, ethnicity and identity issues, and how reconciliation will become embodied. As stated earlier, this is not easy to address. Audiences should be cautioned, in this regard, that there are indeed "other" realities besides the ones to which they are exposed, in terms of their preference when it comes to different media reports. The audience should also be conscious and cautious of the media's primary business function -to sell their stories -and accept that the media are not always committed to enabling reconciliation. Therefore, the audience should be critical and aware that a particular media institution does not have the whole "story" or is not always sharing the "complete" picture. Readers should take care not to base their conclusions on one specific media report. Rather, they should read reports on events and incidents from various angles and expose themselves to different media reports. In the journey towards reconciliation, readers should be diligent in reading widely and critically when consuming the events and issues, presented by the media, that have a direct effect and impact on South Africa's reconciliation journey, but also on how they will think, act, and embody reconciliation.
Rather than physical contact, the media provide "mediated" contact between victim and perpetrator. This is crucial, and might be one of the limitations of all forms of media and their contribution in the national reconciliation process. What one sees through the media has an impact, but more is needed than the media can provide -the physical contact of people from all races, cultures, and identities. The South African TRC provided such a platform, but it should be continued through "inter-" and "intra-" contact, in order to forge a new national non-racial, non-sexist and united society.
Finally, it should be emphasised that the media's role does not guarantee what was argued in this article, because simply to be aware of moral issues and immoral behaviour and to take a certain position on such issues does not mean that citizens will act. 23 However, the possibility that an intentional media can, through the ways outlined in this article, produce human beings who embody those virtues that enhance reconciliation is enough reason to take the positive role of the media seriously.
CONCLUSION
This article focused on the positive role the media can play in the formation of virtuous citizens who will embody the virtues of reconciliation. These individuals will enhance a spirit of reconciliation in the day-to-day spaces in which they find themselves. The media are forming the citizens in this regard, through their regular reporting, their vision of a reconciled society, through instilling vigilance, and by sharing, through paradigmatic stories, the kind of virtues that are needed in order to reconcile the country in the aftermath of human rights violations and abuses and, specifically in South Africa, of racial, ethnic, and cultural conflicts and tensions.
BIBLIOGRAPHY
Ackermann, D. 1996. On hearing and lamenting: Faith and truth telling. In: H.R. Botman & R.M. Petersen (eds.), To remember and to heal. Theological and psychological reflections on truth and reconciliation (Cape Town: Human and Rousseau), pp. 47-56.
23 Lawrie (2005:126) wrote on rhetoric and observed that a speaker (represented by the newspapers in this case) cannot predict whether his or her readership will take action or respond to a particular message: If anyone were ever to understand human motivation fully, that person would be in a position to manipulate other people at will. Perhaps it is fortunate that our understanding is limited and that our fellow human beings are always able to surprise us by confronting us with problems for which we have no ready-made solutions. | 2021-12-12T16:09:24.455Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "1a0c080bedf2e820bc7f3804bee72219f6bd2dfa",
"oa_license": "CCBY",
"oa_url": "https://journals.ufs.ac.za/index.php/at/article/download/5836/4227",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6091bee7620e68b347bb617e49dd225ee972f7db",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
250679236 | pes2o/s2orc | v3-fos-license | Ultrasonic investigation on the distorted diamond chain compound Azurite
The natural mineral Azurite [Cu3(CO3)2(OH)2] has been considered as a model substance for the 1D distorted antiferromagnetic diamond chain, the microscopic couplings of which, however, are still under discussion. Here we present results of the longitudinal elastic constant c22 down to 80 mK and magnetic fields up to 12 T. c22 reveals clear signatures of the magnetic energy scales involved and discloses distinct anomalies at the Néel ordering temperature TN = 1.88 K. Based on measurements as a function of temperature and magnetic field, a detailed B-T phase diagram is mapped out which includes an additional phase boundary of unknown origin at low temperature (T < 0.5 K). Entering the new phase is accompanied by a pronounced softening of the c22 elastic constant. These observations, together with results obtained by spectroscopic investigations reported in the literature, reflect an unusual long-range magnetically ordered state at very low temperatures.
Introduction
Low-dimensional (low-D) quantum spin systems are of great interest in solid state physics due to the wealth of exciting phenomena originating from the interplay of reduced dimensionality, competing interactions and strong quantum fluctuations. Recently, great interest and controversy have surrounded the proposal that the spin S = 1/2 moments of the Cu2+ ions in azurite [Cu3(CO3)2(OH)2] form a frustrated 1D distorted diamond chain [1][2][3]. The magnetic structure of azurite and the relevant microscopic couplings, however, have been disputed in both experimental and theoretical studies [4][5][6]. In addition, the detailed phase diagram at low temperature and high magnetic fields is still unknown, and some recent experiments suggest that there exists a more complicated micromagnetic structure than has previously been thought [7,8].
Results and Discussion
Using a phase-sensitive detection technique, we have measured the relative change of the velocity of a longitudinal ultrasonic wave propagating along the spin-chain direction (b axis) of a high-quality single crystal of azurite. This geometry corresponds to the c22 acoustic mode. The elastic constant can be calculated from the sound velocity ν and the crystal's mass density ρ by c22 = ρν². Measurements have been performed both as a function of temperature and magnetic field. The external field was applied either perpendicular or parallel to the b axis. Figure 1 shows the temperature dependence of c22 together with the molar magnetic susceptibility χmol. The latter has been determined by utilizing a homemade SQUID magnetometer. These two findings suggest the presence of another, most likely magnetic, phase transition at temperatures below 0.45 K. The different behavior of c22(T) at the two phase transitions indicates that different coupling schemes between the strain and the order parameter are realized here.
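As a quick illustration of the relation c22 = ρν², the short Python sketch below converts a sound velocity and a mass density into an elastic constant. Both numerical values are placeholders chosen for demonstration only; they are not the measured quantities of this experiment.

```python
# Minimal sketch of c22 = rho * v^2 with illustrative (assumed) inputs.
rho = 3.8e3   # assumed mass density of azurite in kg/m^3 (illustrative)
v = 4.0e3     # assumed longitudinal sound velocity along b in m/s (illustrative)

c22 = rho * v ** 2                     # longitudinal elastic constant in Pa
print(f"c22 = {c22:.2e} Pa = {c22 / 1e9:.0f} GPa")
```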
In order to obtain more information on the low-temperature region, the field (temperature) dependence of c22 at various fixed temperatures (fields) has been determined. In figure 2(a), we show a selection of field sweeps for B // b axis. A pronounced increase of c22 is observed at the very low temperature of 0.13 K upon increasing the field. At a field of 1.15 T this increase is abruptly terminated and c22 starts decreasing with further growing field. The position of this kink in Δc22/c22 is shifted to lower fields with increasing temperature. Above a temperature T = 0.41 K, however, this anomaly can no longer be discerned. The temperature dependence of c22 at different fields is shown in figure 2(b). The onset temperature of the softening gradually shifts to lower temperature with increasing field. No softening can be observed within the accessible temperature range T ≥ 0.08 K for B ≥ 1.25 T. The positions of the kinks derived from these experiments are summarized in the B-T phase diagram for B // b axis in the inset of figure 2(b). The strong field dependence observed suggests that the c22 anomaly signals a low-temperature phase transition of magnetic origin. To the best of our knowledge, this is the first report of such an additional low-temperature phase in azurite.
The field orientation was close to the setting employed in ref. [11], where the transition at 2 T was assigned to a spin-flop (SF) transition. For this field orientation, the 1/3 magnetization plateau is reached above 11 T [1]. To map out the B-T phase diagram, numerous measurements have been carried out in fields ranging from 0 to 12 T at temperatures varying from 0.072 to 3.6 K. A great deal of information can be obtained from such ultrasonic experiments, especially about the magnetoelastic coupling at the edges of the magnetization plateaus, see, e.g., [9]. Upon entering the plateau state of azurite above 10 T, the elastic constant increases considerably, as shown in fig. 3(a). Details of the elastic anomalies associated with the plateau phase will be published elsewhere [10]. Here we concentrate on the anomalies within the magnetically ordered state.
Generally, at a phase transition the ultrasonic attenuation (not shown) abruptly changes, whereas the elastic constant exhibits a softening. As displayed in figure 3(a), we find several distinct anomalies which become more pronounced and develop a fine structure with decreasing temperature. The feature around B = 2 T was assigned in ref. [11] to the transition from the antiferromagnetic to the SF state (for T < 1.6 K) or to the paramagnetic (PM) state (for T > 1.6 K), see fig. 4. Figure 3(b) shows details of the data at 0.65 K and 0.31 K. At temperatures above 0.45 K, a single minimum is observed around 2 T, which we tentatively assign to the SF transition. For T < 0.45 K, however, the data reveal a splitting into two closely spaced minima. Note that these features for B ⊥ b occur in the same temperature region where the large softening was observed in c22(T), cf. fig. 1, and have the same size as the elastic anomaly at the AF transition. The features between 8 and 10 T are attributed to the transition from the SF state, either to the plateau state (PL) via the PM state (T > 1 K) or directly into the PM state (T < 1 K), cf. fig. 4. We stress that the phase boundaries obtained here are consistent with the ones derived from magneto-thermal measurements in ref. [11] at T ≥ 1 K and B ≤ 2.5 T.
[Fig. 4 caption fragment: phase boundaries obtained from B- and T- (stars) sweep measurements; the broken line is a guide for the eyes.]
Conclusion
From measurements of the longitudinal elastic constant c 22 (T,B), the low-temperature B-T phase diagram of azurite has been mapped out in detail. The measurements reveal an as yet unknown phase boundary at very low temperatures which is likely to be of magnetic origin. | 2022-06-28T02:07:07.685Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "34ab691b778a9af13e6b5dd81f0b28033ca721a2",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/200/1/012226",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "34ab691b778a9af13e6b5dd81f0b28033ca721a2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237942298 | pes2o/s2orc | v3-fos-license | Human Trafficking: Empowering Healthcare Providers and Community Partners as Advocates for Victims
Human trafficking, also known as modern-day slavery, is a public health crisis and a growing worldwide crime exploiting approximately 40.3 million victims. A decade ago approximately 79% of human trafficking crimes were related to sexual exploitation and 18% were related to forced labor, but more recent reports show approximately 50% and 38%, respectively. Although sexual exploitation continues to make up the majority of human trafficking crimes, forced labor continues to grow at an alarming rate. The purpose of this paper is 2-fold. First, to empower healthcare providers and community volunteers serving potential victims of human trafficking in traditional and nontraditional settings with human trafficking identification training. This education should include the use of a validated human trafficking screening tool and the timely provision of resources. Second, to guide professional nurses in the holistic approach to caring for potential victims of human trafficking. The core values of holistic nursing practice and Watson's Theory of Human Caring are the pillars guiding mindful and authentic nursing care. Merging evidence-based practice with holistic care will boost victim identification and rescue.
Background
Healthcare providers are poised in therapeutic situations to identify victims of human trafficking. Tiller and Reynolds (2020) emphasized the importance of community caregivers in awareness and service to this population who, unfortunately, often do not receive needed resources. Religious and secular community volunteers may provide healthcare through medical missions opportunities impacting individuals victimized by human trafficking. Community volunteer groups and healthcare providers conscientiously attempt to bridge the gap which keeps the underserved from receiving needed healthcare services, including those victimized by human trafficking. A literature review yielded a call for empowering healthcare providers to identify and serve victims of human trafficking; however, the literature does not address the role of community volunteer groups in identifying such victims. Community volunteer groups that serve vulnerable populations and interact with the same group on a regular basis can build a rapport with the at-risk population served, increasing the likelihood someone will disclose they are in a trafficking situation, so prompt rescue can occur.
Although human trafficking is a serious human crisis, there is very little information, research, and data available to the public and professionals (Sweileh, 2018). Healthcare professionals and community partners lack awareness of human trafficking, and the signs victims exhibit while being trafficked, hindering identification and resource allocation to free them from servitude (Leslie, 2018).
Human trafficking is estimated to be a US$150 billion per year industry globally (Human Rights First, 2017). It entails control over another person, commonly a minor, that is obtained through threats, capture, trickery, and abuse, leading to the utilization of the individual for prostitution, forced labor, and organ harvesting (Chisolm-Straker et al., 2016). Statistically, of all reported victims of human trafficking, 70% involve women or girls (Gillipsie et al., 2019). Of the 23,078 contacts made in 2018 to the National Human Trafficking Hotline, 15,042 were women as compared to 2,917 men (Polaris, 2021a). These men and women contacted the hotline through emails, texts, phone calls, web forms, or webchats. The top five ethnic groups identified as victims of human trafficking are Latinos, followed by Asian, African/African Americans/Blacks, Whites, and multiracial/multiethnic (Polaris, 2021b).
For clarity, it is important to note that a person called a victim of human trafficking is actively being trafficked, as compared to a survivor who has been rescued and no longer being trafficking. In 2019, 533 human trafficking survivors were asked to identify the age at which they began being trafficked for sex as minors (Polaris, 2021b). As illustrated by Table 1, the vast majority were between 15 and 17 years of age. While being trafficked, these survivors were forced to perform sexual acts, engage in sexual intercourse with one or multiple perpetrators, or forced into pornography. These acts commonly occurred in homes, hotels, or cars (Polaris, 2021b).
It is essential to keep in the forefront that not all human trafficking involves sexual exploitation. Human trafficking may also involve those who are promised financial prosperity and appealing lifestyles to come work in the United States (North Carolina Department of Administration, 2021). For those taking jobs doing physical labor, identification papers, passports, and other legal documents are often taken away by their employer, trapping the worker in the region and work assignment without options or real freedom. Sampson (2018) discussed the collateral damage resulting from trafficking to include drug addiction, abuse, food insecurity, homelessness, crime, and unplanned pregnancies. Victims of trafficking face long-lasting effects even after being rescued. Some victims endure the psychological effects of the abuse experienced at the hands of many, resulting in post-traumatic stress disorder (PTSD), while others must face legal consequences from being forced to sell drugs for their traffickers or the physical effects of lingering drug addiction (Sampson, 2018). As a public health crisis, human trafficking results in human physical and emotional health deterioration as well as significant resource expenditures for law enforcement, social services, community health, and acute care settings, mental health services, and the legal system (court and prisons).
Human trafficking and exploitation cause persistent wounding. The community of study, located in the southeast US, is highly ranked nationally in reported cases of human trafficking. For this reason, the Project Leader sought to educate and empower community volunteer teams to identify and help persons victimized by trafficking in a rural community where conditions are favorable for increased trafficking (see Table 2). Because this community is at increased risk for a heavy presence of human trafficking, a local human trafficking rescue and prevention nonprofit organization was created.
Literature Review
Information about human trafficking, its victims, and identification and recovery taskforces is reported in the literature. Search terms utilized to identify articles included sex trafficking, human trafficking, labor trafficking, human trafficking screening tool, trafficking screening, trafficking screening tool, trafficking identification, trafficking identification in healthcare, and human trafficking curriculum.
A common misconception about human trafficking is that victims are kidnapped and held hostage, isolated from society. Surprisingly, research reveals victims are present in areas where rescue could occur, yet are not identified (Chisolm-Straker et al., 2016). Many people victimized by human trafficking seek medical care while being trafficked, and others continue to attend school and work. Chisolm-Straker et al. (2016) conducted a retrospective study in the United States to describe the extent to which people victimized by human trafficking sought healthcare. Of the 173 individuals surveyed, 68% reported receiving medical care while being trafficked. The most common healthcare settings victims frequented included dentist offices, primary care providers, urgent/emergency care, and obstetrics/gynecology clinics.
The literature also revealed healthcare providers feel inadequate about 2 things: victim identification and safe provision of appropriate resources. Ross et al. (2015) conducted a cross-sectional study in England, finding that, of 892 National Health Service professionals, 84% had contact with potential victims of human trafficking but did not feel knowledgeable or empowered to meet their needs or make referrals. Powell et al. (2017) conducted a mixed methods study in the United States reporting that healthcare providers do not feel adequately prepared to care for trafficking survivors. The researchers called for increased education for healthcare workers, to include signs of trafficking, risk factors, various types of trafficking, and referrals.
The use of effective and validated human trafficking screening tools was highlighted as an asset to victim identification. Most importantly, the effectiveness of screening is greatly impacted by the trust and comfort the victim can establish with the interviewer (Bigelsen & Vuotto, 2013). This sentiment is supported by Chisolm-Straker et al. (2019) who found homeless youth who underwent a multistep screening process were more forthcoming and honest later in the process after building a rapport with the screener. The Vera Institute of Justice (2014) created a validated and reliable human trafficking screening tool aiding in the identification of sex and labor trafficking of victims born in or outside the United States. In addition to using effective identification tools, healthcare providers must be prepared to provide victims with resources immediately (Schwarz et al., 2016). Resources may include referrals for rescue, safe housing, food, shelter, police protection, and mental health services.
Lastly, a bibliometric assessment of research activity and trends on human trafficking revealed the gross underrepresentation of health-related literature on the subject (Sweileh, 2018). Caregivers exploring the subject must rely on scholarly published data older than 5 years. These findings support the need to generate additional knowledge for healthcare providers and their community partners.
Method
This project collaborated with two local community volunteer groups that serve vulnerable populations. The audience received training on Human Trafficking 101 and the Trafficking Victims Identification Tool (TVIT), with the goal of using both within their mobile medical unit and 12-step recovery programs.
Sample and Setting
Ten members from the two volunteer groups completed the Human Trafficking 101 training. The first community volunteer group was primarily composed of registered nurses who take a mobile medical unit to a local soup kitchen at a minimum every other week, as well as to community events in lower income areas in the community of study. The community of study is in a rural county with alarming risk factors for trafficking (see Table 2). The mobile medical unit is stationed at these locations to screen individuals for health-related concerns. Populations benefitting from the mobile medical unit include the homeless, uninsured, economically disadvantaged, and individuals who rely on the free services provided by the mobile medical unit who otherwise would not receive medical care. On average, the mobile medical unit serves 12 individuals per outing. The second volunteer group serves those through a spiritual-based 12-step program for dependency and emotional affliction. This team consists of volunteer lay leaders, including local pastors and former addicts.
Institutional Review Board Approval
This project was approved by the institutional review board at the university in which the Project Leader was enrolled. Permission to conduct the project with the volunteer teams was received from the Chief Executive Officer for the mobile medical unit and the leader of the spiritual-based 12-step program.
Data Collection
This project used a pretest/posttest design. Participants were asked to complete the Perceived Competence Scale (PCS) before and after participation in the Human Trafficking 101 training. The PCS, created by the Center for Self-Determination Theory (2021), is a short 4-question survey that was customized to assess the perceived competence of an individual related to human trafficking. The questions are based on a 7-point Likert scale with answers ranging from "not true at all" with a score of 1 to "very true" with a score of 7. The scale allows for customization of the questions to fit the topic at hand. The customization of the questions was reviewed by an expert at the local human trafficking organization for face validity. The Center for Self-Determination Theory (2021) reports the alpha reliability for the perceived competence items as .90.
Analysis
The scores on the pre and post surveys were analyzed utilizing a paired samples t test. The local human trafficking rescue organization allowed the Project Leader to use their established Human Trafficking 101 curriculum for the training. The training content list is provided in Table 3.
The Human Trafficking 101 training was taught in its entirety by the Project Leader lasting approximately one hour. Training took place at a facility that partners with both community volunteer teams. The building was equipped with audio, video, and projection equipment, providing an intimate space to deliver the information, while allowing the participants to hear the Project Leader with ease and voice questions as needed. The training included a video from the local human trafficking rescue organization providing survivor and rescue stories. Each participant was given a presentation outline with ample space for note-taking.
After the content was presented, the participants asked questions and engaged in discussion, expressing shock about the information learned and the need for others in the community to hear the information; one participant recounted general knowledge gained as a nurse. The participants were grateful for the instruction provided on what to do and not to do when a victim is identified (see Table 4).
As part of the training, participants were instructed on the use of the TVIT short version. The TVIT manual can be accessed at https://www.vera.org/downloads/publications/human-trafficking-identification-tool-and-user-guidelines.pdf. This tool is a validated screening tool created by the Vera Institute of Justice and is free for public use (Vera Institute of Justice, 2014). Each volunteer group was given a notebook containing copies of the TVIT, the TVIT instruction manual, a list of numbers for resources to contact if a potential victim is identified (see Table 4), and a TVIT usage log. The purpose of the log was to help the volunteer groups track the number of times the TVIT was used and the number of victims identified in a 3-month period. The log was designed to exclude any identifying data that could impact patient confidentiality.
Results/Findings
A paired samples t test was calculated to compare the mean PCS pretest scores (X = 2.5, SD = 1.5) to the mean PCS posttest scores (X = 6, SD = 0.39). The findings represent a significant increase in perceived competence related to knowledge of human trafficking (t(9) = −6.567, p < .05). The TVIT data usage log did not reveal any screenings or identification of human trafficking victims over a 3-month period following the training.
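For readers who wish to reproduce this kind of analysis, a minimal Python sketch of a paired-samples t test is given below; the two score vectors are hypothetical stand-ins (n = 10, 7-point Likert) chosen only to mimic the reported group means, not the study's actual data.

```python
import numpy as np
from scipy import stats

pre = np.array([1, 2, 3, 2, 1, 4, 3, 2, 5, 2])    # hypothetical pretest PCS scores
post = np.array([6, 6, 7, 6, 5, 6, 6, 6, 6, 6])   # hypothetical posttest PCS scores

t_stat, p_value = stats.ttest_rel(pre, post)      # paired (dependent) samples t test
print(f"t({len(pre) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")
print(f"pre:  mean = {pre.mean():.2f}, SD = {pre.std(ddof=1):.2f}")
print(f"post: mean = {post.mean():.2f}, SD = {post.std(ddof=1):.2f}")
```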
Limitations
Limitations encountered during this project related to COVID-19 indoor gathering restriction mandates in place at the time of the training, limited privacy, and mobile medical unit repairs. The indoor gathering restrictions limited the ability to host a gathering that would include every volunteer in both ministries, approximately 35 people. The volunteers who were not able to attend were provided with a recorded training, informed consent, pre, and post competence scales via email. This approach allowed everyone involved in the ministries to access the information in a safe manner during the pandemic; however, it was not mandatory.
The mobile medical unit was unavailable for 2 weeks during the 3-month project period for repairs. The mobile medical unit continued to provide medical care as scheduled during the pandemic, but all medical screenings took place outdoors to allow social distancing, with the use of masks, gloves, and sanitation of equipment. Although the mobile medical unit staff cautiously provided ample distance to protect the privacy of those they care for, providing care outdoors is viewed as a limitation. As discussed in previous paragraphs, the literature reveals victims of human trafficking hesitate to disclose their victimization due to fear of repercussions. Thus, the medical screenings taking place outside with limited privacy posed a potential barrier to victim identification.
Nursing Implications
The goal for rendering care for persons victimized by human trafficking begins with identification. Holistic nursing care principles should be the foundation by which nurses create environments conducive to building trust, giving hope, and initiating healing. Of the 5 core values guiding holistic nursing practice, 2 align best to aid in creating the desired environment. Implementing the Core Value Holistic Caring Process provides caring interventions fostering tranquility and peace (Mariano, 2022). The Core Value Holistic Communication, Therapeutic Healing Environment, and Cultural Diversity creates authenticity and a bond between the nurse and the potential victim (Mariano, 2022). Implementing these core values supports the call in the literature to foster trusting environments allowing potential victims to reveal their trafficking experience, resulting in rescue.
In addition to the core values of holistic care, Watson's Theory of Human Caring guides the nursing care for victims and survivors of human trafficking, ensuring an individual's mind, body, and soul is nurtured (Quinn, 2022; Watson, 2018). This approach is essential since human trafficking has a crushing and debilitating impact on holistic health and wellness. The mind, body, and soul must be addressed when rendering care to help the victim or survivor achieve optimal health and wellness (Mariano, 2022), extending Dr. Watson's holistic vision to community partners outside of nursing.
Mind. According to Sampson (2018), PTSD is an example of the psychological impact of human trafficking. Depression, anxiety, and substance abuse are predominant among victims and survivors (Leslie, 2018). When addressing the mind component of the patient's well-being, the nurse should make the appropriate referrals for proper assessments and provision of resources. This could be achieved through collaboration with mental health providers, spiritual care, social workers, rehabilitative drug dependency treatment, or case managers, depending on the setting, victim needs, and interprofessional members available. The trauma-informed care framework is recommended to best provide therapeutic care and prevent re-traumatization of the victim or survivor (Leslie, 2018). Meditation, journaling, and centering are holistic techniques the nurse can independently educate those affected by trafficking for stress reduction.
Body. As outlined in the training program, physical consequences of trafficking include wounds from abuse, unplanned pregnancy, abortion, sexually transmitted infections, and poor oral health. The application of astute assessment skills followed by initiating appropriate referrals may positively impact the patient's physical needs. Nurses should seize every opportunity to educate the patient before the encounter ends, including reporting signs of complications to a healthcare provider and holistic self-care techniques, such as mouth care, good hygiene, good nutrition, and physical activity.
Soul. An individual's soul, also known as spirit, can be wounded through verbal, physical, psychological, and sexual abuse inflicted during exploitation and manipulation. At a minimum, effective therapeutic communication will build rapport and trust with the victim/survivor/patient. Ideally, an intentional connection between the nurse and the patient is achieved through mindful active listening. A nurse's caring and deliberate approach can achieve 2 objectives. First, the bond and trust many patients need to divulge their trafficking experience can be established, increasing the validity of identification screening tools and data collected. Second, the individual's spirit is nurtured, propelling its healing into motion (Burkhardt & Nagai-Jacobson, 2022). It is worth noting that multiple meaningful interactions between the nurse and patient may need to take place before a victim or survivor feels safe enough to reveal victimization. Nurses must exhibit kindness and patience to help victims and survivors achieve this level of vulnerability.
Conclusion
Unfortunately, information on how to care for persons victimized by human trafficking is not yet widely published. There is a need to integrate human trafficking education and available resources into nursing professional development curriculums and community volunteer training. Nurses have an ethical responsibility to educate and commission community volunteers to holistically approach all persons served, to increase the likelihood of recognizing and rescuing this unique population. The application of the holistic caring process and Watson's Theory of Human Caring promotes "self-giving in the moment" and one's "knowingly participating in a healing experience" (Watson, 2018, p. 92).
Supplemental Material
Supplemental material for this article is available online. | 2021-09-28T06:23:10.066Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "be92743f13955536351952d1aced9783e42ad803",
"oa_license": "CCBYNC",
"oa_url": "https://digitalcommons.gardner-webb.edu/cgi/viewcontent.cgi?article=1038&context=nursing-dnp",
"oa_status": "GREEN",
"pdf_src": "Sage",
"pdf_hash": "fce4c5bc5c4bae14243bf7a5b93ec01d99641719",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13441287 | pes2o/s2orc | v3-fos-license | Knowledge-Aided STAP Using Low Rank and Geometry Properties
This paper presents knowledge-aided space-time adaptive processing (KA-STAP) algorithms that exploit the low-rank dominant clutter and the array geometry properties (LRGP) for airborne radar applications. The core idea is to exploit the fact that the clutter subspace is only determined by the space-time steering vectors, where the Gram-Schmidt orthogonalization approach is employed to compute the clutter subspace. Specifically, for a side-looking uniformly spaced linear array, the algorithm firstly selects a group of linearly independent space-time steering vectors using LRGP that can represent the clutter subspace. By performing the Gram-Schmidt orthogonalization procedure, the orthogonal bases of the clutter subspace are obtained, followed by two approaches to compute the STAP filter weights. To overcome the performance degradation caused by non-ideal effects, a KA-STAP algorithm that combines the covariance matrix taper (CMT) is proposed. For practical applications, a reduced-dimension version of the proposed KA-STAP algorithm is also developed. The simulation results illustrate the effectiveness of our proposed algorithms, and show that the proposed algorithms converge rapidly and provide an SINR improvement over existing methods when using a very small number of snapshots.
I. INTRODUCTION
Space-time adaptive processing (STAP) is considered to be an efficient tool for detection of slow targets by airborne radar systems in strong clutter environments [1]-[4]. However, due to its very high number of degrees of freedom (DoFs), the full-rank STAP converges slowly and requires a number of independent and identically distributed (IID) training snapshots of about twice the DoFs to yield an average performance loss of roughly 3 dB [1]. In real scenarios, it is hard to obtain so many IID training snapshots, especially in heterogeneous environments. Therefore, it is desirable to develop STAP techniques that can provide high performance in small training support situations.
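As a quick numerical check of the sample-support statement above, the sketch below evaluates the classical Reed-Mallett-Brennan average SINR-loss expression E[ρ] = (K + 2 − NM)/(K + 1) for the sample-matrix-inversion processor; the DoF count NM = 64 is an assumed value for illustration only.

```python
# Minimal sketch: RMB average SINR-loss rule for K IID training snapshots and
# NM adaptive degrees of freedom; K ~ 2*NM yields a loss of roughly 3 dB.
import numpy as np

NM = 64                                    # assumed space-time DoFs (illustrative)
for K in (NM, 2 * NM - 3, 2 * NM, 5 * NM):
    rho = (K + 2 - NM) / (K + 1)           # average SINR loss factor
    print(f"K = {K:4d} snapshots: loss = {10 * np.log10(rho):6.2f} dB")
```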
Reduced-dimension and reduced-rank methods have been considered to counteract the slow convergence of the full-rank STAP [1]-[9], [26]-[28]. These methods can reduce the number of training snapshots to twice the reduced dimension, or twice the clutter rank, if we assume that the degrees of freedom of the reduced dimension correspond to the rank of the clutter. The parametric adaptive matched filter (PAMF) based on a multichannel autoregressive model [29] provides another alternative solution to the slow convergence of the full-rank STAP. Furthermore, the sparsity of the received data and filter weights has been exploited to improve the convergence of a generalized sidelobe canceler architecture in [30]. However, it is still fundamental for radar systems to improve the convergence performance of STAP algorithms or reduce their required sample support in heterogeneous environments because the number of required snapshots is large relative to those needed in IID scenarios.
Recently developed knowledge-aided (KA) STAP algorithms have received a growing interest and become a key concept for the next generation of adaptive radar systems [32]-[45]. The core idea of KA-STAP is to incorporate prior knowledge, provided by digital elevation maps, land cover databases, road maps, Global Positioning System (GPS), previous scanning data and other known features, to compute estimates of the clutter covariance matrix (CCM) with high accuracy [32], [33]. Among the previously developed KA-STAP algorithms, there is a class of approaches that exploit the prior knowledge of the clutter ridge to form the STAP filter weights in [42]-[44] and [45]. The authors in [42] introduced a knowledge-aided parametric covariance estimation (KAPE) scheme by blending both prior knowledge and data observations within a parameterized model to capture instantaneous characteristics of the cell under test (CUT). A modified sample matrix inversion (SMI) procedure to estimate the CCM using a least-squares (LS) approach has been described in [43] to overcome the range-dependent clutter non-stationarity in conformal array configurations. However, both approaches require the pseudo-inverse calculation to estimate the CCM and this often requires a computationally costly singular value decomposition (SVD) [42]. Although two weighting approaches with lower computations are discussed in [42], they are suboptimal approximations to the LSE obtained by the SVD, and the performance of these approaches relative to the LSE obtained by the SVD depends on the radar system parameters, especially the array characteristics [42]. Moreover, the latter approach has not considered the situation when the prior knowledge has uncertainties. Under the assumption of a known clutter ridge in the angle-Doppler plane, the authors in [44] imposed sparse regularization to estimate the clutter covariance excluding the clutter ridge. Although this kind of method can obtain high resolution even using only one snapshot, it requires the designer to know the exact positions of the clutter ridge, making it sensitive to errors in the prior knowledge. Furthermore, the computational cost of sparse recovery is high. A data-independent STAP method based on prolate spheroidal wave functions (PSWF) has been considered in MIMO radar by incorporating the clutter ridge [45], where the computational complexity is significantly reduced compared with the approaches in [42] and [43]. However, it is highly dependent on the ideal clutter subspace and is not robust against clutter subspace mismatches.
In this paper, we propose KA-STAP algorithms using prior knowledge of the clutter ridge that avoid the pseudo-inverse calculation, require a low computational complexity, and mitigate the impact of uncertainties of the prior knowledge. Specifically, for a side-looking uniformly spaced linear array (ULA), the proposed method selects a group of linearly independent space-time steering vectors that can represent the ideal clutter subspace using prior knowledge of the dominant low-rank clutter and the array geometry properties (LRGP). The orthogonal bases of the ideal clutter subspace are computed by a Gram-Schmidt orthogonalization procedure on the selected space-time steering vectors. Two robust approaches to compute the STAP filter weights are then presented based on the estimated clutter subspace. To overcome the performance degradation caused by the internal clutter motion (ICM), we apply a covariance matrix taper (CMT) to the estimated CCM. The array calibration methods discussed in [42] can be applied to our proposed algorithm to mitigate the impact of non-ideal factors, such as channel mismatching. Moreover, a reduced-dimension version of the proposed KA-STAP algorithm is devised for practical applications. Finally, simulation results demonstrate the effectiveness of our proposed algorithms.
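To illustrate the Gram-Schmidt step described above, the following Python sketch orthonormalizes a matrix of selected space-time steering vectors into clutter-subspace bases; the modified variant is used for numerical stability, and the random complex test matrix is only a stand-in for the actual steering-vector selection detailed later in the paper.

```python
import numpy as np

def modified_gram_schmidt(V_sel: np.ndarray) -> np.ndarray:
    """Orthonormalize the columns of V_sel (NM x r, assumed linearly independent)."""
    V = V_sel.astype(complex).copy()
    nm, r = V.shape
    U = np.zeros((nm, r), dtype=complex)
    for k in range(r):
        u = V[:, k] / np.linalg.norm(V[:, k])       # normalize current column
        U[:, k] = u
        # deflate: remove the component along u from all remaining columns
        V[:, k + 1:] -= np.outer(u, u.conj() @ V[:, k + 1:])
    return U

# stand-in for the selected steering vectors (NM = 64, clutter rank r = 15)
rng = np.random.default_rng(1)
V_sel = rng.standard_normal((64, 15)) + 1j * rng.standard_normal((64, 15))
U = modified_gram_schmidt(V_sel)                    # orthonormal clutter bases
assert np.allclose(U.conj().T @ U, np.eye(15))      # U^H U = I
```

The resulting U directly yields a clutter-subspace projector U U^H, from which subspace-based STAP weights can then be formed.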
The main contributions of our paper are: (i) A KA-STAP algorithm using blueprior knowledge of the LRGP is proposed for airborne radar applications.
(ii) A KA-STAP algorithm combining the CMT is introduced to counteract the performance degradation caused by ICM and prior knowledge uncertainty, and a reduced-dimension version is also presented for practical applications. Furthermore, the proposed algorithm provides evidence for the KAPE approach to directly use the received data and the calibrated space-time steering vectors (only the spatial taper without the temporal taper) to compute the assumed clutter amplitude.
(iii) A detailed comparison is presented to show the computational complexity of the proposed and existing algorithms.
(iv) A study and comparative analysis of our proposed algorithms including the impact of inaccurate prior knowledge and non-ideal effects on the SINR performance, the convergence speed and the detection performance with other STAP algorithms is carried out.
The work is organized as follows. Section II introduces the signal model in airborne radar applications. Section III details the approach of the proposed KA-STAP algorithms and also discusses the computational complexity. The simulated airborne radar data are used to evaluate the performance of the proposed algorithms in Section IV. Section V provides the summary and conclusions.
II. SIGNAL MODEL
The system under consideration is a side-looking pulsed Doppler radar with a ULA consisting of M elements on the airborne radar platform, as shown in Fig. 1. The platform is at altitude h_p and moving with constant velocity v_p. The radar transmits a coherent burst of pulses at a constant pulse repetition frequency (PRF) f_r = 1/T_r, where T_r is the pulse repetition interval (PRI). The transmitter carrier frequency is f_c = c/λ_c, where c is the propagation velocity and λ_c is the wavelength. The number of pulses in a coherent processing interval (CPI) is N. The received signal from the iso-range of interest is represented by a space-time NM × 1 data vector x.
The received space-time clutter plus noise return from a single range bin can be represented by [4]

x = Σ_{m=1}^{N_a} Σ_{n=1}^{N_c} σ_{m,n} v(f_{s,m,n}, f_{d,m,n}) ⊙ α(m, n) + n,   (1)

where n is the Gaussian white thermal noise vector with the noise power σ_n^2 on each channel and pulse; N_a is the number of range ambiguities; N_c is the number of independent clutter patches over the iso-range of interest; f_{s,m,n} and f_{d,m,n} are the spatial and Doppler frequencies of the mnth clutter patch, respectively; σ_{m,n} is the complex amplitude for the mnth clutter patch; α(m, n) is the NM × 1 space-time random taper vector of the mnth clutter patch accounting for non-ideal effects; and ⊙ denotes the element-wise (Hadamard) product. The space-time steering vector is given as the Kronecker product of the temporal and spatial steering vectors, v(f_{s,m,n}, f_{d,m,n}) = v_t(f_{d,m,n}) ⊗ v_s(f_{s,m,n}), which are given by [1]

v_t(f_{d,m,n}) = [1, exp(j2π f_{d,m,n}), · · · , exp(j2π (N − 1) f_{d,m,n})]^T,   (2)

v_s(f_{s,m,n}) = [1, exp(j2π f_{s,m,n}), · · · , exp(j2π (M − 1) f_{s,m,n})]^T,   (3)

where (·)^T denotes the transposition operation, f_{s,m,n} = (d_a/λ_c) cos θ_{m,n} sin φ_{m,n}, f_{d,m,n} = (2 v_p T_r/λ_c) cos θ_{m,n} sin φ_{m,n}, and d_a is the inter-sensor spacing of the ULA. If we stack all clutter patches' amplitudes into a vector

σ = [σ_{1,1}, · · · , σ_{1,N_c}, · · · , σ_{N_a,1}, · · · , σ_{N_a,N_c}]^T,   (4)

and assume there are no non-ideal factors, then the clutter plus noise received data denoted by (1) can be described as

x = Vσ + n,   (5)

where V denotes the clutter space-time steering matrix, given by

V = [v(f_{s,1,1}, f_{d,1,1}), · · · , v(f_{s,N_a,N_c}, f_{d,N_a,N_c})].   (6)

Thus, the CCM based on (5) can be expressed as

R_c = V Σ V^H,   (7)

where Σ = E[σ σ^H]. Under the condition that the clutter patches are independent from each other, Σ = diag(a), where a = [a_{1,1}, a_{1,2}, · · · , a_{N_a,N_c}]^T and a_{m,n} = E[|σ_{m,n}|^2] (m = 1, · · · , N_a; n = 1, · · · , N_c) for the statistics of the clutter patches. Here, E[·] denotes the expectation operator, diag(a) stands for a diagonal matrix with the main diagonal taken from the elements of the vector a, and (·)^H represents the conjugate transpose of a matrix. The optimal filter weight vector maximizing the output SINR for Gaussian-distributed clutter, given by the full-rank STAP processor, can be written as [4]

w_opt = μ R^{-1} s,   (8)

where μ is a constant which does not affect the SINR performance, s is the NM × 1 space-time steering vector in the target direction, and R = E[x x^H] = R_c + σ_n^2 I is the clutter plus noise covariance matrix (I is the identity matrix).
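To make the model concrete, the following Python sketch assembles the ideal (taper-free) quantities of equations (2)-(8) for an assumed side-looking ULA; all parameters (M, N, N_c, the clutter-ridge slope beta, unit clutter powers, unit noise power, and the target steering frequencies) are illustrative assumptions, not values from the paper.

```python
import numpy as np

M, N = 8, 8                  # sensors and pulses (illustrative)
Nc = 61                      # assumed clutter patches (kept below N*M, see Sec. III.A)
beta = 1.0                   # assumed clutter-ridge slope 2*vp*Tr/da

def steering(fs: float, fd: float) -> np.ndarray:
    """Space-time steering vector v = v_t(fd) kron v_s(fs), eqs. (2)-(3)."""
    v_s = np.exp(2j * np.pi * fs * np.arange(M))
    v_t = np.exp(2j * np.pi * fd * np.arange(N))
    return np.kron(v_t, v_s)

# clutter ridge of a side-looking ULA: fd = beta * fs
fs_grid = 0.5 * np.linspace(-1, 1, Nc)
V = np.column_stack([steering(f, beta * f) for f in fs_grid])   # eq. (6)

a = np.ones(Nc)                                  # assumed unit clutter powers
sigma_n2 = 1.0                                   # assumed noise power
Rc = V @ np.diag(a) @ V.conj().T                 # eq. (7)
R = Rc + sigma_n2 * np.eye(M * N)                # clutter-plus-noise covariance

s = steering(0.1, 0.25)                          # target space-time steering vector
w_opt = np.linalg.solve(R, s)                    # eq. (8) with mu = 1
```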
III. KA-STAP ALGORITHMS USING LRGP
In this section, we first review the method that estimates the CCM using an LS technique in [42], [43] and point out the problems with this method. Then, we detail the design and the computational complexity of the proposed KA-STAP algorithms using LRGP.
A. CCM estimated by LS
In practice, prior knowledge of certain characteristics of the radar system and the aerospace platform, such as platform heading, speed and altitude, array normal direction, and antenna phase steering, can be obtained from the Inertial Navigation Unit (INU) and the GPS data [39], [42]. In other words, we can obtain the values of the number of range ambiguities N_a, the platform velocity v_p, and the elevation angle θ. Thus, we can develop KA-STAP algorithms based on this prior knowledge, e.g., the methods described in [42]-[44] and [45]. In reality, the clutter consists of returns over a continuous geographical region, which we divide into a discrete set of clutter patches for analytical and computational convenience. The rest of the discussion concerns the issues associated with choosing the number of clutter patches N_c. A possible approach is to assume a value of N_c and discretize the whole azimuth angle evenly into N_c patches for each range bin [42], [43]. In addition, this approach usually ignores range ambiguities, i.e., N_a = 1, where the justification can be seen in [42]. Then, the parameter σ in (5) can be estimated from the observation data by solving the LS problem [42], [43]

σ̂ = arg min_σ ‖x − Vσ‖^2,  (9)

where σ̂ = [σ̂_1, σ̂_2, · · · , σ̂_{N_c}]^T. The solution to the above problem based on an LS technique is given by

σ̂ = (V^H V)^{−1} V^H x.

Because σ depends only on the clutter distribution, it does not vary significantly with range under homogeneous clutter environments. Furthermore, to avoid the effect of the target signal at the cell under test (CUT), the near range bins of the CUT are used to estimate σ [43], which gives

σ̂ = (1/2L) Σ_{l=1}^{2L} (V^H V)^{−1} V^H x_l,

where 2L is the total number of secondary data. Then, the CCM estimated by the LS method (we call it the least-squares estimator (LSE) in the following) is

R̂_c = V diag(σ̂ ⊙ σ̂^∗) V^H.

Then the clutter-plus-noise covariance matrix is estimated as

R̂ = R̂_c + σ̂_n^2 I,

where σ̂_n^2 is the estimated noise power level, which can be collected by the radar receiver when the radar transmitter operates in a passive mode [2]. Finally, the STAP filter weights can be computed according to (8) using R̂ instead of R.
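As a rough illustration of the LSE described above, the sketch below solves (9) with a pseudo-inverse and averages the estimated patch powers over the secondary snapshots; the function signature and the snapshot layout (one column per range bin) are assumptions of this sketch.

```python
import numpy as np

def lse_ccm(V, X, sigma_n2):
    """LS (LSE) estimate of the covariance from secondary snapshots X (NM x 2L).

    Solves eq. (9) per snapshot via the pseudo-inverse; note that
    (V^H V)^{-1} only exists when Nc <= NM, the rank problem noted above.
    """
    V_pinv = np.linalg.pinv(V)                       # (V^H V)^{-1} V^H when full rank
    sigma_hat = V_pinv @ X                           # per-snapshot amplitude estimates
    a_hat = np.mean(np.abs(sigma_hat) ** 2, axis=1)  # averaged patch powers
    Rc_hat = (V * a_hat) @ V.conj().T                # estimated CCM
    return Rc_hat + sigma_n2 * np.eye(V.shape[0])    # add the noise floor
```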
However, there are several aspects that should be noted. First, the above approach requires the designer to choose a suitable azimuth angle φ and a suitable number N_c of clutter patches, which are difficult to obtain in practice. The selection of N_c and φ will affect the space-time steering vectors of the clutter patches, which in turn affects the estimation accuracy of the CCM. Specifically, if the assumed number of clutter patches N_c > NM, then (V^H V)^{−1} does not exist. Second, the computational complexity of the LSE is high because it requires the pseudo-inverse of the matrix V^H V (computed, e.g., via the SVD), which should be avoided in practice. Two weighting approaches with lower computations are discussed in [42]. However, the solutions obtained by the weighting approaches are suboptimal approximations to the LSE obtained by the SVD. The performance of these approaches relative to the LSE computed by the SVD depends on the radar system parameters, especially the array characteristics [42]. In the presence of non-ideal factors in the clutter component, and despite the inclusion of the estimated angle-independent channel mismatch in the space-time steering vectors V and the use of the modified V to solve the problem (9), these techniques do not consider the impact of the temporal random taper α_d. Nevertheless, the received data vector x is formed by all non-ideal factors. Thus, whether it is suitable to compute the parameter σ considering only the spatial random taper is worth investigating, as will be discussed in Section III.C.
B. Proposed KA-STAP Algorithm
To overcome the rank deficiency and avoid the inverse of the matrix V^H V, in the following we detail the proposed KA-STAP algorithm, which estimates the CCM using prior knowledge of LRGP. In this subsection, we only consider the ideal case of the received data, i.e., the signal model in (5).
From (5), we know that the clutter return is a linear combination of returns from all clutter patches. Thus, we have

span{R_c} = span{V},  (13)
span{V} = span{U},  (14)

where U denotes an orthogonal basis of the clutter subspace. Proof: The first equation can be obtained from (7). With regard to the second equation, let us denote the SVD of the matrix V by V = UCD^H. Then we have VV^H = UCC^H U^H, so the column space of V coincides with the column space of U. Note that the orthogonal basis of the clutter subspace U can be calculated from V, or VV^H; herein we do not need to compute it via the CCM. From (14), it also follows that the clutter subspace is independent of the power of the clutter patches and is determined only by the clutter space-time steering vectors. Moreover, from the above subsection, it is seen that the clutter space-time steering vectors can be obtained using the prior knowledge from the INU and GPS data. Therefore, it is easier to compute the orthogonal bases of the clutter subspace U from V, or VV^H, than from the CCM, whose clutter patch powers are unknown. Another problem that arises in calculating the clutter subspace is that one should first know the clutter rank. Fortunately, rules for estimating the clutter rank were discussed in previous literature, such as [1], [2], [46] and [47]. Specifically, for a side-looking ULA, the clutter rank is approximated by Brennan's rule as

N_r ≈ ⌈M + β(N − 1)⌉,  (15)

where β = 2v_p T_r/d_a and the brackets ⌈⌉ indicate rounding up to the nearest integer. In [47], this rule has been extended to the case of arbitrary arrays. Usually, N_r ≪ NM, and the STAP algorithms can be performed in a low-dimensional space so that the computational complexity and the convergence can be significantly improved [45]. After the clutter rank is determined, there are several approaches to compute the orthogonal bases of the clutter subspace. First, we can use the Lanczos algorithm [48] applied to VV^H to compute the clutter subspace eigenvectors, at a computational complexity on the order of O((NM)^2 N_r). Moreover, the computational complexity can be significantly reduced for the case of a ULA and constant PRF by exploiting the Toeplitz-block-Toeplitz structure of VV^H [48].
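The rank and subspace computations can be summarized in a few lines. In this sketch, a plain truncated SVD stands in for the Lanczos iteration mentioned above; both return the dominant left singular vectors and differ only in cost.

```python
import numpy as np

def brennan_rank(M, N, beta):
    """Clutter rank estimate from Brennan's rule, eq. (15)."""
    return int(np.ceil(M + beta * (N - 1)))

def clutter_subspace(V, Nr):
    """Orthonormal basis U of the clutter subspace from the steering matrix V.

    The truncated SVD used here is a stand-in for the Lanczos iteration;
    both yield the dominant left singular vectors of V.
    """
    U, _, _ = np.linalg.svd(V, full_matrices=False)
    return U[:, :Nr]
```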
Second, an alternative low-complexity approach is to perform the Gram-Schmidt orthogonalization procedure on the space-time steering vectors V; the implementation steps of the Gram-Schmidt orthogonalization are listed in Table I, and interested readers are referred to [49] for further details. Note that the computational cost of this procedure grows with the number of clutter patches N_c (see Section III.E). It should also be noted that the Gram-Schmidt orthogonalization approach can be applied to arbitrary arrays if we can obtain prior knowledge of the array geometry, the relevant radar system parameters, and information about the platform.
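A minimal sketch of this procedure is given below, assuming a modified Gram-Schmidt with a tolerance test to discard (nearly) dependent columns; the paper's Table I is not reproduced here.

```python
import numpy as np

def gram_schmidt(V, tol=1e-8):
    """Modified Gram-Schmidt on the columns of V, dropping dependent columns."""
    basis = []
    for k in range(V.shape[1]):
        u = np.array(V[:, k], dtype=complex)
        for q in basis:
            u = u - (q.conj() @ u) * q            # project out existing directions
        norm = np.linalg.norm(u)
        if norm > tol * np.linalg.norm(V[:, k]):  # keep only new directions
            basis.append(u / norm)
    return np.column_stack(basis)                 # NM x (approximately Nr)
```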
In particular, for the case of a side-looking ULA, we can further reduce the computational complexity of computing the clutter subspace eigenvectors. Since the dimensions satisfy N_c ≫ N_r, carrying out the Gram-Schmidt orthogonalization procedure on the columns of V one by one results in unnecessary computations due to the linear correlation among the columns. Thus, it is desirable to directly find a group of vectors that are linearly independent or nearly linearly independent (i.e., most of the vectors are linearly independent and only very few vectors are linearly correlated). Fortunately, for a ULA we have the following proposition.
Proposition 1: For the case of a side-looking ULA and constant PRF, the clutter subspace belongs to the subspace spanned by a group of space-time steering vectors {v̄_p}, p = 0, · · · , N_r − 1, whose entries are given by

v̄_p(n, m) = exp(j2πf_{s,p}(βn + m)), n = 0, · · · , N − 1, m = 0, · · · , M − 1,

where f_{s,p} denotes the spatial frequency associated with the p-th vector.

Proof: Let us stack the above space-time steering vectors into an N_r × NM matrix Ṽ, whose (p, (n, m))-th entry is z_{n,m}^p with z_{n,m} = exp(j2πf_s(βn + m)). Note that Ṽ is a Vandermonde matrix of dimension N_r × NM. For z_{n,m}, n = 0, · · · , N − 1 and m = 0, · · · , M − 1, the number of linearly independent columns of Ṽ is determined by the number of distinct values of βn + m. If β is an integer, the number of distinct values of βn + m is M + β(N − 1) = N_r. If β is a rational value (not an integer), the number of distinct values of βn + m is larger than N_r = ⌈M + β(N − 1)⌉. Therefore, Ṽ has full rank, which is equal to [1]

rank(Ṽ) = min(N_r, NM) = N_r.
The dimension of the clutter subspace is also N_r. Hence, the clutter subspace is the same as the row space of Ṽ. We can then compute the clutter subspace by applying the Gram-Schmidt orthogonalization procedure to the rows of Ṽ. Moreover, it should be noted that the computational complexity of this second approach is on the order of O(N_r^2 NM), which is much lower than that of the LSE, making it a very useful tool for practical applications. It also avoids the procedure of determining the number of clutter patches N_c and the azimuth angle φ.
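The following sketch instantiates the proposition. Since any N_r distinct spatial frequencies span the same space, the uniform frequency grid used here is one concrete choice and an assumption of this sketch.

```python
import numpy as np

def lrgp_steering_set(M, N, beta):
    """Group of space-time vectors spanning the clutter subspace (Proposition 1).

    Each vector depends on (beta*n + m) only; the uniform grid of spatial
    frequencies f_p is one concrete sampling choice (an assumption of this
    sketch), since any Nr distinct frequencies span the same space.
    """
    Nr = int(np.ceil(M + beta * (N - 1)))
    n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    t = (beta * n + m).ravel()                  # NM values of beta*n + m
    f = np.arange(Nr) / Nr - 0.5                # Nr distinct spatial frequencies
    return np.exp(2j * np.pi * np.outer(f, t))  # Nr x NM matrix V_tilde

# Orthonormalize the rows (e.g., gram_schmidt on V_tilde.conj().T) to get U.
```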
After computing the orthogonal basis of the clutter subspace, we design the STAP filter weights by two different methods. The first uses the minimum norm eigencanceler (MNE) derived in [5] to form the filter weights. Specifically, the MNE is a linearly constrained beamformer with a minimum norm weight vector that is orthogonal to the clutter subspace, described by [5]

min_w w^H w, subject to U^H w = 0 and w^H s = 1.  (22)

The solution to the above optimization problem in (22) is provided by [5]

w = (I − UU^H)s / (s^H(I − UU^H)s).  (23)

The other method designs the filter weights using both the orthogonal bases of the computed clutter subspace and the observation data. Let us first calculate the root-eigenvalues by projecting the data onto the clutter subspace U, formulated as

γ̂ = U^H x.  (24)

Then, the clutter-plus-noise covariance matrix R̂ can be estimated by

R̂ = UΓ̂U^H + σ̂_n^2 I,  (25)

where Γ̂ = diag(γ̂ ⊙ γ̂^∗) and ⊙ denotes the Hadamard product. Finally, the STAP filter weights can be computed by

ŵ = µR̂^{−1}s,  (26)

where we use the fact that R̂^{−1} = U(Γ̂ + σ̂_n^2 I)^{−1}U^H + (1/σ̂_n^2)(I − UU^H). The whole procedure of the proposed KA-STAP algorithm is summarized in Table I.
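Both weight designs reduce to a few matrix operations once U is available. The sketch below assumes a single snapshot x for the root-eigenvalue step and omits the scalar µ, which does not affect the SINR; the structured-inverse identity in the comment is the reconstruction used above and should be treated as an assumption of this sketch.

```python
import numpy as np

def mne_weights(U, s):
    """Minimum norm eigencanceler, eqs. (22)-(23)."""
    P = np.eye(U.shape[0]) - U @ U.conj().T       # projector onto noise subspace
    return P @ s / (s.conj() @ P @ s)

def subspace_weights(U, x, s, sigma_n2):
    """Data-aided weights, eqs. (24)-(26), via the structured inverse of R_hat."""
    gamma = U.conj().T @ x                        # root-eigenvalues, eq. (24)
    g = np.abs(gamma) ** 2                        # diagonal of Gamma_hat
    NM = U.shape[0]
    # R_hat^{-1} = U (Gamma + sigma_n2 I)^{-1} U^H + (I - U U^H) / sigma_n2
    R_inv = (U / (g + sigma_n2)) @ U.conj().T + \
            (np.eye(NM) - U @ U.conj().T) / sigma_n2
    return R_inv @ s                              # w up to the scalar mu
```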
C. Proposed KA-STAP Employing CMT
In practice, there are many non-ideal effects, such as internal clutter motion (ICM) and channel mismatch [3], which result in a mismatch between the actual clutter subspace and that computed by our proposed algorithm. In this case, the performance of our proposed algorithm will degrade.
For the angle-dependent channel mismatch, under normal circumstances the transmit and receive antenna patterns point in the same direction and have a pronounced maximum in the look direction. The energy from the sidelobes is generally several orders of magnitude lower than that from the mainbeam. Clutter subspace leakage will therefore mainly come from the main beam [3]. Thus, the angle-dependent channel mismatch can be approximated by spatial random tapers related only to the main beam. Since the main beam is usually fixed over a CPI, these random tapers can be regarded as angle-independent. For the angle-independent channel mismatch, we assume the spatial taper α_s is a random vector that is stable over a CPI, due to the narrowband case considered in this paper. Hence, in the presence of channel mismatch, the clutter-plus-noise received data vector is given by [3]

x = (V ⊙ Ξ_s)σ + n,  (27)

where the columns of Ξ_s are all equal to 1_N ⊗ α_s and 1_N denotes the all-ones vector of dimension N. When the ICM is also considered, the received data can be represented as [3]

x = (V_s σ) ⊙ (α_d ⊗ 1_M) + n,  (28)

where V_s = V ⊙ Ξ_s and α_d is the temporal taper accounting for the ICM. Then, the clutter-plus-noise covariance matrix is

R = R_s ⊙ T_d + σ_n^2 I,  (29)

where R_s = V_s ΣV_s^H and T_d = E[α_d α_d^H] ⊗ 1_{M,M} denotes the space-time CMT accounting for the ICM, with 1_{M,M} the M × M all-ones matrix. In order to obtain the clutter-plus-noise covariance matrix, we should estimate R_s and T_d in (29). Regarding the estimation of R_s, we can first use the array calibration methods discussed in [42] to estimate the spatial taper (denoted as α̂_s), which is not discussed here due to space limitations; the reader is referred to [42] for further details. Then, substituting α̂_s into V_s, we obtain the estimate V̂_s. On the other hand, since the elements of α_d are nonzero, we define ᾱ_d as the vector of their reciprocals. If we multiply both sides of (28) element-wise by ᾱ_d ⊗ 1_M and use the estimate V̂_s instead of V_s, then it becomes

x̄ = V̂_s σ + n_s,

where n_s = n ⊙ (ᾱ_d ⊗ 1_M). In this situation, similarly to the analysis in Section III.B, we can employ the Gram-Schmidt orthogonalization procedure to compute a matrix of eigenvectors of V̂_s, denoted as Û_s. Then the root-eigenvalues γ_s can be calculated by

γ̂_s = Û_s^H x,

and we can then estimate R_s as

R̂_s = Û_s Γ̂_s Û_s^H, where Γ̂_s = diag(γ̂_s ⊙ γ̂_s^∗).  (35)

Here we use the fact that the amplitude of the temporal taper caused by the ICM is one. This fact can be seen in the ICM models reported in [1], [3], and will also be detailed below. From (35), we observe that R̂_s can be estimated using the received data x directly, without knowledge of α_d. This also provides evidence for the KAPE approach, which directly uses the received data and the calibrated space-time steering vectors (only the spatial taper, without the temporal taper) to compute the parameter σ.
Regarding the estimation of T_d, it can be obtained via rough knowledge of the interference environment (e.g., forest versus desert, bandwidth, etc.) [6]. One common model, referred to as the Billingsley model, is suitable for a land scenario. The only parameters required to specify the clutter Doppler power spectrum are essentially the operating wavelength and the wind speed. The operating wavelength is usually known, while the wind speed must be estimated. Another common model, presented by J. Ward in [1], is suitable for a water scenario. The temporal autocorrelation of the fluctuations is Gaussian in shape with the form

r_t(m) = exp(−8π^2 σ_v^2 T_r^2 m^2 / λ_c^2),

where σ_v is the variance of the clutter spectral spread in m^2/s^2. In the following simulations, we consider the CMT model of the latter. After computing the estimates R̂_s and T̂_d, we can compute the CCM as R̂_c = R̂_s ⊙ T̂_d. Since R̂_c is still of low rank, we adopt the Lanczos algorithm to compute the clutter subspace U, with a computational complexity on the order of O((NM)^2 N'_r), where N'_r is the clutter rank of R̂_c. Finally, the STAP filter weights are computed according to (23) or (26). The whole procedure can be seen in Table I.

Impact of prior knowledge uncertainty: In the proposed algorithms, uncertainty in the prior knowledge, such as velocity misalignment and yaw angle misalignment, will have a great impact on the performance. However, the scheme that employs the CMT mitigates this impact. To illustrate this, we take a typical airborne radar system as an example; the parameters of the radar system are listed at the beginning of Section IV. In a far-field scenario, the elevation angle is close to zero, so cos θ ≈ 1. Let v_pu and φ_u denote the velocity deviation and the yaw angle deviation, respectively. Then, for a discretized azimuth angle φ, the spatial frequency f_s and the Doppler frequency f_d can be represented as

f_s = (d_a/λ_c) sin φ,  (37)
f_d = (2(v_p + v_pu)T_r/λ_c) sin(φ + φ_u).  (38)

From (37) and (38), we see that the prior knowledge uncertainty affects the position and shape of the clutter ridge, which leads to a mismatch between the exact and the assumed space-time steering vectors. Fig. 2 provides a more direct illustration of the impact of prior knowledge uncertainty on the clutter ridge in the spatio-temporal plane. By employing a CMT, the clutter spectra become wider along the clutter ridge, covering the exact clutter ridge; from this point of view, the impact of prior knowledge uncertainty is mitigated. Because the methods in [43], [44] and [45] do not consider any strategies to mitigate the impact of prior knowledge uncertainty, their performance depends heavily on the accuracy of the prior knowledge. The KAPE approach in [42] also adopts the CMT and can mitigate the impact of prior knowledge uncertainty to some extent. However, the differences between the proposed algorithm and the KAPE approach lie in three aspects. First, the KAPE approach estimates the CCM using the LS or approximate approaches, whereas the proposed algorithm estimates the CCM using the Gram-Schmidt orthogonalization procedure (which is not an approximate approach) by exploiting the fact that the clutter subspace is determined only by the space-time steering vectors. Furthermore, for a side-looking ULA radar, the proposed algorithm directly selects a group of linearly independent space-time steering vectors using the LRGP and then applies the Gram-Schmidt orthogonalization procedure to compute the clutter subspace.
Second, the proposed algorithm shows evidence that it is feasible to directly use the received data vector and the calibrated space-time steering vectors (only the spatial taper, without the temporal taper) to compute the parameter σ. Third, the proposed algorithm is presented with an RD version in the following section to further reduce the complexity.
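A sketch of the Gaussian CMT described above is given below; the exact constant in the exponent follows the standard form of Ward's water-scenario model and should be treated as an assumption of this sketch.

```python
import numpy as np

def gaussian_cmt(M, N, T_r, lam, sigma_v):
    """Space-time CMT for Gaussian-shaped ICM (Ward's water-scenario model).

    Temporal autocorrelation r(k) = exp(-8 pi^2 sigma_v^2 T_r^2 k^2 / lam^2)
    between pulses k apart (the standard form; the exact constant is an
    assumption of this sketch), replicated over the M spatial channels.
    """
    k = np.arange(N)
    lag = k[:, None] - k[None, :]                 # pulse-lag matrix
    Tt = np.exp(-8 * np.pi**2 * sigma_v**2 * T_r**2 * lag**2 / lam**2)
    return np.kron(Tt, np.ones((M, M)))           # T_d = T_t kron 1_{M,M}

# Tapered covariance: R_hat = Rs_hat * gaussian_cmt(...) + sigma_n2 * I,
# where * is the elementwise (Hadamard) product of eq. (29).
```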
D. Proposed Reduced-Dimension (RD) KA-STAP Algorithms
From the above discussion, one aspect to note is that it is impractical to use all the DoFs available at the ULA, for reasons of computational complexity, when NM is too large. In such situations, a common approach is to break the full-DoF problem into a number of smaller problems via the application of an NM × D (with D ≪ NM) transformation matrix S_D to the data [1]. Our proposed KA-STAP algorithms can easily be extended to this kind of approach. By applying the reduced-dimension transformation matrix S_D to the data and the space-time steering vectors, we obtain

x̄ = S_D^H x, V̄ = S_D^H V,  (39)

where the bar denotes the results after the transformation. Then, the reduced-dimension CCM R̄_c becomes

R̄_c = S_D^H R_c S_D.  (40)

In a manner similar to that of the proposed full-DoF KA-STAP algorithm described in Section III.B, we compute the orthogonal basis of the clutter subspace Ū, estimate the CCM R̄_c = ŪΓ̄Ū^H, and then calculate the STAP filter weights according to (23) or (26). When a CMT is applied to the ideal clutter covariance matrix, the final RD clutter-plus-noise covariance matrix can be estimated as

R̄ = (Ū_s Γ̄_s Ū_s^H) ⊙ T̄_d + σ̂_n^2 I,  (41)

where Ū_s is computed by applying the Gram-Schmidt orthogonalization procedure to V̄_s = S_D^H V̂_s, Γ̄_s is calculated via (35) using x̄ and Ū_s instead of x and Û_s, and T̄_d denotes the estimated RD CMT. Again, the STAP filter weights can be computed according to (23) or (26). By inspecting (40) and (41), we find that the computational complexity of our proposed RD-KA-STAP algorithm depends on D instead of NM (D ≪ NM), which leads to great computational savings.
In this paper, we focus on the reduced-dimension technique known as the extended factored (EFA) algorithm, or multibin element-space post-Doppler STAP algorithm [1]. The simulations with this technique will show the performance of our proposed RD-KA-STAP algorithm.
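The sketch below shows the reduced-dimension step of (39) together with an illustrative EFA-style transform built from a DFT Doppler filter bank; the bin selection and normalization are assumptions of this sketch rather than the paper's exact construction.

```python
import numpy as np

def efa_transform(M, N, doppler_bin, num_bins=3):
    """An illustrative EFA-style transform: keep num_bins adjacent Doppler
    bins (DFT filters) and all M spatial channels; sizing is an assumption."""
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)           # Doppler filter bank
    offsets = range(-(num_bins // 2), num_bins // 2 + 1)
    bins = [(doppler_bin + k) % N for k in offsets]
    return np.kron(F[:, bins].conj(), np.eye(M))     # NM x (num_bins * M)

def reduce_dimension(S_D, x, V):
    """Apply eq. (39): x_bar = S_D^H x and V_bar = S_D^H V."""
    return S_D.conj().T @ x, S_D.conj().T @ V
```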
E. Complexity Analysis
Here we illustrate the computational complexity of the proposed algorithms (abbreviated as LRGP KA-STAP and LRGP RD-KA-STAP) and other existing algorithms, namely, the sample matrix inversion (SMI) algorithm, the EFA algorithm in [1], the joint-domain-localized (JDL) algorithm in [7], the CSMIECC algorithm in [43], and the KAPE algorithm in [42]. In Table II, D denotes the size of the reduced dimension. We can see that the computational complexity of our proposed algorithms is significantly lower than that of the CSMIECC and KAPE algorithms (N_r ≪ N_c, NM), which require the pseudo-inverse of the matrix V^H V. With regard to the SMI algorithm, our proposed algorithms also show a lower computational complexity because the number of snapshots used for training the filter weights of the SMI is on the order of 2NM.
Although the computational complexity of the EFA and JDL algorithms is lower than that of our proposed LRGP KA-STAP algorithm, two aspects should be noted. One is that the number of snapshots used for training the filter weights is much larger than in our proposed algorithms. The other is that the computational complexity of EFA and JDL is proportional to the number of Doppler frequencies of interest (we only list the computational complexity for one Doppler frequency), whereas our proposed algorithms only have to compute the CCM once for all Doppler frequencies of interest. Besides, the computational complexity of our proposed LRGP RD-KA-STAP is lower than that of the EFA, since L in the EFA is on the order of 2D, where D is usually larger than N_r.
IV. PERFORMANCE ASSESSMENT
In this section, we assess the proposed KA-STAP algorithms by computing the output SINR performance and the probability of detection performance using simulated radar data. The output SINR is defined by

SINR = σ_t^2 |w^H s|^2 / (w^H Rw),

where σ_t^2 is the target power. Throughout the simulations, unless otherwise stated, the simulated scenarios use the following parameters: side-looking ULA, uniform transmit pattern, M = 8, N = 8, f_c = 450 MHz, f_r = 300 Hz, v_p = 50 m/s, d_a = λ_c/2, β = 1, N_r = ⌈M + β(N − 1)⌉ = 15, h_p = 9000 m, signal-to-noise ratio (SNR) of 0 dB, the target located at 0° azimuth with Doppler frequency 100 Hz, clutter-to-noise ratio (CNR) of 50 dB, and unitary thermal noise power. All presented results are averaged over 100 independent Monte Carlo runs.
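For reference, the output SINR metric used throughout this section can be computed as follows; the unit target power default is an assumption of this sketch.

```python
import numpy as np

def output_sinr_db(w, s, R, sigma_t2=1.0):
    """Output SINR of a STAP filter: sigma_t^2 |w^H s|^2 / (w^H R w), in dB."""
    num = sigma_t2 * np.abs(w.conj() @ s) ** 2
    den = np.real(w.conj() @ R @ w)
    return 10.0 * np.log10(num / den)
```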
A. Impact of ICM on the SINR Performance
In this subsection, we evaluate the impact of different levels of ICM on the SINR performance of our proposed algorithms.
In the examples, we consider four different ICM cases with σ_v = 0, σ_v = 0.05, σ_v = 0.1 and σ_v = 0.5. The number of snapshots for training is 4. In Fig. 3(a), (b), (c) and (d), we show the SINR performance against the target Doppler frequency of our proposed LRGP KA-STAP algorithm, both with and without a CMT. From the figures, we draw the following conclusions. (i) When there is no ICM, the proposed LRGP KA-STAP algorithm without a CMT obtains the optimum performance, since the computed clutter subspace is exact. However, its SINR performance degrades as σ_v increases, showing sensitivity to the ICM. This is because the computed clutter subspace can no longer represent the true clutter subspace. (ii) Our proposed LRGP KA-STAP algorithm with a CMT is robust to the ICM. When the estimated parameter σ_v of the CMT is correct, we can achieve the optimum SINR performance. Furthermore, the results demonstrate a range of CMT mismatch values over which the estimated spreading still yields acceptable SINR performance, which can be useful in applications. This can be interpreted as follows: the clutter subspace computed by applying the CMT to the ideal clutter subspace spans a space similar to the true clutter subspace.
B. Impact of Inaccurate Prior Knowledge on the SINR Performance
In this subsection, we focus on the impact of inaccurate prior knowledge on the SINR performance of our proposed algorithms. In the first example, we consider the impact of velocity misalignment by showing the SINR performance against the target Doppler frequency, as shown in Fig. 4. We consider three different cases, in which the velocity misalignment of the prior knowledge relative to the true platform velocity is (a) 0.5 m/s; (b) 1 m/s; (c) 2 m/s. The potential Doppler frequency space from −150 to 150 Hz is examined, and 4 snapshots are used to train the filter weights. The plots show that the proposed LRGP KA-STAP algorithm without a CMT is sensitive to the velocity misalignment, while the LRGP KA-STAP algorithm with a CMT is robust to it. The reason is that the velocity misalignment of the prior knowledge leads to a mismatch between the computed clutter subspace and the true clutter subspace. Although the clutter subspace computed via the CMT cannot avoid this mismatch, it can mitigate its impact, because the velocity misalignment between the clutter patches and the platform can be seen as Doppler spreading of the clutter patches. Moreover, the results also show that a slightly larger value of the estimated parameter σ_v results in improved SINR performance in the velocity misalignment case.
The evaluation of the impact caused by yaw angle misalignment is shown in Fig. 5, where we also consider three different cases, with yaw angle misalignments of the prior knowledge of (a) 0.2°; (b) 0.5°; (c) 1°. The curves also indicate that: (i) the proposed LRGP KA-STAP algorithm without a CMT is sensitive to the yaw angle misalignment, while the LRGP KA-STAP algorithm with a CMT is robust to it; (ii) a slightly larger value of the estimated parameter σ_v results in improved SINR performance. The misalignment of the yaw angle leads to a Doppler frequency mismatch between the radar platform and the clutter patches. Since the CMT mainly aims at mitigating the performance degradation caused by clutter Doppler spreading, it leads to an improved estimate of the clutter subspace and exhibits robustness against the yaw angle misalignment.
C. Comparison With Conventional STAP Algorithms
To further investigate the performance of our proposed algorithms, we compare the SINR performance versus the number of snapshots of our proposed LRGP KA-STAP and LRGP RD-KA-STAP algorithms with the loaded SMI (LSMI), the EFA algorithm (3 Doppler bins), the 3 × 3 JDL algorithm, Stoica's scheme in [36] (the prior knowledge covariance matrix is computed in the same way as in the CSMIECC algorithm), and the CSMIECC algorithm (the combination parameter is set to 0.6) in [43]; the simulation results are shown in Fig. 6. Here, we consider a scenario of ICM with σ_v = 0.5, and the diagonal loading factors for all algorithms are set to the level of the thermal noise power. The parameter σ_v for our proposed algorithms is set to 1. The curves in the figure illustrate that our proposed algorithms have a very fast SINR convergence speed, needing only three snapshots for training, and offer significantly better SINR steady-state performance compared with the LSMI, EFA, JDL, Stoica's scheme and CSMIECC algorithms. This is because the proposed algorithms provide a much better estimate of the CCM by using prior knowledge of the data, the low clutter rank property, the geometry of the array, and the interference environment. It should be noted that the SINR performance of the LRGP RD-KA-STAP algorithm is worse than that of the LRGP KA-STAP with full DoFs. This is due to the fact that the reduced DoFs lead to lower computational complexity at the cost of performance degradation.
The results in Fig. 7 illustrate the SINR performance versus the target Doppler frequency. The number of snapshots used for training in the LSMI, EFA, JDL, Stoica's scheme and CSMIECC algorithms is set to 48, versus 4 in our proposed algorithms. Our proposed LRGP KA-STAP algorithm provides the best SINR performance among all algorithms and forms the narrowest clutter null, resulting in improved performance for the detection of slow targets. The performance of the proposed LRGP RD-KA-STAP algorithm is worse than that of the LRGP KA-STAP with full DoFs, but better than the other algorithms in most Doppler bins. Note that although the LRGP RD-KA-STAP algorithm performs slightly worse than other algorithms in the Doppler range of −60 to 60 Hz, it requires far fewer snapshots for training the filter weights.
In the next example, as shown in Fig. 8, we present the probability of detection performance versus the target SNR for all algorithms. The false alarm rate is set to 10^{−3}, and for simulation purposes the threshold and probability of detection estimates are based on 10,000 samples. We suppose the target is injected at boresight with Doppler frequency 100 Hz. The number of snapshots used for training in the LSMI, EFA, JDL and CSMIECC algorithms is set to 48, while we only use 4 snapshots for our proposed algorithms. The results show that the proposed algorithms provide good detection performance using very few snapshots and, remarkably, obtain a much higher detection rate than the other algorithms at SNR levels from −8 dB to 0 dB.
V. CONCLUSIONS
In this paper, novel KA-STAP algorithms have been proposed that use prior knowledge of LRGP to obtain an accurate estimate of the CCM with a very small number of snapshots. By exploiting the fact that the clutter subspace is determined only by the space-time steering vectors, we have developed a Gram-Schmidt orthogonalization approach to compute the clutter subspace. In particular, for a side-looking ULA, we have proposed a scheme that directly selects a group of linearly independent space-time steering vectors to compute the orthogonal bases of the clutter subspace. Compared with the LSE algorithm, it not only exhibits a low complexity but also provides a simple way to compute the CCM. To overcome the performance degradation caused by non-ideal effects and prior knowledge uncertainty, a KA-STAP algorithm that incorporates the CMT has been presented, and a reduced-dimension version has been devised for practical applications. This has also provided evidence that it is feasible to directly use the received data vector and the calibrated space-time steering vectors (only the spatial taper, without the temporal taper) to compute the assumed clutter amplitude. The simulation results have shown that our proposed algorithms outperform existing algorithms in terms of SINR steady-state performance, SINR convergence speed and detection performance for a very small number of snapshots, and also exhibit robustness against errors in the prior knowledge. | 2013-11-28T01:30:15.000Z | 2013-11-28T00:00:00.000 | {
"year": 2013,
"sha1": "ddfab358d11fc232d247adf8fbc0dbec9b559f25",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijap/2014/196507.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "4c7ac68013b5c4ebd0f97288b13061000af97a08",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
7388073 | pes2o/s2orc | v3-fos-license | The outer membrane protein Omp35 affects the reduction of Fe(III), nitrate, and fumarate by Shewanella oneidensis MR-1
Background Shewanella oneidensis MR-1 uses several electron acceptors to support anaerobic respiration including insoluble species such as iron(III) and manganese(IV) oxides, and soluble species such as nitrate, fumarate, dimethylsulfoxide and many others. MR-1 has complex branched electron transport chains that include components in the cytoplasmic membrane, periplasm, and outer membrane (OM). Previous studies have implicated a role for anaerobically upregulated OM electron transport components in the use of insoluble electron acceptors, and have suggested that other OM components may also contribute to insoluble electron acceptor use. In this study, the role for an anaerobically upregulated 35-kDa OM protein (Omp35) in the use of anaerobic electron acceptors was explored. Results Omp35 was purified from the OM of anaerobically grown cells, the gene encoding Omp35 was identified, and an omp35 null mutant (OMP35-1) was isolated and characterized. Although OMP35-1 grew on all electron acceptors tested, a significant lag was seen when grown on fumarate, nitrate, and Fe(III). Complementation studies confirmed that the phenotype of OMP35-1 was due to the loss of Omp35. Despite its requirement for wild-type rates of electron acceptor use, analysis of Omp35 protein and predicted sequence did not identify any electron transport moieties or predicted motifs. OMP35-1 had normal levels and distribution of known electron transport components including quinones, cytochromes, and fumarate reductase. Omp35 is related to putative porins from MR-1 and S. frigidimarina as well as to the PorA porin from Neisseria meningitidis. Subcellular fraction analysis confirmed that Omp35 is an OM protein. The seven-fold anaerobic upregulation of Omp35 is mediated post-transcriptionally. Conclusion Omp35 is a putative porin in the OM of MR-1 that is markedly upregulated anaerobically by a post-transcriptional mechanism. Omp35 is required for normal rates of growth on Fe(III), fumarate, and nitrate, but its absence has no effect on the use of other electron acceptors. Omp35 does not contain obvious electron transport moieties, and its absence does not alter the amounts or distribution of other known electron transport components including quinones and cytochromes. The effects of Omp35 on anaerobic electron acceptor use are therefore likely indirect. The results demonstrate the ability of non-electron transport proteins to influence anaerobic respiratory phenotypes.
Background
Shewanella oneidensis (formerly putrefaciens) MR-1 is a Gram-negative bacterium that can use a wide variety of terminal electron acceptors for anaerobic respiration including insoluble manganese (Mn) and iron (Fe) oxides [1][2][3][4]. Some electron transport components are common to the use of multiple electron acceptors. For example, both menaquinone and a 21-kDa tetraheme cytochrome c (CymA) are required for the use of fumarate, nitrate, Mn(IV), and Fe(III) [4][5][6]. Other components are specific for individual electron acceptors, including a periplasmic cytochrome that serves as the fumarate reductase [7] and the cytochrome OmcA which has a role in Mn(IV) reduction [8]. However, many components that influence or participate in electron acceptor use in MR-1 remain unidentified.
Outer membrane (OM) components could be important for the ability of MR-1 to reduce insoluble metal oxides. For example, the OM cytochromes OmcA and OmcB play a role in Mn(IV) reduction but are not essential for Fe(III) reduction [8,9]. The Mn(IV) and Fe(III) reductases in MR-1 may be distinct [9,10], and various studies suggest the presence of Fe(III) reductase in the OM [11][12][13]. Insights are needed into other OM components that could have roles in metal reduction.
Since metal reduction is predominant under anaerobic conditions [2,13,14], components which are upregulated under anaerobic conditions could represent candidates with potential roles in these reductive processes. In this manuscript, a 35-kDa OM protein (Omp35) that is upregulated under anaerobic conditions was identified and characterized. Although Omp35 lacks obvious electron transport moieties, a null mutant lacking Omp35 exhibited significant lags during growth on three anaerobic electron acceptors: fumarate, nitrate, and Fe(III).
Western blots confirmed the absence of Omp35 in all subcellular fractions of OMP35-1, whereas Omp35 was readily detected in the OM and intermediate density membrane (IM) fractions of MR-1 (Fig. 2). The IM closely resembles the OM, except for a buoyant density between that of the cytoplasmic membrane (CM) and OM [16]. Omp35 was not detected in CM or soluble fractions of MR-1 (Fig. 2). This subcellular localization is consistent with its purification from the OM. The levels of Omp35 in OM fractions from the OM cytochrome mutants OMCA1 (∆omcA) and OMCB1 (∆omcB) [8] were the same as those for MR-1 (data not shown).
Western blots confirmed that Omp35 is significantly upregulated under anaerobic conditions, with levels more than 7-fold higher in fumarate-grown cells compared to aerobically-grown cells (Fig. 3A, 3B). This is not the result of transcriptional regulation, because the levels of omp35 transcript were statistically similar in aerobically- and fumarate-grown MR-1 (Fig. 3C, 3D). Levels of Omp35 protein in the OM of the etrA mutant ETRA-153 [17] were similar to the levels found in the OM of MR-1, suggesting that EtrA does not significantly regulate Omp35 (data not shown).
The ability of wild-type omp35 to complement OMP35-1 was examined. Two constructs (pBComp218 and pBComp411) containing omp35 plus 218 and 411 bp of upstream DNA, respectively, in the vector pBCSK were introduced into OMP35-1. Each insert was tested in both orientations; the forward (F) is in frame with the lacZ promoter of the vector, whereas the reverse (R) is not.
The potential role of Omp35 in anaerobic respiration was assessed by a comparison of the relative abilities of MR-1 and OMP35-1 to grow on and reduce various electron acceptors. The maximal growth yields of OMP35-1 were essentially the same as those for MR-1, with no apparent growth lags on 20 mM TMAO, 5 mM DMSO, 10 mM thiosulfate, or O2 as terminal electron acceptors (data not shown). OMP35-1 also reduced 5 mM δMnO2 and AQDS at rates similar to those of MR-1 (data not shown). However, there was a distinctive lag in the onset of growth of OMP35-1 on 20 mM fumarate, 2 mM nitrate, and 10 mM Fe(III) citrate (Figs. 5, 6), and in the reduction of Fe(III) citrate and 2 mM αFeOOH (Figs. 6, 7). The rates of reduction of nitrate and nitrite by OMP35-1 were also slower than those of MR-1 (not shown), corresponding to the delayed growth on nitrate. The lag on fumarate was the most pronounced, with MR-1(pBCSK) reaching maximal growth at 1 day, while OMP35-1(pBCSK) showed no growth until day 3 (Fig. 5A). On nitrate, OMP35-1 took one day longer than MR-1 to attain maximal growth (Fig. 5B). The growth of OMP35-1(pBCSK) on Fe(III) citrate lagged behind that of MR-1(pBCSK) for the first 12 hrs (Fig. 6B).
Three of the four complementing omp35 plasmids restored the growth of OMP35-1 on fumarate to rates that were indistinguishable from those of wild-type (Fig. 5A). The growth rate of OMP35-1(pBComp218R) was less than that of MR-1, but was much closer to MR-1 than to OMP35-1 (Fig. 5A). One of the constructs (pBComp218F) restored nitrate reduction and growth to wild-type levels, whereas the other three constructs exhibited a lag similar to OMP35-1 (Fig. 5B). However, the final growth yields on nitrate were greater for the various complements than for OMP35-1 (Fig. 5B). Similar to the nitrate results, growth on and reduction of Fe(III) citrate was near wild-type for OMP35-1(pBComp218F) (Fig. 6), whereas the others lagged behind wild-type at 12 hrs (Fig. 6B). OMP35-1(pBCSK) could reduce insoluble αFeOOH, but it was significantly slower than MR-1(pBCSK) (Fig. 7). All four constructs partially restored αFeOOH reduction to levels that were between those of wild-type and OMP35-1 (Fig. 7). Both R constructs exhibited less Fe(II) accumulation than the corresponding F constructs (Fig. 7), which could indicate some control of expression of the F constructs by the lacZ promoter of pBCSK.

Figure 1. Colony PCR reactions with primers specific for the omp35 gene or the chloramphenicol acetyltransferase gene (cat) from pEP185.2. Lanes 1 and 2 are reactions with omp35 primers O1 and O2, and lanes 3-5 are reactions with cat primers C1 and C2 (see Table 2). The templates for PCR were as follows: lanes 1 and 3, MR-1.
Cell surface exposure of Omp35
To determine the potential cell surface exposure of Omp35, proteinase K susceptibility experiments were conducted with anaerobically grown MR-1 cells and compared to control cells that were exposed to buffer. The proteinase K-to-control ratio for the periplasmic fumarate reductase was near unity, i.e. the same in proteinase K and buffer-treated cells (Fig. 8), indicating that proteinase K did not compromise the integrity of the OM. The ratio for Omp35 was also near unity ( Fig. 8) indicating that Omp35 is not significantly exposed to proteinase K at the cell surface. However, proteinase K treatment of either purified OM (Fig. 8) or OM solubilized with 0.2% Z3-12 (data not shown) from fumarate-grown MR-1 resulted in complete degradation of Omp35. Thus, proteinase K can completely degrade Omp35 if given sufficient access. To confirm that the proteinase K was active in the whole cell experiments, the surface-exposed outer membrane cytochrome OmcA was substantially degraded by proteinase K (Fig. 8) in agreement with previous findings [18].
Possible roles of Omp35 in cells
Since the absence of Omp35 caused significant lags in the use of some, but not all, electron acceptors, Omp35 might have either direct or indirect roles in electron transport.
Figure 3. Relative levels of Omp35 protein (A, B) and omp35 transcript (C, D) in aerobically-grown versus fumarate-grown MR-1. A, B: Omp35 protein was detected by western blot of whole cells using an antibody specific for Omp35. An example of two dilutions of a representative experiment is shown in panel A, and the quantitative results from densitometric analysis of western blots from three independent experiments (mean ± S.D.) are shown in panel B. *, statistically different from aerobic to P ≤ 0.001. C, D: Transcript was determined by RNase protection using an antisense probe specific for the omp35 transcript. The data for three independent experiments are shown in panel C, and the quantitative results from densitometric analysis of the RPA blots (mean ± S.D.) are shown in panel D. For each experiment, dilutions of each sample were analyzed to ensure linearity of signal intensity.
While its OM location could make sense for a direct role in Fe(III) reduction, it is not clear why this would impact the use of soluble electron acceptors such as fumarate and nitrate whose terminal reductases are periplasmic (fumarate) [7,19] or likely periplasmic (nitrate). The effects on soluble electron acceptors might imply indirect effects which alter the synthesis or distribution of other electron transport components.
To examine possible indirect effects, the content and distribution of electron transport components in fumarate-grown OMP35-1 was compared to that of MR-1. Protein- and heme-stained SDS-PAGE patterns of subcellular fractions were similar for the two strains (data not shown). Cytochrome spectra of the subcellular fractions indicated that the cytochrome content and distribution of OMP35-1 were similar to MR-1 (data not shown). Western blotting of membrane and soluble fractions showed no significant differences between MR-1 and OMP35-1 in the content or distribution of the OM cytochromes OmcA and OmcB, the OM protein MtrB, and the periplasmic fumarate reductase (Fcc 3 ) (data not shown).
Menaquinone is an important electron transport component required for the reduction of fumarate, nitrate, Fe(III), and Mn(IV) [6]. OMP35-1 displayed a delayed phenotype on three of these. However, the levels of menaquinone, methylmenaquinone, and ubiquinones were similar in MR-1, MR-1(pBCSK), OMP35-1, and OMP35-1(pBCSK) (data not shown). Analysis of Omp35 protein in the OM of the menaquinone-minus mutant CMA-1 [6] indicated that the levels were similar to those found in the OM of MR-1 (data not shown). Therefore, OMP35-1 contains a normal complement of quinones, and the absence of menaquinone and methylmenaquinone does not affect the level of Omp35 protein in CMA-1.

Figure 4. Western blot of lysed whole cells with an antibody specific for Omp35.

Figure 5. Anaerobic growth of various strains on fumarate (A) and nitrate (B). Values represent mean ± high/low for two parallel but independent experiments for each strain.
It was noted during the purification of Omp35 that fractions containing Omp35 had a yellow color. Because flavins (FAD and FMN) have a yellow color and are electron transfer cofactors of some proteins, the possibility that Omp35 is a flavoprotein was explored. To detect the possible presence of either FAD or FMN in partially purified or purified fractions containing Omp35, several techniques were used including fluorescence spectroscopy, thin layer chromatography (TLC), and UV-visible spectral analysis. All three detected the flavin standards (FAD and FMN) but all three failed to detect any flavin moiety in the yellow fractions containing Omp35. The lower limits of detection of the standards were 0.1 nmol and 2.5-5 pmol for TLC and fluorescence spectroscopy, respectively. Attempts to remove non-covalently bound flavin (boiling, trichloroacetic acid precipitation, and chloroform extraction) and UV-visible spectra of nontreated proteins also failed to detect any flavin. The sum total of procedures should have been able to detect either covalently or noncovalently bound flavin. Spectral analysis of Omp35 fractions was also negative for heme as were heme-stained SDS-PAGE gels. Motif searching of the Omp35 sequence resulted in no matches for heme-or flavin-binding motifs or other motifs suggestive of electron transport proteins.
A small nonproteinaceous compound is apparently excreted from MR-1 that has the ability to restore AQDS reduction to a menaquinone-minus mutant [20]. This compound is not menaquinone, but is apparently redoxactive and has a yellow-orange color [20]. Studies were Anaerobic reduction (A) and growth (B) on Fe(III) citrate by various strains Figure 6 Anaerobic reduction (A) and growth (B) on Fe(III) citrate by various strains. One representative experiment from two independent experiments is shown. Figure 7 Anaerobic reduction of αFeOOH by various strains.
Anaerobic reduction of αFeOOH by various strains
Values represent mean ± high/low for two parallel but independent experiments for each strain.
done to explore the possibility that Omp35 may have a role in the secretion of this compound. If true, the yellow color associated with Omp35 could conceivably be due to an affinity between this compound and Omp35. On separate plates, MR-1 and OMP35-1 were streaked adjacent to the menaquinone-minus mutant CMA-1 in a triangular pattern as done by Newman and Kolter [20]. Both MR-1 and OMP35-1 restored the ability of CMA-1 to reduce AQDS and Fe(III) (not shown). This indicates that OMP35-1 retained the ability to excrete this compound and that Omp35 is not necessary for its secretion.
Discussion
The 7-fold upregulation of Omp35 in fumarate-grown versus aerobically-grown MR-1 (Fig. 3A) is similar to the 7-fold anaerobic upregulation of the OM cytochrome OmcA [16]. This upregulation suggests an important function under anaerobic conditions. A role for OmcA is limited to Mn(IV) reduction [8], whereas the absence of Omp35 resulted in significant lags in growth on fumarate, nitrate, and Fe(III). Several other proteins in MR-1 are also increased under anaerobic conditions including the OM cytochrome OmcB, and the activities for Fe(III), nitrate and fumarate reductases, and formate dehydrogenase [8,13,19]. The mechanisms responsible for these anaerobic upregulations have not been identified in MR-1. While the transcriptional regulator EtrA has a partial role in upregulating fumarate and nitrate reductase [17], it has no effect on levels of OM cytochromes or Omp35. This coincides with the observation that the upregulation of Omp35 is not transcriptional (Fig. 3). The anaerobic induction of OM proteins with resulting effects on anaerobic electron transport have been reported in other species. For example, a major OM protein (AniA) which resembles copper-containing nitrite reductase is anaerobically induced in Neisseria gonorrhoeae [21]. However, the aniA mRNA in N. gonorrhoeae is only expressed under anaerobic conditions, with a role for Fnr and NarP as transcriptional regulators [21].
Regarding the post-transcriptional upregulation of Omp35 under anaerobic conditions (Fig. 3), possible control mechanisms include factors that affect translation efficiency, such as mRNA binding proteins or mRNA secondary structure elements that repress/activate translation [22][23][24]. Translational regulation of proteins that are induced anaerobically has been reported. The level of ethanol dehydrogenase (AdhE) activity is 10-fold higher in fermentatively-grown versus aerobically-grown E. coli K-12, and the translation of adhE mRNA is regulated by RNA secondary structure changes that block the RNA polymerase binding site [25]. Pseudoazurin, a periplasmic shuttle protein in Thiosphaera pantotropha, is expressed anaerobically during nitrification and is regulated by changes in RNA secondary structure [26].
The proteinase K experiments indicated that Omp35 is not significantly exposed on the cell surface. Proteinase K was chosen for these studies because it cleaves on the carboxyl side of a variety of amino acids including aliphatic, aromatic, and hydrophobic residues, and because it completely degraded Omp35 when given adequate access to the protein in purified OM fragments (Fig. 8). This is consistent with the Omp35 sequence, which indicates that 68% of the Omp35 residues are possible proteinase K cleavage sites. While proteinase K-resistant residues of Omp35 could be exposed on the cell surface in small loops, sizable extracellular loops are not likely because the longest stretch of proteinase K-resistant residues in Omp35 is four. Since Omp35 is not significantly exposed on the cell surface, it is also hard to envision a direct role in cell attachment or adhesion. Even so, while proteins that influence adhesion to insoluble Fe(III) oxides could be important [12], it is not clear why this would affect growth on nitrate and fumarate.
The significant growth lags observed with OMP35-1 on fumarate, nitrate, and Fe(III) indicate that Omp35 is required for optimal growth rates under these conditions. While complementation of OMP35-1 with pBComp218F restored wild-type growth on fumarate, nitrate, and Fe(III), the other omp35 constructs varied in their ability to restore electron acceptor use (Figs. 5, 6, 7), even though all restored Omp35 protein (Fig. 4). While expression of the forward constructs could be influenced by the lacZ promoter of the vector, it is unclear why OMP35-1(pBComp411F) was slower on nitrate and Fe(III) than was OMP35-1(pBComp218F) despite ~200 bp of additional upstream DNA. One possibility is that the extra upstream DNA in pBComp411F might have reduced control of the lacZ promoter on omp35 expression. If RNA secondary structure controls translation of omp35, non-optimal levels of transcript could influence protein levels. However, Omp35 levels in all four constructs were quite similar and about two-fold higher than those in MR-1 based on NIH Image analysis of the western blots (Fig. 4). Since the pBComp218F construct was able to fully complement OMP35-1, elevation of Omp35 levels above those found in MR-1 is not necessarily problematic. However, closer examination indicated that OMP35-1(pBComp218F) contained 20-30% less Omp35 than the other three complements. It is unclear if such minor differences influence the phenotypes, but it is possible that the cells are sensitive to small variations in Omp35 content.

Figure 8. The effect of proteinase K on MR-1 proteins OmcA, fumarate reductase (FR), and Omp35. Either fumarate-grown whole cells (WC) or purified outer membrane (OM) fractions isolated from fumarate-grown MR-1 were treated with proteinase K. Results of duplicate experiments are expressed as the ratio of band intensity with proteinase K vs. protease-free control. *, statistically significant from FR control to P ≤ 0.006.
The mechanism by which Omp35 affects growth rates on only some electron acceptors is not clear. Omp35 does not contain detectable heme or flavin, and sequence analysis does not reveal any sites suggestive of iron-sulfur centers, heme- or flavin-binding sites, molybdopterin-binding sites, or other redox-active moieties. Omp35 is therefore not likely to be an electron transport protein. However, motif searching does have limitations. Some motifs can be quite variable among organisms, and the motifs are based on most likely consensus domains derived from a limited number of proteins. Therefore, while Omp35 is likely not an electron transport protein, this possibility cannot be completely dismissed. However, such a role would not make sense for fumarate and nitrate, given that these terminal reductases are periplasmic in MR-1. However, the size of Omp35 is consistent with the size range typical for porins [27], and sequence alignments predict that Omp35 is likely a porin (Fig. 9). Omp35 aligned most prominently to a hypothetical protein (IfcO) in S. frigidimarina NCIMB400 (46.3% identity, 61.2% homology) (Fig. 9). IfcO is located upstream of ifcA (a gene encoding an iron-induced flavocytochrome Ifc 3 ) [15]. IfcO displays 20% identity to a cation-selective porin from Neisseria meningitidis [15]. The genes surrounding omp35 in MR-1 and ifcO in S. frigidimarina are quite different, suggesting that there could be differences in regulation and/or function. Omp35 also aligned to two putative porins at loci SO1420 and SO1557 in the MR-1 genome (31.4 and 23.0% identity, and 45.0 and 39.3% homology, respectively) (Fig. 9) and to the porin PorA [28] from N. meningitidis (20.6% identity, 35.5% homology) (Fig. 9). The sequence homologies are consistent with the presence of conserved membrane core domains and variable loops typical of porins [29,30]. The extent of sequence identity and homology is also typical for that seen with porins across other species.
It is, however, not clear how a porin role for Omp35 relates to the observed electron acceptor phenotype of OMP35-1. If Omp35 is needed for diffusion of electron acceptors across the OM, one might have expected its absence to slow the rate of entry of all soluble electron acceptors including fumarate, nitrate, TMAO, DMSO, nitrite, and thiosulfate. All of these have molecular weights that range from 50-120 Da, well below the exclusion limit for many porins (<600 Da) [31,32]. While some porins can discriminate between substrates based on their charge [31,33], electron acceptors of differing charge (e.g., TMAO, DMSO, and thiosulfate) were not affected in OMP35-1. Since αFeOOH is insoluble and too large to cross the OM, a porin would have no role in αFeOOH diffusion. While the loss of a porin might impair entry of general nutrients required by MR-1 (e.g. arginine, serine, glutamate, lactate, formate, ammonium sulfate, phosphate, trace metals, etc.), this should have affected growth under all conditions. Interestingly, a porA mutant of N. meningitidis displayed normal growth rates, whereas a porB mutant was somewhat retarded [34]. While N. meningitidis does not have the respiratory versatility of MR-1, these phenotypes are consistent with that of OMP35-1 and suggest that other porins may substitute for the absence of Omp35.
While many porins exhibit heat modifiability [33,35], we observed no such behavior for Omp35. In addition, OMP35-1 still contained another OM protein that migrated at 35 kDa. Another putative MR-1 porin encoded by locus SO1420 is of similar size to Omp35.
Additional OM channel proteins, often called protein transport pumps, allow the energy-dependent uptake or secretion/efflux of porin-excluded substances. Transport pumps are different than type II porins; they bind a specific substrate with a much higher affinity and their channels are not continuously open [36]. Several putative OM uptake pumps have been identified in the genome of MR-1. These include possible metal ion transporters and peptide and/or amino acid uptake systems [37]. Omp35 is not likely to represent one of these systems since non-metal electron acceptors (fumarate and nitrate) were affected in OMP35-1. Defects in amino acid uptake would likely have affected growth under a variety of conditions.

Figure 9. Amino acid sequence similarities between Omp35 of MR-1 and the top matches as identified by BLAST. Alignments were done using the ClustalW function of MacVector software. Identical residues are indicated by uppercase letters, analogous residues by lowercase letters, and unmatched residues by dots. Alignments were facilitated by introducing gaps (-). The numbers on the right indicate the relative numbering of residues within each immature protein; the total number of residues in each protein is shown in parentheses at the end of each sequence. Comparative sequences and their accession numbers are as follows: PorA N men (Neisseria meningitidis PorA, OM porin precursor, GenBank AF226349_1); SO1557 MR-1 (S. oneidensis MR-1, putative OM porin, TIGR genome locus SO1557); SO1420 MR-1 (S. oneidensis MR-1, putative OM porin, TIGR genome locus SO1420); IfcO S fri (S. frigidimarina, putative OM porin, GenBank AJ236923).

In addition to using Fe(III) as an electron acceptor, MR-1 must assimilate iron for cellular constituents such as cytochromes. Siderophores are low molecular weight Fe(III) chelators that scavenge iron from the environment [38,39]. MR-1 synthesizes siderophores that are negatively regulated by Fur (ferric uptake regulator) [40]. However, Omp35 is not likely involved in siderophore transport for several reasons: (i) Omp35 has no homology to known siderophore proteins; (ii) the specific cytochrome content of OMP35-1 resembled that of wild-type; (iii) the use of some electron acceptors (e.g. TMAO, Mn(IV)) that require iron-containing components such as cytochromes was not affected in OMP35-1; and (iv) the growth on 10 mM Fe(III) citrate should not have invoked a need for siderophore transport, because siderophores are only important under low iron conditions.
A variety of genes encoding putative efflux pump proteins have also been identified in the MR-1 genome. The membrane fusion protein (MFP) family transports larger molecules, such as peptides, proteins, and carbohydrates, across the OM [41]. The MR-1 genome encodes members of this family, including nine MexB-like efflux transporters [37]. Another member of this family from E. coli is the AcrAB efflux system [42]. The AcrAB efflux system links directly to the OM through TolC, an OM protein channel that has been implicated in the secretion of proteins and the efflux of toxins [43,44]. A tolC homolog has been found to protect MR-1 from cell death due to the accumulation of AQDS through TolC-mediated efflux of AQDS [45]. Although Omp35 did not match any known OM transport pumps, a role as an efflux pump for the removal of toxic metabolites or the excretion of unidentified compounds cannot be excluded. A small unidentified compound released from MR-1 restores the ability of a menaquinone mutant to reduce AQDS [20]. Our results indicate that Omp35 is not required for the secretion of this compound. Even so, a role for such a compound in nitrate and fumarate reduction seems improbable.
The effects of Omp35 on electron acceptor use are therefore likely indirect. This is also the case for MtrB, an OM protein of MR-1 that is required for Mn(IV) and Fe(III) reduction but that lacks obvious electron transport moieties [46,47]. MtrB is required for the proper localization and insertion of the OM cytochromes OmcA and OmcB into the OM [47]. While the localization of OM cytochromes is normal in OMP35-1, Omp35 could conceivably have a role in the localization or arrangement of OM components required for Fe(III) reduction. How this would also impact nitrate and fumarate reduction, however, remains unclear.
Conclusions
A 35-kDa probable porin (Omp35) was isolated from the OM of MR-1. Omp35 levels are markedly upregulated anaerobically by a post-transcriptional mechanism. To our knowledge, this is the first report of a porin that is upregulated anaerobically in this manner. An omp35 null mutant exhibited significant lags in anaerobic growth on fumarate, nitrate, and Fe(III). The absence of Omp35 did not affect the quinone content or the levels or distribution of various cytochromes in MR-1. Omp35 does not contain obvious electron transport moieties, so its effects on the use of electron acceptors are likely indirect. The results highlight the possibility for non-electron transport proteins to influence anaerobic respiratory phenotypes, and the importance of considering such indirect effects when characterizing electron acceptor deficiencies.
Bacterial strains, plasmids, media, and growth conditions
All materials were from sources previously described [4,5]. A list of the bacterial strains and plasmids used in this study is presented in Table 1. For molecular biology purposes, S. oneidensis strains were grown aerobically at room temperature (23-25°C) or at 30°C on Luria-Bertani (LB) medium, pH 7.4 [48]. E. coli strains were grown aerobically at 37°C on LB medium. Growth media were supplemented with appropriate antibiotics when required: ampicillin (Ap), 50 µg mL⁻¹; chloramphenicol (Cm), 34 µg mL⁻¹; and kanamycin (Km), 50 µg mL⁻¹.
For other applications, S. oneidensis was grown at room temperature either aerobically or anaerobically as previously described [49] in M1 defined medium [2] supplemented with 15 mM lactate and vitamin-free Casamino Acids (0.1 g L⁻¹). For testing the growth on or reduction of electron acceptors under anaerobic conditions, 15 mM formate was also included. Anaerobic studies were conducted in an anaerobic chamber (Coy Laboratory Products, Ann Arbor, MI) with an atmosphere of 4-6% H₂ (balance N₂). For anaerobic growth or analysis of electron acceptor use, the medium was supplemented with one of the following electron acceptors: 20-30 mM disodium fumarate, 10 mM sodium thiosulfate, 20 mM trimethylamine N-oxide (TMAO), 5 mM dimethylsulfoxide (DMSO), or 2 mM sodium nitrate. For growth on TMAO, the medium was also supplemented with 30 mM HEPES to buffer against alkalinization by the product trimethylamine. Studies with Fe(III) or Mn(IV) were conducted in LM medium [3] supplemented with 15 mM lactate, 2 mM sodium bicarbonate, and one of the following electron acceptors: 10 mM Fe(III) citrate, 2 mM Fe(III) oxyhydroxide (αFeOOH), or 5 mM vernadite (δMnO₂).
For electron acceptor characterization of strains, inocula were prepared from cells grown aerobically for 1-2 days on LB medium supplemented with the appropriate antibiotics. Cells were suspended in sterile distilled water and the inoculum densities were adjusted to equalize turbidity (adjustments were made in the inoculum optical density and/or volume).
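The turbidity-equalization step amounts to a simple C1·V1 = C2·V2 dilution calculation. The sketch below illustrates it under the assumption that optical density is proportional to cell density; the target OD and volumes are hypothetical illustration values, not from the original protocol.

```python
# Minimal sketch: volume of a cell suspension needed to deliver the
# same effective cell load as a reference inoculum (C1*V1 = C2*V2).
def inoculum_volume_ml(od_measured: float, od_target: float,
                       target_volume_ml: float) -> float:
    """Return the volume (mL) of suspension at od_measured that
    contains the same cells as target_volume_ml at od_target."""
    return od_target * target_volume_ml / od_measured

# e.g. a suspension at OD 1.8, normalized to 10 mL at OD 0.5:
print(f"{inoculum_volume_ml(1.8, 0.5, 10.0):.2f} mL")  # ~2.78 mL
```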
Purification of Omp35
OM was isolated from MR-1 cells grown anaerobically with fumarate as the electron acceptor. Loosely associated proteins were removed by treating the OM at room temperature with 23 mM sodium cholate in buffer A (20 mM K₂HPO₄ (pH 7.4), 1 mM EDTA, 0.02% sodium azide, and 5% glycerol) containing 10 mM DTT (dithiothreitol) and 0.2 M NaCl, with a cholate/protein ratio of 9:1 (wt/wt) and the protein at 1 mg mL⁻¹. The suspension was stirred, sonicated four times (30 sec each) with 1-2 min periods of cooling, stirred for an additional 10 min, and then centrifuged for 97 min at 50,000 rpm (302,000 × g) at 4°C in a Beckman 50.2 Ti rotor. The pellets were resuspended in 3 mL buffer A and treated at room temperature with lysozyme (5 × 10⁴ U per 7-8 mg protein) for 1 hr and with mutanolysin (6 U per mg protein) for an additional hr to digest any remaining cell wall material. The sample was diluted with buffer A and centrifuged as above. The pellet was solubilized with one of three detergent protocols, including one based on the zwitterionic detergent Z3-12 and one that included 3.65 M urea. After stirring at room temperature for 10 min, the solubilized OM was sonicated twice (30 sec each with 1 min cooling) and centrifuged for 28 min at 90,000 rpm (438,000 × g) in a Beckman TLA-100.3 rotor at 4°C (or at room temperature when 3.65 M urea was included).

[Table 1 footnotes: (a) Nalʳ, Smʳ, Kmʳ, Apʳ, Tcʳ, and Cmʳ denote resistance to nalidixic acid, streptomycin, kanamycin, ampicillin, tetracycline, and chloramphenicol, respectively. (b) The vector pBCSK freely replicates in MR-1 while pEP185.2 does not.]
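The rpm/g-force pairs quoted above can be cross-checked with the standard conversion RCF = 1.118 × 10⁻⁵ × r(cm) × rpm². The rotor radii below are nominal maximum values assumed here (roughly 10.8 cm for the Beckman 50.2 Ti and 4.83 cm for the TLA-100.3); they are not stated in the original text.

```python
# Cross-check of the quoted centrifugal forces from rpm and rotor radius.
def rcf(rpm: float, radius_cm: float) -> float:
    """Relative centrifugal force: RCF = 1.118e-5 * r_cm * rpm^2."""
    return 1.118e-5 * radius_cm * rpm ** 2

print(f"50.2 Ti at 50,000 rpm:   {rcf(50_000, 10.8):,.0f} x g")  # ~302,000
print(f"TLA-100.3 at 90,000 rpm: {rcf(90_000, 4.83):,.0f} x g")  # ~438,000
```

Both computed values agree with the parenthetical g-forces given in the protocol.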
Two methods were performed to separate Omp35 from other OM proteins, including the OM cytochromes. In the first, Z3-12-solubilized OM was concentrated by ultrafiltration to a volume of <2 mL using a Millipore Ultra-Free filter (30,000 MWCO), and then applied to a Sephacryl S-200 HR (Pharmacia Biotech) gel filtration column (1.6 × 86 cm) at 4°C. Proteins were eluted with 0.5% Z3-12 in buffer A containing 0.2 M NaCl and 10 mM DTT, and fractions were screened by heme- and silver-stained SDS-PAGE. Those containing a 35-kDa band were pooled and dialyzed overnight in 10 mM K₂HPO₄ (pH 7.4) with 5% glycerol, 1 mM EDTA, 0.02% sodium azide, 0.5% Z3-12 and 0.1 mM DTT (buffer B). The dialyzed sample was applied to a Bio-Gel hydroxylapatite column (0.5 × 2.5 cm); after equilibration and removal of nonbinding proteins with buffer B, a step gradient of increasing concentrations of K₂HPO₄ buffer was applied (25 mM, 50 mM, 100 mM, 200 mM, and 400 mM). A 35-kDa band was prominent in the fractions that eluted at 50-100 mM K₂HPO₄; this band was heme negative (i.e. not a cytochrome).
In the second method, differential ultrafiltration was used on OM that had been solubilized with each of the three detergent protocols. A Pall Filtron 50,000 MWCO filter retained the red cytochromes while the 35-kDa protein was in the filtrate. This filtrate was concentrated using a 30,000 MWCO Millipore Ultra-Free filter, and then applied to a GCL-300 gel filtration column (Isco, Inc; 1.6 × 35 cm). Fractions were obtained in which a 35-kDa protein was in high concentration relative to other minor proteins.
Fractions enriched in Omp35 were subjected to SDS-PAGE and transferred to a PVDF membrane. The Omp35 band was excised from the membrane and the N-terminal sequence was determined by the Protein/Nucleic Acid Shared Facility of the Medical College of Wisconsin, using a Beckman Coulter model 2CF 3000 pulsed liquid phase protein sequencer.
DNA manipulations
A list of synthetic oligonucleotides used is presented in Table 2. Restriction enzyme digests, cloning, subcloning, and DNA electrophoresis were done according to standard techniques [48], following manufacturers' recommendations as appropriate. The following procedures were done as previously reported [8,17]: DNA ligation, isolation of plasmid and cosmid DNA, colony PCR, DNA sequencing, and determination of sizes of DNA fragments, RNA, and proteins. Electroporation and preparation of cells for electroporation were performed as previously described for either E. coli [5] or MR-1 [17].
Computer-assisted sequence analysis and comparisons were done with MacVector software (Accelrys, San Diego, CA). Oligonucleotide primers were designed by using OLIGO software (version 6.15; Molecular Biology Insights, Cascade, CO).
Antibody specific for Omp35
Recombinant technology was used to generate a protein fusion of thioredoxin (TR) to an internal 250-residue fragment of Omp35. Specifically, a 750-bp fragment of omp35 was generated by PCR of MR-1 genomic DNA using primers O5 and O6 (Table 2). The PCR product was cloned into pBAD/Thio-TOPO (Invitrogen, Carlsbad, CA) and transformed into E. coli TOP10. After identifying a clone containing the omp35 fragment in the proper orientation, expression of the fusion protein was induced with 0.02% arabinose for 2 hr at 37°C. The cells were harvested by centrifugation and lysed using BugBuster Protein Extraction Reagent (Novagen, Madison, WI). The resulting fusion protein, a 250-residue fragment of Omp35 with thioredoxin (TR) at the N-terminus and a 6x histidine tag at the C-terminus, was localized primarily in inclusion bodies. After solubilization with 6 M urea, the TR-Omp35 fusion was purified using His•Bind Quick resin (Novagen) according to the manufacturer's instructions. The purified fusion was dialyzed at 4°C against 20 mM Tris-HCl (pH 7.5)/0.1 M glycine/5% (w/v) glycerol/1% (w/v) NaCl, and then concentrated by ultrafiltration. The purified, concentrated TR-Omp35 fusion protein was used as an antigen to generate polyclonal antisera in New Zealand white rabbits, using Titermax (CytRx Corp., Norcross, GA) as an adjuvant. A purified immunoglobulin G (IgG) fraction was obtained from the immune and preimmune sera using ammonium sulfate fractionation and ion exchange chromatography.

[Table 2. Synthetic oligonucleotides based on omp35 (name and sequence). Underlined regions indicate restriction endonuclease sites engineered into the oligonucleotides: XhoI sites in O1 and O2 and ClaI sites in O3, O4, and K1.]
Construction of an Omp35 gene replacement mutant
An omp35 gene replacement mutant (OMP35-1) was constructed from MR-1 using a strategy analogous to that described previously [4]. A 1729-bp fragment containing the entire omp35 gene plus 5' and 3' flanking sequences was generated by PCR of MR-1 genomic DNA using custom primers O1 and O2 (Table 2). This PCR product was cloned into pCR2.1-TOPO, generating pTOPO/omp35. Inverse PCR [50] of pTOPO/omp35 using custom primers O3 and O4 (Table 2) generated pTOPO/omp35(∆393), a 5.2-kb fragment that is missing 393 bp of internal omp35 sequence. The 2.1-kb Kmʳ gene from pUT/mini-Tn5Km was generated by PCR with the custom primer K1 (Table 2). Following digestion with ClaI, the Kmʳ gene was ligated to the pTOPO/omp35(∆393) fragment, generating pTOPO/omp35:Km. A 3.4-kb DNA fragment containing the Kmʳ-interrupted omp35 gene was cut from pTOPO/omp35:Km with XhoI and ligated into the XhoI site of the suicide vector pEP185.2, generating pDSEPomp35, which was then electroporated into the donor strain E. coli S17-1λpir. E. coli S17-1λpir(pDSEPomp35) was mated with MR-1, and MR-1 exconjugants were selected using kanamycin under aerobic conditions on defined medium with 15 mM lactate as the electron donor. Colonies were screened by colony PCR [51]; those lacking the expected wild-type 1.7 kb PCR product were pursued as putative insertional mutants. Throughout, appropriate analyses (restriction digests, PCR, DNA sequencing) were done to verify that the expected constructs were obtained.
Constructs to complement OMP35-1
Wild-type omp35 plus 218 and 411 bp of upstream DNA were amplified from MR-1 genomic DNA with custom primers O1 and O2 (Table 2) and O1 and O7 (Table 2), respectively, using the Expand High Fidelity PCR System (Roche). The products were cloned in pCR2.1-TOPO, from which the inserts were excised by either BamHI/NotI (reverse constructs) or NotI/SacI (forward constructs) and cloned in pBCSK generating pBComp218F and R and pBComp411F and R. The forward construct (F) is in frame with the lacZ promoter of the vector, whereas the reverse (R) is in the opposite orientation. These plasmids were electroporated into JM109, and the identity and orientation of the inserts was verified. They were then electroporated into OMP35-1.
Ribonuclease protection
Total RNA was isolated from either fumarate-grown or aerobically grown (shaken at 200 rpm) cultures after 1 day of growth using a hot phenol method, followed by treatment with RNase-free DNase as previously described [5,52]. To generate a probe, a 342-bp internal fragment of omp35 was cloned into pCR2.1-TOPO; the desired orientation was verified by PCR. Using M13 primers, a 585-bp fragment containing omp35 with flanking 5' and 3' vector DNA was generated by PCR; using this fragment as a template, the biotin-labeled antisense omp35 RNA probe was generated using the MAXIscript™ Kit (Ambion) and biotin-14-CTP. The probe was gel purified on a Tris-borate-EDTA-urea polyacrylamide gel. RNase protection assays were done using 10 µg total RNA, 400 pg of probe, and the RPA III™ Kit (Ambion, Austin, TX).

Flavin analysis

Standards and samples were tested at both pH 7.7 and pH 2.6 (acidified with 0.1 mL of 1.0 N HCl). FAD and FMN have higher fluorescence under acidic and basic conditions, respectively. Background fluorescence from buffer S was subtracted from the values.
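Because the two flavins respond oppositely to pH, readings at the two pH values can in principle be resolved into FAD and FMN concentrations by solving a 2×2 linear system calibrated against pure standards. The sketch below illustrates the idea; all coefficient and sample values are placeholders, not measured data from this study.

```python
# Hedged sketch: resolve FAD and FMN from fluorescence at two pH values.
import numpy as np

# rows: pH 2.6, pH 7.7; columns: response per µM FAD, per µM FMN
# (placeholder calibration slopes from hypothetical standards)
A = np.array([[8.0, 1.5],    # FAD fluoresces more strongly at acid pH
              [2.0, 9.0]])   # FMN fluoresces more strongly at basic pH

sample = np.array([25.0, 40.0])  # background-subtracted sample readings
fad_um, fmn_um = np.linalg.solve(A, sample)
print(f"FAD ~ {fad_um:.2f} uM, FMN ~ {fmn_um:.2f} uM")
```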
Flavin content was also analyzed by comparison of reduced and oxidized spectra using an Aminco DW-2000 spectrophotometer operated in the split-beam mode. Samples were used directly without boiling or other treatments. Room temperature spectra were recorded in a quartz sub-micro sample cuvette with sample and reference slit masks, using a slit width of 2.0 nm and a scan speed of 2.0 nm s⁻¹. To obtain reduced spectra, samples were treated with dithionite. Scans were performed in both the visible (400-700 nm) and UV (200-325 nm) ranges.
The ability of OMP35-1 to restore electron acceptor use to a menaquinone-minus mutant
Agar plates of M1 defined medium were prepared containing lactate and formate and either anthraquinone-2,6-disulfonic acid (AQDS, 5 mM final concentration) or 10 mM Fe(III) citrate. For the former, an AQDS suspension was prepared aseptically (0.206 g in 5 mL sterile distilled water) and dispensed into 100 mL of medium.
Inocula were grown aerobically on LB agar. Cells were suspended in sterile distilled water and applied to the appropriate position on the agar plate using a sterile loop. Strains were inoculated in a triangular pattern as described by Newman and Kolter [20], with CMA-1 on two sides of the triangle and either MR-1 or OMP35-1 on the third side. Plates were placed in the anaerobic chamber immediately after inoculation. Fe(III) citrate reduction was visualized on day 2. Agar plates were covered with a ferrozine agarose solution (0.01 g ferrozine, 1.2% agarose in 10 mL 50 mM HEPES, pH 7). Magenta color corresponding to Fe(II)-ferrozine developed immediately. AQDS reduction was visualized on day 3 by examining the orange halo that formed around the streaks on the plates; this orange color was due to the reduced product 2,6-anthrahydroquinone disulfonic acid (AHDS).
Miscellaneous procedures
For most electron acceptors, growth was assessed by measuring culture turbidity at 500 nm using a Beckman DU-64 spectrophotometer. Because Fe(III) citrate interferes with the turbidity measurement, growth on Fe(III) was assessed by measuring total cellular protein as previously described [3]. Nitrate [55] and nitrite [56] were determined colorimetrically in cell-free filtrates. Fe(II) was determined by a ferrozine extraction procedure [14,57]. When Fe(III) citrate was used, Fe(II) analysis was performed in unfiltered samples. When αFeOOH was used, Fe(II) was determined in samples that were filtered through 0.2 µm filters or that were centrifuged after ferrozine was added. Mn(II) was determined in filtrates by a formaldoxime method [58,59]. δMnO₂ [1] and αFeOOH [57] were prepared as described previously.
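Converting a ferrozine absorbance reading into an Fe(II) concentration is a straightforward Beer-Lambert calculation. The sketch below assumes the commonly cited molar absorptivity of the Fe(II)-ferrozine complex (~27,900 M⁻¹ cm⁻¹ at 562 nm) and a 1 cm path length; the absorbance and dilution factor are hypothetical example values.

```python
# Minimal Beer-Lambert conversion for the ferrozine Fe(II) assay.
EPSILON = 27_900.0   # M^-1 cm^-1 at 562 nm (assumed literature value)
PATH_CM = 1.0        # cuvette path length

def fe2_mM(a562: float, dilution: float = 1.0) -> float:
    """Fe(II) concentration (mM) from absorbance at 562 nm."""
    return a562 / (EPSILON * PATH_CM) * dilution * 1_000.0  # M -> mM

print(f"{fe2_mM(a562=0.45, dilution=10.0):.2f} mM Fe(II)")  # ~0.16 mM
```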
Cytoplasmic membrane (CM), intermediate membrane (IM), outer membrane (OM), and soluble fractions (cytoplasm plus periplasm) were purified by an EDTA-lysozyme-Brij 58 (polyoxyethylene cetyl ether) protocol as previously described [49]. The separation and purity of these subcellular fractions were assessed by spectral cytochrome content [49], membrane buoyant density [49], and SDS-PAGE gels [53] stained for protein with Pro-Blue (Owl Separation Systems, Woburn, MA) or for heme [60]. Protein was determined by a modified Lowry method, with bovine serum albumin as the standard [5].
Quinones were extracted from cells and were resolved by thin-layer chromatography (TLC) as previously described [4] on Merck Kieselgel 60 F₂₅₄ plates.
Western blotting was performed as previously described and developed using either the ImmunoPure NBT/BCIP Substrate Kit (Pierce, Inc., Rockford, IL) [16] or the SuperSignal West Pico Kit (Pierce) [47]. Polyclonal antibodies specific for the OM cytochromes OmcA and OmcB [9], the OM protein MtrB [47], and the periplasmic fumarate reductase (Fcc₃) [17] have been described previously. The antibody specific for Omp35 (above) was diluted to a final purified IgG concentration of 0.4 µg mL⁻¹.
Statistical analysis was performed using single-factor ANOVA (analysis of variance). Relative band densities of the RNA or protein blots were determined using NIH Image software (http://rsb.info.nih.gov/nih-image/).
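A single-factor ANOVA on densitometry values can be reproduced with standard scientific Python tooling. The sketch below uses scipy's one-way ANOVA on hypothetical replicate band densities (not data from the study) to illustrate the comparison of aerobic versus anaerobic Omp35 levels.

```python
# Sketch of the single-factor ANOVA step on relative band densities.
from scipy.stats import f_oneway

# hypothetical replicate densitometry values (arbitrary units)
aerobic_density = [0.21, 0.25, 0.19]
anaerobic_density = [0.88, 0.95, 0.91]

f_stat, p_value = f_oneway(aerobic_density, anaerobic_density)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```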
"year": 2004,
"sha1": "049a2a89e96a56ccf1ab0787af1b6941449fa38a",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-4-23",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a44f27edbe6d59d6b216e494295710ec6bc60408",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Healthcare professionals' satisfaction toward the use of district health information system and its associated factors in southwest Ethiopia: using the information system success model
Background: Ethiopia has the potential to use the district health information system, which is a building block of the health system. It is therefore necessary to assess the performance level of the system by identifying the satisfaction of end users. There is little evidence about users' satisfaction with this system. As a result, this study was conducted to fill this gap by evaluating user satisfaction and associated factors of the district health information system among healthcare providers in Ethiopia, using the information system success model. Methods: An institution-based cross-sectional study was conducted from November to December 2022 in the Oromia region of southwest Ethiopia. A total of 391 health professionals participated in the study. The study participants were selected using a census. Data were collected using a self-administered questionnaire. Measurement and structural equation modeling analyses were used to evaluate reliability and the validity of model fit, and to test the relationships between the constructs, using analysis of moment structures (AMOS) V 26. Results: System quality had a positive direct effect on the respondents' system use (β = 0.18, P-value < 0.001) and satisfaction (β = 0.44, P-value < 0.001). Service quality also had a direct effect on the respondents' system use (β = 0.37, P-value < 0.01) and satisfaction with using the district health information system (β = 0.36, P-value < 0.01). Similarly, system use had a direct effect on the respondents' satisfaction (β = 0.53, P-value < 0.05). Moreover, computer literacy had a direct effect on the respondents' system use (β = 0.63, P-value < 0.05) and satisfaction (β = 0.51, P-value < 0.01). Conclusions: The overall user satisfaction with using the district health information system in Ethiopia was low. System quality, service quality, and computer literacy had a direct positive effect on system use and user satisfaction. In addition, system use and information quality had a direct positive effect on healthcare professionals' satisfaction with using the district health information system. The most important factor for enhancing system use and user satisfaction was computer literacy. Accordingly, in addition to the specific user training required for the success of the district health information system in Ethiopia, managers should offer additional basic computer courses for better use of the system.
KEYWORDS: DHIS2, satisfaction, healthcare professionals, Ethiopia, D&M model

Background

The formulation of decisions, policies, and plans is based on the use of exact, valid, timely, and reliable data and information (1,2). Achieving better health outcomes requires robust health systems, and a strong health system is in turn built on a strong health information system (HIS) (3). Globally, information systems are crucial to achieving universal health coverage and are key in health interventions, assessments of the health sector, planning, resource allocation, and program oversight (4). The World Health Organization states that the main goal of HIS is the global development of automated patient data services, which results in more effective retrieval of the data needed for treatment, statistics, teaching, and research. To improve the efficacy and effectiveness of health care through better management at all levels, HISs are developed for the integrated collection, processing, and reporting of data (5).
District Health Information System Version 2 (DHIS2) is a web-based, integrated national health information system that incorporates high-quality data used at all levels to enhance the delivery of healthcare services (3,6,7). More than 60 countries currently use DHIS, and most international projects are increasingly interested in using it to track health performance (7,8). Ethiopia has also developed the potential for DHIS use and is rolling out user-friendly DHIS2 versions throughout the regions. To improve decision-making among public health facilities, the Federal Ministry of Health (FMOH) is deploying and implementing DHIS2.
Information systems (IS) research demonstrates a relationship between users' attitudes and their intentions to continue using information systems. As a result, the increase in IS investments highlights the need to understand end-user satisfaction and system usage. User satisfaction, a subjective or perceptual metric associated with users' attitudes and retention intentions, has been widely used to assess the success of IS (9). One of the most common methods for assessing a system's performance and, consequently, its success is user satisfaction (5,10). Factors relating to system success include user satisfaction, satisfaction with the hardware and software, satisfaction with the system development project, user complaints about the information system center, and user satisfaction with the intermediaries (11,12).
According to an Iranian study, satisfaction levels in 11 different hospitals were relatively favorable (54.6%) (10). Another study conducted in Tanzania showed that 85% of users were satisfied with the DHIS2 system (9). A study of DHIS-2 utilization among healthcare professionals in southwest Ethiopia found that utilization was 57.3%, and that study participants serving in expert positions had better utilization of DHIS-2 (49.1%) (7). However, the satisfaction of those users with this system was uncertain. A system is classified as weak if it does not meet the needs of the users and is not consumer-based (13). Disregarding or not paying enough attention to human aspects is one of the primary reasons why an information system fails to accomplish some of its intended aims; this, in turn, fails to create an appropriate interface between the system and its users and to give users a sense of ownership of the system (12).
Organizations have used benchmarking, Kaplan and Norton's balanced scorecard, and research has created models like DeLone and McLean's (D&M), and Doll's to better understand the tangible and intangible benefits of IS implementation, highlighting the need for better and more reliable success metrics (9,14). Although the majority of health professionals believe that technology may reduce the burden of paper-based documentation and the inaccessibility of patient data in urgent situations, they can also be quickly dissatisfied when a new system or support does not meet their expectations (15,16).
Theoretical background and hypothesis
DeLone and McLean offer a model that encompasses the dimensions for IS success measurement after thoroughly reviewing various IS success measures (17). In this study, we proposed a modified DeLone and McLean (D&M) model consisting of five main components (information quality, system quality, service quality, system use, and user satisfaction) with the addition of three new factors (computer literacy, system training, and attitude), which have been reported as significant factors for users' system use and satisfaction with DHIS2 (10-12, 15, 16). The following provides details of the adapted predictors and our hypotheses (Figure 1).
System quality
System quality examines whether a system has the user-required capability to support activity at work, and researchers have found that the most popular indicator of system quality is ease of use (18). System quality has been identified as a primary determinant of IS success in various studies (17)(18)(19). In this context, the following hypotheses are investigated in this study:

H1: System quality will have a positive effect on DHIS-2 use.
H2: System quality will have a positive effect on user satisfaction.

Information quality

Information quality concerns are associated with IS output metrics. The majority of proven information quality measures include perceived usefulness, accuracy, format, and timeliness (17). Information quality has been identified as a determinant factor for IS use and user satisfaction (17,18,20). Accordingly, the following hypotheses are investigated in this study:

H3: Information quality will have a positive effect on DHIS-2 use.
H4: Information quality will have a positive effect on user satisfaction.
Service quality
Service quality takes into account the external and internal user support offered, as well as the additional infrastructure that supports the proper adoption of DHIS-2 (18). Studies done in America (21), Brazil (22), Denmark (23), Nigeria (24), and Ethiopia (15) showed that service quality influences IS use and user satisfaction. Thus, this study tests the following hypotheses:

H5: Service quality will have a positive effect on DHIS-2 use.
H6: Service quality will have a positive effect on user satisfaction.
Computer literacy
Computer literacy is the term used to describe the knowledge and abilities that allow individuals to use computers efficiently for a certain purpose (18). Studies conducted in the UAE (25), Saudi Arabia (26), and Ethiopia (18) revealed that computer literacy is a determinant factor for IS success. As a result, this study tests the following hypotheses:

H7: Computer literacy will have a positive effect on DHIS-2 use.
H8: Computer literacy will have a positive effect on user satisfaction.

Attitude

Attitude reflects how individuals' thoughts toward an information system affect their feelings and behavior, and studies have revealed that attitude toward the system influences information system success (16). This study tests the following hypotheses:

H9: Attitude will have a positive effect on DHIS-2 use.
H10: Attitude will have a positive effect on user satisfaction.
System training
System training refers specifically to training initiatives designed to teach users how to use health information systems (HMIS, EMRs, DHIS-2) (27). Studies have revealed that system training influences IS success (15,27). Hence, this study tests the following hypotheses:

H11: System training will have a positive effect on DHIS-2 use.
H12: System training will have a positive effect on user satisfaction.
System use
System use is the actual use of the district health information system, and studies have revealed that system use influences user satisfaction (28,29). In this context, the study tests the following hypotheses:

H13: System use will have a positive effect on user satisfaction.
H14: System use mediates the relationship between information quality and DHIS-2 user satisfaction.
H15: System use mediates the relationship between system quality and DHIS-2 user satisfaction.
H16: System use mediates the relationship between service quality and DHIS-2 user satisfaction.
H17: System use mediates the relationship between user attitude and DHIS-2 user satisfaction.
H18: System use mediates the relationship between computer literacy and DHIS-2 user satisfaction.
H19: System use mediates the relationship between system training and DHIS-2 user satisfaction.

Figure 1. Modified information system success model.

The determining elements that contribute to the success of an information system in such settings can differ from those in developed nations. Therefore, to understand the crucial success and failure criteria, rigorous assessment studies on various health information system implementation projects in such settings are required.
The study has implications for practice, policy, and future research. The beneficiaries of this study are health professionals, healthcare organizations, and patients. Ethiopia has the potential to use the district health information system, which is a building block of the health system; thus, the performance level of the system needs to be assessed by identifying the satisfaction of end users. According to our search of the literature, little research has been done on healthcare professionals' satisfaction with using district health information systems in a resource-limited setting using the information system success model. As a result, the purpose and specific aim of this study was to introduce a modified theoretical model constructed on the basis of the information technology success model (D&M model) and to empirically test this modified information system success model to determine the key factors influencing the satisfaction of healthcare providers with using DHIS2, as implemented in southwest Ethiopia.
Study setting and period
This study was carried out in public facilities in the Ilu Abba Bor and Buno Bedelle Zones, Oromia Regional State, southwest Ethiopia. Ilu Abba Bor Zone and Buno Bedelle Zone are two of the 20 zones of the Oromia regional state, situated in the southwest of the region and located about 600 km and 483 km, respectively, from the center of the region. In the two zones, there are six public hospitals, namely: Bedele Hospital, Darimu Hospital, Dembi Hospital, Metu Karl Hospital, Dedhesa Hospital, and Chora Hospital. The study was conducted from November to December 2022.
Study design
An institution-based cross-sectional study was carried out among healthcare professionals who were working in public hospitals.
The population of the study

All health professionals who were working in Ilu AbaBor and Bunno Bedelle zone public hospitals were considered the source population, whereas all health professionals who were working with the system at Ilu AbaBor and Bunno Bedelle zone public hospitals during the study period were considered the study population. Healthcare professionals who worked with the system but were not available during the study period, or who had less than six months of work experience, were not included in the study.
Sample size determination and sampling procedure
Study participants were included from six hospitals in southwest Ethiopia. The data were collected by approaching each study participant. Selected hospitals within the research areas were contacted, and the sample size was calculated based on the study participants at each hospital. Hence, all health professionals who handle data, generate data, use generated data for their decision-making, or serve as the focal person within their hospitals were included; the users of DHIS-2 in southwest Ethiopia were few in number (n = 421). Thus, the study participants were sampled using a census in the selected hospitals.
Operational definitions
Health professionals: Users who serve as the focal person within their department to handle data, generate data, and use generated data for their decision-making, including physicians (doctors and health officers), nurses (clinical nurses, midwives, optometrists, physiotherapists, and anesthesiologists), laboratory and pharmacy professionals, radiologists, and HMIS staff (health data entry and management secretaries, and information system officers) (15,30).
User satisfaction: Satisfaction was assessed using a 5-point Likert scale with a 5-item questionnaire; participants with a score equal to or above the median were categorized as satisfied, and those with scores below the median were categorized as dissatisfied (15,31). The second section of the questionnaire was based on the variables of the information system success evaluation model developed by DeLone and McLean (D&M) (15,17), a validated and widely used information system success evaluation approach in the informatics field (15). System quality (7 items), information quality (10 items), service quality (9 items), system use (10 items), user satisfaction (5 items), and net benefit are the fundamental dimensions in this paradigm (7, 9-12, 17, 32, 33). To evaluate user satisfaction, we selected five constructs from the D&M model while omitting net benefit. User background factors, namely system training (4 items) and computer literacy (4 items), were included as determining factors to be examined instead of net benefit, because numerous researchers have identified them as determinant elements, particularly in low-resource settings (15,33). In addition, we extended the model with attitude items that influence user satisfaction with using DHIS2 (9,34).
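The median-split scoring described above is easy to reproduce. The sketch below uses synthetic 5-point Likert responses and hypothetical item names (sat_1 … sat_5), not the study's actual data dictionary.

```python
# Sketch of the satisfaction classification: sum five Likert items,
# then split at the median of the total score.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
items = [f"sat_{i}" for i in range(1, 6)]  # hypothetical item names
df = pd.DataFrame(rng.integers(1, 6, size=(391, 5)), columns=items)

df["sat_score"] = df[items].sum(axis=1)
df["satisfied"] = (df["sat_score"] >= df["sat_score"].median()).astype(int)
print(df["satisfied"].mean())  # proportion classified as satisfied
```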
Data collection tool, data quality, and procedures
A pretest with 10% of the total sample size was undertaken outside the study area, in Jimma Hospital, before the actual data collection to assess the validity and reliability of the data collection instrument. The required adjustments were then made.
The internal consistency of each component of the data collection tool was assessed using Cronbach's alpha: system quality = 0.83, information quality = 0.81, service quality = 0.79, system use = 0.91, user satisfaction = 0.88, attitude = 0.84, system training = 0.85, and computer literacy = 0.81. Finally, two days of training for the actual data collection were given to three onsite supervisors, three health informatics professionals, and two nurses who served as data collectors. Data were gathered from eligible study participants using the validated data collection instrument, and the consistency and completeness of the data were reviewed daily by the supervisors and investigators.
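Cronbach's alpha for a scale is α = k/(k−1) × (1 − Σ item variances / variance of total score). The sketch below computes it on synthetic correlated items (not study data) for a 7-item construct such as system quality.

```python
# Illustrative Cronbach's alpha computation on synthetic item data.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances)/var(total score))."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
shared = rng.normal(size=(391, 1))                 # common factor
items = pd.DataFrame(shared + 0.8 * rng.normal(size=(391, 7)))
print(round(cronbach_alpha(items), 2))
```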
Data processing and analysis
Data were entered using EpiData version 4.0.2, and descriptive analysis was done using STATA version 14. For descriptive statistics, frequencies and percentages were determined and presented using graphs and tables. Moreover, model predictors were analyzed using structural equation modeling (SEM) software, analysis of moment structures (AMOS) version 26. Standardized path coefficients were used to identify the associations between predictors and dependent variables.
This study also used common model-fit measures to assess the model's overall goodness of fit, including the chi-square ratio (<3), the goodness-of-fit index (GFI > 0.9), and the adjusted goodness-of-fit index (AGFI > 0.8). To determine statistically significant predictors, the critical ratio and standardized path coefficients with P-value < 0.05 were employed to assess the relationship between the predictors and the dependent variable. The influence and level of significance of each of the six possible mediation paths in the model were explored: partial mediation occurs when a construct's direct, indirect, and total effects are all statistically significant, whereas full mediation occurs when the indirect and total effects are statistically significant but the direct effect is not. Moreover, to confirm a mediation effect, we looked for a substantial indirect effect with a P-value < 0.05.
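The analysis was done in AMOS, but the same path model and fit indices can be sketched in open tooling. The example below assumes the Python semopy package (Model, fit, inspect, calc_stats) and uses a reduced, observed-variable version of the structural model on synthetic data; the real study used latent constructs and more predictors, so this is an illustration of the workflow, not a reproduction.

```python
# Hedged sketch of the structural model and fit statistics with semopy.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(3)
n = 391
df = pd.DataFrame({
    "system_quality": rng.normal(size=n),
    "service_quality": rng.normal(size=n),
    "computer_literacy": rng.normal(size=n),
})
df["system_use"] = (0.2 * df.system_quality + 0.4 * df.service_quality
                    + 0.6 * df.computer_literacy + rng.normal(size=n))
df["satisfaction"] = (0.5 * df.system_use + 0.4 * df.system_quality
                      + rng.normal(size=n))

desc = """
system_use ~ system_quality + service_quality + computer_literacy
satisfaction ~ system_use + system_quality + service_quality + computer_literacy
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect(std_est=True))   # standardized path coefficients
print(semopy.calc_stats(model).T)    # chi2, GFI, AGFI, CFI, RMSEA, ...
```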
Sociodemographic characteristics
Out of 421 invited participants, 391 (92.87% response rate) completed the questionnaire. Of all participants, 218 (55.8%) were male. Around three-fifths, 231 (59.1%), of the participants were aged 20 to 30 years, and the mean age was 31 ± 5.9 years. More than half of the participants, 217 (55.5%), had between 3 and 5 years of work experience. Around three-fifths, 250 (63.9%), of the participants had a salary of between 5,000 and 10,000 ETB (Table 1).
Factors associated with DHIS2 user satisfaction

SEM analysis found that information quality, system quality, service quality, computer literacy, system training, and attitude explained 67% of the variance in system use and 82.0% of the variance in the endogenous variable (DHIS2 user satisfaction), with R² values of 0.67 and 0.82, respectively. This showed that the proposed model had strong predictive power. Further, the results offer significant insights into healthcare providers' satisfaction with using DHIS2 for improving the quality of the healthcare system in a resource-limited setting. The model's standardized estimates of the predictors are shown below (Figure 3). The SEM analysis findings presented in Figure 3 showed that system quality had a positive direct effect on the respondents' system use (β = 0.18, P-value < 0.001) and satisfaction with using DHIS2 (β = 0.44, P-value < 0.001). This indicates that a one-standard-deviation change in system quality increases system use by 0.18 standard deviations and the satisfaction level of healthcare professionals with using DHIS2 by 0.44 standard deviations, keeping the other variables constant. Service quality also had a positive direct effect on the respondents' system use (β = 0.37, P-value < 0.01) and satisfaction with using DHIS2 (β = 0.36, P-value < 0.01). This indicates that a one-standard-deviation change in service quality increases system use by 0.37 standard deviations and the satisfaction level of healthcare professionals with using DHIS2 by 0.36 standard deviations, keeping the other variables constant.
Similarly, system use had a positive direct effect on the respondents' satisfaction with using DHIS2 (β = 0.53, P-value < 0.05): with a one-standard-deviation increase in system use, the satisfaction level of healthcare professionals with using DHIS2 rises by 0.53 standard deviations, keeping the other variables constant. Moreover, the findings showed that computer literacy had a positive direct effect on the respondents' system use (β = 0.63, P-value < 0.05) and satisfaction with using DHIS2 (β = 0.51, P-value < 0.01): a one-standard-deviation change in computer literacy increases system use by 0.63 standard deviations and the satisfaction level of healthcare professionals with using DHIS2 by 0.51 standard deviations, keeping the other variables constant.
However, the study revealed that system training did not have a direct effect on the respondents' system use (β = −0.05, P-value = 0.316) or satisfaction with using DHIS2 (β = 0.03, P-value = 0.576), and attitude also did not have a direct effect on the respondents' system use (β = −0.02, P-value = 0.620) or satisfaction with using DHIS2 (β = −0.02, P-value = 0.690). In addition, information quality did not have a significant effect on system use (β = 0.03, P-value = 0.569) among healthcare professionals in Ethiopia.

[Figure: Proportion of satisfaction towards using DHIS2 among healthcare professionals, southwest Ethiopia, 2022.]

In summary, system use had the most substantial effect on the respondents' satisfaction with using DHIS2, and computer literacy was the most important factor for using DHIS2, with an effect larger than those of the other predictors (Table 2).
Mediation effect
In this study, the mediation analysis revealed that system use partially mediates the relationship between system quality and user satisfaction with DHIS2 (β = 0.304, P-value = 0.035); that is, system use has an indirect but significant effect on the relationship between system quality and DHIS2 user satisfaction. Furthermore, system use fully mediates the relationship between computer literacy and DHIS2 user satisfaction (β = 0.413, P-value = 0.011); that is, system use has an indirect but significant effect on the relationship between computer literacy and DHIS2 user satisfaction. However, system use did not mediate the relationships between information quality, service quality, attitude, or system training and DHIS2 user satisfaction in Ethiopia (Table 3).
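One common way to test such an indirect effect is a percentile bootstrap of the a·b product. The sketch below illustrates this for one path (computer literacy → system use → satisfaction) on synthetic stand-in data, not the study's dataset; an interval excluding zero would indicate a significant indirect effect.

```python
# Hedged sketch of a percentile-bootstrap test for one indirect path.
import numpy as np

rng = np.random.default_rng(0)
n = 391
literacy = rng.normal(size=n)                       # synthetic predictor
use = 0.6 * literacy + rng.normal(size=n)           # synthetic mediator
satisfaction = 0.5 * use + 0.3 * literacy + rng.normal(size=n)

indirect = []
for _ in range(5000):
    i = rng.integers(0, n, n)                       # resample rows
    # path a: literacy -> use (simple regression slope)
    a = np.polyfit(literacy[i], use[i], 1)[0]
    # path b: use -> satisfaction, adjusting for literacy
    X = np.column_stack([np.ones(n), literacy[i], use[i]])
    b = np.linalg.lstsq(X, satisfaction[i], rcond=None)[0][2]
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"bootstrap 95% CI for a*b: [{lo:.3f}, {hi:.3f}]")
```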
Discussion
The purpose of the study was to assess healthcare professionals' satisfaction with using DHIS2 and its determinant factors among healthcare professionals in southwest Ethiopia. The overall user satisfaction with using DHIS2 was 46.0% [95% CI: 41.2-50.9]. This finding is consistent with studies done in Malaysia (37), Oman (38), Saudi Arabia (39), and Kenya (40). However, it is lower than the finding of the study conducted in Tanzania (85%) (9). This difference may be because, in Tanzania, the system allowed users to locate all required registers, track patients, make transfers and referrals, and receive notifications of lab results; it was simple to learn, understand, and verify data entry and reports, and it was web-based and practical for day-to-day work. Training on the system was also good in Tanzania (9). On the other hand, our finding was higher than user satisfaction with using EMR systems in central Ethiopia (38.6%) (15). The discrepancy might be due to differences in system compatibility, user-friendliness, or ease of use between the systems; in addition, the study periods and sample sizes differed.
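As a quick arithmetic check, the reported interval is reproduced closely by a normal-approximation (Wald) confidence interval for a proportion; whether the authors used exactly this method is an assumption here.

```python
# Wald CI check: p = 0.46, n = 391 respondents.
import math

p, n, z = 0.46, 391, 1.96
half = z * math.sqrt(p * (1 - p) / n)
print(f"{100 * (p - half):.1f}% - {100 * (p + half):.1f}%")
# ~41.1% - 50.9%, close to the reported 41.2-50.9
```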
The finding is also lower than that of a pilot study on district health information system challenges and lessons learned in central Ethiopia (48.6%) (7). Possible reasons include the study period, the sample size, and challenges in southwest Ethiopia such as high human resource turnover, inadequate access to DHIS-skilled personnel, and inadequate knowledge of health information (1).
The SEM result showed that system quality, service quality, and computer literacy had a direct positive effect on system use and user satisfaction. In addition, system use and information quality had a direct positive effect on healthcare professionals' satisfaction with using DHIS2. Hence, H1, H2, H4, H5, H6, H7, H8, H13, H15, and H18 were supported in this study.
System quality had a significant direct and indirect effect on the use of the district health information system and on healthcare professionals' satisfaction with using it. This suggests that increased system quality should result in greater user satisfaction and beneficial effects on personal productivity. This finding is in line with studies on the effect of software quality in Greece (41), system use and user satisfaction in the adoption of electronic medical records systems in Tanzania (9), information systems success in South Africa (17), and electronic medical record system use and user satisfaction at five low-resource-setting hospitals in Ethiopia (15). This indicates that users were impressed by the system's various features, such as the ability to access information from any location, follow up on clients, communicate with one another within the system, receive notifications, and store all the registers and client data they need in a centralized location.
Service quality had a direct significant effect on the use of the district health information system and on healthcare professionals' satisfaction with using it. Users will be happier and more likely to use the system if they are more satisfied with the level of DHIS2 support, such as when they receive helpful internal and external assistance. This finding is in line with studies on user satisfaction with a clinical information system in America (21), hospital information system satisfaction in Brazil (22) and Nigeria (24), and electronic medical record system use and user satisfaction at five low-resource-setting hospitals in Ethiopia (15). Accordingly, the IT department's collaboration with the system suppliers in delivering prompt upgrades to DHIS2 may have increased user satisfaction with service quality. More computers must therefore be placed within the wards so that clinicians may enter patient data without having to wait for a free computer. Improving service quality also requires ensuring a reliable power supply and providing quick system assistance. Given that donor funding supports the majority of installations in such contexts, it is critical for these organizations to offer enough support to improve service quality and, in turn, user satisfaction and DHIS2 use.
Computer literacy had a direct and indirect significant effect on the use of the district health information system and on healthcare professionals' satisfaction with using it. This is a clear indication that health workers need a basic understanding of computers to be more motivated to use the district health information system. This finding is consistent with studies on physician user satisfaction with an electronic medical records system in primary healthcare centers in the UAE (25), on the association between computer literacy and training and clinical productivity and user satisfaction in using the electronic medical record in Saudi Arabia (26), and on modeling antecedents of electronic medical record system implementation success in low-resource-setting hospitals in Ethiopia (18). This shows that, to increase the success of the district health information system in Ethiopia, it is advisable to provide additional basic computer courses during or before system implementation, in addition to specific user training (18).
Information quality had a direct positive effect on healthcare professionals' satisfaction with using DHIS2, demonstrating that higher information quality results in higher user satisfaction and district health information system utilization. The result is consistent with a systematic review comparing user groups' perspectives on barriers and facilitators to implementing electronic health records in Canada (20), and with studies on hospital information system satisfaction in Brazil (22), assessing eGovernment systems' success in China (29), and modeling antecedents of electronic medical record system implementation success in low-resource-setting hospitals in Ethiopia (18). Decision-makers should therefore emphasize the following factors when implementing the district health information system: making enough information available, ensuring good accuracy and timely updating of information on the system, and ensuring that reports are in a format and layout that health professionals regularly use and understand (17,18).
System use also had a direct positive effect on healthcare professionals' satisfaction with using DHIS2, indicating that users will continue to use the system, thereby increasing their satisfaction with it. This result is consistent with a study on system use and user satisfaction in the adoption of electronic medical records systems in Tanzania (9) and a study on the measurement and dimensionality of mobile learning systems' success in Taiwan (42). A possible explanation is that users believed that, with all registers contained within one system, they would be able to track their clients within the system with the aid of message notifications in the event of transfers, and would also be able to readily identify clients who had not been evaluated. Additionally, they believed that, with the notification capabilities, regional hospitals could easily and quickly provide them with testing results, as opposed to the past, when they had to wait for a phone call or the post office (9).
Implications of the study
Based on the results, this study provides theoretical and practical implications to facilitate the successful implementation of DHIS2 in Ethiopia. Theoretically, this study evaluated the effectiveness of DHIS2 in a limited-resource environment; it is the first full validation of the D&M model in Ethiopia and will raise awareness among system users about how to use the system effectively and increase their satisfaction. In addition, the study will serve as a baseline for upcoming research and as insight for policymakers to improve the success of the district health information system in Ethiopia. Practically, managers should focus on significant factors such as system quality, service quality, information quality, and computer literacy to ensure system use and user satisfaction with DHIS2 in Ethiopia.
Computer literacy was the most powerful and significant factor in improving system use as well as healthcare professionals' satisfaction with using DHIS2. Accordingly, managers should provide additional basic computer courses to improve the system's implementation, in addition to the specific user training necessary for the success of DHIS2 in southwest Ethiopia.
Limitations of the study and future research
Although we think our study will significantly aid future DHIS2 utilization in limited-resource settings, some limitations must be mentioned. This study did not include public health centers or private hospitals, and the findings were not supported by a qualitative study. Moreover, only self-reported questionnaires were used to collect the data, so there may be some response bias. Future studies should test the D&M model by adding net benefit, and the proposed model needs to be regularly tested, verified, and expanded in a variety of user and implementation contexts.
Conclusion
The overall user satisfaction with using the district health information system in Ethiopia was low. The modified D&M model was found to be well suited to evaluating the effectiveness of DHIS-2 in southwest Ethiopia. The SEM results showed that system quality, service quality, and computer literacy had a direct positive effect on system use and user satisfaction. In addition, system use and information quality had a direct positive effect on healthcare professionals' satisfaction with using DHIS2. The relationship between system quality and user satisfaction with DHIS2 is mediated in part by system use. In addition, system use fully mediates the relationship between computer literacy and user satisfaction with DHIS2. These findings help implementers understand key areas for DHIS2 users. The most important factor for enhancing system use and healthcare professionals' satisfaction with DHIS2 was computer literacy. Accordingly, in addition to the specific user training required for the success of DHIS2 in Ethiopia, managers should offer additional basic computer courses to system users.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the ethical review committees of Mettu University with ethical reference number: RPC/288/2015. The participants provided their written informed consent to participate in this study.
Author contributions
AW was responsible for significant contributions to the conceptualization, study selection, data curation, formal analysis, funding acquisition, investigation, methodology, and original draft preparation. Project administration, resources, software, supervision, validation, visualization, and reviewing were handled by TF, SW, and AD. SW and AW wrote the final draft of the manuscript, and the final draft of the work was read, edited, and approved by all authors. All authors contributed to the article and approved the submitted version.
"year": 2023,
"sha1": "50a130effeb698346f2dd231ff3b6c00d3c23000",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fdgth.2023.1140933/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09014f03dbee28c39ee59860c7b33c64e85653de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Hepatopulmonary Syndrome in Poorly Compensated Postnecrotic Liver Cirrhosis by Hepatitis B Virus in Korea
Background: Hepatopulmonary syndrome (HPS) refers to the association of hypoxemia, intrapulmonary shunting and chronic liver disease. However, there are no clear data about the prevalence of HPS in postnecrotic liver cirrhosis by hepatitis B virus (HBV), the most common cause of liver disease in Korea. The aim of this study was to investigate the prevalence of HPS in poorly compensated postnecrotic liver cirrhosis by HBV, and the correlation of the hepatopulmonary syndrome with clinical aspects of postnecrotic liver cirrhosis by HBV. Methods: Thirty-five patients underwent pulmonary function testing, arterial blood gas analysis and contrast-enhanced echocardiography. All patients were diagnosed as having HBV-induced Child class C liver cirrhosis and had no evidence of intrinsic cardiopulmonary disease. Results: Intrapulmonary shunt was detected in 6/35 (17.1%) by contrast-enhanced echocardiography. Two of the six patients with intrapulmonary shunts had significant hypoxemia (PaO2 < 70 mmHg) and four showed an increased alveolar-arterial oxygen gradient over 20 mmHg. Only cyanosis could reliably distinguish between shunt-positive and shunt-negative patients. Conclusions: The prevalence of intrapulmonary shunt in poorly compensated postnecrotic liver cirrhosis by HBV was 17.1%, and the frequency of hepatopulmonary syndrome was relatively low (5.7%). 'Subclinical' hepatopulmonary syndrome (echocardiographically positive intrapulmonary shunt but without profound hypoxemia) exists in 11.4% of cases with poorly compensated postnecrotic liver cirrhosis by HBV. Cyanosis is the only reliable clinical indicator of HPS in HBV-induced poorly compensated liver cirrhosis. Further studies are required to determine whether the prevalence and clinical manifestations of HPS vary with etiology or with geographical and racial differences.
INTRODUCTION
The hepatopulmonary syndrome (HPS) is characterized clinically by the triad of pulmonary vascular dilation, systemic hypoxemia and the setting of advanced liver disease 1-7). In 1956, a case report by Rydell and Hoffbauer 8) concerning a 17-year-old man with hypoxemia and juvenile cirrhosis provided the first clinical and postmortem documentation of what was later termed HPS by Knudsen and Kennedy in 1979 7). The postmortem study demonstrated both precapillary dilations and direct arteriovenous communications after vascular injections with a plastic vinyl acetate solution 9,10). The definition of hypoxemia may vary, but several studies consider PaO2 less than 70 mmHg in patients with liver disease as abnormal. An increased alveolar-arterial oxygen gradient (A-aDO2 greater than 20 mmHg) represents a more sensitive but less practical measure of abnormal oxygenation. Patients with HPS frequently demonstrate oxygenation that worsens as one moves from the supine to the standing position (orthodeoxia), breathing either room air or 100% inspired oxygen. A poor correlation exists between PaO2 determined while breathing room air and PaO2 determined while breathing 100% oxygen; the latter may be of additional prognostic significance. Research studies using the multiple inert gas elimination technique (MIGET) have shown the hypoxemia of HPS to be a result of shunt, diffusion-perfusion defects and excess perfusion for a given ventilation (low V/Q) 11).
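The A-aDO2 criterion can be made concrete with the standard alveolar gas equation, PAO2 = FiO2 × (Pb − PH2O) − PaCO2/R. The worked check below uses conventional sea-level assumptions (Pb = 760 mmHg, PH2O = 47 mmHg, R = 0.8); the patient values are hypothetical illustration numbers.

```python
# Worked check of the oxygenation criteria via the alveolar gas equation.
def a_a_gradient(pao2: float, paco2: float, fio2: float = 0.21) -> float:
    """A-aDO2 (mmHg) assuming Pb = 760, PH2O = 47, R = 0.8."""
    pAo2 = fio2 * (760.0 - 47.0) - paco2 / 0.8
    return pAo2 - pao2

# A hypothetical patient breathing room air, PaO2 = 68, PaCO2 = 36 mmHg:
print(f"{a_a_gradient(pao2=68.0, paco2=36.0):.1f} mmHg")
# ~36.7 mmHg, well above the 20 mmHg abnormality threshold
```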
Most studies about hepatopulmonary syndrome have focused on alcoholic liver disease 12-17). There are no clear data about the prevalence of HPS in postnecrotic liver cirrhosis by hepatitis B virus (HBV), and it is unclear how the presence of HPS relates to the clinical aspects of HBV-induced liver cirrhosis. Therefore, this study was performed to investigate the prevalence of HPS in poorly compensated postnecrotic liver cirrhosis by HBV and the correlation of HPS with clinical aspects of HBV-induced poorly compensated postnecrotic liver cirrhosis.
METHODS

Patients
Thirty-five cirrhotic patients were randomly recruited from both the Gastroenterology Ward and the Gastroenterology Out-patient Clinic of Seoul Municipal Boramae Hospital (Seoul National University Hospital Affiliated Hospital), Seoul, Korea. The inclusion criteria were the following: liver cirrhosis diagnosed by histologic findings (85%), or by conventional clinical (positive evidence of chronic liver disease stigmata and physical findings of liver cirrhosis), ultrasonographic (coarse liver surface and shrunken liver on ultrasonography) and biochemical criteria (abnormal blood liver function tests) together with clinical presentations of portal hypertension (15%). All patients were serum HBsAg positive with Child class C clinical findings, with absence of cardiac or pulmonary disease and absence of pulmonary vascular abnormalities not related to liver disease.
Patients were informed about the intended procedures and the aim of the study, and consent was obtained in every case, according to the guidelines of the 1975 Declaration of Helsinki. Transthoracic contrast echocardiography (TTCE) was performed in all cirrhotic patients.
Transthoracic contrast echocardiography (TTCE)
We used a standard echocardiograph (Acuson Computerized Sonography 128XP, USA) with a 3.5 MHz transthoracic probe. Studies were recorded on videotape for further analysis. For TTCE, a previously published method 2) was closely followed. A four-chamber apical image was obtained through a transthoracic approach. Then, 10 mL of saline with 0.5 mL of room air was injected through the intravenous line. Second and third injections followed. A positive result was defined when microbubbles were observed in the left atrium in one or more of the three injections. The grading method was validated by examination of its reproducibility. First, intrinsic reproducibility was examined by comparing the results obtained after the first and second injections of each substance. Second, intraobserver reproducibility was measured by assessing the concordance resulting from a two-step blind review of each recording performed by the same observer. Third, interobserver reproducibility was assessed by blind comparison of the results obtained by both observers in the interpretation of each recording. We did not compare two studies of the same patient performed at different times, for ethical reasons.
Arterial blood gas analysis (ABGA)
A sample of arterial blood was obtained from each patient at the time of the echocardiographic study by puncture of the radial artery of the left arm, following the standard technique, in the supine position and breathing room air. PaO2 and PaCO2 were determined by selective electrodes (IL 16/40 pH/blood gas analyzer, Instrumentation Laboratory SpA, Milano, Italy). PaO2 < 70 mmHg was considered as hypoxemia. PaCO2 < 35 mmHg was considered as hypocapnia.
Pulmonary function test
We used a standard pulmonary function test machine (Sensor Medic Model 2200, USA).
Diagnosis of HPS
Using saline-TTCE, we considered clinical HPS to be present in cases with PaO2 < 70 mmHg and microbubbles in the left cardiac chambers, and subclinical HPS in cases with A-aDO2 greater than 20 mmHg and microbubbles in the left cardiac chambers.
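For illustration, the diagnostic rule above can be expressed as a short decision procedure. The Python sketch below is ours, not part of the original protocol: the function names are hypothetical, and the A-aDO2 helper assumes the standard alveolar gas equation at room air (FiO2 = 0.21, barometric pressure 760 mmHg, water vapor pressure 47 mmHg, respiratory quotient 0.8).

```python
def a_a_do2(pao2_mmhg, paco2_mmhg, fio2=0.21, patm=760.0, ph2o=47.0, rq=0.8):
    """Alveolar-arterial oxygen gradient via the standard alveolar gas
    equation (an assumption here; the study derived A-aDO2 from the ABGA)."""
    alveolar_po2 = fio2 * (patm - ph2o) - paco2_mmhg / rq
    return alveolar_po2 - pao2_mmhg

def classify_hps(pao2_mmhg, paco2_mmhg, bubbles_in_left_atrium):
    """Apply the study's criteria: clinical HPS = positive saline-TTCE with
    PaO2 < 70 mmHg; subclinical HPS = positive TTCE with A-aDO2 > 20 mmHg
    but without profound hypoxemia."""
    if not bubbles_in_left_atrium:
        return "no intrapulmonary shunt"
    if pao2_mmhg < 70:
        return "clinical HPS"
    if a_a_do2(pao2_mmhg, paco2_mmhg) > 20:
        return "subclinical HPS"
    return "shunt without abnormal oxygenation"

# Example: PaO2 82 mmHg, PaCO2 32 mmHg, positive contrast echocardiography
print(classify_hps(82, 32, True))  # A-aDO2 ~ 27.7 mmHg -> "subclinical HPS"
```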
Statistical analysis
Estimation of the kappa index allowed determination of the reproducibility of the procedure and of intra- and inter-observer variability. Wilcoxon's test was chosen to compare the results of TTCE, and correlation between ordinal variables was determined by Spearman's test. Correlation between continuous and ordinal variables was determined by ANOVA and a post-hoc test (Bonferroni). The limit of significance was set at p < 0.05. For analysis purposes, we used SPSS software.
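As an aside, the kappa-based reproducibility checks described above can be computed as in the following sketch (Python with scikit-learn; the paired observer readings are hypothetical):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical positive (1) / negative (0) TTCE readings by two blinded observers
observer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
observer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 0, 0]

# Cohen's kappa: chance-corrected inter-observer agreement
kappa = cohen_kappa_score(observer_1, observer_2)
print(f"Inter-observer kappa: {kappa:.2f}")
```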
RESULTS
Thirty-five patients were studied (13 women and 22 men; mean age, 53.1 ± 14.5 years). All patients were classified as Child's C. (Child's classification included assessment of total bilirubin, serum albumin, clinical ascites, nutrition and existence of encephalopathy; Child's A classification represented minimal disease and Child's C classification represented the most severe liver disease.) The etiology of liver disease in all patients was postnecrotic liver cirrhosis by hepatitis B virus (Table 1). Splenomegaly and ascites were seen in all 35 patients (100%), and spider angiomas were seen in 21/35 patients (60%). Cyanosis was observed in 2 patients (6%). Esophagogastroduodenoscopy was performed before the echocardiographic studies. Thirty-two patients had esophageal varices, nine of which (25.7%) were grade F1, twenty of which (57.1%) were grade F2 and three of which (8.6%) were grade F3. The Japanese portal hypertension study group classification was used for esophageal varix grading (Table 2).
Table 1. Symptoms and Signs of Study Population
2) Pulmonary function test (Table 3) and arterial blood gas analysis. (A-aDO2, alveolar-arterial oxygen difference (gradient))

3) Clinical and laboratory findings of liver cirrhosis patients with positive shunt (Table 5): Intrapulmonary shunt was detected by TTCE in 6/35 patients (17.1%), and these cases showed significantly lower PaO2 than negative intrapulmonary shunt cases (PaO2: 72. …)
4) Comparison of clinical and laboratory findings between shunt-negative and shunt-positive patients (Table 6 and Table 7): Except for cyanosis, no clinical or laboratory finding, including spider angioma, esophageal varix grade, biochemical indicators of hepatic function and parameters of the pulmonary function test (including diffusing capacity), distinguished between positive and negative intrapulmonary shunt patients.
DISCUSSION
The triad of liver disease, arterial hypoxemia and intrapulmonary vascular dilatation has defined an entity commonly referred to as the hepatopulmonary syndrome 1,3,10-11,19,26). In the original description by Rydell and Hoffbauer 4), lung necropsy specimens studied using plastic vascular casts contained both precapillary/capillary dilatations and distinct anatomic arteriovenous communications, which caused severe hypoxemia in the setting of chronic liver disease (juvenile cirrhosis) 14,16-18). Hepatopulmonary syndrome is becoming increasingly recognized as one of the most serious complications of chronic liver disease. Different workers have reported incidences of positive air-contrast echocardiography varying from 5 to 47%, and a prevalence of HPS between 5 and 29% 3,14). Our present study, although small, suggested a relatively lower occurrence of this condition (17.1% positive intrapulmonary shunt, 5.7% hepatopulmonary syndrome) in the Korean population, among whom hepatitis B virus is the most common cause of cirrhosis (100% in the present study), compared with alcohol and hepatitis C virus in Western countries. The results from our study were similar to those from an Indian study, in which the prevalence of intrapulmonary shunting in hepatitis B virus-induced liver cirrhosis was 8.9% but HPS with hypoxemia was only 6.7% 2). Further studies are required to determine if the prevalence of HPS varies with the etiology of liver disease or with geographical and racial differences.
Advanced hepatic dysfunction, with associated hyperdynamic circulation, has been suggested as being the most probable setting for the development of HPS. However, the condition has also been found in cases of congenital hepatic fibrosis and portal vein thrombosis. This has given rise to the question of whether portal hypertension is a contributing factor 1,3). Moreover, if hepatic dysfunction were the only prerequisite, one would expect HPS to occur predominantly in Child's class C cirrhosis. However, Abrams et al. found that 15 of 25 cases of HPS (60%) had Child's grade A, and only two had grade C 3). In that study, most of the HPS cases were alcoholic liver cirrhosis and hepatitis C virus (HCV)-induced liver cirrhosis. A common bile duct ligation rat model of hepatopulmonary syndrome has been developed, and increased pulmonary endothelial nitric oxide synthase activities and circulating endothelin-1 levels seem to correlate with vascular dilatation and oxygen abnormalities 19,26).
In the present study, all cases were HBV-induced Child's grade C liver cirrhosis, and the prevalence of HPS in patients with poorly compensated (Child C) postnecrotic liver cirrhosis by HBV was 5.7%. Two of 35 cases of cirrhosis (5.7%) had positive contrast echocardiography with hypoxemia (PO2 < 70 mmHg), and four of 35 cases of cirrhosis (11.4%) were subclinical cases (positive contrast echocardiography without hypoxemia). Our results suggested that subclinical hepatopulmonary syndrome exists and that there may be some factors, still unknown, that determine definite hepatopulmonary syndrome (hypoxemia with positive contrast echocardiography) versus subclinical hepatopulmonary syndrome. This study suggested that there is a wide clinical spectrum of hepatopulmonary syndrome. There was one report of an association between esophageal varices and hepatopulmonary syndrome in liver cirrhosis 27). However, in this study, there was no association between the grade of esophageal varices and hepatopulmonary syndrome. Cyanosis was the only reliable clinical indicator, and there was no clear relationship between the presence of spider angioma and hepatopulmonary syndrome. Further studies are required to determine if the prevalence and clinical manifestations of HPS vary with etiology or with geographical and racial differences. | 2016-05-12T22:15:10.714Z | 2001-06-01T00:00:00.000 | {
"year": 2001,
"sha1": "d8adb9e9eb9b8f570b55fb86d6ea8e58ad3c7300",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3904/kjim.2001.16.2.56",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8adb9e9eb9b8f570b55fb86d6ea8e58ad3c7300",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234684039 | pes2o/s2orc | v3-fos-license | The albumin-to-alkaline phosphatase ratio as an independent predictor of future non-alcoholic fatty liver disease in a 5-year longitudinal cohort study of a non-obese Chinese population
Background The albumin-to-alkaline phosphatase ratio (AAPR) is a newly developed index of liver function, but its association with non-alcoholic fatty liver disease (NAFLD) has not been established. The aim of this study was to investigate the association between the AAPR and NAFLD in a non-obese Chinese population. Methods The study included 10,749 non-obese subjects without NAFLD at baseline and divided them into quintiles according to the AAPR. A Cox multiple regression model was used to examine the association between the AAPR and its quintiles and the incidence of NAFLD. Results The average age of the study population was 43.65 ± 15.15 years. During the 5-year follow-up, 1860 non-obese subjects had NAFLD events. In the Cox multiple regression model, after adjusting for important risk factors, the AAPR and NAFLD risk were independently correlated, and with a gradual increase in the AAPR, the NAFLD risk decreased gradually (HR: 0.61, 95% CI: 0.47, 0.81; P-trend < 0.0001). Additionally, there were significant interactions between the AAPR and BMI, blood pressure and lipids (P-interaction < 0.05). Stratified analysis showed that the risk of AAPR-related NAFLD decreased in people with normal blood pressure and lipid levels, while the risk of AAPR-related NAFLD was abnormally increased in people who were underweight. Conclusions This longitudinal cohort study provides the first evidence that the AAPR is an independent predictor of future NAFLD events in non-obese people. For non-obese people with a low AAPR, especially those with BMI < 18.5 kg/m2, more attention should be given to the management of risk factors for NAFLD to prevent future NAFLD. Supplementary Information The online version contains supplementary material available at 10.1186/s12944-021-01479-9.
Background
Non-alcoholic fatty liver disease (NAFLD) is a widespread chronic liver disease occurring without a history of heavy alcohol consumption. It covers a disease spectrum ranging from simple hepatic steatosis to more severe non-alcoholic steatohepatitis and liver cirrhosis [1,2]. However, in recent years, increasing evidence has shown that the disease burden of NAFLD comes not only from liver disease but also from NAFLD-related cardiovascular disease, metabolic disease and kidney disease [2][3][4][5]. NAFLD is a multi-system disease that affects multiple organs of the body and metabolic regulatory pathways [6,7].
NAFLD is generally thought to be caused by overweight and obesity [2,8], and in the past, related studies were mainly conducted in obese people. However, in recent years, an increasing number of studies have focused on non-obese NAFLD [9][10][11]. In a recent meta-analysis of more than 2 million people in 24 countries, non-obese people accounted for 40.8% of NAFLD patients globally [12], and in Asia, this situation seems to be more common [13,14]. Additionally, a growing body of research suggests that people with non-obese NAFLD appear to be more prone to metabolic syndrome and progress to severe liver disease at a significantly faster rate [15,16]. Therefore, it may be important to identify non-obese people at risk of NAFLD as early as possible and to manage their metabolic status.
Monitoring liver function markers, blood glucose and lipid metabolic markers and abdominal ultrasound are the most commonly used methods to assess the risk of NAFLD [17]. Albumin (ALB) and alkaline phosphatase (ALP) are the main indexes often used to evaluate liver function in clinical practice; the level of ALB reflects the protein synthesis ability of the liver, while ALP is a hydrolytic enzyme widely distributed in various tissues of the human body and mainly concentrated in the liver. When liver injury occurs, the level of ALP in the circulation increases [18]. Recently, in a study of liver tumour disease comparing the effects of different liver function measures on long-term prognosis, it was found that the albumin-to-alkaline phosphatase ratio (AAPR) showed the highest C-index compared to other liver function measures [19]. This result has also been verified in some similar studies [20,21]. On the other hand, in the early stage of NAFLD, ALB and routine liver enzymes are usually normal [22], which makes it difficult for clinicians to identify groups at high risk of NAFLD based only on liver function tests. Therefore, the purpose of this study was to identify a population at high risk of NAFLD as early as possible with the help of some commonly used clinical liver function markers. At present, the link between the AAPR and NAFLD has not been established. Therefore, based on a large longitudinal non-obese cohort, this study asked whether the AAPR can be used to predict future NAFLD events in the non-obese Chinese population.
Study design
The longitudinal cohort data of this study come from the Dryad database, which is open and free, allowing researchers to use database services freely according to the purpose of the study. According to the terms of service of the database, the data sources were quoted and marked in this study [23]. The data package provides data on 16,173 non-obese subjects without NAFLD, liver disease, diabetes, a history of heavy drinking or baseline medication use, recruited by Wenzhou People's Hospital from Jan 2010 to Dec 2014. The study scheme was approved by the institutional review board of Wenzhou People's Hospital, informed consent was obtained from the subjects, and a 5-year follow-up was completed. The detailed study design has been described in previous studies [24]. In this study, a secondary analysis was carried out based on the NAFLD longitudinal cohort, with the following key design elements: exposure: AAPR; outcome: new-onset NAFLD events; subjects: 10,749 non-obese subjects analysed after excluding subjects with missing ALP and ALB values.
Data collection
As mentioned earlier [24], baseline clinical data such as age, sex, height, weight, and blood pressure were recorded using a uniform health questionnaire; blood pressure was measured in a sitting position in a quiet environment using a standard electronic sphygmomanometer, and systolic and diastolic blood pressures (S/DBP) were recorded. Body mass index (BMI) was calculated as weight divided by height squared. Biochemical indexes were measured with automatic analytical instruments (Abbott AxSYM) using standard methods. The biochemical parameters included in this study were as follows: ALP, ALB, blood urea nitrogen (BUN), aspartate aminotransferase (AST), creatinine (Cr), triglyceride (TG), uric acid (UA), total protein (TP), total cholesterol (TC), direct bilirubin (DBIL), fasting plasma glucose (FPG), gamma glutamyl transferase (GGT), globulin (GLB), low-density lipoprotein cholesterol (LDL-C), alanine aminotransferase (ALT), total bilirubin (TB), and high-density lipoprotein cholesterol (HDL-C).
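As a quick sanity check of the corrected BMI formula (the values below are hypothetical):

```python
weight_kg, height_m = 62.0, 1.68
bmi = weight_kg / height_m ** 2   # BMI = weight (kg) / height (m) squared
print(round(bmi, 1))              # 22.0 -> within the non-obese range studied here
```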
Diagnosis of NAFLD
Subjects were assessed for NAFLD by abdominal ultrasound once a year during follow-up. The diagnosis of NAFLD was based on the diagnostic guidelines issued by the Chinese Liver Disease Association in 2010 [25]. The main evaluation criteria were (a) diffuse high echogenicity of the liver relative to the kidney and spleen; (b) echo attenuation in the deep liver; (c) mildly to moderately enlarged liver with a rounded, obtuse margin; (d) weakened liver blood flow signal; and (e) the right lobe and diaphragm obscured or only partially shown. The diagnosis of NAFLD required the echo characteristics of item (a) plus any one of the other items.
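The diagnostic rule reduces to a simple boolean condition; a minimal sketch follows (Python; the argument names are ours, for illustration):

```python
def nafld_ultrasound_positive(a: bool, b: bool, c: bool, d: bool, e: bool) -> bool:
    """Rule summarized above: criterion (a), diffuse hepatic hyperechogenicity,
    plus at least one of criteria (b)-(e)."""
    return a and any((b, c, d, e))

print(nafld_ultrasound_positive(a=True, b=False, c=True, d=False, e=False))  # True
print(nafld_ultrasound_positive(a=False, b=True, c=True, d=True, e=True))   # False
```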
Statistical analysis
All statistical analyses in this study were conducted with Empower Stats (R, version 2.20) and the statistical software R (version 3.4.3), and a P-value of < 0.05 (2-tailed) was considered to indicate statistical significance. The analysis proceeded in the following three steps. Step one: The baseline characteristics of all patients were stratified according to the AAPR quintile, and continuous variables were expressed as the mean (standard deviation) or median (interquartile range). One-way ANOVA or the Kruskal-Wallis H test was used for intergroup comparisons. Qualitative data were summarized as frequencies or percentages, and the chi-square test was used to check differences between groups.
Step two: In the population diagnosed with NAFLD, linear regression was used to check the correlation between the AAPR and baseline data (Supplementary Table 1, Additional file 1). The variables significantly related to the AAPR may be auxiliary factors of the association between the AAPR and NAFLD and were included in the model as important adjustment variables in Cox multiple regression analysis [26]. Additionally, before establishing the multiple regression model, the collinearity between variables was checked, and the variance inflation factor (VIF) of each variable was calculated (Supplementary Table 2, Additional file 1). The variables with VIF > 5 were regarded as collinear variables and could not be included in the multiple regression model [27].
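To make the collinearity screen concrete, a minimal Python sketch of the VIF computation is given below (statsmodels; the DataFrame and column names are hypothetical, not taken from the study's code):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(covariates: pd.DataFrame) -> pd.Series:
    """VIF for each candidate covariate; per the text, variables with
    VIF > 5 were treated as collinear and excluded from the model."""
    X = sm.add_constant(covariates)  # include an intercept column
    return pd.Series(
        {col: variance_inflation_factor(X.values, i)
         for i, col in enumerate(X.columns) if col != "const"}
    )

# Hypothetical usage with a few baseline variables:
# print(vif_table(baseline[["age", "bmi", "sbp", "alt", "ast", "tg"]]))
```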
Step three: The incidence of NAFLD in the five AAPR groups was estimated by the Kaplan-Meier curve, and the comparison between groups was made by the log-rank test. To explore the association between the AAPR and NAFLD, a Cox multiple regression model was constructed, and the AAPR was entered into the model to calculate the hazard ratio (HR) and 95% confidence interval (CI) of NAFLD for each 1-unit increase [28]. Five models were used, with the crude model being unadjusted. Model 1 adjusted for the clinical baseline indexes (age, sex, height, BMI and SBP). Model 2 adjusted for model 1 plus liver function markers (GGT, ALT, AST, GLB, and TP). Since the AAPR is the ratio of ALB to ALP, in order to avoid a potential confounding effect between the AAPR and these two variables, ALB and ALP were not included in model 2. Model 3 adjusted for model 2 plus the blood glucose metabolism marker FPG and the kidney function marker Cr. Model 4 adjusted for model 3 plus lipid metabolic markers (TG, HDL-C, and LDL-C). Additionally, considering that the correlation between the AAPR and NAFLD may differ under different conditions [4,5,11], the researchers also conducted an exploratory hierarchical analysis in some subgroups and checked the differences between hierarchical groups by the likelihood ratio test to determine whether there was an interaction. The Kaplan-Meier and Cox analyses are sketched below.
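A minimal sketch of this step with the Python lifelines package follows; all column names are ours for illustration, and the covariate list mirrors model 4 as described above:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

def fit_models(df: pd.DataFrame) -> CoxPHFitter:
    # Kaplan-Meier incidence curves, one per AAPR quintile
    km_by_quintile = {
        q: KaplanMeierFitter().fit(g["time"], g["event"], label=f"Q{q}")
        for q, g in df.groupby("aapr_quintile")
    }

    # Log-rank comparison across the five quintile groups
    lr = multivariate_logrank_test(df["time"], df["aapr_quintile"], df["event"])
    print("log-rank p =", lr.p_value)

    # Cox proportional hazards model; exp(coef) is the HR per 1-unit AAPR
    covariates = ["aapr", "age", "sex", "bmi", "sbp", "ggt", "alt", "ast",
                  "glb", "tp", "fpg", "cr", "tg", "hdl_c", "ldl_c"]
    cph = CoxPHFitter()
    cph.fit(df[["time", "event"] + covariates],
            duration_col="time", event_col="event")
    cph.print_summary()  # reports HRs with 95% CIs per covariate
    return cph
```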
Characteristics of the subject
Among the 16,173 patients enrolled in the study, 10,749 non-obese subjects fulfilled the inclusion criteria for the present post hoc analysis. The baseline mean age was 43.65 ± 15.15 years, with slightly more male subjects than female subjects (54.90% vs 45.10%). Table 1 summarizes the baseline characteristics grouped by AAPR quintiles. In the group with a low AAPR, there were more males than females, and with an increase in the AAPR, the number of males decreased gradually, while the number of females increased gradually. In the group with a higher AAPR, the average BMI, weight, age, TC, AST, ALP, TP, GLB, BUN, LDL-C, GGT, Cr, UA, ALT, FPG, TG, SBP and DBP of the subjects were lower than those in subjects with a lower AAPR. In contrast, ALB and HDL-C levels were higher in the groups with higher AAPR values (all P < 0.05).
Correlation analysis between the AAPR and baseline variables
Linear regression analysis showed that age, height, weight, SBP, ALP, ALB, GGT, ALT, AST, TP, GLB, Cr and FPG were associated with the AAPR in the population with NAFLD (P < 0.05). This finding suggests that these variables, being significantly related to the AAPR, may be auxiliary factors in the association between the AAPR and NAFLD.
Association between the AAPR and NAFLD
To improve the model's ability to identify the risk of NAFLD, the researchers established a Cox multiple regression model (Table 2). In the unadjusted model, there was a negative correlation between the AAPR and the risk of NAFLD, and the trend of NAFLD decreased with an increase in the AAPR (HR: 0.26, 95% CI: 0.20, 0.33; P-trend < 0.0001). After adjusting for the clinical baseline index (model 1), the negative correlation between the AAPR and NAFLD weakened, and the NAFLD risk corresponding to the AAPR quintile showed the same downward trend as before (HR: 0.41, 95% CI: 0.31, 0.53; P-trend < 0.0001). Then, after further adjustment for liver function markers in model 2, the association between the two was further reduced, and the negative correlation trend remained the same as before. Model 3 further adjusted for the blood glucose metabolism marker FPG and kidney function marker Cr, and the degree of negative correlation between the AAPR and NAFLD remained basically unchanged (HR: 0.54, 95% CI: 0.41, 0.72; P-trend < 0.0001). Finally, after further adjusting for the lipid metabolism markers (TG, HDL-C, and LDL-C) in the Cox multiple regression model, it was found that for each one-unit increase in the AAPR, the risk of NAFLD decreased by 39% (HR: 0.61, 95% CI: 0.47, 0.81, P-trend < 0.0001). Additionally, in the AAPR quintile groups, the group with the highest AAPR had a 19% reduction in NAFLD risk compared with the group with the lowest AAPR.
Subgroup analysis
In the exploratory subgroup analysis, the clinical baseline index data, kidney function index, lipid metabolic index, blood glucose metabolism index and liver function index were stratified according to the clinical cutoff points. The HR and 95% CI between different hierarchical groups were analysed and calculated by a Cox regression model, and the difference between hierarchical groups was checked by the likelihood ratio test to determine whether there was an interaction. As shown in Table 3, there was a significant interaction between factors such as BMI, SBP, and DBP in the association between the AAPR and NAFLD in the clinical baseline data subgroup (P-interaction < 0.05). Among them, the risk of AAPR-related NAFLD was abnormally increased in underweight people (BMI < 18.5 kg/m2, HR: 86.13, 95% CI: 5.86, 968.98; P = 0.0012), and in people with normal blood pressure (SBP < 140 mmHg, DBP < 90 mmHg), the risk of NAFLD associated with the AAPR was lower. In addition, significant interactions were observed in the lipid metabolism subgroup (P-interaction < 0.05), in which the risk of AAPR-related NAFLD decreased significantly when there was no abnormal increase in blood lipids. However, no significant interaction was observed in the subgroups of age, sex, liver function, kidney function and blood glucose metabolism.
Discussion
To the best of our knowledge, this is the first report on the association between the AAPR and new-onset NAFLD risk. In this study, after 5 years of follow-up, it was found that an increase in the AAPR was negatively correlated with the risk of future NAFLD events in non-obese people. In the analysis of the Cox multiple regression model, the researchers determined that the AAPR was an independent predictor of NAFLD (HR: 0.61, 95% CI: 0.47, 0.81, P-trend < 0.0001). The AAPR is the ratio of ALB to ALP, which can reflect some information regarding the two indicators at the same time, as well as information that cannot be reflected by either indicator alone. In 2015, Chan et al. first reported that the AAPR can predict the poor prognosis of liver tumours, and its predictive performance is better than that of other liver markers [19]; some subsequent studies have also confirmed that this conclusion is reliable [20,21]. At present, the AAPR has been used as a new liver marker to evaluate the long-term prognosis of liver tumour diseases. In this study, the researchers found that the AAPR can also be used to predict NAFLD among chronic liver diseases; the longitudinal cohort design of this study better demonstrates that the AAPR can independently predict early NAFLD risk. It is well known that ALB and liver function abnormalities are rarely seen in the early stage of NAFLD, so it may be difficult to detect potential NAFLD risks through conventional biochemical markers [22]. The findings of this study provide a new idea for the prevention of new-onset NAFLD.
In this study, the researchers also examined whether there were differences in AAPR-related NAFLD risk among people of different ages, sexes, BMIs, liver and kidney function, blood pressure, blood glucose and blood lipids. The results showed that BMI, SBP, DBP, and lipid metabolism had significant interactions in the association between the AAPR and NAFLD (P-interaction < 0.05). Among those with normal blood pressure and lipids, the risk of NAFLD associated with the AAPR was reduced (SBP < 140 mmHg, DBP < 90 mmHg, TC < 5.2 mmol/l, TG < 1.7 mmol/l, HDL-C ≥ 0.9 mmol/l). However, the risk of AAPR-related NAFLD was abnormally increased in underweight individuals (BMI < 18.5 kg/m2, HR: 86.13, 95% CI: 5.86, 968.98; P = 0.0012), which may be related to the significant decrease in skeletal muscle mass in underweight individuals. Related studies have shown that with a decrease in BMI, the skeletal muscle weight, skeletal muscle index and body fat of the extremities decrease significantly [29], and low muscle mass is independently positively correlated with NAFLD [30]. Additionally, underweight people not only have an increased risk of NAFLD but also often suffer from malnutrition, which significantly increases the incidence of adverse events [31,32]. It is suggested that individuals with BMI < 18.5 kg/m2 should increase their BMI to a normal level and improve skeletal muscle quality through diet and healthy exercise as soon as possible.
At present, there are very few studies on the AAPR, and the mechanism of the association between the AAPR and NAFLD is not clear. The results of this study were similar to those of previous studies. In this study, a low AAPR was an independent predictor of new-onset NAFLD events. It is generally believed that a low AAPR often indicates that ALB is too low or that ALP is too high. ALB is a very important protein in serum; it not only maintains the colloidal osmotic pressure of the body but also participates in the storage and transport of many substances [33]. The level of ALB reflects human nutritional status and liver function [18,33]. In addition, ALB is also involved in the regulation of inflammation and the immune response [34,35]. ALP is a hydrolytic enzyme found mainly in the liver, bone, intestine, kidney and placenta. ALP increases in those who are pregnant, suffer from bile duct disease, have impaired liver function or have bone disease [18,36]. It has been reported that ALP is also related to the nutritional status of the body and has anti-inflammatory effects, inhibiting the inflammatory response [37]. However, in this study, there were only 5 subjects whose ALB was below the lower limit of the normal reference range, while only 53 people had ALP above the upper limit of the normal reference range. In other words, the ALB and ALP levels of 99.49% of the population in this study were within the normal reference range, so malnutrition, inflammation and immune responses do not seem to be likely explanations for this association. A lower AAPR may affect the development of NAFLD in unique ways, the underlying mechanism of which is not clear, and further research is needed to test this hypothesis in the future.
Study strengths and shortcomings
This study has some unique advantages: (a) This is the first study to explore the association between the AAPR and NAFLD. The findings of this study provide a new idea for the prevention of new-onset NAFLD. (b) This study was a longitudinal cohort design with a large sample size. After strict statistical adjustment and sensitivity analysis, the negative correlation between the AAPR and NAFLD remained stable, so the conclusion of this study can be considered relatively reliable. (c) The AAPR is the ratio of ALB to ALP, and the measurement of ALB and ALP is very simple and convenient in clinical practice, which favours the rapid application of the AAPR in clinical practice. Of course, the shortcomings of this study are also obvious: (a) This study is the first to explore the association between the AAPR and NAFLD, so comparisons with similar studies and two-way verification against related basic research are lacking; therefore, the conclusions of this study should be interpreted with caution, and more similar studies are needed to verify them. (b) This study is a secondary analysis of a previous study [24], and the study population was non-obese; considering that there are great differences between obese and non-obese people, more studies are needed to verify the correlation between the AAPR and NAFLD in obese people [10]. Additionally, although NAFLD-related variables were widely collected in this study, there are still some variables that could not be measured or obtained, which may lead to inevitable residual confounding. (c) In this study, the general clinical data and biochemical indicators of the subjects were standard parameters collected during physical examination, and repeated measurements were not carried out at the follow-up visits. Therefore, the impact of dynamic changes in baseline data on NAFLD could not be evaluated in this study. (d) In this study, NAFLD diagnosis was performed by ultrasound only, and no biopsy was conducted. Biopsy is the gold standard method to diagnose NAFLD stage [38]. Ultrasound has low sensitivity for the detection of mild steatosis [39], meaning that some subjects could already have had steatosis but been classified as having a healthy liver. (e) The cohort of this study is made up of Chinese people, so the conclusions are only applicable to the Chinese population; for other ethnic groups, the conclusions of this study are for reference only.
Conclusions
In conclusion, this study demonstrated that a low AAPR is an independent predictor of future NAFLD. This finding provides new ideas for the prevention of new-onset NAFLD. Additionally, the AAPR is a new, simple, and inexpensive marker with a wide range of clinical application value. | 2021-05-17T14:03:30.140Z | 2021-05-16T00:00:00.000 | {
"year": 2021,
"sha1": "f48768e980e2e79bd8c73be8d43df2f03a99bf81",
"oa_license": "CCBY",
"oa_url": "https://lipidworld.biomedcentral.com/track/pdf/10.1186/s12944-021-01479-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a7c990b3829faa9df08f8b63bed1f4f06056889d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44101340 | pes2o/s2orc | v3-fos-license | Low-dose minocycline mediated neuroprotection on retinal ischemia-reperfusion injury of mice.
Purpose
The aim of this study was to investigate the effect of minocycline (MC) on the survival of retinal ganglion cells (RGCs) in an ischemic-reperfusion (I/R) injury model of retinal degeneration.
Methods
Retinal I/R injury was induced in the left eye of mice for 60 min by maintaining intraocular pressure at 90 mmHg. Low- or high-dose MC (20 or 100 mg/kg, respectively) was administered by intravenous injection at 5 min after the retinal ischemic insult and then administered once daily until the mice were euthanized. RGCs and microglial cells were counted using immunofluorescence staining. Functional changes in the RGCs were evaluated using electroretinography. The visual function was assessed using an optokinetic test.
Results
The data demonstrated that the effect of MC was dose dependent. Low-dose MC showed protective effects, with reduced RGC loss and microglial activation, while high-dose MC showed damaging effects, with more RGC loss and microglial activation when compared with the vehicle group. The electroretinography and optokinetic test results were consistent with the morphologic observations.
Conclusions
These data suggested that appropriate concentrations of MC can protect the retina against retinal ischemic-reperfusion injury, while excessive MC has detrimental effects.
Minocycline (MC) has been found to have neuroprotective effects in diseases of the central nervous system (CNS) [10,11], such as middle cerebral artery occlusion [12,13], Alzheimer disease [14], Parkinson disease [15], oxygen-glucose deprivation [16], and Huntington's disease [17]. Growing evidence also suggests that minocycline has a neuroprotective effect in many retinal diseases [18,19]. A study of retinal ischemia-reperfusion injury reported that minocycline exerted a neuroprotective effect by preventing retinal inflammation and vascular permeability [20]. Minocycline has also been used as an antioxidant agent to prevent retinal disease [21,22]; another important function of minocycline is suppression of microglial activation in neurologic diseases [23][24][25]. These studies support the idea that minocycline has a neuroprotective role. However, it was reported that minocycline could exacerbate visual dysfunction in a mouse model of retinopathy of prematurity (ROP) [26]. In short, the role of minocycline in the treatment of neurologic retinal diseases remains contradictory, and the mechanism is still unknown. Microglial cells are a major type of immune cell in the CNS and have been thought to be involved in the pathogenesis of glaucoma [27,28]. Activated microglia are inflammatory cells and are detrimental to the function of the CNS [29]. It has been reported that microglial cells have diverse phenotypes and can rapidly transform into a reactive state in response to various insults [30]. A recent study showed that MC could reduce photoreceptor damage by suppressing the activation of microglia in retinitis pigmentosa [23]. Moreover, it has been reported that there is a loss of photoreceptors in glaucoma [31,32]. However, no study has investigated the functional change in RGCs under MC treatment in the ischemic retina. In the present study, we used molecular biology and visual function tests to determine whether MC could prevent the degeneration of RGCs after retinal ischemic insult and confirmed that MC is a promising therapeutic agent in models of neurologic ischemic damage.
METHODS
Animals: C57BL/6 male mice (8-12 weeks, weight approximately 20-25 g; purchased from Guangdong Medical Laboratory Animal Center) were used in the study. They were housed in a 12 h:12 h light-dark cycle and allowed free access to food and water. All experimental designs and protocols were conducted according to the recommendations of the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Jinan University Institutional Animal Care and Use Committee.
Retinal ischemia-reperfusion injury model: Mice were anesthetized with an intraperitoneal injection of 2.5% tribromoethanol. Before the eye surgery, the pupil was dilated with one drop of 0.5% tropicamide for 5 min, and then the cornea was desensitized with one drop of 0.4% oxybuprocaine hydrochloride eye drops. The ischemia-reperfusion (I/R) injury model was induced by inserting a 33 G needle into the anterior chamber of the left eye. A reservoir of normal saline (0.9%) was connected to the needle and elevated to maintain the IOP at 90 mmHg for 60 min (monitored with TonoLab, U.S. Pat. 6,093,147). After the surgery, 0.3% tobramycin was administered in the conjunctival sac to prevent inflammation.
Drug administration: The mice were randomly divided into four experimental groups: the control group, the I/R injury + normal saline (NS) group, the I/R injury + low-dose MC group, and the I/R injury + high-dose MC group. First, to identify an effective low dose, five animals per group received 10, 20, or 30 mg/kg MC, respectively, via intravenous injection. The RGCs were counted to estimate the effect of MC. Further, two groups of five animals received either 80 or 100 mg/kg MC intravenously to assess the effects of high-dose MC. Finally, based on the results, the low dose of 20 mg/kg MC was confirmed and administered via the caudal vein 5 min after the operation and once per day until the animals were euthanized under intraperitoneal 2.5% tribromoethanol anesthesia, while 100 mg/kg MC was used for the high-dose group as previously reported [33]. The mice received a volume of 0.1 ml of MC solution per 10 g bodyweight. Mice in the I/R injury + NS group were treated with an equal volume of NS.
Histology: Mice were anaesthetized using 2.5% tribromoethanol. Perfusion-fixation was performed with 0.9% NS until exsanguination, followed by 4% paraformaldehyde in PBS (0.1 M; 8.6 g NaCl, 2.68 g NaH2PO4, 11.51 g Na2HPO4, pH 7.4) until the tissues stiffened. The enucleated eyeball cup was trimmed and immediately fixed in 4% paraformaldehyde for 1 day at 4 °C, followed by dehydration with 30% sucrose solution overnight, and embedded in Tissue Freezing Medium compound (SAKURA 4583, Tissue-Tek OCT; Torrance, CA, USA) at -20 °C. The eye tissues were cut horizontally into 10-µm-thick sections using a microtome (RM2235, Leica, Wetzlar, Germany). Four nonconsecutive sections through the optic nerve were used from each mouse under the same conditions. Hematoxylin and eosin (H&E) staining was used on the retinal sections to evaluate the inner retinal layer (IRL), which was measured from the inner limiting membrane to the inner nuclear layer. Images were taken using a Leica light microscope. For statistical analysis, retinal thickness was measured at the middle retina at 1.1 mm on both sides of the optic nerve head [11].
Immunofluorescence: First, flat-mounted retinas were fixed in 4% paraformaldehyde overnight. The orientation of each eye was carefully marked with a nick on the nasal side during dissection. Tissues for immunofluorescence were then rinsed with PBS, followed by blocking in PBS with 0.3% Triton X-100 and 10% goat serum for 1 h at room temperature. They were then incubated with primary antibody diluted in blocking solution overnight at 4 °C. The primary antibodies used in this study included goat anti-Brn3a (1:400; Santa Cruz Biotechnology, Santa Cruz, CA) and rabbit anti-Iba1 (1:600; Wako, Osaka, Japan) [34,35]. After rinsing in PBS, the retinas were incubated with secondary antibodies at room temperature for 3 h, followed by rinsing again with PBS. After that, the retinal flat-mounts were mounted on the slide and sealed with an antiquenching reagent and the coverslip. Finally, the retinas were observed, and photographs were taken using a fluorescence microscope (DM6000B, Leica).
Optokinetic test:
The optokinetic test (OKT) was performed to assess the visual acuity of the animals [36]. Briefly, each animal was placed freely on a platform in the center of a chamber. The observer operated the software on a desktop computer, which automatically changed the gratings until a reliable threshold was reached. The whole experiment was conducted in a quiet, dark room maintained at a suitable temperature to achieve the best responses. In this study, the OKT was conducted on day 4 after the I/R insult. The spatial frequencies tested in the study were 0.05, 0.06, 0.07, 0.1, 0.15, 0.2, 0.25, 0.30, and 0.35 cycles per degree (cpd). Clockwise drifting gratings were used to determine the visual function of the left eye, and counterclockwise drifting gratings were used for the right eye. The observer judged "Yes" or "No" depending on whether the animal's neck moved along with the drifting scene. The final OKT score was the highest grating frequency recorded just before the observer selected the first "No" [37,38].
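Read as pseudocode, the simple staircase amounts to an ascending sweep over the tested frequencies; the sketch below is our reading of it (Python; the judging callable stands in for the human observer):

```python
FREQS_CPD = [0.05, 0.06, 0.07, 0.1, 0.15, 0.2, 0.25, 0.30, 0.35]

def okt_threshold(observer_judges):
    """Step up through the tested spatial frequencies; the score is the last
    frequency the animal tracked before the first "No". `observer_judges` is
    any callable returning True ("Yes") or False ("No") for a frequency."""
    score = 0.0
    for freq in FREQS_CPD:
        if not observer_judges(freq):
            break
        score = freq
    return score

# Example with a hypothetical animal that tracks gratings up to 0.2 cpd:
print(okt_threshold(lambda f: f <= 0.2))  # -> 0.2
```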
Electroretinography: Electroretinography (ERG) is used to evaluate the electrical activity and signal function of different types of retinal cells. In this study, ERG was performed 7 days after the I/R insult using the protocols described previously [39][40][41]. The mice were prepared for ERG recording after overnight dark adaptation. The animals were then anesthetized, and the pupils were dilated, followed by lubrication with 1% methylcellulose. Each animal was placed on a homeothermic device at 37 °C. Recording electrodes of gold wire loops were placed on the cornea. Two reference electrodes were inserted into the subdermis between the ears, while another electrode inserted into the tail acted as a ground. The a-wave, b-wave, and photopic negative response (PhNR) were recorded using the Roland Consult (Brandenburg, Germany) electrophysiological diagnostic system. Light intensities were adjusted as standard luminance intensity units in candela seconds per meter squared (cd.s/m2). Scotopic ERGs were recorded after dark adaptation with intensities of 3 cd.s/m2. After that, light adaptation under a continuous white background of 25 cd.s/m2 was applied for 10 min to suppress rod-cell photosensitivity, and the PhNR was recorded using white flashes of 3 cd.s/m2. A-waves, produced by photoreceptors, were measured from the baseline to the first negative peak. B-waves, conducted by the ON bipolar cells, were measured from the trough of the a-wave to the subsequent positive peak. The PhNR is derived from RGCs and is the negative peak following the b-wave [42].
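The wave definitions above translate into a simple peak-finding routine; the following is a rough sketch (Python/NumPy), assuming a clean, filtered single-flash trace, with all names ours:

```python
import numpy as np

def erg_components(t_ms, trace_uv, flash_ms=0.0):
    """A-wave: baseline to first negative trough after the flash.
    B-wave: a-wave trough to the following positive peak.
    PhNR: negative trough following the b-wave."""
    post = t_ms >= flash_ms
    t, v = t_ms[post], trace_uv[post]
    pre = t_ms < flash_ms
    baseline = trace_uv[pre].mean() if pre.any() else 0.0

    i_a = int(np.argmin(v[: len(v) // 2]))   # early negative trough (a-wave)
    i_b = i_a + int(np.argmax(v[i_a:]))      # positive peak after the trough
    i_p = i_b + int(np.argmin(v[i_b:]))      # negative trough after the b-wave

    return {
        "a_amplitude_uv": baseline - v[i_a],
        "a_latency_ms": float(t[i_a]),
        "b_amplitude_uv": v[i_b] - v[i_a],
        "b_latency_ms": float(t[i_b]),
        "phnr_amplitude_uv": baseline - v[i_p],
    }
```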
Statistics: All data were analyzed using the statistical software program GraphPad version 5.0 (GraphPad Software, San Diego, CA) and were presented as means ± standard error of the mean. One-way ANOVA followed by the Newman-Keuls multiple comparison test was used for quantitative analysis, and the Kruskal-Wallis test followed by Dunn's multiple comparison test was used for qualitative analysis. A p value of less than 0.05 was considered statistically significant. The ERG waves were analyzed using RETI-port software (Roland Consult) after 50 Hz low-pass filtering was applied.
Low-dose MC reduced RGC loss in the mouse I/R injury model:
To detect the dose effect of MC, the number of RGCs was counted on day 4 post-I/R injury using Brn3a immunostaining in the different groups. The RGC numbers were averaged from four quadrants of the whole-mounted retina using five grids of 160 × 160 µm2 from the optic disc to the border at 500-µm intervals (Figure 1A,B). There was no statistically significant difference in the number of RGCs between the group that received 10 mg/kg MC (2,332±86.39/mm2) and the I/R injury + NS group (2,125±99.68/mm2). However, a higher number of RGCs was observed in the 20 mg/kg MC group (2,511±75.17/mm2) and the 30 mg/kg MC group (2,569±74.32/mm2), while a lower number was observed in the 80 mg/kg MC group (2,069±75.09/mm2) and the 100 mg/kg MC group (1,825±79.08/mm2; all groups, n=5). For all groups other than the 10 mg/kg MC group and the 80 mg/kg MC group, there was a statistically significant difference when compared with the I/R injury + NS group (p<0.05). Finally, in these experiments, the optimal low dose of MC was 20 mg/kg, and the lowest harmful high dose of MC was 100 mg/kg (Figure 1C).
The thickness of the IRL was evaluated using H&E staining (Figure 2) on day 7 after the I/R insult. Quantitative analysis showed there was a statistically significant difference in the thickness of the IRL between the control group (104±2.40 µm; Figure 2A) and the I/R injury + NS group (55±2.2 µm; Figure 2B). The IRL was thicker in the low-dose MC group (75±2.2 µm; Figure 2C) compared with the I/R injury + NS group, whereas there was no obvious difference between the high-dose MC group (54±5.1 µm; Figure 2D) and the I/R injury + NS group. Thus, we found that the detrimental effect of the I/R insult was alleviated by treatment with low-dose MC (20 mg/kg).
Low-dose MC reduced the activation of microglial cells in the I/R injury model:
The number of microglia was quantified based on the standard that resting microglial cells have small cell bodies and few, thin processes, while activated microglial cells are characterized by enlarged cell bodies with numerous hypertrophied processes or amoeboid cell bodies [43]. Microglial cells were counted by scanning the z-axis across the retinal surface in the ganglion cell layer using a 20X objective lens. Four to five grids of 160 × 160 µm2 were averaged in each quadrant. One observer, who was blinded to the experimental group, counted the microglial cells under the fluorescence microscope.
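The conversion from per-grid counts to the cell densities quoted below is straightforward; a minimal sketch (Python; the example counts are hypothetical):

```python
GRID_AREA_MM2 = 0.16 * 0.16  # one 160 x 160 um^2 counting grid = 0.0256 mm^2

def density_per_mm2(grid_counts):
    """Average per-grid cell count (RGCs or Iba1-positive microglia)
    converted to cells per square millimeter, as reported in the Results."""
    mean_count = sum(grid_counts) / len(grid_counts)
    return mean_count / GRID_AREA_MM2

print(round(density_per_mm2([2, 3, 2, 2, 3]), 1))  # 2.4 cells/grid -> 93.8 cells/mm^2
```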
After 4 days of I/R insult, microglia were identified using anti-Iba1 immunostaining (Figure 3). The numbers of activated Iba1-positive cells were statistically significantly increased in the retinas from the I/R injury + NS group (88.0±4.50/mm2, n=7; Figure 3B) compared with the control group (11±2.5/mm2, n=7; Figure 3A). Low-dose MC alleviated microglial activation (53±3.3/mm2, n=7; Figure 3C) compared with the I/R injury + NS group. Compared with the I/R group, the high-dose MC group showed statistically significantly aggravated microglial activation (106±4.40/mm2, n=7; Figure 3D). The results revealed a statistically significant reduction in microglial activation in the low-dose MC-treated retinas compared with the I/R injury + NS group and the high-dose MC-treated group.
Low-dose MC improved optokinetic responses:
The OKT is currently a commonly used tool to assess the visual function of animals [44]. Visual stimuli are projected on computer monitors so that a virtual cylinder with vertical sine wave gratings is drawn by the monitors (Figure 4A,B). The cylinder is rotated at 12 degrees per second, prompting each mouse to stop moving its body and track the grating (Figure 4C). The grating frequency is increased when the observer selects "Yes," and the system uses a simple staircase method until the mouse shows no visible reaction to the moving gratings (Figure 4D) [45].
Before the I/R insult, there was no difference between the eyes, and both exhibited normal acuity (left: 0.350 cpd; right: 0.350 cpd) in the OKT performed at noon. The OKT responses decreased in the eyes of the I/R injury + NS group (0.156±0.014 cpd, n=8) compared with the control group (0.350 cpd, n=5) on day 4 following the I/R procedure, while the low-dose MC-treated mice (0.218±0.013 cpd, n=8) maintained higher responses than the I/R injury + NS mice (Figure 4E). Meanwhile, the high-dose MC group (0.150±0.013 cpd, n=8) showed no discernible difference compared with the I/R injury + NS group. The data demonstrated that low-dose MC treatment preserved the function of RGCs in the retinas.
MC treatment markedly reversed ERG changes caused by the I/R insult: ERG recordings were performed at scotopic 3 cd.s/m2 and photopic 3 cd.s/m2. ERG recordings for each group are shown in Figure 5. The a-wave, b-wave, and PhNR were recorded (Figure 5A,B). The PhNR amplitudes were reduced in the I/R injury + NS eyes (20.40±1.750 μV, n=15) compared with those of the control eyes (46.80±2.95 μV, n=15). In addition, the PhNR amplitude in the low-dose MC-treated eyes (27.76±1.790 μV, n=15) was markedly higher than in the I/R injury + NS group. The PhNR amplitude was lowest in the high-dose MC group (14.37±1.640 μV, n=15; Figure 5C). The amplitudes of the scotopic ERG a-waves (75±4.0% of baseline values, n=8) and b-waves (60±2.0% of baseline values, n=8) showed a reduction in the I/R injury + NS mice compared with the eyes of the mice in the untreated normal control group, while the low-dose MC-treated group showed less reduction in scotopic ERG a-waves (84±3.0% of baseline values, n=8) and b-waves (66±2.0% of baseline values, n=8) compared with the I/R injury + NS mice (Figure 5D). Under the stimulus of photopic 3 cd.s/m2, the amplitudes of the b-waves were also better preserved in the low-dose MC group (67±2% of baseline values, n=8) compared with the I/R injury + NS group (59±3% of baseline values, n=8). However, the amplitude of the photopic a-waves was not statistically significantly different among these groups (Figure 5E). The latency times of the scotopic ERG a- and b-waves were longer in the I/R injury + NS group (21.3±0.50 and 46.3±1.50 ms, respectively) than in the control group (18.8±0.40 and 41.0±1.20 ms, respectively). The latency times were slightly shorter in the low-dose MC group (a-wave: 19.1±0.40 ms, b-wave: 43.0±1.40 ms) compared with the I/R injury + NS group. In contrast, in the I/R animals treated with high-dose MC, the PhNR amplitude showed a greater reduction than in the I/R injury + NS mice, but the latency times (a-wave: 21.5±0.60 ms, b-wave: 48.0±1.30 ms) were not obviously exacerbated (Figure 5F).
The scotopic ERG results suggested that low-dose MC protected against loss of rod-derived retinal function after I/R insult. However, the latency times of photopic ERG a- and b-waves showed no statistically significant differences among these groups (Figure 5G). Amplitudes of the PhNR were consistent with histological changes and behavioral alterations. This confirms the protective effect of low-dose MC against RGC loss in this I/R model.
DISCUSSION
In the present study, we evaluated RGC loss in I/R injury mice and found that low-dose MC can rescue RGCs, based on histologic analysis, visual functional changes, and behavioral tests. Ischemia-associated retinal degeneration leads to severe visual impairment and even blindness [46]. An ischemic retinal injury model can be created with acute ocular hypertension; high IOP-induced injury was used in this model [47][48][49]. Because of the presence of an ischemic component in glaucoma and the ease of establishing the model, the I/R model has been widely used in studies of neurodegeneration and neuroprotection of RGCs in glaucoma research [50]. We reported the loss of RGCs in the I/R model in a previous study [11]. In the present study, we detected less RGC loss and inner retinal layer thinning in the low-dose MC treatment group, which suggested that the loss of RGCs is induced by a transient ischemic attack, while MC has potential RGC-protective properties.
A previous study showed a close relationship between microglia and RGCs in glaucoma [51]. Microglial cells have been reported to be involved in the development of many neurodegenerative diseases and neurologic disorders, including glaucoma [52]. There is increasing evidence showing the detrimental effects of activated microglia, while suppressing the activation of microglia can improve the survival of RGCs [53,54]. To date, in CNS degenerative diseases, the main mechanism of the neuroprotective effect of MC has been attributed to inhibition of the activation of microglial cells [54,55]. Consistent with this, the present data showed that the number of activated microglia was reduced by treatment with low-dose MC (20 mg/kg) when compared with the vehicle-treated I/R injury group. The reduction in the number of microglia was consistent with the increased survival of RGCs in the low-dose MC group, but not in the high-dose MC group, which suggested that suppressing microglial activation may promote repair of the injured retina. In a recent study, we reported that microglial activation induced RGC damage [54]. Moreover, the present data also showed that inhibiting microglial activity with MC had neuroprotective effects. However, high-dose MC induced toxicity toward neuronal and non-neuronal retinal cells. Thus, microglia should be a key target for neuroprotection related to ischemia. The data demonstrated that MC may exert a neuroprotective role in glaucoma via suppression of microglial activation. The OKT has been widely applied as a visual function test in mice with retinal degenerative diseases [56]. Optokinetic tasks overcome the limitations of other visual tasks, as they require no reinforcement training for the measurement of vision. In the present study, the OKT analysis showed that the light-adapted visual acuity of low-dose MC-treated I/R injury mice was statistically significantly better than that of the I/R injury + NS-treated group, which further supports the neuroprotective features of low-dose MC.
ERG is commonly considered a more sensitive method than histology for evaluating retinal insults [57]. The PhNR is dependent on the activity of RGCs and is reduced in eyes with experimental glaucoma. The present ERG data indicated that RGC function was damaged under transient ocular hypertension and protected by low-dose MC. Regarding the origin of the ERG waves, the a-wave is generated by photoreceptors, while the b-wave mainly originates from bipolar cells that are post-synaptic to photoreceptors. Scotopic a-waves are related to rod function, while photopic waves are related to cone function. A previous study reported that photoreceptors are damaged in glaucoma [26]. The present data showed that the average amplitude and latency time of scotopic ERG a-waves were different between the I/R injury + NS group and the low-dose MC group. Conversely, there were no statistically significant differences in the average amplitude and latency time of photopic ERG a-waves. The ERG a- and b-waves correlated with changes in the retinal layer thickness, which indicated that MC could protect RGCs and photoreceptors from glaucomatous damage. In conclusion, the present study demonstrated the protective effects of low-dose MC on I/R injury-induced RGC loss, making MC an appealing candidate for glaucoma therapy. The possible mechanism of action is related to the inhibition of microglial activation. An equally important finding is that MC had not only dose-dependent neuroprotective effects but also potential toxicity toward neurons and non-neuronal cells at high doses. The results also indicated that MC, at appropriate doses, may be an effective therapeutic intervention for ischemic damage of the retina. | 2018-06-07T14:16:44.342Z | 2018-05-18T00:00:00.000 | {
"year": 2018,
"sha1": "7f42d8b4fb2757a1a5ddad7400e821e2ae0e644a",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7f42d8b4fb2757a1a5ddad7400e821e2ae0e644a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Optimization of Statin-Loaded Delivery Nanoparticles for Treating Chronic Liver Diseases by Targeting Liver Sinusoidal Endothelial Cells
In this study, we developed functionalized polymeric micelles (FPMs) loaded with simvastatin (FPM-Sim) as a drug delivery system targeting liver sinusoidal endothelial cells (LSECs) to preserve liver function in chronic liver disease (CLD). Polymeric micelles (PMs) were functionalized by coupling peptide ligands of the LSEC membrane receptors CD32b, CD36 and ITGB3. Functionalization was confirmed via spectroscopy and electron microscopy. In vitro and in vivo FPM-Sim internalization was assessed by means of flow cytometry in LSECs, hepatocytes, Kupffer cells and hepatic stellate cells from healthy rats. Maximum tolerated dose assays were performed in healthy mice, and efficacy studies of FPM-Sim were carried out in bile duct ligation (BDL) and thioacetamide (TAA)-induced rat models of cirrhosis. Functionalization with the three peptide ligands resulted in stable formulations with a greater degree of in vivo internalization in LSECs than non-functionalized PMs. Administration of FPM-Sim in BDL rats reduced toxicity relative to free simvastatin, albeit with a moderate portal-pressure-lowering effect. In a less severe model of TAA-induced cirrhosis, treatment with FPM-CD32b-Sim nanoparticles for two weeks significantly decreased portal pressure, which was associated with a reduction in liver fibrosis, lower collagen expression and the stimulation of nitric oxide synthesis. In conclusion, CD32b-FPM stands out as a good nanotransporter for drug delivery targeting LSECs, key drivers of liver injury.
Introduction
Chronic liver disease (CLD) is responsible for more than 2 million annual deaths worldwide [1]. Progressive CLD is caused by the continuous production and deposition of extracellular matrix components, resulting in significant hepatic fibrosis and nodular transformation of the liver parenchyma, leading to organ dysfunction and liver failure [2,3]. Portal hypertension is an important complication of advanced CLD and is the consequence of a marked increase in intrahepatic vascular resistance (IHVR) together with an increased hepatic vascular tone [4]. This increased vascular tone is strongly related to alterations in liver sinusoidal endothelial cells (LSECs) that, upon chronic injury, exhibit an imbalance of vasoactive molecules favoring vasoconstriction and increasing IHVR.
LSECs play a major role as sensors and drivers of CLD [5]. Under physiological conditions, LSECs are responsible for maintaining hepatic homeostasis, metabolite transport and vascular tone, enhancing hepatocyte exposure to macromolecules from the portal circulation. Furthermore, LSECs possess anti-inflammatory and anti-fibrogenic properties, preventing the activation of Kupffer cells and hepatic stellate cells (HSCs) [6]. However, under pathological conditions, LSECs dedifferentiate, acquiring a common vascular endothelial phenotype and losing their specialized functions [7,8]. It is therefore crucial to maintain the specialized LSEC phenotype to preserve hepatic function; this might be achieved by means of the direct delivery of vasoprotective drugs.
Statins, originally designed as inhibitors of 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMG-CoA reductase) in the cholesterol biosynthesis pathway [9,10], have been widely used for lipid-lowering purposes with a good safety profile in cardiovascular diseases. However, although the beneficial effects of statins in the cirrhotic liver have been demonstrated, their use in CLD has not yet been consolidated. Different animal models of cirrhosis have shown that statins (e.g., simvastatin, atorvastatin) exert vasoprotective and antifibrotic effects, improving LSEC dysfunction, reducing HSC activation and achieving regression of hepatic fibrosis. These remarkable beneficial effects depend mainly on the up-regulation of Krüppel-like factor 2 (KLF2) in hepatic cells after statin treatment [11,12], which enhances the expression of KLF2 target genes, such as eNOS (endothelial nitric oxide synthase), and inhibits the vasoconstrictive RhoA/ROCK (Ras homolog family member A/Rho-associated protein kinase) axis, among other pathways [13,14]. However, statins are not free of adverse effects, mainly muscular but also hepatic toxicity [15,16]. For example, in a previous study in a rat model of impaired bile elimination (the bile duct ligation model, BDL), mimicking severely impaired liver function, the adverse events of simvastatin were magnified, reaching significant levels of mortality at high doses [13]. The dose-dependent side effects of statins therefore limit their efficacy, especially in populations at higher risk of toxicity, such as patients with advanced CLD [17].
To overcome this limitation, we previously produced Pluronic-based polymeric micelles (PMs) loaded with chemically activated simvastatin [18]. Briefly, the nanoparticle biodistribution was mainly hepatic and no signs of toxicity were detected. After one week of administration in the BDL model, these nanoparticles were superior to free simvastatin in reducing portal pressure, with significant nanoparticle accumulation in LSECs; however, they were also found in Kupffer cells, suggesting a possible loss of efficacy due to their clearance by macrophages. It was therefore proposed that the formulation be improved by functionalization with specific ligands to increase the effect of the nanoparticles on LSECs in a more targeted manner [19].
One of the outstanding properties of Pluronic PMs is that they can be functionalized by adding peptide or protein molecules for active targeting to their hydrophilic surface, enhancing drug delivery to specific sites in the body or specific receptors on cells. In this way, the therapeutic efficacy of the cargo can be maximized while reducing systemic toxicity compared with untargeted micelles [20]. When the ligands conjugated to PMs bind to their specific receptors on the cell membrane, endocytic internalization of the nanoparticles is promoted. Interestingly, LSECs have one of the highest endocytic capacities in the human body, providing an advantage for specific targeting [21].
This study explores the development of nanoparticles specifically targeting LSECs for the delivery of simvastatin. We tested different peptides recognizing the binding sites of (i) the specific LSEC Fcγ-receptor IIb, expressed mainly in physiological conditions, (ii) the transmembrane glycoprotein thrombospondin receptor, unaltered during LSEC capillarization, and (iii) the integrin αVβ3 receptor, highly expressed in dysfunctional LSECs.
Animals
All animal procedures were conducted in accordance with European Union legislation on the protection of animals used for scientific purposes (Directive 2010/63/EU revising Directive 86/609/EEC) and were approved by the Animal Research Ethics Committee of the Vall d'Hebron Institut de Recerca (Barcelona, Spain).
Male Sprague Dawley CD rats (Charles River Laboratories, Saint-Germain-sur-l'Arbresle, France) were used to generate two models of CLD with portal hypertension and cirrhosis: intrahepatic portal hypertension was induced via secondary biliary cirrhosis, performing a common bile duct ligation for a period of 4 weeks in rats weighing 200-220 g, or via intraperitoneal injection of 250 mg/kg of thioacetamide (TAA) 2 days/week over a period of 8 weeks in rats weighing 125-150 g at baseline, which were weighed weekly to adjust the dose to weight gain. Only male rats were used because female estrogen hormones modify hemodynamic parameters, increasing or decreasing their values randomly and unrelated to liver disease [22]. All animals were housed at a constant temperature of 22 ± 2 °C and 50% humidity on a controlled 12/12 h light/dark cycle. They were fed ad libitum with a grain-based chow (SAFE 150; SAFE Complete Care Competence, Rosenberg, Germany) and had free access to water.
In addition, male BALB/cAnNRj mice (Janvier Labs, Le Genest-Saint-Isle, France) aged 5-6 weeks were used for the maximum tolerated dose assay performed at the Cellvax facilities (Villejuif, France). Animal health status was specific and opportunistic pathogen-free (SOPF), and the mice were housed in polyethylene cages (<5 mice/cage) in a climate- and light-controlled environment in accordance with Cellvax's approved standard operating procedures.
Simvastatin Activation
Prior to encapsulation or in vitro treatment, commercial pure simvastatin needs to be activated from its inactive lactone form to its β-hydroxyacid active form [23]. For this purpose, 8 mg of simvastatin (0.019 mmol) was dissolved in 0.2 mL of pure ethanol, with the subsequent addition of 0.3 mL of 0.1 N sodium hydroxide (NaOH). The solution was heated at 50 °C for 2 h and neutralized with hydrochloric acid (HCl) to pH 7.0. The resulting solution was brought to a final volume of 1 mL with distilled water, and aliquots were stored at −80 °C until use or lyophilized.
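As a quick plausibility check on these quantities (and on reading the parenthetical figure as approximately 0.019 mmol rather than 0.019 mM), the sketch below recomputes the molar amount, the NaOH equivalents and the final concentration from the stated mass and volumes. The molecular weight used is the standard value for the simvastatin lactone and is an assumption not given in the text.

    # Plausibility check of the activation stoichiometry described above.
    # Assumption: simvastatin lactone MW ~= 418.57 g/mol (standard value).
    SIMVASTATIN_MW = 418.57            # g/mol, lactone form

    mass_mg = 8.0                      # mass dissolved (from the text)
    mmol_drug = mass_mg / SIMVASTATIN_MW      # ~0.0191 mmol of drug
    mmol_naoh = 0.3 * 0.1                     # 0.3 mL of 0.1 N NaOH = 0.03 mmol
    final_volume_mL = 1.0                     # brought to 1 mL with water

    conc_mM = mmol_drug / (final_volume_mL / 1000.0)  # mmol/L = mM -> ~19.1 mM
    print(f"{mmol_drug:.4f} mmol drug, {mmol_naoh / mmol_drug:.1f} eq NaOH, "
          f"final ~{conc_mM:.1f} mM")

The roughly 1.6 equivalents of NaOH are consistent with a slight base excess for hydrolyzing the lactone ring.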
Synthesis of Peptide Ligands
All peptides were prepared via the solid-phase peptide synthesis (SPPS) technique [24] using standard Fmoc-L-amino acids on a CS136X peptide synthesizer (CSBio, Mountain View, CA, USA). Fmoc deprotection was performed twice using 20% piperidine in DMF for 5 min. The carboxylic acid functional group (-COOH) of each amino acid was activated with the coupling reagent HCTU in the presence of DIPEA for 5 min and added to the resin for coupling with constant shaking for 30 min at room temperature (RT). The resin was washed three times with DMF and DCM, dried and cleaved with a cocktail of TFA/triisopropylsilane/water (94:3:3). After cleavage, the resin was removed by means of filtration and washed twice with neat TFA, and the filtrate was bubbled with nitrogen for TFA removal. Afterwards, cold diethyl ether was added to precipitate the peptide, the suspension was centrifuged (4000 rpm, 10 min) and the ether was decanted. The cleaved peptide was dissolved in 0.1% TFA in GdmCl and lyophilized.
For the synthesis of fluorescently labelled peptides, Fmoc-L-Lys(Alloc)-OH was coupled at the C-terminus of the peptide chain for labelling with FITC, allowing the free amine (NH2) of Lys to react with the fluorochrome.
Lyophilized crude peptides were dissolved in acetonitrile/water with 0.1% TFA and purified by means of preparative RP-HPLC; peaks were collected and characterized by means of electrospray ionization mass spectrometry (ESI-MS) performed on an LCQ Fleet Ion Trap Mass Spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). Peptide masses were calculated from the experimental mass-to-charge ratios (m/z) of all observed multiply charged species.
Synthesis of Carboxylated Polymer for Functionalization
For the synthesis of functionalized polymeric micelles (FPMs), Pluronic F127 was carboxylated (F127-COOH) by means of the maleic anhydride method [25] to assist with peptide ligand conjugation. F127 and maleic anhydride (1:11 ratio) were dissolved in distilled chloroform and allowed to react for 24 h under stirring at 70 °C in a condensation system to avoid any loss of solvent. The solution was concentrated and poured twice into an excess of ice-cold diethyl ether to precipitate the reaction product. F127-COOH was dried via vacuum dehydration and collected as a white powder.
Production of Polymeric Micelles (PMs)
The synthesis of PMs was based on the thin-film hydration technique. Briefly, Pluronic F127 polymer was weighed and dissolved in an organic mixture of methanol/ethanol (1:1), which was removed under vacuum in a rotary evaporator, and the resulting thin film was left to dry overnight (O/N) at RT to eliminate any remaining solvent. The film was hydrated with PBS at RT and the aqueous solution was vortexed for 5 min, allowing the polymers to self-assemble into micelles. For PM functionalization, micelles were produced with a mixture of F127 and F127-COOH polymers (8:2 ratio), incubated with EDC dissolved in water (polymer/EDC ratio 1:1.5) and stirred for 30 min at RT to activate the COOH groups. To conjugate the activated PMs with the peptide ligands, the native chemical ligation technique was used: synthesized peptides, modified with a Cys residue at the N-terminus, were solubilized in PBS, added to the PM solution (peptide/PM ratio 1:100) and incubated under stirring for 2 h at RT. The resulting dispersion of PMs or FPMs was filtered through a 0.22 µm syringe filter for sterilization and removal of any aggregates. All types of nanoparticles were lyophilized for long-term storage at RT until use.
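To make the stated mixing ratios concrete, the sketch below computes the component amounts for a batch of a given total polymer mass. Treating all ratios as mass ratios and the example batch size are assumptions for illustration only; the text does not state the basis of the polymer/EDC ratio.

    def fpm_batch(polymer_total_mg: float) -> dict:
        """Component masses for one FPM batch at the ratios given in the text.

        Assumption: all ratios are treated as mass ratios.
        """
        f127 = polymer_total_mg * 8 / 10       # plain Pluronic F127 (8:2 mix)
        f127_cooh = polymer_total_mg * 2 / 10  # carboxylated polymer
        edc = polymer_total_mg * 1.5           # polymer/EDC ratio 1:1.5
        peptide = polymer_total_mg / 100       # peptide/polymer ratio 1:100
        return {"F127_mg": f127, "F127-COOH_mg": f127_cooh,
                "EDC_mg": edc, "peptide_mg": peptide}

    print(fpm_batch(100.0))  # e.g., a hypothetical 100 mg polymer batch

The 1:100 peptide/polymer mass ratio is consistent with the 100 mg/mL Pluronic F127:1 mg/mL peptide composition reported later in the Results.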
For internalization studies, fluorescent micelles were synthesized using 10% 5-DTAF-labelled polymer, obtained via the conjugation of Pluronic F127 polymer with 5-DTAF in an aqueous medium via nucleophilic aromatic substitution through an addition-elimination mechanism.
For the synthesis of drug-loaded micelles, activated simvastatin was dissolved at the desired concentration (20 or 40 mg/mL) in the organic methanol/ethanol mixture together with the polymer before the solvent evaporation step.
Physicochemical Characterization of PMs
To confirm correct functionalization, FPMs were analyzed by means of Fourier-transform infrared spectroscopy (FTIR) at the Preparation and Characterization of Soft Materials Service of the Institut de Ciència de Materials de Barcelona (Barcelona, Spain) using a Spectrum One FT-IR Spectrometer (PerkinElmer, Inc., Waltham, MA, USA) with an energy range of 450-4000 cm−1, equipped with the Universal Attenuated Total Reflectance (UATR) accessory.
The particles' mean hydrodynamic diameter and polydispersity index were measured by means of dynamic light scattering (DLS), and the zeta potential was measured via laser Doppler micro-electrophoresis using a Zetasizer Nano (Malvern Instruments, Malvern, UK) at an angle of 173°, with a measurement range of 0.3 nm-10 µm and a sensitivity of 0.1 mg/mL.
Particle shape and size were observed by means of transmission electron microscopy using a high-performance, high-contrast 120 kV JEM-1400Flash Electron Microscope (JEOL Ltd., Tokyo, Japan) at the Electron Microscopy Service of the Universitat Autònoma de Barcelona (Cerdanyola del Vallès, Spain). For visualization, samples were placed on a copper carbon-coated grid and negatively stained with uranyl acetate for 1 min at RT. Gatan software V 2.32.888.0 (Gatan, Inc., Pleasanton, CA, USA) was used to process the images and obtain measurements from the transmission electron micrographs.
Encapsulation Efficiency
The encapsulation efficiency (EE) of simvastatin for each formulation was calculated according to Equation (1). The free drug present in the aqueous phase of the micelle synthesis was separated via centrifugal filtration (10,000 rpm, 10 min, 4 °C).
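Equation (1) itself is not reproduced in this version of the text. The sketch below implements the standard indirect encapsulation-efficiency formula that is consistent with the description (free drug measured in the aqueous filtrate); it should be read as a plausible reconstruction, not necessarily the authors' exact equation:

    def encapsulation_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
        """EE% = (total drug - free drug in filtrate) / total drug * 100.

        Standard indirect method; assumed here, since Equation (1) is not
        shown in this version of the text.
        """
        return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

    # Hypothetical values: 20 mg loaded, 0.8 mg found free -> 96.0% EE,
    # in line with the >95% EE reported in the Results.
    print(encapsulation_efficiency(20.0, 0.8))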
In Vitro Drug Release Assay
The in vitro release profile of simvastatin from FPMs was assessed via the regular dialysis method, placing FPM-Sim (20 mg/mL) inside a Spectra/Por Float-A-Lyzer G2 dialysis device (MWCO 20 kDa; Spectrum Laboratories, Inc., Rancho Dominguez, CA, USA) immersed in PBS pH 7.4 (1:100 dilution) and maintained at 37 °C under stirring. A 500 µL sample of release medium was collected at predetermined time points (0.25, 0.5, 1, 3, 6, 9, 12, 24, 48 and 72 h) for simvastatin quantification via UPLC-MS/MS (Xevo TQ Absolute), and this volume was replaced with fresh buffer at each time point. All formulations were analyzed in duplicate.
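Because each sampled aliquot is replaced with fresh buffer, the cumulative amount released has to be corrected for the drug already removed at earlier time points. The sketch below shows this standard correction; the bath volume and the concentration series are placeholders, not measured values from the study:

    def cumulative_release(concs_ug_ml, v_total_ml, v_sample_ml=0.5):
        """Cumulative drug released (ug) at each sampling time.

        concs_ug_ml: drug concentration measured in each aliquot, in order.
        Each aliquot of v_sample_ml removes drug that must be added back to
        later cumulative totals.
        """
        released = []
        removed_so_far = 0.0  # drug removed in earlier aliquots (ug)
        for c in concs_ug_ml:
            released.append(c * v_total_ml + removed_so_far)
            removed_so_far += c * v_sample_ml
        return released

    # Hypothetical example: slowly rising concentrations in a 50 mL bath.
    print(cumulative_release([0.1, 0.2, 0.3], v_total_ml=50.0))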
Isolation and Culture of Primary Rat Liver Cells
All liver cells were isolated from healthy rats as follows.
LSECs and Kupffer cells
The liver was perfused through the portal vein for 10 min at a flow rate of 20 mL/min at 37 °C with Hanks' Balanced Salt Solution (HBSS; Sigma-Aldrich, Merck KGaA, Darmstadt, Germany) without calcium and magnesium, containing 12 mM HEPES (pH 7.4), 0.6 mM EGTA, 0.23 mM BSA and 1% heparin. Next, the liver was perfused with 0.15 mg/mL collagenase A (Roche, Merck KGaA, Darmstadt, Germany) in HBSS containing 12 mM HEPES (pH 7.4) and 4 mM calcium chloride dihydrate (CaCl2·2H2O) for 30 min at a flow rate of 5 mL/min at 37 °C, then excised and digested ex vivo with the same buffer for 10 min at 37 °C under constant agitation. Cells were passed through a 100 µm nylon filter, collected in cold Krebs' buffer and centrifuged at 50× g for 3 min at 4 °C to eliminate hepatocytes. The supernatant was then centrifuged at 800× g for 10 min, and the pellet was resuspended in cold PBS and centrifuged at 800× g for 25 min through a two-step 25-50% Percoll gradient (Sigma-Aldrich, Merck KGaA, Darmstadt, Germany) at 4 °C. The gradient interface enriched in LSECs and Kupffer cells was collected, rinsed with PBS and centrifuged at 800× g for 10 min. The pellet was resuspended in pre-warmed, fully supplemented RPMI medium (10% FBS, 1% L-glutamine, 1% penicillin-streptomycin, 1% amphotericin B, 0.1 mg/mL heparin and 0.05 mg/mL endothelial cell growth supplement), seeded into a culture dish and incubated for 30 min (37 °C, 5% CO2). Kupffer cells were separated from LSECs through their selective adherence to the non-coated dish, washed with PBS and maintained in RPMI at 37 °C (5% CO2). Non-adherent LSECs were collected, seeded into a collagen-coated culture dish and incubated for 45 min (37 °C, 5% CO2). Finally, they were washed with PBS and maintained in RPMI (37 °C, 5% CO2) [26].
Hepatic Stellate Cells
HSCs were isolated by perfusing the rat liver through the portal vein with Gey's Balanced Salt Solution (GBSS; Sigma-Aldrich, Merck KGaA, Darmstadt, Germany) at a flow rate of 20 mL/min at 37 °C. The liver was then perfused with 1.5 mg/mL pronase E (Roche, Merck KGaA, Germany), 0.15 mg/mL collagenase A and 0.05 mg/mL DNase I (Roche, Merck KGaA, Germany) in GBSS for 30 min at a flow rate of 5 mL/min at 37 °C. The digested liver was excised and further digested ex vivo with pronase E (0.4 mg/mL) and collagenase A and DNase I (0.1 mg/mL) for 10 min at 37 °C under agitation. The resulting suspension was filtered through a 100 µm nylon filter and centrifuged at 50× g for 4 min at 21 °C. The supernatant was centrifuged at 800× g for 5 min to eliminate the hepatocytes, and the pellet was resuspended in GBSS and centrifuged in Optiprep Density Gradient Medium (11.5%) (Sigma-Aldrich, Merck KGaA, Darmstadt, Germany) at 1400× g for 21 min. The fraction enriched in HSCs was collected, rinsed with GBSS and centrifuged at 800× g for 5 min. The pellet was resuspended in pre-warmed, fully supplemented Iscove's modified Dulbecco's medium (IMDM) (10% FBS, 1% L-glutamine, 1% penicillin-streptomycin and 1% amphotericin B), seeded into a culture dish and incubated O/N at 37 °C (5% CO2). The next day, HSCs were washed with PBS and maintained in IMDM (37 °C, 5% CO2) [27].
Cellular Uptake of Peptide Ligands and Micelles
To determine the in vitro uptake of the peptide ligands and FPMs, healthy LSECs were seeded into a 96-well plate and treated with 15 µL of FITC-peptide ligands (0.75 mg/mL) or 10 µL of 5-DTAF-labelled FPMs (100 mg/mL polymer) for different durations: from 1 min to 1 h (peptide ligands) or 4 h (FPMs). Untreated LSECs served as a negative control. Next, cells were washed with PBS and incubated with 30 µL of 10× trypsin for 5-10 min at 37 °C. Once the cells were detached, 120 µL of PBS with 5% FBS and DAPI (1:1000 dilution) was added per well, and the plate was read by means of flow cytometry on a BD LSRFortessa Cell Analyzer (BD, Franklin Lakes, NJ, USA), recording the percentage of cells positive for the fluorescent staining of the peptide ligands or FPMs. Each condition was analyzed in triplicate.
For the in vivo uptake of micelles, healthy rats received an intravenous dose of 5-DTAF-labelled PMs or FPMs (100 mg/kg of polymer); untreated rats were used as negative controls. The following day, LSECs, Kupffer cells, HSCs and hepatocytes were isolated and cultured. Cells were then lifted from the plate with 10× trypsin (1× for hepatocytes) for 5-10 min at 37 °C, collected and centrifuged at 800× g for 5 min. The pellet was resuspended in PBS with 5% FBS and DAPI (1:1000 dilution), or propidium iodide (1:50 dilution) in the case of HSCs, and the cells were analyzed by means of flow cytometry. Results were obtained from three to five animals per study group, with three determinations per sample.
Determination of Simvastatin in Muscle Tissue
A single dose of oral (inactive) or intravenous (active) simvastatin, or of PM-Sim, FPM-CD32b-Sim, FPM-CD36-Sim or FPM-CD32b-CD36-Sim, was administered at 20 mg/kg to healthy rats, and simvastatin was measured in muscle 10 h after treatment (n = 3 animals/group). A sample of ground quadriceps femoris was dissolved in methanol/distilled water (1:1 by volume, final concentration 0.2 g muscle/mL) and sonicated (3 × 15 s). The homogenate was centrifuged (13,000× g, 10 min, 4 °C) and the supernatant was processed for simvastatin extraction: 50 µL of muscle homogenate was mixed with 125 µL of acetonitrile and vortexed for 30 s. Then, 25 µL of 5 M ammonium formate (NH4HCO2) buffer (pH 4) was added and the mixture was vortexed again. The sample was centrifuged (13,000× g, 10 min, 4 °C) and the supernatant was analyzed via UPLC-MS/MS (Xevo TQ Absolute) for quantification of active simvastatin.
Maximum Tolerated Dose
To study the safety profile of simvastatin-loaded FPMs, a three-phase maximum tolerated dose assay was conducted, in which the acute toxicity (phases 1 and 2) and subacute toxicity (phase 3) of treatment with the FPM-CD36-Sim formulation (referred to as FPM-Sim) were evaluated. After each phase, serum and liver tissue samples were obtained from the animals for biochemical and histological analysis, respectively. All phases were performed in healthy mice, and the weight and condition of the animals were monitored throughout the process.
Acute Toxicity: Phase 1 and Phase 2
In phase 1, different doses of encapsulated simvastatin (10, 20 and 50 mg/kg) were tested by administering a single intravenous dose and drawing samples at different times (4 h, 48 h and 1 week) post-treatment (n = 2 animals/group). Untreated animals were used as the control group. The dose that showed no toxicity or mortality (FPM-CD36-Sim 10 mg/kg) was further evaluated in phase 2, where it was given intravenously 3 days/week for 2 weeks to monitor the cumulative effect of the drug compared with the control condition (n = 5 animals/group).
Subacute Toxicity: Phase 3
In the subacute toxicity phase, FPM-CD36-Sim 10 mg/kg was administered intravenously 5 days/week for 3 weeks, simulating a longer treatment course. For this phase, a vehicle group receiving intravenous saline injections was included as a control (n = 10 animals/group).
Hemodynamic Measurements
Hemodynamic parameters were measured in fasted conditions (O/N) 90 min after the last treatment administration. Mean arterial pressure (MAP; mmHg) was measured via catheterization of the femoral artery, and portal pressure (PP; mmHg) via ileocolic vein catheterization, using highly sensitive pressure transducers connected to a PowerLab data acquisition device with the physiological data analysis software LabChart 5.0 (ADInstruments, Dunedin, New Zealand). Superior mesenteric artery (SMA) blood flow (SMABF; (mL/min)·100 g) and portal blood flow (PBF; (mL/min)·100 g) were evaluated with a 1.0 mm-diameter ultrasonic perivascular flowprobe connected to a TS420 Perivascular Flow Module (Transonic Systems Inc., Ithaca, NY, USA). SMA resistance (SMAR; (mmHg·min)/(mL·100 g)) and IHVR ((mmHg·min)/(mL·100 g)) were calculated as (MAP − PP)/SMABF and PP/PBF, respectively. Once the hemodynamic study was completed, blood was collected for biochemistry, and liver tissue samples were collected for histological and molecular analysis.
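The two derived resistances are simple ratios of the measured pressures and flows. The sketch below transcribes the definitions above directly; the input values are placeholders for illustration, not data from the study:

    def smar(map_mmhg: float, pp_mmhg: float, smabf: float) -> float:
        """SMA resistance: (MAP - PP) / SMABF, in (mmHg*min)/(mL*100 g)."""
        return (map_mmhg - pp_mmhg) / smabf

    def ihvr(pp_mmhg: float, pbf: float) -> float:
        """Intrahepatic vascular resistance: PP / PBF."""
        return pp_mmhg / pbf

    # Hypothetical rat: MAP 110 mmHg, PP 12 mmHg, SMABF 5, PBF 4.
    print(smar(110.0, 12.0, 5.0))   # -> 19.6
    print(ihvr(12.0, 4.0))          # -> 3.0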
Biochemical Analysis
Fasting blood and serum samples were analyzed to determine creatinine, total bilirubin, aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), creatine kinase (CK), total cholesterol, triglyceride and albumin values. Samples were measured in the automated CORE laboratory of Hospital Vall d'Hebron on the ATELLICA Solution analytical platform, using standard CE-marked IVDR diagnostic kits provided by Siemens Healthineers (Erlangen, Germany).
Sirius Red Staining
Liver samples were fixed in 4% paraformaldehyde, embedded in liquid paraffin at 65 °C and sectioned into 4 µm-thick slices. Once hydrated, samples were dried and stained with 0.1% Picro-Sirius red for 1 h at RT under gentle agitation. Samples were then mounted with DPX rapid mounting medium (Panreac Química SLU, Castellar del Vallès, Spain).
Gene Expression Analysis
Healthy LSECs were treated O/N (37 °C, 5% CO2) with free activated simvastatin or PM/FPM-Sim at a drug concentration of 2.5 µM; control wells were treated with empty (Ø) micelles. Liver samples from treated rats were collected in RNAlater Stabilization Solution (Invitrogen, Thermo Fisher Scientific, USA) and kept for 1 week at 4 °C. Total RNA was extracted from liver cells or tissue and converted to cDNA. From each sample, 20 ng of cDNA was amplified with specific TaqMan probes for COL1A1 (collagen type I alpha 1 chain; Rn01463848_m1) and KLF2 (Rn01420496_gH). Relative gene expression was normalized to GAPDH (glyceraldehyde-3-phosphate dehydrogenase; Rn99999916_s1). Each sample was analyzed in triplicate.
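The text states that target-gene expression was normalized to GAPDH. The 2^−ΔΔCt calculation sketched below is the usual way to do this for TaqMan data, but the exact quantification model used by the authors is an assumption here, and the Ct values are hypothetical:

    def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
        """Fold change of a target gene vs. a control condition (2^-ddCt),
        normalized to GAPDH as the reference gene."""
        d_ct_sample = ct_target - ct_gapdh          # dCt in the treated sample
        d_ct_control = ct_target_ctrl - ct_gapdh_ctrl  # dCt in the control
        return 2.0 ** -(d_ct_sample - d_ct_control)

    # Hypothetical Ct values: COL1A1 down-regulated ~2-fold vs. control.
    print(relative_expression(24.0, 18.0, 23.0, 18.0))  # -> 0.5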
Statistical Analysis
IBM SPSS Statistics 20 (IBM, Armonk, NY, USA) was used for statistical analysis. Quantitative results are expressed as mean ± standard error of the mean (SEM) and were compared using an unpaired Student's t-test (between two groups) or one-way analysis of variance (ANOVA) with Tukey's HSD post hoc correction (among three or more groups). When data were not normally distributed, nonparametric tests were applied, using the Mann-Whitney U test to compare two groups and the Kruskal-Wallis test for multiple comparisons. Pearson's correlation coefficient was calculated to assess the correlation between two parameters. A p-value ≤ 0.05 was considered statistically significant.
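For readers who want to reproduce this decision logic outside SPSS, the sketch below mirrors it with SciPy. The Shapiro-Wilk normality screen and the alpha threshold are assumptions; the text names the tests but not how normality was assessed:

    import numpy as np
    from scipy import stats

    def compare_two_groups(a, b, alpha=0.05):
        # Assumed normality screen; the paper does not specify one.
        normal = (stats.shapiro(a).pvalue > alpha and
                  stats.shapiro(b).pvalue > alpha)
        if normal:
            return stats.ttest_ind(a, b)   # unpaired Student's t-test
        return stats.mannwhitneyu(a, b)    # nonparametric alternative

    def compare_many_groups(groups, alpha=0.05):
        normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
        if normal:
            return stats.f_oneway(*groups)  # one-way ANOVA (Tukey post hoc separate)
        return stats.kruskal(*groups)

    rng = np.random.default_rng(0)
    a, b = rng.normal(10, 2, 8), rng.normal(12, 2, 8)
    print(compare_two_groups(a, b))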
FPM Physicochemical Characterization
CD32b, CD36 and ITGB3 peptide ligands, with or without fluorescein isothiocyanate (FITC) labelling, and scrambled versions of the CD32b and CD36 peptide ligands were synthesized (Figure 1A,B) via Fmoc-solid-phase peptide synthesis (Fmoc-SPPS) using standard methods. Internalization of the FITC-labelled peptides in healthy LSECs showed high in vitro cellular uptake, with more than 50% of LSECs internalizing the three ligands after 1 min of incubation; at 1 h, positive cells exceeded 80% in all cases (Figure 1C).
The physicochemical characteristics of the FPMs are summarized in Table 1. The mean hydrodynamic diameter of the FPMs varied according to the ligand used for functionalization, ranging from 180 to 290 nm. This apparent nanoparticle size reflects the increase in functional groups and charges from the carboxylate groups of the polymer and the peptide, which attract water molecules to the surface of the PMs; the hydrodynamic diameter is the sum of the geometric size plus the layer of water molecules on the particle surface. Digital image analysis of transmission electron microscopy photographs demonstrated a much smaller mean diameter (≈20 nm), a size that hampers elimination via the reticuloendothelial system. All formulations had mid-range polydispersity index values (0.35 to 0.53) and a zeta potential close to neutrality, positive for all nanoparticles except FPM-ITGB3. Finally, the encapsulation efficiency of active simvastatin (20 mg/mL) within FPMs (100 mg/mL Pluronic F127:1 mg/mL peptide) (FPM-Sim) was greater than 95% for all formulations. The stability of FPM-Sim was confirmed by the cumulative percentage of drug released in a medium simulating physiological conditions (37 °C, pH 7.4) remaining below 1.5% in all types of functionalized micelles for at least 72 h (Figure 2B).
FPM In Vitro Internalization in LSECs
Flow cytometry was used to measure the in vitro internalization of FPMs, labelled with 5-DTAF (5-(4,6-dichlorotriazinyl)aminofluorescein), in LSECs isolated from healthy rats. Cell uptake was more efficient for nanoparticles functionalized with the specific ligands for CD32b, CD36 and ITGB3 than for those functionalized with the scrambled versions ScrCD32b and ScrCD36, demonstrating the importance of specific recognition for correct targeting. Figure 3A shows that after 5 min of treatment, a difference in uptake between the two types of functionalization was already evident. This difference became significant at 30 min and, at 4 h, the internalization of specific FPMs was over 80% (FPM-CD32b: 81%, FPM-CD36: 84%, FPM-ITGB3: 87%), whereas the scrambled FPMs were internalized by only 55% (FPM-ScrCD36) and 28% (FPM-ScrCD32b) of the total LSECs.
FPM In Vitro Functionality in LSECs
The functional effect of simvastatin was measured by following the expression of KLF2 in isolated primary LSECs treated with simvastatin, either encapsulated (PMs and FPMs) or in its free form. In its encapsulated form, FPM-Sim was as effective as free simvastatin or PM-Sim, causing a significant overexpression of KLF2 in healthy LSECs (Figure 3B). In contrast, no changes in KLF2 expression were observed when LSECs were treated with empty nanoparticles, ruling out any functional effect of the empty nanodevices.
FPM In Vivo Internalization in Liver Cells
Healthy rats were treated with the three 5-DTAF-labelled FPMs, and internalization in the four main types of hepatic cells was quantified by means of flow cytometry and compared with non-functionalized PMs (Figure 4A). The binding of specific peptide ligands on the surface of the polymeric micelles increased the delivery of these nanoparticles to LSECs by more than 13-fold (FPM-CD32b: 42.64%, FPM-CD36: 49%, FPM-ITGB3: 46%) compared with non-functionalized ones (3%). In Kupffer cells, there was also a significantly higher percentage of positive cells for FPM-CD32b compared with PMs (p = 0.041), but the internalization rate of CD32b micelles only reached 19% in liver resident macrophages. The other two FPM formulations also increased cellular uptake in Kupffer cells, but more discreetly. By contrast, HSCs displayed virtually no internalization of any kind of nanoparticle after in vivo treatment. Finally, hepatocytes showed notable internalization of FPM-ITGB3 and PMs, with 47% and 23% of cells being positive, respectively. Given this undesired high internalization of FPM-ITGB3 in hepatocytes, we discarded this formulation for the subsequent in vivo studies, selecting the CD32b and CD36 peptide ligands either alone (FPM-CD32b or FPM-CD36) or in combination in a mixed functionalization (FPM-CD32b-CD36).
Simvastatin Content in Muscle after In Vivo Administration
Considering that muscle toxicity is known to be the main adverse effect of oral simvastatin, the presence of active simvastatin in muscle from healthy treated rats was quantified via UPLC-MS/MS (ultra-high-performance liquid chromatography coupled to tandem mass spectrometry), comparing the administration of free (oral and intravenous) and encapsulated (PM and FPM) formulations. Higher amounts of active simvastatin in muscle were observed in the group of animals treated intravenously, followed by those receiving oral simvastatin, in contrast with the very low values obtained when simvastatin was loaded in PMs and FPMs (Figure 4B).
Safety and Toxicity Assay
The maximum tolerated dose was determined in healthy mice in a three-phase study using FPM-CD36 as the reference nanoparticle to assess the toxicity of simvastatin-loaded FPMs. The first and second phases of the study, corresponding to acute protocols (Figure 5A,B), allowed us to establish a well-tolerated dose of encapsulated simvastatin at 10 mg/kg, avoiding the temporary elevations of AST, ALT and CK seen in some individuals at higher doses. Finally, a third, subacute protocol was performed, in which mice were treated with FPM-CD36-Sim 10 mg/kg 5 days/week for 3 weeks. A group of control animals received intravenous injections of saline (vehicle group) to normalize for the undesired effects of animal handling. Animal behavior was not altered in any respect, but both vehicle and treated mice experienced a similar weight loss during the 3 weeks of the study, probably due to the manipulation associated with intravenous injection (Figure 6A). Biochemical parameter analysis revealed no signs of liver or muscle toxicity in animals treated with FPMs (Figure 6B). The histological analysis showed that only one animal receiving encapsulated simvastatin had a higher lobular inflammation score than the individuals in the vehicle group (Figure 6C).
All subsequent in vivo experiments were performed with a 10 mg/kg dose of simvastatin-loaded FPMs.
FPM Efficacy in an Advanced Model of Cirrhosis (BDL)
In the BDL model, mimicking decompensated cirrhosis, the efficacy of FPM-Sim compared with PM-Sim and oral simvastatin was evaluated after 1 week of daily treatment. After the administration of seven doses, oral simvastatin caused the most significant body weight decrease compared with untreated BDL animals. This reduction was also greater than the moderate weight loss observed in all groups receiving encapsulated simvastatin, despite the stress caused by animal manipulation during the injection of PMs or FPMs (Figure 7A). At the biochemical level, oral administration of simvastatin induced the highest values of AST, ALT and CK (Supplementary Table S1). On the other hand, in this advanced cirrhotic model, the use of nanoparticles generated a significant increase in triglycerides and a discrete increase in total cholesterol when compared with BDL rats that were untreated or treated with oral simvastatin (Supplementary Table S1).
The hemodynamic studies showed that animals treated orally with simvastatin presented a significant reduction in PP compared with the untreated control group (a reduction of 3.92 mmHg; p = 0.004) and with respect to PM-Sim (p = 0.037) (Table 2). A reduction in PP was also observed in the animals treated with the functionalized micelles FPM-CD32b-Sim and mixed-FPM-Sim. The remaining hemodynamic parameters studied were not affected.
FPM Efficacy in a Non-Decompensated Model of Cirrhosis (TAA)
A less severe liver disease model (the 8-week TAA model) was generated, with rats receiving treatment 5 days/week during the last 2 weeks of model induction. Simvastatin did not cause weight loss, except for a slight reduction in body weight of less than 2% in the FPM-CD32b and mixed-FPM treatment groups, most probably associated with the stress induced by intravenous administration (Figure 7B). Compared with untreated BDL rats, untreated TAA individuals showed a marked overall decrease in liver transaminase levels, as well as in cholesterol and triglycerides. However, there was still a significant elevation of triglycerides in the TAA groups treated with polymeric micelles (Supplementary Table S2).
On the other hand, as shown in Table 3, treatment with FPM-CD32b-Sim significantly decreased PP by more than 2 mmHg compared with the untreated control group (p = 0.005), and with respect to oral simvastatin (p = 0.037) and FPM-CD36-Sim (p = 0.014). FPM-CD32b-CD36-Sim also decreased PP relative to the untreated group (p = 0.042). In addition, a slight but consistent decrease in IHVR and SMAR in all groups treated with encapsulated simvastatin formulations suggests an overall improvement in portal hypertension.
To elucidate the possible causes of this improvement in portal hemodynamics, collagen fibers were detected via Sirius red staining in liver samples from these animals. Figure 8A depicts the quantitation of the fibrotic area in all groups, showing that TAA rats treated with the FPM-CD32b-Sim formulation presented the lowest percentage (2.05%) compared with the other groups; this result is consistent with the improvement in PP caused by nanoparticles functionalized with CD32b. Indeed, PP and fibrotic area showed a significantly positive correlation (r = 0.580; p < 0.001), portraying the relationship between scar tissue formation in the liver, IHVR and portal hypertension (Figure 8C). Likewise, analysis of gene expression in total liver samples revealed downregulation of the COL1A1 gene, encoding the major component of type I collagen, when rats received simvastatin encapsulated in FPM-CD32b (Figure 8D). Finally, the protein expression levels of endothelial dysfunction markers were assessed in total liver samples: despite there being no significant difference in KLF2 expression between treated and untreated rats, activation of eNOS was significantly promoted when simvastatin was administered in FPM-CD32b-CD36 and FPM-CD32b, as shown in Figure 9.
CD32b Expression in LSECs from Different Models of Liver Disease
Dedifferentiation of LSECs in liver disease is associated, in the most recent literature, with loss of the LSEC-specific marker CD32b [29]. To assess the degree of CD32b expression remaining in the LSECs of the liver disease models used in this study, immunohistochemistry for CD32b was carried out on frozen liver sections. Analysis of the CD32b staining area surrounding the liver sinusoids showed that this specific marker is not equally lost in the two models, with higher expression levels in the TAA model than in the BDL model (Figure 10).
Discussion
In an attempt to improve the therapeutic window of statins for the treatment of advanced CLD, this work addresses the optimization of a simvastatin delivery system with enhanced targeting of LSECs. The optimization included the design of peptide ligands and the functionalization of Pluronic-based PMs loaded with simvastatin. We studied the in vitro characteristics in primary cultures of liver cells and the in vivo effects in animal models of CLD, demonstrating the increased therapeutic potential of the loaded drug through the reduction of portal hypertension and liver fibrosis in a non-decompensated CLD animal model.
To increase the accumulation of nanoparticles in LSECs, peptide ligands recognizing three LSEC receptors were designed and synthesized according to expression criteria (present in functional, dysfunctional or both LSEC differentiation stages) and their ability to enter primary LSECs.
The coupling of these peptides on the surface of PMs generated three FPM formulations with different features but homogeneous particle shape and size, and a zeta potential close to neutrality. This was due to the polyethylene glycol (PEG) at the hydrophilic end of the polymer, which prevents aggregation between the uncharged nanoparticles [30]. The effect of an encapsulated drug can be maximized, and its side effects minimized, only if the micelle is stable enough to retain most of the drug until the target is reached. Our results indicated that the FPMs were thermodynamically stable, since virtually no simvastatin was released for at least 72 h under neutral pH conditions simulating the blood circulation [31,32].
In vitro internalization in primary cultures of LSECs of the three FPM formulations with the specific ligands for the CD32b, CD36 and integrin αVβ3 receptors demonstrated that specific ligand-receptor interactions plus passive targeting promoted uptake by a greater number of healthy LSECs than the micelles conjugated with their scrambled variants.
Simvastatin has been shown to reduce portal hypertension through a putative reduction in IHVR by means of several mechanisms, including the induction of KLF2 expression, which is related to the stimulation of a vasoprotective phenotype in LSECs [11,12,33]. This effect was confirmed by the expected overexpression of KLF2 in healthy isolated LSECs treated with free or encapsulated simvastatin. We also ruled out any possible effect of the empty functionalized nanodevices, which act only as an inert vehicle for the delivery of the loaded drug.
Intravenous treatment of healthy rats with the different fluorescent formulations confirmed that, in all cases, functionalization confers greater efficiency in entering the main liver cells than passive targeting; the capture of FPMs was clearly superior to that achieved by non-functionalized PMs, and LSECs were the liver cells with the highest uptake. Furthermore, in endothelial cells, all three types of functionalization were equally effective. The in vivo cell internalization experiments also confirmed that the receptors selected as LSEC targets for our nanoparticles are also expressed in healthy Kupffer cells and hepatocytes, allowing functionalized nanoparticles to enter both cell types quite efficiently. This was expected for CD36 and integrin αVβ3, as their hepatic expression is elevated not only in LSECs but also in other hepatic cell types [34,35]. However, CD32b, an FcγR conferring on LSECs the highest endocytic capacity of any cell in the human body, is known to be the most specific marker of these cells in the liver. Yet, recent studies have shown that CD32b is expressed not only in LSECs but also in Kupffer cells, with liver expression levels of 90% and 10%, respectively [36].
The differential accumulation of simvastatin in muscle depending on the formulation used was a key factor in this study. It is well known that nanoparticles tend to accumulate passively in the liver because this organ is part of the reticuloendothelial system [37,38]. Accordingly, nanoparticle encapsulation decreased the amount of active drug detected in muscle compared with free simvastatin administered either orally or intravenously. This effect is due to the lower uptake of nanoparticles by skeletal muscle compared with the higher natural retention of nanoparticles in the liver, regardless of their functionalization, since both PMs and FPMs showed an equal reduction in the amount of active simvastatin detected in muscle.
To determine the dose and schedule of FPM-Sim for the efficacy studies in experimental models of liver disease, the safety profile was assessed in healthy mice by establishing the maximum tolerated dose in a phased assay, testing different simvastatin doses as well as different administration schedules. FPM-Sim at the high doses of 20 and 50 mg/kg resulted in transient elevations of liver transaminases and CK in some individuals, and these doses were discarded to avoid further toxicity in the animal models. In contrast, the 10 mg/kg dose showed no evidence of toxicity in any of the phases and was carried forward to the subsequent efficacy studies in rat experimental models of liver disease.
The first efficacy evaluation, in an advanced CLD model (BDL), was aimed at improving the PP-lowering effect achieved by PM-encapsulated simvastatin in a previous study [18] using the new functionalized formulations, while maintaining lower levels of toxicity compared with free simvastatin. Encapsulated simvastatin resulted in significantly less body weight reduction at the end of treatment in cirrhotic animals than free simvastatin, as well as lower levels of hepatic and muscular toxicity markers. On the other hand, the use of nanoparticles triggered a significant increase in serum triglycerides and, more moderately, in total cholesterol. This may be explained by the fact that the PM is made from Pluronic F127 (poloxamer 407), which has been shown to induce a transient hyperlipidemic effect caused by a temporary, dose-dependent reduction in the number of fenestrae in LSECs [39]. Consequently, given that BDL animals at baseline have an advanced capillarized endothelium and a marked reduction in fenestrae due to the endothelial dysfunction induced by bile accumulation [40], the addition of an extra element that closes the fenestrae, even temporarily, may impair the transendothelial transfer of lipoproteins from the sinusoidal blood to the extracellular space of Disse.
In terms of efficacy, oral simvastatin, although causing greater toxicity, was the most effective treatment in significantly reducing PP compared with the untreated control group, while the functionalized nanoparticles FPM-CD32b and mixed-FPM achieved only a modest improvement in the PP-lowering effect of simvastatin compared with the non-functionalized PMs. It is worth mentioning that the BDL model used in the present study, mimicking advanced CLD, resulted in severely affected animals with unusually high PP levels in the control group and increased liver transaminase values.
We then performed a second in vivo efficacy study in a model of cirrhosis induced by TAA, a widely used model owing to its high reproducibility and homogeneity of results, its low mortality and its lower systemic toxicity [41]. Moreover, the model was developed over only 8 weeks, yielding cirrhotic animals at a non-decompensated stage of the disease. Simvastatin was administered in the different formulations during the last 2 weeks of model generation. In this non-decompensated model, simvastatin induced lower toxicity in all respects: the body weight of the animals was not reduced, and the slight decrease in body weight observed in the groups that received intravenous treatment might be associated with the stress caused by the administration route rather than with the drug itself. Furthermore, transaminase and CK values were also lower in all TAA animals compared with BDL. In addition, the increase in total cholesterol and triglycerides observed in BDL rats was of lower magnitude, which could be explained by the lesser degree of capillarization expected in the more moderate CLD model represented by the 8-week TAA animals.
The reduced toxicity observed in this model was accompanied by a lower PP in the untreated control group compared with the previous BDL model. Treatment with encapsulated simvastatin further reduced PP, with a significant difference for the FPM-CD32b and mixed-FPM formulations. Moreover, the PP-lowering effect achieved by FPM-CD32b-Sim was also significant versus FPM-CD36-Sim and oral free simvastatin. To explain this effect, we evaluated the correlation between PP and fibrosis, confirming that the greater the area of liver fibrosis, the greater the PP, and vice versa [3,42]. Collagen expression followed the same trend and, in the case of FPM-CD32b-Sim treatment, was reduced to half that of the untreated TAA group. Quantification of proteins involved in the signaling pathways related to statin-induced vasoprotection [14,43,44] showed unaltered expression of KLF2 between treated and untreated animals, suggesting that the dose used in the in vivo treatments is not sufficient to increase the levels of this transcription factor. Nonetheless, in line with the improvement in PP, the p-eNOS/eNOS ratio was significantly elevated by FPM-CD32b-Sim and mixed-FPM-Sim, suggesting an amelioration of endothelial dysfunction through the stimulation of nitric oxide production via the targeted release of simvastatin into the LSECs of TAA-induced cirrhotic rats.
We interpret the greater efficacy demonstrated by the FPM-CD32b-Sim nanoparticles in the TAA model as a consequence of enhanced targeting by the specific peptide ligand, owing to higher levels of CD32b expression in LSECs from this model compared with the BDL model. In this regard, immunohistochemical staining for the CD32b antigen in liver sections was significantly higher in TAA than in BDL rats. This is also supported by a recent study from our laboratory [45], in which the percentage of CD32b+ LSECs was estimated by means of cell sorting, establishing that the loss of CD32b during the capillarization process may differ depending on the etiology, disease stage and mechanisms causing liver injury.
One limitation of this work is that the efficacy studies were performed only in male animals. As a proof of concept, we wanted to evaluate FPM efficiency through the ability to reduce portal pressure in cirrhotic individuals. The vast majority of published studies on the hemodynamic disturbances occurring in liver disease use male rats or mice, because female estrogen hormones modify hemodynamic parameters, increasing or decreasing their values randomly and unrelated to liver disease [22]. However, we acknowledge that further studies using both sexes are needed to obtain a complete picture of the efficacy demonstrated by the FPM-CD32b-Sim nanoparticles.
In summary, there are three important observations in this study. First, adequate functionalization of nanoparticles can provide a noticeable benefit over passive targeting alone. Second, even though specific LSEC targets may be gradually lost in pathological situations, it is more effective to choose specificity over alternatives where the target may be abundant but ubiquitously expressed. Third, the stage of the disease is a key factor in the use of PM nanoparticles in liver diseases. Our results indicate that in advanced decompensated models, nanoparticles are able to decrease the toxicity caused by simvastatin, but their effectiveness is limited. By contrast, at a less severe stage of the disease, simvastatin encapsulation increases the hepatic beneficial impact of the nanoparticles and minimizes the possible secondary effects of the poloxamer device.
In conclusion, active targeting of a Pluronic-based nanodevice by means of functionalization with peptide ligands for the specific delivery of simvastatin to LSECs reduces muscle and liver toxicity, but without a clear portal pressure reduction, in a decompensated model of cirrhosis. However, in a non-decompensated model of CLD, functionalization with the CD32b ligand enhances the efficacy of simvastatin, reducing PP values as well as liver fibrosis. This conclusion favors the use of CD32b-functionalized PMs as potential nanotransporters for vasoprotective drugs targeting the sinusoidal endothelial cells of the liver.
Figure 2. Characterization of functionalized polymeric micelles. (A) FTIR spectra of PMs and FPMs with the different peptide ligands (specific and scrambled). The red square indicates the peaks generated by −COOH groups from the modified polymer and −NH2 from bound peptides. (B) In vitro drug release kinetics under physiological conditions. Data are expressed as mean ± SEM. n = 2 per group.
Figure 2 .
Figure 2. Characterization of functionalized polymeric micelles.(A) FTIR spectra of PMs and FPMs with the different peptide ligands (specific and scrambled).The red square indicates the peaks generated by −COOH groups from the modified polymer and −NH 2 from bound peptides.(B) In vitro drug release kinetics under physiological conditions.Data are expressed as mean ± SEM. n = 2 per group.
Figure 4. In vivo internalization of FPMs in liver cells and simvastatin quantitation in muscle. (A) Internalization of nanoparticles in different hepatic cell types (LSECs, Kupffer cells, HSCs and hepatocytes) after intravenous treatment of healthy rats with PMs or FPMs. Results are represented as mean ± SEM; n = 3-5 animals per condition. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001 vs. untreated; # p ≤ 0.05 vs. PMs. (B) Area of active simvastatin determined by UPLC-MS/MS in muscle samples of healthy rats treated with free simvastatin (oral or intravenous) or encapsulated simvastatin (PMs or FPMs). Results are represented as mean ± SEM; n = 3 animals per condition.
Figure 5. Acute toxicity studies (phases 1 and 2) of the maximum tolerated dose assay. (A) Phase 1: serum AST, ALT and CK levels of untreated healthy mice and after a single dose of FPM-CD36-Sim at 10, 20 or 50 mg/kg, analyzed at 4 h, 48 h and 1 week. Values are plotted as mean ± SEM; n = 2 per group. (B) Phase 2: evolution of body weight (top) of untreated healthy mice and after treatment with intravenous FPM-CD36-Sim 10 mg/kg for 3 days/week for 2 weeks; serum AST, ALT and CK levels at the end of the study (values expressed as box plots; n = 5 per condition) (center); and percentage of individuals with lobular inflammation (bottom).
Figure 6. Sub-acute toxicity study (phase 3) of the maximum tolerated dose assay. (A) Change in body weight of healthy mice treated intravenously with vehicle (saline) or FPM-Sim 10 mg/kg for 5 days/week for 3 weeks. (B) Serum AST, ALT and CK levels after treatment. Values are expressed as box plots; n = 10 per experimental condition. (C) Scoring of lobular inflammation.
Figure 8. Efficacy of functionalized polymeric micelles in an experimental model of thioacetamide-induced cirrhosis. (A) Percentage of fibrotic area in the liver of untreated rats (n = 7) and of rats treated for 2 weeks (5 days/week) with oral simvastatin 10 mg/kg (n = 7), PM-Sim 5 mg/kg (n = 6), FPM-CD32b-Sim 5 mg/kg (n = 7), FPM-CD36-Sim 5 mg/kg (n = 7) or FPM-CD32b-CD36-Sim 5 mg/kg (n = 6). Values are presented as box plots. (B) Representative images of Sirius Red staining at 10× magnification in liver sections. (C) Correlation between portal pressure values and percentage of fibrotic area. The line represents the linear trend; the equation and the r value are displayed on the graph. (D) Relative quantification of COL1A1 mRNA expression via qRT-PCR in total liver samples. GAPDH was used as the endogenous control and results were normalized to the untreated group. mRNA levels are represented as box plots; n = 5 per group.
Figure 10. Immunohistochemical assessment of CD32b expression in two models of liver disease. (A) Representative images of CD32b immunostaining at 10× magnification in liver sections of healthy control, TAA-induced cirrhosis and BDL rat models. Positive staining clearly delineates the liver sinusoids. (B) Bar chart showing immunohistochemical quantitation of CD32b expression levels, represented as mean ± SEM (n = 3 per group).
Table 2. Hemodynamic measurements in 4-week BDL rats after 1 week of daily treatment. Values were taken 90 min after the last dose of treatment and are expressed as mean ± SEM; n = number of rats. ** p ≤ 0.01 vs. untreated; # p ≤ 0.05 vs. PM-Sim. | 2023-10-16T15:04:08.629Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "b1cc151c9b07b5d3a2d4e6400c5f9224ec077205",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/15/10/2463/pdf?version=1697263318",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3d89f36b4ac90f6f7ca9e03796e6c6e3f77f8e0d",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268709251 | pes2o/s2orc | v3-fos-license | Spatial dynamic simulation of beetles in biodiversity hotspots
Introduction: Coleoptera is the most species-rich order of animals with the widest distribution area; however, little is known about its global suitability
Introduction
Biodiversity hotspots, defined as the sites with the highest species diversity or as the most threatened and diverse sites, have been widely used across disciplines to identify priority areas for conservation (Reid, 1998; Ceballos and Ehrlich, 2006; Jha and Bawa, 2006; Willis et al., 2007; Davies, 2010). Under this definition, hotspots are usually described as areas with concentrations of rare and threatened species and at least 70% loss of primary vegetation (Myers et al., 2000). Most of Earth's biodiversity is found in hotspots, which provide refuges or suitable habitat for vascular plants, mammals, amphibians, insects and other taxa (Habel et al., 2019; Kidane et al., 2019; Trew and Maclean, 2021). Currently, 36 biogeographical regions are highlighted as global conservation priorities owing to their exceptional endemism and the great threat to their vegetation integrity (Hrdina and Romportl, 2017). The annual average cost of the global network of conservation zones is estimated at US$ 27.5 billion, representing the largest investment in protection funds (Gössling, 2002; Mittermeier et al., 2011). Further identifying priority conservation zones for species at different geographic scales, with hotspots in mind, helps concentrate and reallocate resources (Jepson and Canney, 2001; Trew and Maclean, 2021). However, few studies have discussed the spatial suitability of species in biodiversity hotspots. Evaluating the distribution suitability of species in hotspots is important for biodiversity conservation. For instance, potential distribution maps for wild strawberry and Ageratina adenophora were obtained by two studies (Yang et al., 2020; Changjun et al., 2021), both of which demonstrated the importance of exploring distributions in hotspots. Farashi and Shariati (2017) studied Iran's biodiversity hotspots using a niche model with a set of buffers for mammal, bird, and reptile species. Exploring habitat suitability for particular species and groups in the context of biodiversity hotspots supports the implementation of targeted measures and the further conservation of biodiversity.
Insects, an often-neglected group, contribute to many functions and services in natural ecosystems (Noriega et al., 2018; Elizalde et al., 2020; Noriega et al., 2020). Some of their contributions, such as pollination, pest control, and nutrient cycling, are of high value (Potts et al., 2016; Dainese et al., 2019; Uhler et al., 2021). For decades, growing evidence has indicated that insect assemblages are undergoing significant changes in biodiversity owing to a suite of anthropogenic stressors and climate change, especially in hotspots (Fattorini, 2011; Cardoso et al., 2020; Halsch et al., 2021; Moir, 2021; Outhwaite et al., 2022). This is attributable to the substantial extinction pressure placed on insect populations by the fragmentation of biodiversity hotspots (Fonseca, 2009; Stork, 2010; Sullivan and Ozman-Sullivan, 2021). Hochkirch (2016) stated that we must preserve invertebrate biodiversity and pay more attention to the insect crisis. Thus, the protection of biodiversity hotspots will facilitate the reproduction and survival of insect taxa (Samways, 2007; Stork and Habel, 2014). Beetles (order Coleoptera) are the most diverse and species-rich insect group, with more than 380,000 described species worldwide (Zhang et al., 2018).
Most beetles rely on forests to survive, meaning that the increasing fragmentation of forest habitats places enormous pressure on them. In addition to climate change, human alteration of natural landscapes is a critical cause of insect biodiversity loss, especially for beetles. We focus on this group not only because it contains the largest number of species, but also because of its high value for ecosystem services. For example, dung beetles perform functions such as nutrient cycling, bioturbation, and secondary seed dispersal, and play an important role in increasing primary productivity and suppressing parasites in livestock (Nichols et al., 2008). Additionally, some beetles, such as ground beetles, serve as bioindicators for evaluating environmental pollution and recovery processes in post-industrial areas, owing to their extreme sensitivity to ecological parameters such as water quality and soil degradation (Elliott, 2008; Ghannem et al., 2018). The critical role of beetles in ecological functions and ecosystems therefore suggests that more attention should be given to beetle conservation. Understanding species distributions helps guide protection; however, there have been few studies of beetle distributions at the macro scale, a topic that has fascinated many scholars. Consequently, exploring the spatial dynamics of the potential suitability distribution of beetles across several time scales may not only facilitate beetle biodiversity conservation, but also help mitigate biodiversity degradation in hotspots.
For decades, the human footprint has spread across the world, posing a huge threat to biodiversity, and thousands of species have lost their homes. As a means of determining global conservation priorities, exploring the spatial dynamics of beetles in hotspots is conducive to the conservation of biodiversity. Recently, species distribution models (SDMs) have been widely used to predict the potential geographic distribution of species, including the maximum entropy (MaxEnt) model, the genetic algorithm for rule-set production (GARP), ecological niche factor analysis (ENFA), and random forest (RF) (Pulliam, 2000; Farashi et al., 2013; Sheridan, 2013; Noriega et al., 2020; Yang et al., 2022). Among these, MaxEnt is preferred for its impressive advantages, such as ease of operation, good performance, short run time, and relatively accurate results (Phillips and Dudík, 2008; Merow et al., 2013). Kong et al. (2021) established a climate distribution model under four climate change scenarios and revealed that isolated, fragmented giant panda populations were more vulnerable to extinction risk than other populations. Chowdhury et al. (2021), using MaxEnt to simulate the seasonal spatial dynamics of butterfly migration, found that approximately 15% of butterflies may be at elevated extinction risk in the tropics and that most migratory butterflies face strong seasonal variation in habitat suitability. To identify stable refugia for relict plant species, Tang et al. (2018) mapped the distribution patterns of relict species in East Asia to identify suitable regions and, combined with an abundance map, obtained long-term refugia. We utilized the same modeling approach to simulate the spatially suitable distribution of beetles in biodiversity hotspots.
Here, we develop current niche models for beetles and identify priority suitable habitat at decadal scales combined with hotspots, which allows us to observe the characteristics of spatial change over the past decades. The objectives of this study are: (1) to simulate the integrated spatial suitability of beetles; (2) to identify the priority suitable habitat and hotspots for beetles at five scales; and (3) to evaluate the congruence between the suitable habitat of beetles and biodiversity hotspots.
2 Material and methods
To improve our results, we utilized 19 bioclimatic variables (WorldClim; https://worldclim.org/) to construct the MaxEnt model. The 19 bioclimatic parameters included annual mean temperature, mean diurnal range, isothermality, and others (Supplementary Table S1), and were derived by spatial interpolation from between 9,000 and 60,000 weather stations (Fick and Hijmans, 2017).
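For readers who wish to reproduce this preprocessing step, a minimal sketch of loading and stacking the 19 bioclimatic layers is shown below. The file paths are hypothetical, and the sketch assumes the layers have been downloaded from WorldClim and share a common grid; it is an illustration, not the authors' actual pipeline.

```python
import numpy as np
import rasterio  # reads GeoTIFF rasters

# Hypothetical local paths to the 19 WorldClim bioclimatic layers,
# assumed to be pre-downloaded and to share one grid/resolution.
paths = [f"worldclim/wc2.1_10m_bio_{i}.tif" for i in range(1, 20)]

layers = []
for p in paths:
    with rasterio.open(p) as src:
        band = src.read(1).astype("float32")  # single-band raster
        if src.nodata is not None:
            band[band == src.nodata] = np.nan  # mask nodata cells
        layers.append(band)

bioclim = np.stack(layers)  # shape: (19, rows, cols)
print(bioclim.shape)
```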
Methods
We implemented the MaxEnt model in R (Phillips et al., 2017) to simulate the spatial dynamics of beetles under current climate conditions. Bioclimatic parameters were used throughout the modeling process to comprehensively evaluate the distribution pattern, in accordance with the importance of bioclimatic factors for species (Chowdhury et al., 2021). For each ten-year scale, we adjusted the model parameters to obtain more accurate results after inputting occurrence data and environmental variables, ultimately obtaining a different parameter assemblage for each scale. The kuenm package in R version 3.6.3 was used to optimize the regularization multiplier (RM) and feature class (FC) parameters. For RM, values were set between 0.5 and 4 (increments of 0.5, eight values in total), and 31 combinations of FC based on L (linear), Q (quadratic), H (hinge), P (product), and T (threshold) were evaluated to determine the final parameter combination. The final parameters used in the MaxEnt model depended on the Akaike information criterion (AICc), significance (partial ROC), and omission rates (E = 5%). In addition, significant models had to meet the following conditions: omission rates ≤ 5% and delta AICc values ≤ 2 (Cobos et al., 2019).
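To make the selection rule concrete, the sketch below re-expresses only the filtering step in Python. kuenm itself is an R package, so this is not the authors' code; the candidate grid follows the text (8 RM values × 31 FC subsets = 248 models) and the evaluation metrics are random placeholders standing in for kuenm's real output.

```python
from itertools import combinations

import numpy as np
import pandas as pd

# Candidate grid: 8 regularization multipliers x 31 feature-class
# combinations (all non-empty subsets of L, Q, H, P, T) = 248 models.
rms = [0.5 * k for k in range(1, 9)]  # 0.5, 1.0, ..., 4.0
fcs = ["".join(c) for r in range(1, 6) for c in combinations("LQHPT", r)]
assert len(rms) * len(fcs) == 248

# Placeholder metrics standing in for kuenm's per-model output
# (omission rate at E = 5% and delta AICc).
rng = np.random.default_rng(0)
results = pd.DataFrame([(rm, fc) for rm in rms for fc in fcs],
                       columns=["rm", "fc"])
results["omission_rate"] = rng.uniform(0.0, 0.15, len(results))
results["delta_aicc"] = rng.uniform(0.0, 10.0, len(results))

# Selection rule from the text: omission <= 5% AND delta AICc <= 2.
best = results[(results["omission_rate"] <= 0.05)
               & (results["delta_aicc"] <= 2.0)]
print(f"{len(best)} of {len(results)} candidate models pass both criteria")
```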
Then, once a suitable parameter combination had been obtained for each period, we executed the R program with 10 replicates to obtain averaged results, using a logistic output format with "ASC" output files. The AUC value represented the accuracy of the model, with a higher value indicating a better-performing MaxEnt model. We then mapped the habitat distribution based on the R outputs for the various time scales using ArcGIS 10.4 to further analyze the spatial change dynamics of beetles.
Spatial suitability of beetles
After optimizing all model combinations, we obtained the parameter settings for each time period. Specifically, for 1970-1980, the AICc value was smallest (delta AICc = 0) when the RM value was 2 and the FC was LQ. The RM values for 1980-1990, 1990-2000, 2000-2010, 2010-2020, and 1970-2020 were 0.5, 2.5, 2, 1, and 3.5, respectively, while the FC choices for these five periods were LHP, H, QP, LQ, and HP, respectively. The MaxEnt model was then used to model the spatial distribution of beetles under the optimal parameter settings. We developed spatial habitat suitability maps for beetles using the reclassification tool in ArcGIS 10.4 applied to the output files of the MaxEnt program. These distribution maps represent the suitability of beetles against the background of the GBIF database, not of any single species, meaning that basic survival conditions such as temperature and precipitation were sufficient for most species, while suitability distributions for individual species may differ. Six global suitability maps were ultimately identified: 1970-1980 (A), 1980-1990 (B), 1990-2000 (C), 2000-2010 (D), 2010-2020 (E), and 1970-2020 (Total). Figure 1 shows the suitable distribution of beetles from 1970 to 2020. Overall, suitable habitat was mainly concentrated in western and southern Europe and North America, and southern Asia was also a critical distribution region. Some countries, such as France, Germany, Poland, Sweden, the United States, China, and Japan, rated excellently in our assessment and contained most of the beetles' suitable habitat. In contrast, the degree of habitat suitability in North Asia, North America, and Africa was low. From the perspective of biogeographic regions, most of these suitable zones belong to the Palaearctic, Nearctic, and Holarctic, which are relatively rich in biodiversity and vegetation communities.
Furthermore, beetles showed a range of volatility in every ten-year period, and the spatial distribution dynamics over the fifty years are shown in Supplementary Figure S1. We classified the habitat suitability of beetles into two levels after comprehensively evaluating their distribution. On this basis, Europe and North America contained most of the suitable regions, and the distribution areas in southeastern Asia were relatively stable. In terms of unstable regions, suitability in South America, Africa, and Asia fluctuated substantially, especially in central and southern South America, where the suitable distribution zones for beetles clearly expanded and contracted. The total area of spatially suitable regions decreased continuously from the first period to the fifth, from approximately 11.72 × 10⁷ km² in 1970-1980 to 6.85 × 10⁷ km² in 2010-2020. This dramatic trend may reflect the significant influence of climate change on the habitat in which beetles live. We clearly observed that the overall spatial suitable distribution of beetles changed from scattered to relatively concentrated, indicating that some association between elevation and beetles may exist. In other words, areas that are very sensitive to global warming may face a greater risk of disappearance, even though protective measures have been taken.
Spatial dynamics of priority zones
To better address the increasingly significant threat of habitat loss, 36 geographical regions have been identified as conservation priorities and named biodiversity hotspots. For each ten-year period, the spatially suitable distribution within biodiversity hotspots was determined using ArcGIS 10.4 by overlaying the hotspots on the suitable habitats of beetles worldwide. Next, we mapped the geographic distributions of beetles in biodiversity hotspots to explore their spatial dynamics. Specifically, priority areas were obtained by using the intersect tool to overlay the biodiversity hotspots and the suitable regions derived from the spatial niche models (Figure 2). For Coleoptera, the distribution of spatial suitability was relatively even while still retaining the overall distribution dynamics. Southern Europe and North America have remained the focus of the distribution for decades, indicating distributional stability, although climate change has exacerbated habitat degradation. In southern Asia, priority areas were concentrated along the border between China and neighboring countries, which may be attributed to the strong enforcement of conservation measures. In South America, the suitable zones for beetles changed considerably, while sporadic distributions in Africa were always present. We hypothesize that small populations of beetles in fragmented and isolated habitat patches may face a high risk of local extinction. Overall, the distribution of priority areas was similar to the global suitability of beetles, and their area also fluctuated to a certain extent.
Figure 1. The habitat suitability of beetles globally in 1970-2020. Habitat suitability was mapped based on the GBIF database for six time periods; the other maps are presented in the Supplementary Material.
Then, we calculated the distribution area of suitable regions within hotspots at each decadal scale. The total areas of selected priority habitats for A, B, C, D, E, and Total were 2.76 × 10⁷ km², 2.03 × 10⁷ km², 2.14 × 10⁷ km², 1.66 × 10⁷ km², 1.65 × 10⁷ km², and 2.16 × 10⁷ km², respectively. In 1970-1980, the suitable areas covered by hotspots were the largest of the decades, corresponding to 23.58% of the overall suitable area in that period. Notably, the performance of 1990-2000 was excellent, with approximately 27.15% of the suitable zones falling within hotspots. Across the various periods, no more than roughly 30% of the comparatively better habitat for beetles was located in hotspots, with a generally downward trend (Figure 3). To better observe the dynamic changes from one period to the next, we used a Venn diagram to compare the periods on the basis of habitat suitable for beetles. Ultimately, approximately 49.08% of the suitable zones remained constant throughout each transition across the five stages (Figure 3), indicating that these regions offered greater survival advantages for beetles. From A to B, 0.2% of the geographically suitable regions were added, while approximately 26.19% of the suitable habitat disappeared for a variety of reasons. We then recorded 7.32% new suitable regions, approximately 0.16% of which were completely new, in the transition to the third stage. Fortunately, 59.01% of the new distribution areas became suitable regions upon entering the 21st century, which may be because the biodiversity hotspots were being established in this stage. Apparently, many habitats recovered after this initiative was launched, which greatly increased habitat biodiversity and further encouraged beetle reproduction.
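A minimal sketch of how such retained/lost/gained percentages can be derived from two decadal suitability maps is given below. The binary masks are random stand-ins for the reclassified model outputs, and equal cell areas are assumed for simplicity; real latitude-longitude grids would need latitude-dependent cell areas.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in binary suitability masks for two consecutive decades
# (True = suitable cell); in practice these come from the reclassified
# MaxEnt outputs clipped to the hotspot polygons.
decade_a = rng.random((180, 360)) > 0.7
decade_b = rng.random((180, 360)) > 0.7

retained = decade_a & decade_b  # suitable in both periods
lost = decade_a & ~decade_b     # suitable in the first period only
gained = ~decade_a & decade_b   # newly suitable in the second period

union = (decade_a | decade_b).sum()
for name, mask in (("retained", retained), ("lost", lost), ("gained", gained)):
    print(f"{name}: {100 * mask.sum() / union:.1f}% of the combined suitable area")
```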
Congruence evaluation between spatial suitability and hotspots
According to the statistical results for suitable areas in biodiversity hotspots, we generated a bar chart for every ten-year period to compare the area distribution of each hotspot. The top ten hotspots were identified based on the area of their spatial geographic distribution, and three critical hotspots and the proportion of suitable areas they occupy were determined (Figure 4). The Mediterranean Basin was the best region in this assessment, especially in 2000-2010, and contained the most suitable habitat for beetles. Other hotspots, including Indo-Burma, the North American Coastal Plain, Cerrado, and Irano-Anatolian, showed a higher degree of suitability for the survival of beetles, and most of them are located in Europe, North America, and Asia. In 1970-1980, the Mediterranean Basin (11.74%), Indo-Burma (9.84%), and Cerrado (7.78%) contained most of the suitable regions, while others, such as the North American Coastal Plain, Atlantic Forest, Tropical Andes, Mesoamerica, Horn of Africa, Irano-Anatolian, and Caucasus, were relatively even. For D and E, the differences between the various hotspots were somewhat large. In addition, some hotspots, including the Mediterranean Basin, Indo-Burma, and North American Coastal Plain, were comparatively stable compared with others, such as Japan, Mesoamerica, Atlantic Forest, Himalaya, Mountains of Central Asia, and Chilean Winter Rainfall and Valdivian Forests. Consequently, the extent of hotspot zones exhibited spatial dynamics across the different periods due to climate change. The areas with greater volatility described above may face greater pressure and challenges, indicating that local disappearance of beetle communities may occur. Therefore, biodiversity hotspots were assigned to three levels after comprehensively evaluating the changes in hotspots, to distinguish the suitability dynamics of different hotspots at different ten-year scales. Specifically, a hotspot was selected as excellent if it appeared in the top ten more than four times across the six periods (A, B, C, D, E, and Total). We then determined the other hotspots on the basis of their degree of stability at the five scales (A, B, C, D, and E). Hotspots with little fluctuation were considered stable, while the identification of lower hotspots depended on a trend of overall decrease (Supplementary Figure S2). Finally, approximately 25% of the hotspots (Mediterranean Basin, Indo-Burma, Irano-Anatolian, Mesoamerica, Atlantic Forest, Caucasus, Cerrado, Tropical Andes, and North American Coastal Plain) were determined to be excellent regions (Figure 5). Among them, five biodiversity hotspots, including the Mediterranean Basin, Indo-Burma, Mesoamerica, Cerrado, and Tropical Andes, generally decreased, but the suitable distribution within these hotspots remained widespread, especially in the Mediterranean Basin, Indo-Burma and Cerrado. Therefore, corresponding vigilance is warranted, even though these regions are significant for beetle reproduction and dispersal. Furthermore, a total of ten (28%) and seventeen (47%) hotspot regions were deemed stable and lower, respectively. We can clearly see that most stable hotspots cover large sea areas, which is a key reason why these zones were identified as stable regions. Owing to the proximity to the ocean, the combination of low human interference in habitat fragmentation and the insusceptibility of endemic biodiversity to climate change has resulted in a smooth
evolution of the beetles' habitat suitability. For the seventeen lower hotspots, landscape fragmentation should be given more attention by managers, and sufficient eco-compensation payments and additional conservation measures, such as reforestation or habitat recovery, could be implemented to reduce extinction possibilities.
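The three-level rule just described can be expressed compactly, as in the sketch below. The stated "excellent" criterion (top-ten membership in more than four of the six periods) is implemented directly, whereas the stable/lower split is reduced to a simple fluctuation-versus-decline heuristic that is our assumption, not the authors' exact procedure; the area values are invented.

```python
import numpy as np

# Toy per-period suitable-area series (x 10^7 km^2, invented values):
# six periods A, B, C, D, E, Total for a handful of hotspots.
areas = {
    "Mediterranean Basin": [3.2, 2.5, 2.9, 2.4, 2.3, 2.8],
    "Indo-Burma":          [2.7, 2.1, 2.2, 1.8, 1.7, 2.2],
    "Succulent Karoo":     [0.4, 0.4, 0.4, 0.4, 0.4, 0.4],
    "Horn of Africa":      [0.9, 0.7, 0.5, 0.4, 0.3, 0.6],
}
TOP_K = 2  # stands in for the paper's top ten (only 4 toy hotspots here)

# Count how often each hotspot ranks in the top K across the six periods.
top_counts = {name: 0 for name in areas}
for t in range(6):
    for name in sorted(areas, key=lambda n: areas[n][t], reverse=True)[:TOP_K]:
        top_counts[name] += 1

def classify(name: str) -> str:
    if top_counts[name] > 4:        # stated 'excellent' criterion
        return "excellent"
    s = np.array(areas[name][:5])   # the five decadal scales A-E
    if np.ptp(s) < 0.1 * s.mean():  # heuristic stand-in for 'stable'
        return "stable"
    return "lower"                  # heuristic stand-in for overall decline

for name in areas:
    print(f"{name}: {classify(name)}")
```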
The purpose of this study was to propose informative suggestions based on our results to better protect biodiversity hotspots. For beetles, the degree of suitability for survival often depends on the species richness of the forest, especially its vegetation diversity. Therefore, constructive suggestions for forest restoration are provided based on the suitability distributions of beetles. To this end, a series of spatial suitability distribution maps of hotspot areas at six ten-year scales (A, B, C, D, E, and Total) were produced (Supplementary Figures S3-S5). The geographic distribution dynamics of each hotspot for beetles from 1970 to 2020 could then be clearly observed. Intriguingly, some regions experienced substantial expansion and contraction, such as the Cerrado, Coastal Forest of Eastern Africa, Guinean Forest of West Africa, Horn of Africa, Mountains of Southwest China, the Philippines, Succulent Karoo, Sundaland, Tumbes-Choco-Magdalena, and Wallacea. However, some hotspots, including the Mountains of Southwest China, Succulent Karoo, and others, eventually tended toward a stable suitability distribution. In the Mediterranean Basin, the suitable habitat areas of the landmasses showed a high degree of consistency across the six timescales for countries such as Spain, Morocco, Algeria, Italy, Greece, and Turkey, although large swaths of ocean were also identified as hotspot area. For the Coastal Forest of Eastern Africa, Guinean Forest of West Africa, and Horn of Africa, the dynamic changes in beetle habitats were surprisingly dramatic. Climate change, human interference, and the vulnerability of the original ecology contributed to large areas of local habitat loss, and conservation and restoration measures should be implemented in the local countries, including Somalia, Kenya, Ghana, Nigeria, Liberia, and Mozambique.
Discussion
To identify the level of complexity warranted, quantitative evaluation constitutes a significant component of distribution modeling, and the selection of optimal models was based on evaluating various levels of complexity, resulting in different parameter combinations for different species at different times and in different spaces (Warren and Seifert, 2011; Radosavljevic and Anderson, 2014; Cobos et al., 2019; Kass et al., 2021). In our study, tuning the complexity of the MaxEnt model was a critical part of obtaining the suitable distribution of Coleoptera over the period 1970 to 2020, with temperature and precipitation as important driving factors. We determined different parameter combinations for beetles to simulate the comprehensive distribution under current climate change. Unsurprisingly, model performance was effectively improved, as reflected in the model evaluation metrics (AUC ≥ 0.85) and the agreement between the potential and actual distributions of beetles. Ultimately, we coupled current biodiversity hotspots with the potential distributions to model the spatial priorities of beetles, and the consistency between them was assessed. This study also revealed the extinction risk of beetles in every biodiversity hotspot and the spatial dynamics of the changes. Given the high level of human interference in hotspots, we paid particular attention to regions where the extinction risk of local habitat was correspondingly higher.
Our study indicated that ≤30% of the suitable zones in every ten-year period fell within biodiversity hotspots, while the majority of suitable habitat was outside the hotspots; this does not mean that these suitable regions are not ecologically significant. Rather, higher suitability may occur in some other regions, but more concentrated manifestations were found in biodiversity hotspot areas because the delineation of hotspots carries additional ecological significance (Cincotta et al., 2000; Marchese, 2015; Grande et al., 2020). Not only are these regions especially rich in endemic species and particularly threatened by human interference, but the largest sum of conservation funding has been assigned to a single protection project (Norman, 2003). We clearly observed that most of Earth's biodiversity is found in hotspots, which shelter over 150,000 endemic plant species and approximately 13,000 endemic terrestrial vertebrates (Bellard et al., 2014; Habel et al., 2019). However, the survival of insects and the richness of biodiversity go hand in hand, and thousands of modern insect extinctions are estimated to have occurred as a consequence of silent habitat loss and endemic biodiversity extinction (Dunn, 2005; Fonseca, 2009). The island biogeography theory invoked by Janzen (1968) states that any reduction in plant species will result in a decrease in the richness of the insect fauna. Direct conservation efforts have not been focused on insects. Therefore, we simulated the spatial distribution dynamics of Coleoptera, the largest order of insects, to explore the patterns of change and propose conservation recommendations. The results of this study indicated that the integrated suitability of beetles outside and within hotspots shows similar dynamic changes. As the focus of the distribution, southern Europe and North America showed corresponding stability, while South America and Africa experienced a higher extinction risk, which may be attributed to climate change, land use, and the extensive growth of agriculture in recent decades (Higgins, 2007; Griffiths et al., 2010; Perrings and Halkos, 2015).
Although the overall distribution of suitable habitat in hotspots for beetles was relatively limited, these areas also demonstrated potential for beetle conservation and biodiversity restoration. Our results revealed that the potential areas for biodiversity improvement were 2.76 × 10⁷ km², 2.03 × 10⁷ km², 2.14 × 10⁷ km², 1.66 × 10⁷ km², 1.65 × 10⁷ km², and 2.16 × 10⁷ km² for A, B, C, D, E, and Total, respectively. In particular, for the approximately 49.08% (1.38 × 10⁷ km²) of habitat that remained constant, strengthened forest restoration would increase carrying capacity. Thus, corresponding conservation and management measures should be implemented in critical countries, including Spain, Turkey, Morocco, Italy, the United States, and Brazil. Natural recovery and reduction of human disturbance in these regions are significant measures, while a certain level of forest management monitoring should also be implemented. For zones with a small but very concentrated distribution of suitable habitat, legislation should be strengthened to facilitate administration, and further expansion of anthropogenic land use must be curbed. Forest areas in these regions can continue to expand, diverse species may be reintroduced, and biodiversity will improve to accommodate larger populations in the long term. China, Chile, South Africa, Australia, and Japan are important countries to prioritize, and cross-border protection, as in China, is a considerable initiative that will enhance connectivity between different suitable areas. For other scattered regions, implementing protection planning measures is a huge challenge as a result of fragmented suitable habitat and the many countries involved. We remind these countries, such as Mexico, Colombia, Peru, Bolivia, Ecuador, Iran, Kenya, and Tanzania, that attention should be given to this issue, and the fragmented suitable regions identified in this study will provide a useful reference for their conservation work.
Additionally, the outlook for the Mediterranean Basin, Indo-Burma, Irano-Anatolian, and the other excellent regions is optimistic. Local restoration of forest habitat and improvement of habitat connectivity for beetles can reduce extinction risk and increase population sizes. Most beetles have difficulty traveling long distances (Ribak et al., 2013; Chen and Jackson, 2017; Javal et al., 2018). A beetle's flight capacity is affected by environmental factors such as temperature, precipitation, elevation, and wind, so climate change exerts a significant influence on beetle dispersal and communication (Atkins, 1961; Evenden et al., 2014; Jones et al., 2019; Wijerathna and Evenden, 2020). This is also an important reason why the beetles' suitable habitat is threatened. Restriction of insect dispersal and movement can act from the individual to the local ecosystem level and thereby affect the balance of the ecosystem (Lushai, 1999; Hore and Banerjee, 2017; Misso et al., 2017). Consequently, improvement of fragmented habitat and local connectivity is imperative. Some conservation work is already being carried out in an orderly manner, such as the construction of artificial log pyramids, the colonization of stumps moved by a large tree transplanter, and a large-scale reintroduction of insects in Denmark (Tochtermann, 1987; Ebert, 2011; Méndez and Thomaes, 2021). Such protection measures targeting individual species are important for rare species and should be combined with measures such as afforestation and reduced land use to restore beetle habitat. All strategies, including increasing afforestation activities, reducing anthropogenic interference, and implementing other improvement measures, should be further implemented in different regions in combination with local management policies (Duffus et al., 2023). In addition, we recommend that conservation work take into account the potential suitable habitat of species, which can indicate the direction and focus of future work.
Based on the estimated areas of suitable regions in every biodiversity hotspot, hotspots such as the Cerrado and the Coastal Forest of Eastern Africa that experienced substantial expansion and contraction can formulate corresponding restoration measures in combination with local forest distribution and biodiversity loss. Our results provide an important reference for their spatial dynamics and focus. As the first global study of the decadal spatial distribution of beetles, the dynamics of each biodiversity hotspot were expressed clearly. A detailed understanding of decadal dynamics is crucial for evaluating the influence of climate change and anthropogenic threats on the suitable habitat of beetles, and we hope our study can provide useful references and recommendations for biodiversity conservation. Some deficiencies nevertheless exist in this study due to data limitations: not every insect is adequately documented, the number of occurrence records in this database is increasing substantially from year to year, and record keeping varies from region to region. All of these limitations affect the credibility and robustness of the model. Thus, our future work will focus on addressing these challenges and collecting more accurate data, making our results more reliable.
Figure 5. Three levels of biodiversity hotspots identified. The excellent hotspots represent the areas with the highest biodiversity, the lower hotspots indicate the regions where biodiversity has been or is vulnerable to destruction, and the stable hotspots are regions where biodiversity has not fluctuated much over the past few decades. | 2024-03-27T15:32:37.492Z | 2024-03-25T00:00:00.000 | {
"year": 2024,
"sha1": "453e17daf961d7c8132793d0dfd8e32f848ba966",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fevo.2024.1358914/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3d88f17dc98fc766643d39377861a1f3a9aa1ade",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
224725473 | pes2o/s2orc | v3-fos-license | The ongoing COVID-19 pandemic will create a disease surge among cancer patients
With major parts of the United States in lockdown, parts of Europe and the UK possibly going back on lockdown or expecting a second COVID-19 wave and rapidly rising rates elsewhere other than Asia, many people are forgoing regular cancer screenings and prevention services. More worrisome, some may be experiencing early signs or symptoms, yet they are not seeking evaluation, treatment or surveillance examinations. The long-term impact of this on patients, families and health care providers will be substantial. Not only will this strain sophisticated health systems in developed countries, but it will also overwhelm the health care infrastructure in developing countries. Health-care executives, cancer center directors, oncologists and policy experts should focus now on serving this potential “third wave” of sick patients who have delayed treatment. Stopping COVID-19 is critical. However, it’s also essential to plan for the coming wave of patients who have delayed seeking care or don’t have access.
Introduction
With the worldwide COVID-19 pandemic continuing to surge, many people have missed routine cancer screening and preventive services with their physicians. Some may be experiencing early symptoms of illness, yet are still not seeking assessment due to fear of the virus or lack of access to oncology providers. Depending on cancer type and location, a delay of six weeks is in most cases problematic but surmountable; a delay of six months may lead to stage progression and dramatic increases in cancer death rates [1]. The number of people who die as a result of these delays could end up rivaling or exceeding deaths due to COVID-19. Adding to this potential crisis are the increased burdens being placed on health systems and medical providers, which will impede their ability to provide timely and optimal care.
Much was written about the potential of COVID-19 to recede in the summer and return aggressively in the fall. The causes of any increase now appear to be more nuanced based upon geography. The reductions in new cases appear to be more due to social distancing, mask wearing, hygiene and public health measures such as lockdowns and contact tracing. Even in those places where there is current respite from the ravages of the virus, there may be a major aftershock that could throw the health systems into further crisis: a flood of patients with other illnesses who are much sicker than they would be had they not delayed visits to their doctors for fear of coronavirus exposure. Healthcare executives and policy experts should focus on serving this potential "third wave" of patients; otherwise, the exuberance from having "flattened the COVID-19 curve" may be short-lived. Stopping COVID-19 is critical. It's also essential to plan for the coming wave of patients who may be suffering from untreated cancer.
In Ghana and many other lower- and middle-income countries (LMICs), specialized oncology services are provided in the largest urban centers. COVID-19 restrictions on movement to the major cities where cancer hospitals are located are likely to negatively impact the health and wellbeing of cancer patients. Furthermore, with limited skilled staff, any COVID-19 exposure has the potential to collapse an oncology service. COVID-19 has brought increased challenges to LMICs in the areas of access, staffing, psychosocial needs, diagnosis, and treatment [2].
Challenges and early solutions during the pandemic
Cancer screenings, diagnosis, and treatment are being delayed. While certain tumors are indolent and slow growing, others are aggressive and call for early treatment. A recent UK modeling study concludes, "Our estimates suggest that, for many cancers, delays to treatment of 2-6 months will lead to a substantial proportion of patients with early-stage tumors progressing from having curable to incurable disease." [3] Additionally, a related UK-based group has calculated that a 3 to 6 month delay in cancer surgery entails an average loss of 0.97/2.19 life-years gained (LYG) per patient [4]. While delays may be inevitable, optimizing treatment through use of available evidence will minimize adverse outcomes. Also of major concern is the impact of the pandemic on the cancer research enterprise, with the near-total cessation of clinical trial activity during the early phase of the pandemic in the US. Estimates suggest that patient enrollment in oncology trials has deteriorated by 10% each month since the pandemic began [5,6]. What has emerged, however, are a number of improvements to the logistics and operations of many research programs, such as the implementation of streamlined trial activation processes, remote investigational drug delivery, remote patient consenting, and virtual patient monitoring, among others [7].
Oncologists around the world are taking steps to address this challenge. In New York, NYU has created a virtual clinic that people can call with suspicious symptoms [8]. In Hong Kong, head and neck surgeons have segmented tumors into three tiers based on their tendency to progress. Based on their experience during the SARS outbreak in 2003, they have established goals for treatment completion in each tier and have established intra-city cooperation among surgeons to expedite surgery and treatment [9]. A multi-institutional group of urologic oncologists from Europe and the US has evaluated evidence for worsening outcomes from delays in various urologic malignancies and provides evidence-based guidance on the timing of treatment for patients with specific urologic malignancies [10]. At the University of Texas MD Anderson Cancer Center (MDACC) in Houston, disease teams have developed treatment guidelines to prioritize safe and optimal oncologic therapeutic decision-making for vulnerable populations [11-14]. By leveraging its national cancer network, MDACC has facilitated the care of its patients through collaborating health systems within its network. This has allowed patients to continue with therapeutic interventions, surveillance care and maintenance on clinical trials, thus minimizing high-risk travel during the early phase and peak of the pandemic.
We must take steps now to avoid a substantial increase in deaths from other illnesses
Here are our recommendations for what government and industry should do to prepare for the next eighteen months and beyond:
• Initiate public awareness campaigns to promote the message that people should not delay care for suspicious symptoms and should resume routine cancer screenings.
• Reassure patients who may be avoiding care due to fears of contracting COVID-19 in health care settings that expanded testing and other protective measures make it possible to be treated in a safe environment.
• Offer additional financial and technical support to providers, allowing them to increase capacity and to design and accelerate telemedicine and patient triage programs.
• Encourage partnerships among providers, insurers, pharmaceutical companies and employers to improve coordination and quality of care. Discussions already occurring among these stakeholders should be accelerated [15].
• Reduce infusion-based treatment in favor of oral therapy where possible [16].
• Restart clinical trials that have been halted or deferred due to shifts in funding.
• Encourage technological and operational enhancements in the conduct of oncology clinical trials to enhance activation and enrollment for novel investigational therapeutics.
• Augment the implementation of artificial intelligence platforms to identify patients at highest risk due to delayed treatment.
• Equilibrate reimbursement rates so that a physician's services are paid the same regardless of where a patient receives treatment (at a doctor's office or over video platforms), allowing patients to make care-based decisions rather than financially based decisions.
• Extend health care coverage for anyone who has lost private coverage as a result of layoffs, and speed enrollment in public insurance products for a limited time so people do not delay seeking treatment.
• Provide community-based support to promote mental health and early identification of worsening symptoms in chronic conditions.
Conclusion
COVID-19 is having a profound impact on healthcare systems worldwide and has the potential to negatively impact patients with existing or newly diagnosed cancers. Delays in treatment may result in stage migration and/or a more complicated course of treatment. There are mitigation strategies that we believe will help both clinicians and patients. These include increased cooperation and multidisciplinary care, regulatory changes, public outreach, additional support for health care infrastructure in developing countries, and psycho-social support, among other things. As cited earlier, we are seeing the rapid development of evidence-based protocols to guide and prioritize cases in various areas of cancer treatment.
Continued creativity, diligence, and flexibility will be needed to ensure patients do not suffer from the consequential effects of COVID-19.
Conflicts of interest
Mr. Meyer is a consultant to various academic/health systems, health plans and Pfizer in the US. Dr. Bindelglas has been a consultant to Pfizer, Horizon Blue Cross, and Summit Medical Group.
Dr. Kupferman has no conflicts of interest to report.
Dr. Eggermont has no conflicts of interest to report.
Funding statement
No funding was received for this work. | 2020-09-16T00:14:50.989Z | 2020-09-02T00:00:00.000 | {
"year": 2020,
"sha1": "8eb789cf911d190e915700b453915d6f33295f3b",
"oa_license": "CCBY",
"oa_url": "https://ecancer.org/en/journal/editorial/105-the-ongoing-covid-19-pandemic-will-create-a-disease-surge-among-cancer-patients/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8eb789cf911d190e915700b453915d6f33295f3b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267578045 | pes2o/s2orc | v3-fos-license | A novel locomotion-based prepulse inhibition assay in zebrafish larvae
Sensory gating, measured using prepulse inhibition (PPI), is an endophenotype of neuropsychiatric disorders that can be assessed in larval zebrafish models. However, current PPI assays require high-speed cameras to capture rapid c-bend startle behaviours of the larvae. In this study, we designed and employed a PPI paradigm that uses locomotion as a read-out of zebrafish larval startle responses. PPI percentage was measured at a maximum of 87% and strongly reduced upon administration of the NMDA receptor antagonist, MK-801. This work provides the foundation for simpler and more accessible PPI assays using larval zebrafish to model key endophenotypes of neurodevelopmental disorders.
Figure 1 (legend, continued). G. Response magnitude of responder larvae in startle alone and prepulse/startle trials across 100-1000 ms ISIs. Data are presented as average locomotion (mm) 100 ms after the startle stimulus. H. Response probability of all larvae across 100-1000 ms ISIs. Data are presented as average response probability per trial, 100 ms after the startle stimulus. I. Percentage PPI for responder larvae treated with MK-801 or a DMSO-control solution. J. Response magnitude of responder larvae treated with MK-801 or a DMSO-control solution in startle alone and prepulse/startle trials. Data are presented as average locomotion (mm) 100 ms after the startle stimulus. K. Response probability of all larvae treated with MK-801 or a DMSO-control solution in startle alone and prepulse/startle trials. Data are presented as average response probability per trial, 100 ms after the startle stimulus. For all N-numbers, see Table 1 in
Description
Impairments of sensory gating have been proposed as one of the most promising translational endophenotypes for various neuropsychiatric disorders, including schizophrenia. Sensory gating is defined as the ability of the autonomic nervous system to filter relevant from irrelevant sensory information. This endophenotype is measured with the prepulse inhibition (PPI) assay. Here, startle responses are reduced in trials in which a lower-intensity (prepulse) stimulus is presented 30-500 milliseconds (ms) before a higher-intensity (startle) stimulus. Patients with schizophrenia and high-risk individuals show reliable PPI reductions compared with healthy controls, maintaining a high startle response even when the startle stimulus is preceded by a prepulse stimulus (Li et al., 2021; San-Martin et al., 2020; Swerdlow et al., 2006; Takahashi & Kamio, 2018; Ziermans et al., 2012). PPI is detected across different species, including common model organisms, and the assay has therefore become highly relevant for investigating mechanisms underlying the aetiology of these disorders. PPI responses can be measured in the zebrafish (Danio rerio). Using larval stages, the proportion of fish displaying a characteristic 'c-bend' startle response is reduced when the startle stimulus is preceded by a prepulse (Burgess & Granato, 2007). The parameters required for PPI in zebrafish larvae are strikingly similar to those used in human PPI assays (Wolman et al., 2011). Furthermore, pharmacological and genetic manipulations used to model schizophrenia in zebrafish also lead to reductions in the PPI response (Bergeron et al., 2015; Burgess & Granato, 2007), consistent with observations in patients.
However, the c-bend startle must be measured using high-speed cameras, for two reasons. First, the c-bend startle is defined by precise angular changes in the orientation of the larva's tail with respect to its head (Bergeron et al., 2015). Second, zebrafish larvae show at least two c-bend startles: the short-latency c-bend (SLC) and the long-latency c-bend (LLC) (Burgess & Granato, 2007). The SLC is a probabilistic startle response, increasing in frequency (but not in amplitude) with stimulus intensity, and occurring within just 10 ms or less. In contrast, the LLC startle is graded in amplitude and latency with stimulus intensity and occurs at a longer latency of 20-50 ms. Previous data show that only the SLC is susceptible to PPI (Burgess & Granato, 2007). Taken together, a camera therefore needs to image detailed larval features at high temporal resolution to measure PPI.
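The temporal-resolution constraint can be made concrete with simple arithmetic, as in the sketch below: the inter-frame interval must be well below the startle latency for the bend to be resolved. Only the latencies quoted above are used; the 1000 fps value is an illustrative high-speed setting, not a figure from this study.

```python
def frames_within(window_ms: float, fps: float) -> float:
    """Number of frames captured inside a time window of window_ms."""
    return window_ms * fps / 1000.0

for fps in (30, 1000):
    interval = 1000.0 / fps
    print(f"{fps:>4} fps: one frame every {interval:.1f} ms, "
          f"{frames_within(10, fps):.1f} frames inside a 10 ms SLC window")
# At 30 fps there is one frame every 33.3 ms, so an entire <=10 ms SLC
# can fall between frames; at ~1000 fps there are ~10 frames within the
# SLC window, enough to track the bend kinematics.
```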
More recent studies assessing habituation in zebrafish larvae have begun to explore alternative behavioural read-outs of startle responses, including locomotion, which can be captured at lower camera frame rates, or general motion detection in adult fish for sensory gating experiments (Beppi et al., 2021; Kirshenbaum et al., 2019). However, such simpler approaches have not yet been created and validated for a PPI-type paradigm in larvae, or in the context of schizophrenia-relevant pharmacological modulation. In this report, we present the establishment of a locomotion-based PPI (LB PPI) assay that uses only locomotion as a read-out of the startle response in zebrafish larvae. We furthermore show that it is possible to observe a schizophrenia-reminiscent PPI reduction upon administration of the glutamatergic NMDA receptor antagonist, MK-801.
The LB PPI assay was set up using the Zantiks MWP (https://zantiks.com/), an automated system containing a low-speed camera running at 30 frames per second with a 720 × 540 pixel image size. To produce the startle stimuli, vibrations were delivered using an in-built, script-controlled motor. To generate a reliable LB PPI protocol with this system, we optimised two parameters: vibration intensity and inter-stimulus interval (ISI, the time between the prepulse and startle vibrations).
Based on previous literature, vibrations eliciting variation in SLC startle responses in zebrafish larvae lie between 100 and 1000 Hz (Beppi et al., 2021; Bergeron et al., 2015; Burgess & Granato, 2007). As such, we set the frequency of both the startle and prepulse stimuli to 200 Hz. To deliver the vibration, the motor was set to move by either 1.8° (1 full step), 0.9° (1/2 step), 0.45° (1/4 step), or 0.225° (1/8 step) in 4 × 5 ms clockwise and anti-clockwise movements. To evaluate whether larvae would respond differently to different motor step sizes, eight single pulses (two at each step size), spaced at 5-minute inter-trial intervals (ITIs), were presented to larvae. Previous papers have used a minimum ITI of 15 seconds (Burgess & Granato, 2007) and a maximum of 15 minutes (Beppi et al., 2021), so we were confident that no habituation to startle stimuli would occur at a 5-minute ITI. Startle response magnitude and probability were measured using distance moved (locomotion) per 100 ms time bin.
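To make the pulse protocol explicit, the sketch below generates the calibration schedule described above (eight single pulses, two per motor step size, 5-minute ITIs, each pulse delivered as 4 × 5 ms alternating movements at 200 Hz). Zantiks systems are driven by their own scripting language, so this Python version and its field names are purely illustrative.

```python
STEP_SIZES_DEG = [1.8, 0.9, 0.45, 0.225]  # full, 1/2, 1/4 and 1/8 step
ITI_S = 5 * 60                            # 5-minute inter-trial interval
PULSE = {"freq_hz": 200, "n_moves": 4, "move_ms": 5,
         "pattern": "alternating cw/ccw"}

def calibration_schedule() -> list[dict]:
    """Eight single pulses, two per motor step size, spaced by the ITI.
    Presentation order is not specified in the text; a fixed order is
    used here purely for illustration."""
    trials = [dict(PULSE, step_deg=s)
              for s in STEP_SIZES_DEG for _ in range(2)]
    for i, trial in enumerate(trials):
        trial["onset_s"] = i * ITI_S
    return trials

for t in calibration_schedule():
    print(f"t = {t['onset_s']:>5} s: 200 Hz pulse, step = {t['step_deg']} deg")
```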
Larvae showed a graded startle magnitude at varying pulse intensities, as shown in the raw locomotion trace from -500 ms to +500 ms around startle pulse onset (Figure 1A). Response magnitudes for responder larvae only (fish showing a startle response within 100 ms of the vibration) were analysed across pulse intensities using a one-way ANOVA (F(3, 279) = 10.67, p < 0.0001; Figure 1B). Larvae exhibited a significantly greater startle magnitude at 1 step compared with 1/2 step (p < 0.0001), 1/4 step (p = 0.0196) and 1/8 step (p = 0.0003). This likely reflects the graded LLC response at different vibration intensities (although the lower frame rate of our camera set-up prevents absolute confirmation of this). However, PPI in the larval zebrafish has previously been defined as a difference in SLC probability when the startle stimulus is preceded by a prepulse stimulus (Burgess & Granato, 2007). As such, we explored whether we could detect changes in response probability across the different pulse intensities for all larvae (Figure 1C). A one-way ANOVA showed that response probability varied across pulse intensities (F(3, 380) = 22.03, p < 0.0001). Notably, larvae exhibited a significantly greater response probability at 1 full step compared with 1/2 step (p < 0.0001) and 1/8 step (p < 0.0001), and a significantly lower response probability at 1/8 step compared with 1/2 step (p = 0.0230) and 1/4 step (p < 0.0001). These findings show that it is possible to observe changes in the probability of the larval startle response at varying pulse intensities, likely reflecting an increased probability of SLC startle at higher pulse intensities. In summary, we were able to observe changes in the startle response read-out based on startle magnitude, mirrored by response probability, based only on locomotion recordings.
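A minimal sketch of how both read-outs can be computed from binned locomotion is given below: per-larva distance in the 100 ms bin after pulse onset defines the response, a threshold defines responders, and a one-way ANOVA compares intensities. The threshold value and the simulated data are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
RESPONSE_THRESHOLD_MM = 0.5  # hypothetical movement cut-off for a responder

# Simulated distance moved (mm) in the 100 ms bin after pulse onset:
# rows = larvae, columns = the four step sizes (1, 1/2, 1/4, 1/8).
dist = np.abs(rng.normal(loc=[2.0, 1.2, 1.4, 0.8], scale=0.8, size=(96, 4)))

responders = dist > RESPONSE_THRESHOLD_MM
probability = responders.mean(axis=0)  # response probability per intensity
magnitudes = [dist[responders[:, j], j] for j in range(4)]  # responders only

f_stat, p_val = f_oneway(*magnitudes)  # magnitude across intensities
print("response probability per step size:", np.round(probability, 2))
print(f"one-way ANOVA on responder magnitude: F = {f_stat:.2f}, p = {p_val:.4f}")
```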
When translating these findings into a full two-pulse PPI paradigm, we wanted to explore whether the locomotion response could be modulated by introducing a lower-intensity prepulse before the startle stimulus. Based on the above findings, the startle vibration was set at 200 Hz with one full motor step, and the prepulse vibration at 200 Hz with 1/8 motor step. These startle and prepulse vibrations were applied in all subsequent PPI assays. Although both response magnitude and response probability were modulated by these stimulus intensities, it was unclear how this would manifest in PPI measurements and calculations, because both SLC and LLC startle responses are captured within 100 ms time bins and it has not yet been determined how these startle responses equate with locomotion measurements. As such, we continued to consider both response magnitude and response probability, alongside the locomotion-based PPI measurement.
When running the LB PPI assay in full, we varied the inter-stimulus interval (ISI) from 100 to 1000 ms in a between-subjects experimental design (Figure 1D). This ISI range is consistent with earlier reports showing that the ISI greatly impacts PPI between 30 and 3000 ms, with the highest PPI percentages observed at intermediate ISIs of around 300-500 ms (Bergeron et al., 2015; Burgess & Granato, 2007). In total, eight vibration trials were delivered in each PPI assay. Startle-alone and prepulse/startle trials were evenly interspersed, with four of each trial type (Figure 1D).
When observing the raw locomotion with a 300 ms ISI, there is a sharp increase in distance travelled within 100 ms of presentation of the startle stimulus on startle-only trials (Figure 1E). However, when the prepulse preceded the startle stimulus (on prepulse/startle trials), the increase in distance travelled in response to the startle stimulus was much attenuated. Using distance travelled alone, a clear PPI effect was observed in responder larvae at all ISI values (Figure 1F; see Methods for calculation of LB PPI). A one-way between-subjects ANOVA of percentage PPI across ISIs was significant (F(5, 173) = 3.53, p = 0.0046), and post-hoc t-tests revealed that this was driven by a significantly lower percentage PPI at 200 ms ISI compared to 300 ms (p = 0.0254), 400 ms (p = 0.0469) and 500 ms (p = 0.0127; Figure 1F).
In addition to calculating locomotion-based PPI, we also considered how response magnitude and response probability were affected by the prepulse/startle trials. Considering response magnitude of responder larvae on each startle trial type and ISI (Figure 1G), a two-way ANOVA was significant for trial type (F(1, 254) = 19.78, p < 0.0001), with an increase in response magnitudes on startle trials compared to prepulse/startle trials observed across all ISIs. There was no main effect of ISI (F(5, 254) = 1.19, p = 0.3142) and no interaction between ISI and trial type (F(5, 254) = 2.11, p = 0.0646). When analysing response probability of all larvae across trial types and ISIs (Figure 1H), a two-way ANOVA showed a main effect of trial type (F(1, 506) = 104.10, p < 0.0001), again due to a higher response probability on startle trials compared to prepulse/startle trials. In addition, there was also an effect of ISI (F(5, 506) = 5.29, p < 0.0001), where response proportion was significantly greater at an ISI of 300 ms compared to 100 ms, 400 ms, 500 ms and 1000 ms (p < 0.05 for all) and significantly greater again at 200 ms versus 400 ms and 500 ms (p < 0.01). Finally, there was an interaction between ISI and trial type (F(5, 506) = 4.84, p = 0.0002); key post-hoc analyses showed that response probabilities were significantly greater on startle-alone trials than on prepulse/startle trials (all p < 0.05; Figure 1H). As such, both response magnitude (mostly driven by LLC startles) and response probability (mostly driven by SLC startles) contribute to locomotion-based PPI, though the decrease in response probability on prepulse/startle trials compared to startle-alone trials appears to be the more consistent driver behind LB PPI, in line with what has been observed in standard SLC-based measures of PPI. These findings led us to choose 300 ms as the ISI for the final LB PPI assay, as the goal was to use an ISI with a strong and consistent reduction in both response magnitude and response probability.
To show that LB PPI is applicable to neuropsychiatric models in the zebrafish, we tested whether PPI percentage could be modulated by dizocilpine (MK-801), a non-competitive NMDA receptor antagonist commonly used to pharmacologically create animal models of schizophrenia. Larvae were exposed to the drug for the full duration of the assay, while controls were exposed to a DMSO-matched solution. Consistent with previous studies, the LB PPI percentage of responder larvae was greatly reduced in MK-801-exposed larvae (29.48% ± 12.04% SEM) compared to controls (77.40% ± 7.25% SEM) (t(126) = 3.21, p = 0.0017; Figure 1I). In addition, a two-way ANOVA of responder larvae response magnitudes by trial type and drug condition showed no main effect of trial type (F(1, 633) = 0.0161, p = 0.8993), but a strong effect of drug condition (F(1, 126) = 31.47, p < 0.0001), caused by a significant elevation in startle response magnitude in the MK-801 group on both trial types (Figure 1J). However, there was no interaction between startle condition and drug condition (F(1, 63) = 1.78, p = 0.1860). With regards to response proportion (Figure 1K), there was a main effect of trial type (F(1, 157) = 54.30, p < 0.0001), where response probability was significantly greater on startle-alone trials than on prepulse/startle trials, and a main effect of drug condition (F(1, 157) = 33.36, p < 0.0001), where startle probability was significantly greater in MK-801-treated larvae compared to DMSO controls (Figure 1K). Thus, both response magnitude and probability were increased uniformly across trial types by MK-801 administration, and in both drug conditions response probability remained significantly greater on startle-alone trials than on prepulse/startle trials. It is thus likely that a combination of altered response probability and magnitude drives the decrease in LB PPI in MK-801-treated larvae.
In summary, the data here show that a locomotion-based PPI paradigm can be conducted in larval zebrafish using low frame rate cameras. Consistent with what has been observed when measuring c-bend startles in larval zebrafish, our LB PPI assay shows a reduction in larval startle probability on prepulse/startle trials, based on measuring locomotion only (Bergeron et al., 2015; Burgess & Granato, 2007). In addition, startle magnitude is also decreased on prepulse/startle trials compared to startle-alone trials, and this can likewise be used as a measure of PPI.
Our results further showed reductions in PPI after administration of the NMDA receptor antagonist MK-801, in line with previous studies using different PPI startle read-outs (Bergeron et al., 2015; Wolman et al., 2011). This is particularly interesting, as strong links have been made between glutamatergic abnormality, particularly in the hippocampal CA1/CA3 regions in mammals, and schizophrenia risk and symptoms (Briend et al., 2020; Demjaha et al., 2014; Egerton et al., 2018; Park et al., 2021; Uno & Coyle, 2019). There is also direct translatability of the PPI response between animal models and humans. Notably, the ISI parameters of the PPI assay for larval zebrafish are nearly identical to those used with human participants, allowing direct comparability between endophenotypes (San-Martin et al., 2020; Swerdlow et al., 2006).
High-temporal-resolution PPI assays that measure larval c-bends remain useful for more fine-grained analysis of the neuronal circuits underlying motor movement and may therefore still be preferred for certain experimental paradigms (Eaton et al., 1977; Liu & Fetcho, 1999; Medan & Preuss, 2014). However, our LB PPI assay clearly provides a tool for rapid assessment of sensory gating in larvae. This is particularly useful for conducting high-throughput screens of disorder-associated genes and novel compounds, one of the strengths of the zebrafish larval system.
Drug Preparation
MK-801 hydrogen maleate (dizocilpine) was dissolved in 100% DMSO and then further diluted to a final concentration of 100 µM MK-801 in 0.01% DMSO. The control comparison solution was 0.01% DMSO in Danieau's medium.
Prepulse Inhibition (PPI) Assay
All experiments were carried out between 12:00 and 17:00 in a Zantiks MWP behavioural box (https://zantiks.com/). Larvae were habituated to the procedure room for 30 minutes prior to the experiment. All larvae were then placed into a 96-well plate for PPI vibrations and locomotion tracking. Each well was filled with 200 µl of 1X Danieau's medium. In the MK-801 experiment, larvae were placed in 200 µl of DMSO-matched control solution or 100 µM MK-801.
Startle stimuli were delivered using a motor mechanism producing script-controlled vibrations. The motor was programmed to move in alternating clockwise and anti-clockwise directions for a total of 4 step movements, each lasting 5 ms, at a 200 Hz frequency. This was consistent across all single-pulse and PPI assays shown here. During optimisation of the PPI assay, the vibration stimuli were presented at varying step sizes (1, 1/2, 1/4 and 1/8 step), where 1 full step was a 1.8° motor movement. The inter-stimulus interval (ISI) was varied between 100 ms and 1000 ms. In the final PPI assay (see Figure 1D), 8 stimulus trials occurred with a 5-minute inter-trial interval (ITI) and a 300 ms ISI. Trial types alternated randomly between startle-alone and prepulse/startle trials to avoid habituation to trial types. The startle stimulus was 1 full step and the prepulse stimulus was 1/8 step.
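As an illustration of the trial logic just described, the sketch below reproduces the schedule in Python. This is not the script actually used (the Zantiks MWP runs its own scripting language); the hardware calls deliver_vibration and wait are hypothetical placeholders standing in for whatever interface is available.

import random

ITI_S = 300            # inter-trial interval: 5 minutes
ISI_S = 0.300          # inter-stimulus interval: 300 ms
STARTLE_STEP = 1.0     # 1 full motor step (1.8 degrees)
PREPULSE_STEP = 1 / 8  # 1/8 motor step

def build_schedule(seed=0):
    """Eight trials, four of each type, in randomised order."""
    trials = ["startle_alone"] * 4 + ["prepulse_startle"] * 4
    random.Random(seed).shuffle(trials)
    return trials

def run_assay(deliver_vibration, wait, seed=0):
    """deliver_vibration(step_size) and wait(seconds) are placeholders."""
    for trial in build_schedule(seed):
        if trial == "prepulse_startle":
            deliver_vibration(PREPULSE_STEP)  # 200 Hz, 4 x 5 ms movements
            wait(ISI_S)
        deliver_vibration(STARTLE_STEP)
        wait(ITI_S)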
Data Analysis
Data were extracted using Matlab (R2022b), and statistically analysed and graphed using GraphPad Prism (version 10). Prior to data analysis, larvae showing 0 mm locomotion within ±500 ms of the startle vibration were excluded from further analyses, as were data subject to technical errors. PPI percentage was calculated using the following equation:

%PPI = ((Startle_alone − Prepulse/Startle) / Startle_alone) × 100

where "Startle_alone" is the mean locomotion at +100 ms in startle-alone trials and "Prepulse/Startle" is the mean locomotion at +100 ms in prepulse/startle trials (Figure 1E). 0 ms was the time bin in which the startle pulse was delivered (see Figure 1E). For analyses of %PPI and startle magnitude, larvae were classified as "responder larvae" if they showed locomotion of >0 mm in the +100 ms time bin (i.e., a startle response). For analyses of startle probability, all larvae were included. N-numbers for larvae included in each experiment and analysis type can be found below in Table 1. Where post-hoc analyses were conducted, Šidák's or Tukey's multiple comparison corrections were applied.
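A minimal sketch of this calculation, assuming loco holds each larva's distance moved (mm) in the +100 ms bin on every trial, is_prepulse marks prepulse/startle trials, and responder filtering (>0 mm) has already been applied as described above (Python, for illustration; the actual extraction used Matlab):

import numpy as np

def percent_ppi(loco, is_prepulse):
    """loco: (n_larvae, n_trials) locomotion in the +100 ms bin."""
    startle_alone = loco[:, ~is_prepulse]
    prepulse_startle = loco[:, is_prepulse]
    s = startle_alone.mean()      # mean locomotion, startle-alone trials
    p = prepulse_startle.mean()   # mean locomotion, prepulse/startle trials
    return (s - p) / s * 100.0

# Example: strong attenuation on prepulse/startle trials gives high %PPI
loco = np.array([[2.0, 2.2, 0.3, 0.2],
                 [1.5, 1.8, 0.1, 0.4]])
is_prepulse = np.array([False, False, True, True])
print(percent_ppi(loco, is_prepulse))  # ~86.7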
Figure 1.
Figure 1. Locomotion-based prepulse inhibition and modulation by MK-801: A. Locomotion trace showing the response of all larvae (6 dpf) after within-subject presentation of eight 200 Hz, 20 ms vibrations (two vibrations per motor step, randomised order). Data are presented as the average locomotion (mm) for each pulse intensity. B. Response magnitude of responder larvae. Data are presented as the average locomotion (mm) for each pulse intensity 100 ms after the startle stimulus. C. Response probability of all larvae. Data are presented as the average response probability per trial 100 ms after the startle stimulus. D. Diagram showing the finalised PPI assay procedure. E. 300 ms ISI raw locomotion data. Data are presented as the mean distance travelled (mm) by all larvae on startle pulse only (blue) and prepulse/startle (purple) trials. Arrows indicate time points where the prepulse (blue) and startle (purple) stimuli are delivered (see Methods section). * = p < 0.05, ** = p < 0.01, *** = p < 0.001, **** = p < 0.0001.
Table 1.
N-numbers per group in each vibration/PPI experiment. | 2024-02-11T05:08:50.322Z | 2024-01-24T00:00:00.000 | {
"year": 2024,
"sha1": "2de7ae1bb60088e1a2acf45634b88d202e565d20",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2de7ae1bb60088e1a2acf45634b88d202e565d20",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231819615 | pes2o/s2orc | v3-fos-license | ‘Candidatus Phytoplasma asteris’ subgroups display distinct disease progression dynamics during the carrot growing season
Aster Yellows phytoplasma (AYp; ‘Candidatus Phytoplasma asteris’) is an obligate bacterial pathogen that is the causative agent of multiple diseases in herbaceous plants. While this phytoplasma has been examined in depth for its disease characteristics, knowledge about the spatial and temporal dynamics of pathogen spread is lacking. The phytoplasma is found in plant’s phloem and is vectored by leafhoppers (Cicadellidae: Hemiptera), including the aster leafhopper, Macrosteles quadrilineatus Forbes. The aster leafhopper is a migratory insect pest that overwinters in the southern United States, and historical data suggest these insects migrate from southern overwintering locations to northern latitudes annually, transmitting and driving phytoplasma infection rates as they migrate. A more in-depth understanding of the spatial, temporal and genetic determinants of Aster Yellows disease progress will lead to better integrated pest management strategies for Aster Yellows disease control. Carrot, Daucus carota L., plots were established at two planting densities in central Wisconsin and monitored during the 2018 growing season for Aster Yellows disease progression. Symptomatic carrots were sampled and assayed for the presence of the Aster Yellows phytoplasma. Aster Yellows disease progression was determined to be significantly associated with calendar date, crop density, location within the field, and phytoplasma subgroup.
Introduction
Understanding the progression of pathogen infections resulting in disease phenotypes in agricultural crops is critically important for determining effective pest management strategies. Aster Yellows phytoplasma (AYp), 'Candidatus Phytoplasma asteris', is one agricultural pathogen of concern, particularly for commercial growers of carrot, lettuce and celery. The SAP54 effector is also responsible for enhanced colonization by insect vectors on plants [15], a phenotype that is dependent on SAP54 interaction with RAD23 [15]. An effector known as tengu-su inducer (TENGU) has been demonstrated to interact with two phytohormones (auxin and jasmonate), resulting in flower sterility, and was first identified in onion yellows [23]. Tomkins et al. used a multi-layered model to predict the complex interactions between leafhopper, phytoplasma and plants related to the expression of the effector genes SAP11 and SAP54, and suggested that the effectors contribute to disease spread [6]. The majority of SAP genes lie in close proximity to each other on apparent pathogenicity islands that resemble conjugative transposable elements, named potential mobile units (PMUs) [22,24]. These PMUs are variably present and horizontally exchanged among phytoplasmas [25]. In the genome of the AY-WB phytoplasma, the gene for SAP11 lies on a PMU-like genomic region adjacent to other candidate effector genes encoding SAP56, SAP66, SAP67 and SAP68. The genes for 34 of the 56 SAPs are also located on PMUs, while 7 are located on plasmids [17,20,22]. In the current investigation, our overarching goal was to investigate the disease progression of AY within a carrot field, emphasizing the spatial and temporal dynamics of AY incidence and the composition of SAP effector genes associated with AYp strains found in infected plants. We further examined the effects of within-field location, time, and planting density on AY incidence and characterized the genetic markers related to the composition of AYp. We found that the patterns of disease spread are not random but structured, resulting from multiple biological parameters. The findings provide insight into the movement and colonization of AYp in a carrot field and will lead to a better understanding of disease progression and management.
Data availability
All relevant data are contained within the paper and its supporting information files.
Ethical statement
This article does not contain studies with any human participants and no specific permits were required for field collection or experimental treatment of Macrosteles quadrilineatus for the study described.
Disease progression and leafhopper pressure
On May 7, 2018, carrot plantings (cv. 'Canada') were established at the University of Wisconsin's Hancock Agricultural Research Station (HARS) in field K3 (44.118181°N, -89.549045°W). Seed was purchased through a commercial vendor (Seedway LLC), which conducts the phytosanitary procedures necessary to screen for pathogens, and was certified disease free. Within a larger, 180 m by 45 m carrot field, a central 66 m wide section was divided in half and planted at either a high-density (1360k seeds per hectare) or low-density (556k seeds per hectare) seeding rate. Each planting was further divided into five rows of 18 beds, with 4 m bare alleys separating rows (90 beds per density). Each 3-row raised bed measured 2 m wide by 6 m long. The first and last rows of beds were considered "edge" beds and were bordered by other, non-carrot crops to the north (sorghum) and the south (green bean). Carrot beds were subsequently managed with a standard fertilizer program, but no crop protection pesticides (herbicides, fungicides, insecticides) were applied at any point in the season. Weeds were managed by hand-pulling every two weeks.
Using a random number generator, two beds from each row of the high- and low-density plantings were selected as plots for biweekly stand counts and AY disease monitoring. Initial counts occurred on Jun 27, 2018, when plots were staked, stand counts of all carrots within the selected beds were conducted, and the AY disease phenotype of each carrot was recorded. Possible AY disease phenotypes included proliferation of shoots ("witches'-broom"), retrograde development of flowers into leaves, virescence, formation of shortened internodes and elongated leaves, and yellowing/reddening of foliage. Plots were scouted in this way every two weeks (Jun 27, Jul 11 and 25, Aug 8 and 21, Sep 6 and 18, Oct 2 and 16) for a total of 9 observations. The exact location of each phenotypically described, diseased carrot was recorded and marked within the field relative to the front stake of each row. Each symptomatic carrot had a small sample of both petiole and stem removed for genetic confirmation of the presence of the AYp pathogen in the carrot tissue. Symptomatic carrots were tagged with a small plastic stake to prevent double-counting over successive sample dates.
To assess ALH abundance and AYp infection prevalence within the leafhoppers in the carrot fields, 1000 sweeps of the plant canopy were performed in both the high- and low-density plantings using a standard 15-inch sweep net during each biweekly disease monitoring event. The experimental design of the plot layout and methods can be found in S1 Fig. Collected insects were bagged in 3.8 L sealable plastic bags and placed on ice for transportation to the University of Wisconsin-Madison for processing, where the total number of ALH was determined and a subset of individuals (N = 40) obtained from each planting density/sample time was stored for later analysis. A taxonomic key to the family Cicadellidae was used to confirm the identity of all adult ALH [26].
DNA isolation from plant and leafhopper tissue
Tissue samples from all symptomatic carrots and 40 leafhoppers per density/sample time were analyzed. Tissue samples (10 mg) collected from carrots were processed as individual samples based upon tissue type (petiole, stem). Samples were placed in corresponding 1.5 ml sterile microcentrifuge tubes and homogenized with a sterile plastic pestle in 500 μl of 2% CTAB (cetyltrimethylammonium bromide) (bioWORLD, Dublin, OH) buffer with 1 μl of 0.2 ng/μl RNase A (Thermo Fisher Scientific, Waltham, MA). Homogenates were incubated at 60°C for 30 min and centrifuged at 12,000 g for 5 minutes, and the supernatant was transferred to a fresh tube. A single volume of chloroform was added and the samples were gently mixed for 10 minutes. Samples were then centrifuged for 10 minutes at 12,000 g and the supernatant was again transferred to a fresh tube. DNA was precipitated by adding 1 volume of cold isopropanol and mixing by inversion for 10 minutes. Samples were centrifuged to pellet the DNA, which was washed with 75% EtOH. After washing, the EtOH was discarded and samples were allowed to air dry completely; the methodology was adapted from Marzachi et al. [27]. DNA was suspended in DNase/RNase-free H2O, quantified using a NanoDrop microvolume spectrophotometer (Thermo Fisher Scientific, Waltham, MA), and brought to a final volume of between 20 μl and 100 μl depending on the measured concentration. Samples were then frozen at -20°C for further analysis.
Confirmation of Aster Yellows phytoplasma within tissue samples
To confirm the presence of AYp in DNA isolations from tissue samples, a P1/P7 PCR amplification was conducted, followed by a R16F2n/R16R2 nested PCR on the P1/P7 PCR product to amplify the 16S rRNA sequence. DNA primers were obtained from Smart et al. 1996 (P1/P7) [28] and Gundersen et al. 1996 (R16F2n/R16R2) [29] and are presented in S1 Table. Specifically, 25 μl PCR reactions were conducted with GoTaq Green Master Mix (Promega Corporation, Madison, WI). Reaction conditions were 240 seconds at 94°C for the initial denaturing step, followed by 30 cycles of 30 seconds at 94°C for denaturation, 60 seconds at 64°C (P1/P7) or 60°C (R16F2n/R16R2) for annealing, and 90 seconds at 72°C for extension, with a final extension of 300 seconds at 72°C. PCR amplification products were run on a 1.5% agarose gel to confirm the presence of the corresponding DNA fragments. The DNA fragment expected from the P1/P7 amplification was 1.8 kbp, while the nested product was 1.2 kbp. Only total DNA extracted from carrot tissue through the CTAB procedure was used to analyze the genetic composition of subgroups and effector proportions. Subgroup identification was determined using both nucleic acid sequencing and restriction fragment length polymorphism (RFLP). The identity and position of five unique single nucleotide polymorphisms (SNPs) (S2 Fig) were compared between sequences to identify the AYp subgroup. Furthermore, an RFLP assay using the restriction endonuclease HhaI (Promega Corporation) was conducted at 37°C for 90 min and run on a 1.5% agarose gel. RFLP was used as a supplementary assay to confirm the subgroup designation resulting from SNP assessment of the sequencing data [11].
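As an illustration of the SNP-based subgroup call, the Python sketch below scores one aligned 16S sequence against five diagnostic positions. The positions and bases shown are hypothetical placeholders; the real ones are given in S2 Fig, which is not reproduced here.

# position in the 16S alignment -> (base in 16SrI-A, base in 16SrI-B);
# values below are placeholders, not the study's actual SNPs
DIAGNOSTIC_SNPS = {
    120: ("A", "G"),
    348: ("T", "C"),
    511: ("G", "A"),
    876: ("C", "T"),
    1042: ("A", "C"),
}

def call_subgroup(aligned_seq):
    """Return '16SrI-A', '16SrI-B', or 'mixed/ambiguous' for one
    aligned 16S sequence (0-based indexing into the alignment)."""
    votes = {"A": 0, "B": 0}
    for pos, (a_base, b_base) in DIAGNOSTIC_SNPS.items():
        base = aligned_seq[pos]
        if base == a_base:
            votes["A"] += 1
        elif base == b_base:
            votes["B"] += 1
    if votes["A"] == len(DIAGNOSTIC_SNPS):
        return "16SrI-A"
    if votes["B"] == len(DIAGNOSTIC_SNPS):
        return "16SrI-B"
    return "mixed/ambiguous"  # candidate co-infection; confirm by RFLP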
Effector determination by 'MonsterPlex' sequencing
To assess the genetic variability of the Aster Yellows phytoplasma, approximately 400 ng of each AYp-positive carrot sample's DNA was submitted to Floodlight Genomics LLC (Knoxville, TN) for 'MonsterPlex' amplification and Illumina DNA sequencing; samples deemed co-infected were not used in the MonsterPlex analysis. Floodlight Genomics used an optimized Hi-Plex approach to amplify targets in a single multiplex reaction with 21 targets, comprising 16S rRNA (control gene) and 20 effector sequences (S2 Table). The sample-specific barcoded amplicons were sequenced on the Illumina HiSeq X platform according to the manufacturer's directions. Floodlight Genomics delivered sample-specific raw DNA sequence reads as FASTQ files. Annotation of the raw reads was performed with Geneious bioinformatics software (Auckland, New Zealand). Raw reads were aligned to reference sequences and annotated. To determine subgroup designation, the reads were annotated for known SNPs associated with each subgroup [11]. Effector reads were aligned to reference sequences, and the total number of effector reads was standardized to the number of 16S rRNA reads within each sample to generate a read ratio indicating the number of reads of each effector per 16S rRNA read. Effectors were classified as present in a sample if the read number after standardization was greater than 1. Effectors SAP21, 36, 54, and 67 did not amplify in either subgroup and were removed from further analysis, as the failure to amplify may have reflected primer efficiency within the 'MonsterPlex' assay rather than true absence. Samples were considered AYp-infected only if both the PCR and the genetic sequencing were positive for the presence of the phytoplasma (16S rRNA sequences were searched against the non-redundant nucleotide NCBI database using BLASTn).
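A minimal sketch of the read-ratio normalisation and presence call just described (Python, for illustration; the field names are our own):

def effector_presence(read_counts, control="16S_rRNA", threshold=1.0):
    """read_counts: dict mapping target name -> raw read count for one
    sample. Returns {effector: (read_ratio, present)}."""
    control_reads = read_counts[control]
    out = {}
    for target, reads in read_counts.items():
        if target == control:
            continue
        ratio = reads / control_reads   # reads per 16S rRNA read
        out[target] = (ratio, ratio > threshold)
    return out

sample = {"16S_rRNA": 1000, "SAP11": 5200, "SAP44": 800}
print(effector_presence(sample))
# SAP11 -> (5.2, True); SAP44 -> (0.8, False)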
Data analysis
The effect of date, planting density, and plot location (edge/interior of field) on AY disease incidence was quantified using a maximum likelihood regression model with a beta binomial distribution and a logit link function in JMP Pro 13.2.1 (SAS Institute, Cary, NC). Beta binomial models require a total count and a diseased count variable, both of which were generated for each plot on each scouting date. Differences in effector copy number by AYp subgroup were analyzed in R version 3.6.1 (R Core Team, Vienna, Austria) by running a binomial regression with a logit link function for each effector and performing a Chi-squared test on the resultant model.
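For illustration, the effector-by-subgroup comparison can be expressed as a binomial regression with a logit link, as was done in R; the sketch below shows an equivalent model in Python's statsmodels with made-up data (column names are assumed, not the study's own).

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per AYp-positive sample: presence (0/1) of a given effector
# and the sample's subgroup (illustrative values only)
df = pd.DataFrame({
    "sap11_present": [1, 1, 1, 0, 1, 0, 0, 0],
    "subgroup": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Binomial GLM with the default logit link
model = smf.glm("sap11_present ~ subgroup",
                data=df,
                family=sm.families.Binomial()).fit()
print(model.summary())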
Edge-and density-dependent disease progression
Aster Yellows disease progression increased over the growing season and was also dependent upon initial planting density. On the first sampling date (27 Jun 2018), no symptomatic carrots were identified. Disease incidence gradually increased throughout the season until October, when it increased exponentially, reaching an average of 4.5 ± 1.8% (edge plots) and 4.3 ± 1.9% (interior plots) in the high-density planting, and an average of 11.0 ± 5.9% (edge plots) and 8.2 ± 2.2% (interior plots) in the low-density planting (Fig 1 and S3 Table).
Aster Yellows disease incidence appeared to be influenced by planting density (high versus low) and by plot location within the field (field-edge versus interior plots), and was strongly dependent on date. To evaluate each of these factors, a mixed model was constructed with density (high/low), plot location (edge/interior), week number, and all two-way interactions as fixed effects, and plot as a random effect (S4 Table). In the incidence model, density (F = 9.06, P = 0.0083), week (F = 479.49, P < 0.0001), and the density × week interaction (F = 26.81, P < 0.0001) were significant. The random effect of plot was also significant (Wald's p-value = 0.040), indicating differences in disease incidence between specific plots in the field independent of all other fixed effects. Plot location within the field showed a non-significant trend (F = 2.63, P = 0.12). This model confirms a significant difference in disease incidence between the high-density and low-density plantings, as well as a difference in the rate of
change in disease progress between the two densities. However, a similar mixed model using the number of diseased carrots per plot, rather than the stand count-adjusted disease incidence values, showed week as the only significant component (F = 497.82, P < 0.0001), with density, location, and all interaction terms non-significant. In total, 350 symptomatic carrots were recorded in the experiment, 163 in the low-density and 187 in the high-density portions of the experimental plot. This suggests that the progression of AY through both the high- and low-density plantings was not limited by the number of available host plants, but rather by other factors.
Aster leafhopper abundance and infectivity
Cumulative ALH abundance and rates of AYp-carrying ALH peaked in late August of 2018 and followed similar bell-shaped distributions in both the high- and low-density plantings (Fig 2). Because the ALH is the principal insect vector of AYp in Wisconsin, its populations were monitored to evaluate vector pressure throughout the 2018 growing season in the high- and low-density carrot plots. From each leafhopper collection, a subset of adult ALH was assayed for phytoplasma to quantify the infectivity of the in-field leafhopper populations at unique time points throughout the study. Aster leafhopper abundance in the field peaked from late July through late August, with populations declining through September and October (Fig 2). The fraction of captured ALH carrying AYp followed a similar pattern, with a mean peak infectivity of 8.75% coincident with peak ALH populations (Fig 2).
Aster leafhopper abundance was greater in the high-density carrot plantings in July and August. Abundance peaked in the high-density planting at 32.85 ALH/ 20 sweeps. Further,
phytoplasma-carrying ALH similarly peaked at 17.5% of the ALH population in the high-density plantings on August 21 (Fig 3). The combination of high ALH counts and high phytoplasma detection within the ALH coincided with high AYp disease incidence in the carrot field.
Aster yellows phytoplasma genotypic determination
Aster Yellows disease in carrots predominantly manifested late in the growing season, with a higher proportion of AYp subgroup 16SrI-A (67%). In addition to evaluating carrot plots for disease progress, tissue samples from each symptomatic carrot (laboratory-confirmed AYp-positive samples) were genotyped to determine phytoplasma subgroup. In this experiment, only the 16SrI-A and 16SrI-B AYp subgroups were observed within the field, with the overall subgroup composition changing significantly during the growing season (Fig 4). Early in the sampling season, AYp samples comprised only the 16SrI-B subgroup, whereas a greater proportion of mid-season samples were mixed, containing varying proportions of both 16SrI-A and 16SrI-B. The majority of AY-diseased carrots developed overt symptoms late in the growing season, and the greatest proportion of these infected carrots were classified as AYp subgroup 16SrI-A (67%).
Secreted AY-WB effector identification
The distribution and proportion of SAP effectors depend upon AYp subgroup designation. By employing 'MonsterPlex' parallel PCR amplification, we quantified the abundance of known effector sequences in the AYp genome and PMUs in 290 unique AYp samples collected from symptomatic carrots. To generate a read ratio for each effector, the number of reads per effector was normalized to the number of 16S rRNA gene reads for each sample. These normalized read values revealed significant differences in the presence of effector genes between the two AYp subgroups observed in the investigation (Fig 5). Of the 20 effectors amplified, SAP11, 13, 15, 19, 45, and 68 were found to be specific to the 16SrI-A subgroup and were not detected in the 16SrI-B subgroup (above the established threshold). Effectors SAP05, 06, 27, 35, 41, 42, 44, 48, 49, and 66 were present in both subgroups. Of these, SAP05, 06, 41, 42, and 66 generated more than twice as many reads from samples infected with the 16SrI-A subgroup as from those with the 16SrI-B subgroup. SAP06 and SAP66 were observed at average read numbers 35.3 and 21.4 times greater, respectively, in the 16SrI-A subgroup than in the 16SrI-B subgroup. The effectors with the highest number of reads in subgroup 16SrI-A samples were SAP41 (81.40 reads per 16S rRNA read) and SAP42 (78.34 reads per 16S rRNA read), while SAP48 (62.69 reads per 16S rRNA read) and SAP49 (68.34 reads per 16S rRNA read) had the highest read ratios within subgroup 16SrI-B (S2 Table). Only SAP44 generated slightly more reads from 16SrI-B than from 16SrI-A within infected samples.
Discussion and conclusions
The movement and spread of AYp rely on multiple insect vectors, including the ALH [3]. Within susceptible agricultural crops, the disease is monitored by growers and proactively controlled with insecticides when vector numbers are deemed sufficiently high [3]. However, if not controlled properly, AYp infection can manifest as AY disease, which can have significant ramifications for crop yield and raw product quality. We sought to understand the progression of this disease in an insecticide-free environment to determine the ecological factors that influence AY disease progression. We examined time, planting density, location within the field (edge versus interior plot effects) and the associated genetics of the AYp (subgroup designation and effector composition), and hypothesized that disease progression within carrot fields by AYp is not random. Indeed, we determined that the progression of AY disease is influenced by more than just the genotypic construct of AYp; it is also influenced by sample location, planting density and time during the crop season.
The overall progression of AY disease was influenced by sample location within the field, planting density, and time of year. We observed that AY disease progressed at higher rates at lower planting densities. This suggests that infected leafhoppers were perhaps more readily attracted to infected plants, or that infected plants had greater apparency to mobile insects in less dense aggregations. Further, plants located along the outside edges of sample plots had a higher likelihood of infection by AYp, though this was only a non-significant trend (P = 0.12). This could suggest that the movement of leafhoppers is directional and that leafhoppers colonize the field from the plot edges inward. As in other members of the Hemiptera, adult leafhopper host-location cues involve contrasts, increasing the likelihood that landing and initial inoculations occur along field or plot edges. Leafhoppers were observed within our field as early as June 21, 2018, with the highest abundance on August 21, 2018. The highest rates of new infections were observed several weeks after peak leafhopper counts, in mid-September, which roughly corresponds to the previously described period of latency for the development of visual AY symptoms in newly infected carrots, which can vary from 2-3 weeks [30]. While the observed infection rates within the low-density carrots were statistically higher than in the high-density planting, the actual, non-adjusted numbers of infected carrots in the low- and high-density plantings were statistically similar. Although overall ALH numbers and the associated incidence of AYp infection within the insects were greater in the high-density plantings, the incidence of AYp infection within susceptible carrots (carrot cultivars that have been evaluated to be susceptible to infection by AYp and develop overt symptoms as a result of AYp infection) [31] was correspondingly lower, suggesting that the numbers of potentially inoculative insect vectors may not be limiting disease progress.
It has been previously established that AYp consists of multiple, genetically distinct subgroups that can cause AY disease in carrots [11]. In Wisconsin there are at least two AYp subgroups, 16SrI-A and 16SrI-B, that are known AY disease agents [2,11]. These earlier observations correspond to the findings of this study, in which we observed 16SrI-A, 16SrI-B, or a co-infection with both subgroups in our experimental fields. The predominant subgroup at the beginning of the growing season was 16SrI-B (100%); however, as the season progressed, the subgroup composition shifted significantly to 16SrI-A (67%) by the end of the season. The higher proportion of 16SrI-B at the beginning of the season could be an artifact of the initially low number of infected carrots. However, the shift in subgroup proportion is important to highlight and suggests that the 16SrI-A subgroup was being selected for over the 16SrI-B subgroup within our experiment. We did detect a small fraction of carrots infected with both 16SrI-A and 16SrI-B, suggesting that the pathogen subgroups can co-occur within the same host. The fraction of carrots deemed co-infected with both the 16SrI-A and 16SrI-B subgroups was determined through the use of RFLP and 16S rRNA sequencing. There is a possibility that the phytoplasma in this fraction of carrots could actually be a new phytoplasma subgroup with two copies of the 16S rRNA gene (one 16SrI-A and the other 16SrI-B), which should be evaluated in future studies. Differences in SAP effector genes between the 16SrI-A and 16SrI-B subgroups may contribute to the higher abundance of 16SrI-A later in the season.
Secreted AY-WB effector genes were detected in symptomatic carrots in Wisconsin. Secreted AY-WB genes were amplified from field-collected plants infected with 16SrI-A phytoplasmas, consistent with primers designed against the 16SrI-A AY-WB phytoplasma genome. In addition, some effector genes were present in field-collected 16SrI-B-infected plants, indicating that SAPs are present in both the 16SrI-A and 16SrI-B genomes found in Wisconsin. Differences in effector repertoires between the 16SrI-A and 16SrI-B subgroups could contribute to the distinct disease progression dynamics observed in the fields of Wisconsin. Secreted AY-WB proteins have been documented to manipulate host presentation to insect vectors, with important biological ramifications for insect vector colonization (attractiveness) and reproduction (fitness). The exact functions of many of the investigated effectors are still unknown; however, the functions of SAP11 and SAP54 have been well documented. Secreted AY-WB protein 11 (SAP11) promotes oviposition (egg laying) by gravid adult female ALH, and SAP54 attracts insect vectors to plants by suppressing innate immune responses to herbivores. We noted that SAP11 effector reads were significantly more abundant in 16SrI-A. It is possible that the SAP11 effector may be more effective at promoting 16SrI-A phytoplasmas when more leafhoppers are present; however, how SAP11 modulates phytoplasma dynamics in a field situation remains to be determined. Overall, the effector proportion was significantly skewed towards 16SrI-A; only SAP44 was significantly more abundant in the 16SrI-B phytoplasma subgroup. This observation suggests that subgroup 16SrI-A could be the predominant subgroup at our field location due to the enhanced potential for pathogen spread associated with the increased proportion of effectors within this subgroup.
The data presented here represent an evaluation of a select set of factors which may influence progression of AY disease in susceptible carrots within central Wisconsin. We examined the factors: time during the growing season, initial planting density, sample location within the field and AYp subgroup and effector composition. Other factors could be evaluated in future studies to further complement our current understanding of the AY disease system including temperature, latitude, cultivar and cropping system. A further limitation of the study was that it was conducted over a single year and a single production season, but the experimental design implemented did allow for robust statistical analysis in terms of temporal changes observed within this timeframe. Here we demonstrate that AY disease was greater along plot edges and adjusted, final season incidence was greater in plots with lower, initial planting density. We also examined the genetic makeup of the phytoplasma and correlated subgroup 16SrI-A to higher effector proportion and greater disease spread. This information will lead to a better understanding of AYp movement in commercial crops, and contribute new knowledge towards describing factors that contribute to disease progress in susceptible carrots. | 2021-02-06T06:17:34.609Z | 2021-02-04T00:00:00.000 | {
"year": 2021,
"sha1": "046d3bc566006b9b8e343296b5469d68040acd99",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0239956&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "20020e41c49160256c4c505ab7feb1e82884e197",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19299310 | pes2o/s2orc | v3-fos-license | Clinical features, course and treatment of methamphetamine-induced psychosis in psychiatric inpatients.
BACKGROUND
Over the past few years, methamphetamine-induced psychosis (MIP) has increased in Iran, accounting for a significant percentage of psychiatry hospital admissions. The present study was conducted with an aim to investigate clinical symptoms, and course and treatment methods of MIP inpatients in Shafa Psychiatry Hospital in northern Iran.
METHODS
Participants were 152 MIP inpatients. Brief Psychiatric Rating Scale (BPRS) subscales of suspiciousness, unusual thought content; hallucinations and hostility were used to measure psychiatric symptoms. Data regarding suicide and homicide and violence were also obtained through interviews with the inpatients and their family. Based on their lengths of recovery time, the inpatients were categorized into 3 clinical groups. These inpatients received their usual treatments and were monitored for their psychiatric symptoms and clinical course of illness. The data were analyzed by descriptive statistics.
RESULTS
The most frequent psychiatric symptoms were violence (75.6 %), intimate partner violence (61.2 %), delusions of persecution (85.5 %), delusions of reference (38.5 %), delusions of grandiosity (32.9 %), delusions of infidelity (30.2 %), auditory hallucinations (51.3 %), visual hallucinations (18.4 %), suicidal thoughts (14.5 %), homicidal thoughts (3.9 %), suicide attempts (10.5 %) and homicide attempts (0.7 %). Recovery from psychotic symptoms in 31.6 % of the inpatients took more than one month. 46.1 % of the inpatients were treated with Risperidone and 37.5 % with Olanzapine. Persecutory delusion and auditory hallucination were the most frequent persistent psychotic symptoms. 20.8 % of the inpatients with a duration of psychosis of more than one month were treated with electroconvulsive therapy (ECT) along with antipsychotics.
CONCLUSION
All forms of violence are highly frequent in MIP inpatients. Our findings agree with many other studies suggesting that recovery from MIP can take more than a month. Promising initial findings were obtained regarding the efficacy of electroconvulsive therapy in MIP patients.
Japan has a long history of methamphetamine abuse and has experienced several major epidemics of the substance. Japanese psychiatrists believe that stimulant-induced psychosis can be divided into different clinical groups. The first is a transient psychosis group, in which the incidence and duration of symptoms are limited to a maximum of 4 to 5 days after intoxication and can be seen in the withdrawal or intoxication periods. In the second group, psychotic symptoms resolve in less than a month [7,8]. In the third group, symptoms can be of much longer duration, up to several months or even years [7-10]. Some experts believe that 5 to 10 % of MIP patients may not fully recover from their psychotic symptoms even after a long time [7,8]. Recent studies, including those conducted in Iran, have provided strong clinical evidence to support such a classification [6,7,11,12]. In the study by Yui et al. [7], 64 % of patients recovered from their symptoms within ten days and 82 % gained full recovery within a month, while psychosis persisted for more than a month in 18 %. In a previous study in Iran, 8.75 % of MIP cases had symptoms that continued for more than a month [6]. In the 3 major epidemics in Japan, a significant number of patients still had psychotic symptoms after a month [7,8]. Some researchers believe that MIP can imitate both positive and negative symptoms of schizophrenia, while others consider it a short-term psychosis with positive symptoms. It is difficult to distinguish the Japanese persistent psychosis from a primary psychosis, such as schizophrenia, triggered by the use of amphetamines [9,12].
There are still many unanswered questions regarding the clinical characteristics of MIP. Unfortunately, Iran has experienced an epidemic of methamphetamine use in recent years, which has provided an opportunity to study the clinical symptoms, course and treatment of MIP inpatients.
Methods
This cross-sectional study was conducted at Shafa Psychiatry Hospital, the only psychiatric hospital and referral and psychiatric emergency center in Rasht, the capital of Guilan Province, with a population of 2.5 million, in northern Iran, from August 2013 to August 2014.
Participants were 152 MIP inpatients who were admitted to Shafa Hospital due to clinically significant symptoms of hostility, grandiosity or suspiciousness (a score of 5 or above) or hallucinations, unusual thought content or suicidality (a score of 6 or above) on the Brief Psychiatric Rating Scale (BPRS) [13]. Hostility in the month preceding admission was recorded as violence. Urine screening tests for amphetamines, morphine and cannabis were performed at the time of admission in the emergency ward. The patients were interviewed about their drug and substance abuse and dependency. Family observations of intoxication periods and substance-related behaviors in the month preceding admission were obtained in separate face-to-face interviews and used to determine the abuse of any other substances. Medical records were used to identify schizophrenia, bipolar disorder, and antisocial and borderline personality disorders. Patients were excluded from the study if they had positive urine tests or any history of alcohol, cannabis or morphine abuse in the month preceding admission, or if they had been diagnosed with any of the above-mentioned primary psychiatric disorders. Fixed doses of methadone were accepted. All of the patients in this study had positive urine tests for methamphetamine. A psychiatrist evaluated the patients and confirmed that they met the DSM-IV-TR diagnostic criteria for amphetamine-like use disorders (dependence, 304.40, or abuse, 305.70) and amphetamine-like induced psychotic disorders (292.11, 292.12).
BPRS was administered according to the guidelines provided by Lukoff [13] at admission and then twice weekly to assess psychiatric symptoms and monitor the patients' improvement. After the patients were admitted to the psychiatric wards and the primary evaluations were completed, treatment was undertaken by certified psychiatrists. The inpatients were monitored for primary psychiatric symptoms and the clinical course of treatment, and they received the usual evaluation and treatment for psychotic disorders. There was no intervention or compromise regarding treatment methods, whether medicinal or otherwise. To allow comparison of our results with other studies, and considering the patterns of recovery reported in previous studies, we divided the patients into 3 clinical groups: Group 1 (full recovery in one week or less), Group 2 (full recovery in more than one week but no more than one month), and Group 3 (full recovery taking more than one month). Given that there is no global agreement on, or clear definition of, the terms "transient" and "persistent", we preferred to use numbered groups in this paper. A score of 3 or lower on any of the mentioned subscales was the criterion for recovery on that particular subscale, and full recovery was defined as recovery on all of the monitored BPRS subscales. BPRS is a tool developed for assessing the severity of psychiatric symptoms; it is widely used in clinical and research assessments and shows a high degree of inter-rater reliability when used by skilled clinical specialists. Certified psychiatrists and an experienced clinical psychologist trained in all aspects of interviewing skills conducted the interviews.
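The grouping rule can be summarised in a short sketch (Python, for illustration only; the subscale list follows the BPRS items monitored in this study, and the function names are our own):

SUBSCALES = ["hostility", "grandiosity", "suspiciousness",
             "hallucinations", "unusual_thought_content", "suicidality"]

def fully_recovered(scores):
    """scores: dict mapping subscale -> BPRS score at one assessment;
    recovery on a subscale is a score of 3 or lower."""
    return all(scores[s] <= 3 for s in SUBSCALES)

def clinical_group(days_to_full_recovery):
    """Group by time to full recovery, as defined in the text."""
    if days_to_full_recovery <= 7:
        return 1   # one week or less
    if days_to_full_recovery <= 30:
        return 2   # more than one week, up to one month
    return 3       # more than one month

print(clinical_group(5), clinical_group(21), clinical_group(45))  # 1 2 3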
Data regarding suicide, homicide and violence were also obtained from interviews with the inpatients and their families by certified psychiatrists and a clinical psychologist, as well as from other referral sources and, if necessary, from local investigations by social workers. We asked family members whether the patients had repeatedly assaulted them without causing harm (for example, slapped or pushed them), whether the patients had severely destroyed property in the house or outside (for example, knocked over furniture, broken windows, burned things), or whether the patients had assaulted them with a definite possibility of harm or with actual harm (for example, assaulted them with a hammer or a knife). In this study, we considered only severe physical violence.
While hospitalized, the inpatients were closely monitored for abuse of illicit drugs and substances every 3 days using clinical evaluation and urine screening tests. Inpatients who did not continue with the treatment or who, for any reason, tested positive for any substance abuse during the hospitalization period were excluded from this study.
Written consent was obtained both from the patient and from a responsible (legally authorized) family member in each case. This study received ethics approval from the Research Ethics Committee of Guilan University of Medical Sciences (IR.GUMS.REC.1394.253). The data were analyzed with SPSS-15. The Chi-square test was used for statistical analysis, and p-values less than 0.05 were considered significant.
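For illustration, a Chi-square comparison of a symptom's frequency across the three clinical groups could be run as follows (Python/scipy; the counts below are invented, not the study's data):

from scipy.stats import chi2_contingency

# rows: symptom present / absent; columns: clinical groups 1-3
table = [[10, 25, 20],
         [20, 62, 15]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")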
Results
Out of 2600 admissions to Shafa Hospital from August 2013 to August 2014, 1173 patients had positive urine tests for methamphetamine. Ultimately, 152 MIP inpatients formed the sample of this study; the others were excluded according to the exclusion criteria. Table 1 shows the demographic characteristics of the sample. The average age of the population was 35.7 (8.07) years, with the highest frequency in the 30-40 year age group. Table 2 shows the clinical characteristics. 115 of the inpatients (75.6 %) had shown at least one form of violent behavior in the past month. Of the 49 married inpatients, 30 (61.2 %) had displayed intimate partner violence. It must be noted that a single patient may have shown multiple aggressive behaviors; the categories are therefore not mutually exclusive. 146 of the inpatients (96.1 %) had delusions or hallucinations. 6 of the patients (3.9 %) showed no symptoms of delusions or hallucinations at the time of admission but were hospitalized for disorganized behaviors. No statistically significant relationship was found between psychotic symptoms (delusions and hallucinations) and the different forms of violence. The only significant association was between intimate partner violence and delusions of reference (p < 0.001). Persecutory delusion and auditory hallucination were the most frequent persistent psychotic symptoms. The frequencies of grandiosity delusion (p < 0.01) and auditory hallucination (p < 0.05) differed significantly across the clinical groups (Table 2). Table 3 shows the relationship between treatment methods and clinical groups. The patients were treated with antipsychotics, supportive therapy and electroconvulsive therapy (ECT). Serotonin-dopamine antagonists, especially Risperidone (46.1 %) and Olanzapine (37.5 %), were the most frequently prescribed antipsychotics. The use of typical antipsychotics was significantly low. Table 4 shows the frequency of use of antipsychotic drugs in the different clinical groups.
Discussion
Methamphetamine is the second most frequently abused illicit drug in Iran [11,14], and many individuals referred to psychiatric emergency wards abuse methamphetamine [1]. Among those admitted, methamphetamine abuse is usually accompanied by other substances, which makes it difficult to attribute psychiatric symptoms to one particular substance. Given the exclusion criteria of the present study, it may be claimed that this study provides a more accurate account of the psychiatric symptoms connected with methamphetamine abuse [3,5].
As in other studies, persecutory delusion and auditory hallucination were the most frequent psychiatric symptoms, and almost all inpatients had experienced several psychiatric symptoms. These symptoms, along with delusions of reference, grandiosity and infidelity, can hardly be differentiated from primary psychotic disorders such as schizophrenia [15-17]. In addition, delusions of grandiosity and aggression are frequently seen in manic episodes of bipolar disorder.
In comparison with the previous study conducted in Iran, the average age of the population was slightly higher (35.7 vs. 30.44 years), and an approximately two-fold increase in the proportion of female inpatients (8.6 % vs. 4.5 %) was observed [6]. The percentage of improvement of symptoms in the first week was the same, but the persistence of symptoms beyond one month was significantly higher in our study (31.6 % vs. 8.75 %). This means that methamphetamine takes a huge toll on the health care system: not only has there been no decrease in the rates of mood disorders, schizophrenia and other psychotic disorders, and substance-related disorders, but the health care system is also faced with a phenomenon that has caused a majority of beds in psychiatric hospitals and emergency wards to be occupied. The findings of our study are more in line with the Japanese studies, which reported rates of persistent psychosis in the 3 epidemics in Japan of 23 %, 18 % and 41 %, respectively [6]. This may of course have been partly due to our sampling methods, since the probability of psychosis induced by non-amphetamine substances and of primary psychotic disorders in our group was reduced to a minimum. It is also probable that our patients were exposed to the substance for a longer period or at a larger dose. Intimate partner violence (IPV) is a form of domestic violence in which the spouse is a target of physical, psychological, economic and sexual harm [18]. Given the particular symptoms of this disorder, such as paranoia and delusions of infidelity, it is plausible that these symptoms can precipitate intimate partner violence.
There is a high rate of IPV in certain groups of society, including substance abusers, especially methamphetamine abusers [19-23]. However, in our study, only severe physical violence was assessed, which calls for further research. In the present study, three-quarters of those sampled displayed at least one form of violence (toward the spouse, family, or society), and 61.2 % of the married inpatients met the criteria for severe physical IPV. Although our descriptive study may not be able to demonstrate a causal relationship between methamphetamine abuse and violent behavior, given the exclusion criteria of our study it can be concluded that the violent behaviors emerged only after methamphetamine abuse. McKetin et al. [17] reported a clear increase in violent behaviors among methamphetamine abusers compared to when they do not abuse the substance. The findings of the present study indicate a significantly high rate of physical violence in MIP inpatients. Special attention must be paid to the spouses and children of married patients, and to the parents and siblings of single patients, who spend many days with the methamphetamine-addicted patients; consequences of violence such as depression, anxiety and post-traumatic stress disorder should be explored [24,25]. Unfortunately, admissions to psychiatric hospitals in Iran mainly focus on removing psychiatric symptoms until the patient leaves the hospital, with little attention to the impact of the illness on family members. It seems that the facilities available now may not be suitable or efficient enough.
Persistent psychosis has been mentioned almost exclusively in the published works of East Asian researchers. In recent clinical studies, such as that by Grelotti et al. [12], more attention has been paid to studies from East Asia. These studies accept that persistent psychosis can emerge after methamphetamine abuse [12].
Published works by Iranian researchers in recent years confirm this claim as well [11,14]. There is a growing belief that sufficient amounts and duration of methamphetamine abuse, in the absence of any particular psychiatric history, can lead to MIP through damage to dopaminergic, serotonergic, and noradrenergic cells [26-29]. Medhus et al. [30] found that among patients with MIP, one-third were diagnosed with schizophrenia during a 6-year follow-up. In a similar study in Thailand, 38% of MIP cases were given a diagnosis of schizophrenia six years after their hospitalization [31]. Niemi-Pyntarri et al. [32] reported that the 8-year cumulative risk of receiving a schizophrenia spectrum diagnosis was 30% for MIP patients. This has been used to argue for MIP as a model for primary psychotic disorders and schizophrenia [33].
Despite the widespread abuse of methamphetamine and the associated psychosis, there is no structured guideline for treating MIP. The number of clinical trials has been too low to support a comprehensive manual for therapists [34-38]. In the present study, risperidone and olanzapine were the most frequently used antipsychotics, which may partly reflect current clinical trials indicating positive effects of risperidone in treating MIP [39]. These drugs are effective in restoring the weight that patients lose because of methamphetamine-induced appetite loss and hyperactivity. Almost all the patients in group 1 received antipsychotic drugs. The overall approach is supportive treatment followed by benzodiazepines to reduce restlessness and agitation [40]. Our findings indicate that our therapists prefer antipsychotics. The reasons may be the possibility of better control of violent behaviors, the therapist's expectation that the patient will not fall into group 1, or the fact that the number of hospital beds is not proportionate to the number of patients. A one-week period during which patients are not given antipsychotics can extend the length of hospitalization; with this approach, the clinician cannot determine whether the patient responded to antipsychotics or recovered spontaneously, which can increase the risk of inappropriate antipsychotic use. Recovery occurred within one month in 70% of the patients, but at the end of one month at least 30% of the patients still had symptoms. The application of electroconvulsive therapy (ECT) in group 2 aimed simply at controlling severe aggression and violent behaviors, and thoughts of suicide and homicide. ECT was applied in group 3 only when clinicians still detected persistent psychotic symptoms with no response to antipsychotics; the type of prescribed medication did not change during treatment with ECT in group 3. After 6 to 9 sessions of ECT, symptoms began to disappear. ECT improved psychotic symptoms, alone or in combination with medication, in cases unresponsive to antipsychotics.
We came across almost no reports of the application or effectiveness of ECT in treating MIP; the only report of its effectiveness that we found was a case study by Grelotti et al. [12]. It may be that the application of ECT is rather restricted in other countries. Another explanation is that psychotic disorders persisting for more than one month are, according to the American Psychiatric Association diagnostic criteria, considered primary psychiatric disorders [30-33], for which the application of ECT is permitted and effective. This reasoning leaves no room in the literature for persistent methamphetamine-induced psychosis as an entity, let alone for any treatment for it.
About 10% of the population in this study had suicidal thoughts in the month prior to admission. Given the aggressive presentation typical of these patients, a hasty judgment casts them as a source of aggression and hazard to other people, and compared with other psychiatric symptoms, suicidality and self-directed aggression receive less attention. Methamphetamine abuse in particular has been associated with suicidal behaviors as well as fatal suicide attempts [41-47]. Toxicological analyses of blood and urine samples from suicide victims have found significantly elevated levels of alcohol and methamphetamine [44]. In psychiatric emergency wards, methamphetamine abusers are more likely to have attempted suicide than other groups of inpatients [48].
Marshall and Werb [49] identified suicide and methamphetamine overdose as the main causes of death among adolescents and young adults abusing methamphetamine. The psychiatric symptoms of methamphetamine abusers, together with depression, may contribute to this tendency [44,45]. Many participants in methamphetamine-related treatment programs suffer from depression [45,46,49]. Hospital admissions, or sudden intervals of substance withdrawal, can be followed by depression and serious thoughts of suicide [50-55]. Suspiciousness, constant thoughts of a spouse's infidelity, and the fear and confusion of feeling constantly stalked can produce such profound helplessness that patients begin to believe only death can bring relief. In addition, during periods of intoxication, increases in substance-related agitation and visual hallucinations have been followed by a rise in the risk of suicide attempts [43,44].
Homicide is the endpoint of violent behavior, and physically violent behavior can easily escalate to homicide. Some homicide attempts result from the individual's violent behavior rather than prior intent, while others follow dominant homicidal thoughts. In one study, the risk of homicide by a methamphetamine abuser was estimated to be nine times that of the control group; methamphetamine appears to be the only substance with such a strong link to homicide [22].
This study has some limitations. First, it mainly focused on recovery from positive symptoms; what we expect to see in the long-term clinical course are spontaneous relapses and negative psychotic symptoms, which could be induced by the neurotoxic and degenerative effects of the substance on nerve pathways. Second, only severe physical violence was taken into account, and other important types of violence were not considered. Third, this study was descriptive with a local focus; further analytical research on other forms of violence with a national focus therefore seems necessary. Fourth, although we have a remarkable number of psychiatrists, mental health professionals, and trained general practitioners in the mental health system in Guilan, we do not have any structured program for detecting people in the early stages of developing psychosis or schizophrenia. It is not clear whether methamphetamine acts as a trigger for the first episode of psychosis in vulnerable people, or whether it starts an independent neuropathologic process.
Conclusion
The violence in these patients is significant and highly frequent. The frequency of persistent psychosis is three times that of the previous study in Iran and closer to the statistics of the methamphetamine epidemics in Japan, indicating that recovery times from psychotic symptoms are becoming longer in Iran; this agrees with many other studies suggesting that recovery from MIP can take more than a month. There were promising initial findings regarding the efficacy of treatment with electroconvulsive therapy, especially in treating persistent psychosis and reducing violent behaviors. In summary, the data presented in this study should serve as an impetus to draw more attention to the harmful aspects of methamphetamine abuse in Iranian society, especially regarding psychological health and social crime. The results of this study are a serious warning, calling for attention from psychiatrists and judicial authorities in devising proper preventive measures. | 2018-04-03T01:28:28.886Z | 2016-02-25T00:00:00.000 | {
"year": 2016,
"sha1": "07ae1ffb26f0cc3db3f5acfbec791ef8fba9e76a",
"oa_license": "CCBY",
"oa_url": "https://bmcpsychiatry.biomedcentral.com/track/pdf/10.1186/s12888-016-0745-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07ae1ffb26f0cc3db3f5acfbec791ef8fba9e76a",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
235285566 | pes2o/s2orc | v3-fos-license | Measurement of thermal diffusivity for food products under natural convection cooling
In the present communication, an experimental analysis has been performed to evaluate the relationship between thermal diffusivity and temperature for potato and brinjal (spherically shaped food products) subjected to natural convection. Free convection is fluid motion generated not by an external source but by some parts of the fluid being heavier than others. The analysis applies the one-dimensional Fourier equation experimentally to regular product shapes (cylindrical and spherical). The experimental setup consists of a deep freezer maintained at 263 K and 1.013 bar. The variation of temperature inside the product has been measured along the radial direction.
Introduction
Being healthy is not an overnight phenomenon. Natural products are healthier than man-made products [1]. A diet high in fresh fruits and vegetables, in the form of salads, can help protect us from many diseases [2]. Natural products are biodegradable, so improper food storage can lead to several problems such as the growth of bacteria and moulds [3]. Fruits and vegetables are highly perishable, and their spoilage wastes billions of dollars annually worldwide [4]. Food transport also carries negative environmental impacts, including air pollution, climate change, and soil and water pollution; these costs are borne neither by the consumer nor by the transport services but by the environment. One study suggests that road transport contributes up to 92% of these costs compared with other transport modes. Post-harvest losses of fruits and vegetables are more serious in developing countries than in developed countries. Food preservation comprises techniques used to prevent food from spoiling and to increase the shelf life of food products, including methods such as drying [5], irradiation [6], pasteurization [7], and the addition of chemical additives [8]. The first stage of food spoilage can be detected through appearance, foul smell, colour, etc. Major factors in food spoilage are changes in pH values and changes in conditions such as temperature and air exposure [9]. Zhang et al. [10] determined the thermal properties of pea fiber and potato pulp and examined the effect of these properties on extruded starch thermoplastics. Kostaropoulos and Saravacos [11] presented the thermal diffusivity of granular and porous foods at low moisture content. Arif et al. [12] presented experimental methods for determining the thermal diffusivity of food products in a forced convection environment. The studies carried out so far on food-preserving systems have presented the effect of thermal properties for a few fruits and vegetables; however, deeper investigation is missing from the literature, and analysis of brinjal in particular is lacking. Additionally, most of the reported works concern animal-based products. The aim of this work is to develop a simple method to determine the thermal diffusivity of selected fruits and vegetables (potato and brinjal) as a function of surface temperature. To the best of the authors' knowledge, the present work will contribute to the literature in the field of food and fruit preservation systems.

Figure 1(a) shows the experimental setup of the deep freezer for natural convection; the inside view is shown in Fig. 1(b). Potato and brinjal with nearly spherical geometry were chosen as test samples. They were washed with tap water to remove dirt before the experiment, and their dimensions were measured using a Vernier calliper. The setup also includes a thermo flask filled with ice. Temperature is measured with a calibrated copper-constantan thermocouple connected to a digital DC microvoltmeter. The thermocouple is inserted into the sample in the radial direction, and the sample is hung in a deep freezer initially maintained at -10 °C. Observations are recorded at uniform intervals of five minutes, with temperature measured at five different locations from the centre towards the periphery.
To simplify the analysis, the following assumptions are made:
1. Heat transfer occurs only along the radial direction.
2. Losses due to transpiration from the surface of the product are negligible.
3. The food product is spherical and homogeneous throughout.
4. There is no moisture in the cooling medium.
The specifications of the deep freezer are as follows: Model RQF-265(D); volume 265 litres; recommended voltage stabilizer 2 kVA; minimum temperature -35 to -40 °C; frequency 50 Hz; compressor make Tecumseh, model MCB 2410.
Experimental Procedure
The governing equation for cooling of a solid food product (with characteristic dimension x) placed in an air medium at constant temperature (T∞) is essentially the time-dependent heat conduction equation (Fourier's equation) without internal heat generation or moisture loss, which in dimensionless form reads

∂²θ/∂x² = (ro²/α) ∂θ/∂t   (1)

where θ = (T − T∞)/(TI − T∞) is the dimensionless temperature difference, T∞ is the temperature of the cooling medium, TI is the initial temperature of the sample, α is the thermal diffusivity, and x = r/ro, where ro (cm) is the radius of the sample under study and r (cm) is the position of the thermocouple.
The left-hand side of equation (1) represents the second derivative of the non-dimensionalized temperature difference with respect to x. It is evaluated at the skin (x = 1), for each value of time, using the temperature variation across the radius of the product. The differential on the right-hand side of equation (1), ∂θ/∂t, represents the first derivative of the temperature at the surface of the product with respect to time.
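A minimal numerical sketch of this evaluation is given below. The measurement data, sample radius, and grid here are illustrative stand-ins rather than the paper's values; the second derivative at the skin is approximated with a three-point stencil on the outermost measurement positions, and α follows by rearranging equation (1).

```python
import numpy as np

# Illustrative measurements (not the paper's data): temperatures (degC) at five
# radius ratios x = r/ro, recorded every 5 minutes during cooling.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # radius ratios, centre to skin
t = np.arange(0, 7200, 300.0)                       # time in seconds, 5-min steps
T_inf, T_i = -10.0, 25.0                            # cooling-medium and initial temps

# T[i, j] = temperature at time t[i] and position x[j]; synthetic decay for demo.
T = T_inf + (T_i - T_inf) * np.exp(-t[:, None] / 4000.0) * (1 - 0.3 * x[None, :] ** 2)

theta = (T - T_inf) / (T_i - T_inf)                 # dimensionless temperature
r_o = 0.03                                          # sample radius in metres (assumed)

# Second derivative of theta w.r.t. x, three-point stencil on the outermost
# positions (an approximation at the skin, x = 1).
dx = x[1] - x[0]
d2theta_dx2 = (theta[:, -3] - 2 * theta[:, -2] + theta[:, -1]) / dx ** 2

# Time derivative of theta at the skin.
dtheta_dt = np.gradient(theta[:, -1], t)

# Rearranging equation (1): alpha = ro^2 * (dtheta/dt) / (d2theta/dx2).
alpha = r_o ** 2 * dtheta_dt / d2theta_dx2

for Ts, a in zip(T[:5, -1], alpha[:5]):
    print(f"skin T = {Ts:6.2f} degC  ->  alpha ~ {a:.3e} m^2/s")
```

Plotting the resulting α values against the corresponding skin temperatures reproduces the kind of diffusivity-temperature curve discussed in the next section.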
Results and Discussion
In the present work, the variation of thermal diffusivity with skin temperature and the corresponding empirical correlations are presented for spherically shaped potato and brinjal under natural convection cooling. Temperature measurements along the radial direction during pre-cooling of the food products in a natural convection environment were carried out. These temperature distributions were plotted against radius ratios and extrapolated up to the skin to predict the skin temperature (refer to Figs. 3 and 4). Figures 4 and 5 show the temperature variation with time at different radius ratios for potato and brinjal, respectively.
It is observed that, at a given time, the temperature decreases as the radius ratio increases for both potato and brinjal. Figures 6 and 7 show the variation of thermal diffusivity with skin temperature for potato and brinjal, respectively; the empirical correlations developed from them are given by Eq. (2), with the coefficients listed in Table 1. The terms of the Fourier equation were evaluated and the thermal diffusivity at the skin was calculated for each value of time; finally, the thermal diffusivity was plotted against the skin temperature. A general observation for all samples is that at higher temperatures the thermal diffusivity shows a strong dependence on temperature, but below -2 °C it changes abruptly. The abrupt changes observed in the thermal diffusivity of the products may be the result of one or more of the following:
1. phase change of the various constituents of the products;
2. decrease in the temperature of water;
3. variation of the thermal conductivity and specific heat of the product;
4. density variations below the freezing temperature of the product.
However, below -2 °C the thermal diffusivity shows abrupt variations and no regression curve closely represents the data; for this reason, the correlations were restricted to temperatures above -2 °C. Cubic regression curves were found to best fit the thermal diffusivity (α) data and were therefore fitted to the data points. The thermal diffusivity as a function of temperature is represented in general form as α = AT³ + BT² + CT + D (2), where the regression coefficients A, B, C, and D for the different samples are given in Table 1.
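This fitting step is an ordinary least-squares polynomial fit and can be reproduced as sketched below; the (T, α) pairs are hypothetical placeholders, while the actual coefficients A-D are those reported in Table 1.

```python
import numpy as np

# Hypothetical (T, alpha) pairs standing in for the skin-temperature /
# thermal-diffusivity data of Figs. 6-7 (not the paper's measurements).
T_skin = np.array([20.0, 15.0, 10.0, 5.0, 2.0, 0.0, -1.0])            # degC
alpha = np.array([1.45, 1.42, 1.38, 1.33, 1.30, 1.27, 1.26]) * 1e-7    # m^2/s

# Fit alpha = A*T^3 + B*T^2 + C*T + D (Eq. 2) by least squares;
# np.polyfit returns coefficients in order of decreasing degree.
A, B, C, D = np.polyfit(T_skin, alpha, deg=3)
print(f"A={A:.3e}, B={B:.3e}, C={C:.3e}, D={D:.3e}")

# Evaluate the correlation at an arbitrary temperature above -2 degC.
T_query = 7.5
alpha_fit = np.polyval([A, B, C, D], T_query)
print(f"alpha({T_query} degC) ~ {alpha_fit:.3e} m^2/s")
```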
CONCLUSIONS
In this study, the variation of thermal diffusivity with temperature for potato and brinjal was measured in the natural convection regime. The thermal diffusivity was obtained using the one-dimensional (1D) Fourier equation, and the data obtained are shown graphically, exhibiting a clear relationship between thermal diffusivity and temperature. The results obtained in the present study confirm that determining the variation of thermal diffusivity with temperature for regularly shaped food samples is simple and effective. Such temperature profiles help in understanding various characteristics of food products, such as deterioration rate, shelf life, and storage time for a particular perishable product. Empirical relations have also been developed for potato and brinjal under a natural convection environment, which will help those working in this area. | 2021-06-03T00:18:24.071Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "6606c611abf527d4f267789ce04385b5c87931ac",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/1146/1/012016/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6606c611abf527d4f267789ce04385b5c87931ac",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
270270529 | pes2o/s2orc | v3-fos-license | THE IMPACT OF MOVIE COLORIZATION BY ARTIFICIAL INTELLIGENCE ON CINEMATIC SYMBOLISM: A CASE STUDY OF SATYAJIT RAY’S ‘PATHER PANCHALI’
The objective of image colorization is to add color to a monochrome input picture to generate a colorful result, a classic and essential problem in visual effects. In cinema, color plays a significant role on various levels, yet it was not a fundamental component of early cinema: color was added to film later, and for a long time people were accustomed to watching movies on black-and-white television sets. This study builds on the effort of a Bengali professor based in the United States who used artificial intelligence to colorize Satyajit Ray's 'Pather Panchali' as a quarantine experiment. This research analyzes the use of colorization in 'Pather Panchali' to determine whether the colors work well to emphasize the film's symbolic meaning. It also examines whether there is still a place for sentimentality after colorization and how well AI performs at coloring black-and-white movies. The research objectives include evaluating the role that colorization plays in bringing the film's meaning to light with regard to certain situations, examining among specialists the nostalgia and emotions linked with classic black-and-white films, and gauging experts' reactions to the practice of using AI to colorize previously black-and-white movies. This study uses in-depth interviews as a qualitative research approach for gathering expert opinions. It concludes that the cinematic symbolism of the black-and-white version of 'Pather Panchali' is lost in the colorized version. Expert interviews revealed the complex connection between colorization and the film's visual style. The findings emphasize the need for careful consideration and preservation of the original black-and-white format, while also recognizing the advancements and limitations of artificial intelligence in the colorization process.
INTRODUCTION
Films have increasingly moved towards a more realistic style. The introduction of sound heightened the impression of reality for audiences. The next phase was color, which engaged the chromatic senses. Motion pictures can now recreate sensory experience, both audio and visual, far more completely. Even so, color and story have had a contentious relationship, which is understandable given how recently the use of color became standard in filmmaking.
The concept of adding color to black-and-white films is not new in the modern era. With the advancement of technology, it is now possible to colorize a black-and-white film in a few hours using artificial intelligence, whereas earlier it was a very difficult and time-consuming process Lavvafi et al. (2010). But the use of colorization in film has been a subject of debate among filmmakers and film enthusiasts for decades. While the addition of color to films is often seen as a way to enhance their visual appeal and realism, it is also criticized for altering the original artistic vision of the filmmaker and detracting from the film's intended impact. The purpose of this research paper is to explore the role of colorization in cinematic storytelling, specifically through a case study of Satyajit Ray's 'Pather Panchali', a classic black-and-white film from the 1950s. The study also examines how recent advances in AI technology have lowered the barriers to colorizing black-and-white films.
According to a news report, Aniket Bera, a 30-year-old professor at the University of Maryland, published a 2.14-minute video on YouTube of Satyajit Ray's 'Pather Panchali' that had been digitally colorized and upscaled. The video, posted on May 14, 2020, was an academic experiment inspired by Bera's admiration of Ray's work. Bera upscaled the footage to 60 fps, 4K Ultra High Definition, and digitally colorized it using deep neural networks. He explained that the artificial intelligence functions similarly to the human brain, analyzing millions of real-world videos to 'dream' of the original hues and details. Completely automated, Bera's method took roughly seven hours Dasgupta (2020). An FTII alumnus, Sriram Raja, conducted an experiment called #imaginecolour during the lockdown. He coloured the Apur Sansar clip and conceptualised the 'Jaane kya tune kahi' song from Pyaasa as part black-and-white and part sepia-toned. Raja compares his work to amateur independent musicians making 'cover' versions of popular songs TNN (2020).
In addition to the attempts by Aniket Bera and Sriram Raja to colorize clips of classic Indian films using AI and manual techniques, respectively, a full colorized version of 'Pather Panchali' can also be found on YouTube. This version, uploaded by Bjgtjme - Free Movies, is over two hours long and presents the entire film in color. It is not clear what techniques were used to colorize the film, and it has received mixed reactions from viewers Bjgtjme (2022).
As we become acquainted with and empathize with each character, 'Pather Panchali' builds inexorably to a dramatic climax. Ray invests time and care into creating a universe that seems real and authentic. No aspect of the picture, from its characters to its dialogue to its plot, rings false. The feelings provoked by what happens in 'Pather Panchali' are genuine and authentic, rather than the manufactured results of manipulative formulae; Ray helps us feel along with, rather than simply for, his characters Berardinelli (2016). Attempting to colorize a classic like this is both a challenge and a thrilling endeavor. The task involves many considerations and requires great effort and attention to detail: the creator must consider the original lighting and color palette, as well as the historical and cultural context of the film, and must ensure that the colorization process does not compromise the artistic vision of the original director. Despite these challenges, the process can also be exhilarating, as it offers a unique opportunity to reinterpret and breathe new life into a beloved classic.
The significance of this research lies in its contribution to the ongoing debate about the use of colorization in film. The findings will provide insights into the impact of colorization on the artistic integrity of a film and its intended emotional impact on the audience. The research will also contribute to the understanding of the role of artificial intelligence in film colorization and its potential benefits and drawbacks.
ABOUT THE FILM 'PATHER PANCHALI'
'Pather Panchali' is a critically acclaimed Indian film directed by the Oscar-winning Satyajit Ray and produced by the Government of West Bengal, released in 1955. It is Satyajit Ray's directorial debut and the first film of the Apu trilogy, followed by Aparajito and Apur Sansar, which portrays the growth and maturation of Apu, a young boy from a poor Brahmin family living in a rural village in Bengal. The film's use of black-and-white cinematography is seen as a deliberate artistic choice, with Ray stating that he used it to evoke a sense of nostalgia and timelessness in the story TOI (2022). Featuring Kanu Banerjee, Subir Banerjee, Karuna Banerjee, Pinaki Sengupta, Uma Dasgupta, and Chunibala Devi, 'Pather Panchali' received the all-time Best Indian Film award from the International Federation of Film Critics (FIPRESCI) E-Times (2022).
The film is based on the novel of the same name by Bibhutibhushan Bandopadhyay. It is known for its realistic portrayal of rural life in India, capturing both the beauty and hardship of the people and their environment. The story follows the struggles of Apu's family, including his father Harihar, mother Sarbajaya, and sister Durga, as they face poverty, illness, and death while striving to find joy and hope in their daily lives TOI (2022). Ray conveyed a major lesson through the picture: hardship brings out one's heroic potential, whether in Apu or in Sarbajaya. What happens to a family when a daughter or sister dies, especially when they are already struggling financially? The film's small moments of creative brilliance make it timeless. 'Pather Panchali' is a cult film that preserved the humanity of interpersonal connections; even though it is about life in rural Bengal, it has universal appeal Getbengala (2020).
The film 'Pather Panchali' is a masterpiece of Indian cinema and is widely regarded as a landmark in the development of Bengali cinema. Its universal themes of poverty, family, and the fight for survival connect with global viewers. According to SatyajitRay.org's report, 'Pather Panchali' received critical acclaim and garnered numerous awards, including the President's Gold & Silver Medals in New Delhi (1955), Best Human Document at Cannes (1956), a Diploma of Merit at Edinburgh (1956), the Vatican Award in Rome (1956), the Golden Carbao in Manila (1956), Best Film and Direction at San Francisco (1957), the Selznick Golden Laurel in Berlin (1957), Best Film in Vancouver (1958), the Critics' Award for Best Film in Stratford, Canada (1958), Best Foreign Film in New York (1959), the Kinema Jumpo Award for Best Foreign Film in Tokyo (1966), and the Bodil Award for Best Non-European Film of the Year in Denmark (1966) SatyajitRay.org (2024). It has since become a classic of world cinema and is considered one of the greatest films ever made.
Satyajit Ray, the director of 'Pather Panchali', was one of the most important filmmakers of the 20th century and the first Indian director to gain international recognition. His works are known for their realistic portrayal of Indian life and their exploration of themes such as identity, family, and tradition. Ray's influence on Indian cinema and his contributions to world cinema have been widely acknowledged, and he is regarded as one of the greatest filmmakers of all time.
REVIEW OF LITERATURE
Whether to use AI to add color to films originally shot in black and white has long been debated. There was clearly a gap between the conception and the development of color: people expected color in motion pictures to be identical to color in nature, which drove its pervasive use, and by that standard early color films failed miserably at representing "real colors." Thus, by the 1930s, the black-and-white realist codes were well established and familiar to audiences. The introduction of color to the cinema was a radical change that took some getting used to Costa (2011).
Wilson Markle introduced the term "colorization" in the 1970s to refer to the practice of digitally adding hues to previously black-and-white imagery. The phrase is now often used to refer to any method of coloring previously black-and-white photos or videos Koleini et al. (2010). Since 1980, the technique of colorizing black-and-white film and images has gained considerable traction in the movie business and in computer graphics. The fundamental principle behind all colorization techniques is to replace the original monochrome image's luminance (the gray level) with a vector in a color space Lavvafi et al. (2010).
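This principle is easy to demonstrate in the CIELAB color space, where lightness (L) is separated from the two chroma channels (a, b). The sketch below uses illustrative random data and scikit-image's color conversions to show that a grayscale frame already fixes L, so colorization amounts to supplying the missing chroma plane.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

# A colour image decomposed in Lab space: L is the gray level a black-and-white
# film already provides; colorization only has to supply the (a, b) chroma.
rgb = np.random.default_rng(2).random((8, 8, 3))   # stand-in colour frame
lab = rgb2lab(rgb)

L_only = lab.copy()
L_only[..., 1:] = 0.0                              # discard chroma -> grayscale look
grayscale_view = lab2rgb(L_only)

# "Colorizing" means choosing new a, b values while keeping L untouched.
recoloured = lab.copy()
recoloured[..., 1] = 20.0                          # arbitrary green-red axis value
recoloured[..., 2] = -15.0                         # arbitrary blue-yellow axis value
print(grayscale_view.shape, np.clip(lab2rgb(recoloured), 0, 1).shape)
```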
Several different colorization techniques have arisen since the introduction of digital video processing. One such method is drawing scribbles that spread color to nearby pixels, but it requires a lot of human input. A different method for adding color to a black-and-white picture involves transferring the colors of a reference image into the new one. Chen et al. (2018) colorize grayscale photos using CNNs trained on large-scale image datasets. According to Shiguang Liu, the traditional manual coloring method consumes a great deal of manpower and material resources and may not yield satisfactory results; given a source image or video, colorization methods aim to colorize the target gray image or video automatically, reasonably, and reliably, thereby greatly improving the efficiency of this work Liu (2022).
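A toy illustration of the CNN idea follows. It is not any cited author's architecture: a small convolutional network with hypothetical layer sizes takes the L channel as input and regresses normalised (a, b) chroma, trained here for a single step on random stand-in tensors.

```python
import torch
import torch.nn as nn

class TinyColorizer(nn.Module):
    """Toy CNN in the spirit of CNN-based colorization: the network receives
    the lightness (L) channel and regresses the two chroma channels (a, b);
    the observed luminance is kept, so only colour has to be learned."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, kernel_size=3, padding=1), nn.Tanh(),  # a, b in [-1, 1]
        )

    def forward(self, L):
        return self.net(L)

# One illustrative training step on random stand-in data (a real system would
# train on Lab-converted frames from a large image or video dataset).
model = TinyColorizer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

L = torch.rand(4, 1, 64, 64)                   # grayscale (lightness) batch
ab_target = torch.rand(4, 2, 64, 64) * 2 - 1   # normalised ground-truth chroma

ab_pred = model(L)
loss = loss_fn(ab_pred, ab_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```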
Coloring techniques have evolved over time and may be grouped into three types: hand coloring, semi-automatic coloring, and automatic coloring. Coloring by hand is an age-old art form that has showcased the skills of many creative minds; in 1988, for instance, it took almost two months and about US$450,000 to finish coloring the iconic film Casablanca, an approach that required investigating historical costume notes from the original movie's set to find the actors' and actresses' most common colors. Black Magic, a commercial software suite designed for colorizing still images, gives the user access to a wide variety of strokes and color palettes; one major issue is that all segmentation work must be done by hand Semary et al. (2007).
A semi-automatic approach for adding color to black-and-white photographs was introduced by Levin et al. Similar to the technique reported by Levin in 2004, Li et al. demonstrated a semi-automatic approach for colorizing grayscale photographs; their algorithm, however, exploited the edge gradients and gradient-direction information available in the grayscale images to propagate the user's color scribbles Li et al. (2015).
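The core of such scribble propagation can be written as a sparse linear system: pixels with similar gray levels should receive similar chroma, while scribbled pixels are pinned. The following is a minimal, illustrative version in the spirit of Levin et al.; the affinity weighting, parameter values, and toy image are my own stand-ins, not taken from the cited papers.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def levin_style_colorize(gray, scribble_mask, scribble_vals, sigma=0.05):
    """Scribble propagation sketch: each unconstrained pixel's chroma is a
    luminance-weighted average of its neighbours' chroma; scribbled pixels
    are hard constraints. Solves one sparse system per chroma channel."""
    h, w = gray.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    for r in range(h):
        for c in range(w):
            i = idx[r, c]
            if scribble_mask[r, c]:
                rows.append(i); cols.append(i); vals.append(1.0)  # pin scribble
                continue
            nbrs = [idx[rr, cc] for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                    if 0 <= rr < h and 0 <= cc < w]
            # Affinity: neighbours with similar gray level get larger weight.
            wts = np.array([np.exp(-(gray.flat[i] - gray.flat[j]) ** 2 / (2 * sigma ** 2))
                            for j in nbrs])
            wts /= wts.sum()
            rows.append(i); cols.append(i); vals.append(1.0)
            for j, wj in zip(nbrs, wts):
                rows.append(i); cols.append(j); vals.append(-wj)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    b = np.where(scribble_mask.ravel(), scribble_vals.ravel(), 0.0)
    return spsolve(A, b).reshape(h, w)

# Toy example: a 16x16 gradient image with two scribbles on one chroma channel.
gray = np.tile(np.linspace(0, 1, 16), (16, 1))
mask = np.zeros((16, 16), dtype=bool)
vals = np.zeros((16, 16))
mask[8, 2], vals[8, 2] = True, -0.4    # cool scribble on the dark side
mask[8, 13], vals[8, 13] = True, 0.4   # warm scribble on the bright side
print(levin_style_colorize(gray, mask, vals).shape)
```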
In the early 2000s, a plethora of research using gradient-steered diffusion, heat-transfer equations, and inpainting surfaced to facilitate colorization. These techniques let users enter simple color strokes, and algorithms would fill in the marked regions without crossing boundaries. A breakthrough came with the introduction of neural networks, especially convolutional neural networks (CNNs), which excel at recognizing objects and can efficiently combine colorization and recognition tasks when trained on large picture datasets. The literature presents a range of strategies that use several network architectures, including autoencoders and GANs (generative adversarial networks) Titus & N.M (2018).

In their paper, Koleini et al. discussed a texture-based colorization method for black-and-white videos. To make use of MSMD's strengths in extracting edge- and texture-related information, they mapped the black-and-white scenes' Gabor-filter-based features to the optimal location within the HLS range using a multi-layer perceptron (MLP). The combination of Gabor filter banks (feature extractor) and a multilayer perceptron (mapper) achieved promising results in colorizing black-and-white films. To verify the accuracy of their procedure, they considered both the aesthetic quality of the colorization and the MSE error Koleini et al. (2010).
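A compact stand-in for that pipeline is sketched below: per-pixel Gabor response magnitudes plus the gray level form the features, and an MLP regressor learns a feature-to-HLS mapping from a colorized reference frame. The filter frequencies, network sizes, and data are hypothetical placeholders, not the cited authors' settings.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neural_network import MLPRegressor

def gabor_features(gray):
    """Per-pixel texture features from a small Gabor filter bank."""
    feats = [gray]
    for frequency in (0.1, 0.3):
        for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(gray, frequency=frequency, theta=theta)
            feats.append(np.hypot(real, imag))   # filter response magnitude
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Stand-in data: a "reference" gray frame with known HLS colours, and a new
# gray frame from the same shot to be colourised.
rng = np.random.default_rng(0)
reference_gray = rng.random((32, 32))
reference_hls = rng.random((32 * 32, 3))     # known H, L, S per pixel (toy)
target_gray = rng.random((32, 32))

# The MLP learns the texture-feature -> colour mapping from the reference frame.
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(gabor_features(reference_gray), reference_hls)

# Apply the learned mapping to colourise the new frame.
predicted_hls = mlp.predict(gabor_features(target_gray)).reshape(32, 32, 3)
print(predicted_hls.shape)
```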
Older colorization techniques produced movies with less contrast that seemed flatter and whiter, with washed-out colors. Notable breakthroughs in colorization technology throughout the 1980s brought advances, and since then some black-and-white films and TV shows have been given realistic-looking color makeovers. Colorization approaches usually entail assigning colors to particular regions within a frame and tracking those regions over many frames. In the early 2000s, numerous studies emerged employing techniques such as gradient-steered diffusion, heat-transfer equations, and inpainting to aid the colorization process; these methods allowed users to input minimal color strokes, after which algorithms would seamlessly fill the designated areas without exceeding boundaries. The advent of neural networks, particularly convolutional neural networks (CNNs), marked a significant advancement: CNNs excel at object recognition and, when trained with extensive image datasets, can effectively combine recognition and colorization tasks. The literature showcases various approaches utilizing diverse network structures such as autoencoders and GANs (generative adversarial networks) Boutarfass & Besserer (2020).
To colorize a video automatically, Mohiy, Noura, and Alaa M. Abbas developed a system that works shot by shot instead of frame by frame, combining shot-cut identification, motion estimation, similarity characteristics across pictures, and colorization. Their concept was inspired by the fact that frames within a single shot share common visual cues, so there is no need to colorize the film frame by frame: it is sufficient to color the first frame of each shot (the key frame) and then use a transfer method to apply those colors to the remaining frames. Their paper proposed and implemented an end-to-end automatic colorization system tailored to motion pictures, coming close to realizing their vision Hadhoud et al. (2010).
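The key-frame transfer step can be approximated with classic global luminance matching in the style of Welsh et al., which is not the cited system but conveys the idea: each gray pixel borrows the chroma of the key-frame sample whose lightness is closest. The sketch below uses random stand-in frames.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def colorize_from_key_frame(key_frame_rgb, gray_frame, n_samples=1000, seed=0):
    """Luminance matching: for each pixel of the gray frame, copy the chroma
    (a, b) of the key-frame sample whose lightness is closest. A crude
    stand-in for the shot-wise colour-transfer step described above."""
    key_lab = rgb2lab(key_frame_rgb).reshape(-1, 3)
    rng = np.random.default_rng(seed)
    samples = key_lab[rng.choice(len(key_lab), size=n_samples, replace=False)]
    samples = samples[np.argsort(samples[:, 0])]      # sort samples by lightness

    target_L = gray_frame.ravel() * 100.0             # gray values -> L in [0, 100]
    idx = np.searchsorted(samples[:, 0], target_L).clip(0, n_samples - 1)

    out_lab = np.empty((gray_frame.size, 3))
    out_lab[:, 0] = target_L                          # keep the frame's own lightness
    out_lab[:, 1:] = samples[idx, 1:]                 # borrow chroma from key frame
    out_lab = out_lab.reshape(gray_frame.shape + (3,))
    return np.clip(lab2rgb(out_lab), 0.0, 1.0)

# Toy shot: a coloured key frame and a later grayscale frame (random stand-ins).
rng = np.random.default_rng(1)
key_frame = rng.random((48, 48, 3))
later_gray = rng.random((48, 48))
print(colorize_from_key_frame(key_frame, later_gray).shape)
```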
Mohammad, Seyed Amirhassan, and Payman developed a method for colorizing black-and-white video footage using artificial neural networks and digital image processing, with the goal of minimizing the need for a human operator. The suggested method used an ANN to automatically colorize black-and-white films.
While training an ANN took a considerable amount of time, this could be reduced with more powerful computers or more efficient training algorithms. This approach was estimated to be almost 50 times quicker than approaches in which every frame had to be colorized by hand; on average, each manually colored frame and its trained neural networks could colorize a series of about 50 frames, roughly two or three seconds of film. In the actual procedure, a source black-and-white frame was first manually colorized; a multi-layer perceptron (MLP) neural network was then trained with these two images as inputs (a black-and-white film went in, and a color film was expected to come out). Finally, the sequence of black-and-white frames was fed to the trained network, whose output was the matching color data for those frames Lavvafi et al. (2010).
OBJECTIVES
The aim of this research is not only to analyze how satisfactory the colorization of 'Pather Panchali' is, but also to discuss the film in terms of accuracy and authenticity. Through this study the researcher tried:
1) To assess the significance of colorization in highlighting the symbolism in the film, with reference to specific scenes.
2) To dissect the nostalgia and emotions associated with classic black-and-white films among the experts.
3) To evaluate the reception of the use of artificial intelligence for colorizing grayscale films.
RESEARCH METHODOLOGY
For this study, the research methodology employed is a qualitative approach using in-depth interviews with experts in the film industry. The objective of this method is to gather insights and opinions from professionals who have experience and knowledge in the field of film colorization and its effects on cinematic storytelling. The interviews were conducted in a structured manner, with a pre-determined set of questions related to the research objectives. Interviewees were provided with a YouTube link to the colorized version of 'Pather Panchali' and to the short montage created by Bera, for their feedback.
To collect data, we selected participants based on their expertise in film color, symbolism, and the use of artificial intelligence for colorization. Participants were required to work or have worked in the film industry and to possess experience with colorization techniques. Participants gave their informed consent before being interviewed by phone, in person, or through online video chat. In most interviews, five standard questions were used:
• Which version of this film, black and white or color, would you prefer to watch, and why?
• How does the use of color in 'Pather Panchali' enhance the film's narrative and themes?
• Is the colorization adequate for highlighting cinematic symbolism?
• Do nostalgic and emotional aspects remain relevant post-colorization?
• How successful is artificial intelligence in colorizing 'Pather Panchali', and is the outcome acceptable and satisfactory?
Interviews were also conducted with a small number of students who understand film language and watch films on a variety of platforms, in order to ascertain their perspectives and feedback regarding colorization. Thematic analysis was used to assess the data acquired from the interviews: finding commonalities and organizing them systematically into meaningful patterns that can be used to answer the research questions and draw conclusions. The researchers conducted the analysis through multiple coding phases, with themes and patterns emerging as the investigation progressed.
DATA ANALYSIS AND INTERPRETATION
• Interview 1: In an online interview, Atanu Ghosh, a National Award-winning filmmaker, shares his perspective on the colorization of Satyajit Ray's 'Pather Panchali'. He states unequivocally that he prefers the black-and-white version of the film, considering it the original and therefore significant. As the film was shot in black and white, he argues that color adds nothing to his interpretation of 'Pather Panchali'. Atanu emphasizes that the lighting scheme, tone, texture, and aesthetics of the film were meticulously designed for the grayscale format, and he believes that imposing color on a film created with black-and-white parameters would have a detrimental effect on its overall artistic vision. Ghosh argues that the colorization dilutes the film's symbolic meaning. He criticizes the artificial appearance of skin tones, backdrops, and props, which lack depth and authenticity, and contends that the colorization method, which involves wide sampling, cannot duplicate the original's unique subtleties. He further asserts that colorization alters the aesthetics of the original film, going against the artistic and intellectual brilliance of its creators.
Ghosh firmly states, without further elaboration, that nostalgic and emotional aspects do not remain relevant post-colorization. Moreover, the mise-en-scène is drastically altered by the addition of color, so colorizing the film changes its aesthetic value from the original. He believes that colorizing classic black-and-white films can serve only entertainment purposes, and he highlights concerns about the violation of the creative and intellectual rights of the director, cinematographer, and art director through colorization. He emphasizes the importance of preserving the original black-and-white version, questions the relevance and adequacy of colorization in enhancing the film's narrative and symbolism, and raises concerns about the violation of artistic value through the use of artificial intelligence in colorization.
• Interview 2: Debasish Sen Sharma, a filmmaker, thespian, and academician, shares his perspective on the colorization of Satyajit Ray's 'Pather Panchali' in a face-to-face interview. When asked about his preference between the black-and-white and color versions, Sen Sharma expresses a strong inclination towards the black-and-white format. He explains that this preference is rooted in the audience's familiarity with the film in its original black-and-white form, which holds great nostalgic value, particularly for Bengali viewers. Sen Sharma takes an unfavorable view of the use of color in 'Pather Panchali' and its influence on the film's narrative and themes, believing that colorization has no substantial impact on either. According to him, the original black-and-white presentation conveyed the intended message and emotional depth effectively, rendering the addition of color extraneous and ineffectual.
The colorization, he says, fails to communicate or emphasize the film's symbolic aspects adequately, indicating that adding color does not help bring out the film's deeper significance. Sen Sharma thinks the film's sentimental and nostalgic elements are less effective once colorized. He points out that colorization leads to the loss of the iconic palette, as well as the diminished impact of light-and-shade effects in certain scenes; as a result, the picture shifts away from the sentimental aspects of the original black-and-white version. While the AI colorization is technically impressive, Sen Sharma argues that the final result still feels unnatural. He emphasizes the difficulty of overcoming the collective reminiscence ingrained in the minds of a generation, indicating that the artificial colors fail to resonate with viewers' sentimental attachment to the black-and-white version.
• Interview 3: In an online interview, Somdev Chatterjee, an Assistant Professor of Television Production at the Satyajit Ray Film & Television Institute (SRFTI), firmly expresses his preference for the black-and-white version of the film. He thinks it vital to keep Ray's and cinematographer Subrata Mitra's vision intact, and that any kind of interference, including colorization, compromises the integrity of the film. According to Chatterjee, the black-and-white version stands on its own and requires no additional modification. Analyzing the usage of color in 'Pather Panchali', Chatterjee expresses skepticism about the effects of color on the film's storyline and ideas. He believes the black-and-white format and Subrata Mitra's skilled cinematography effectively convey the intended message; to him, the film's added color clashes with its otherwise harmonious aesthetic.
Somdev doubts the question's assumption that colorization effectively draws attention to cinematic meaning. He wonders whether the black-and-white version did not already carry sufficient symbolism, arguing that the picture does not need color to make its meaning clearer. Examining the significance of nostalgic and emotional qualities after colorization, Somdev recognizes the potential for diversity among viewers. Having watched the original black-and-white film so many times, he finds the color version unsettling and claims that the use of color does not move him more deeply. He stresses the significance of retaining the original vision, doubts the need for colorization to enhance story and themes, and emphasizes the subjective nature of colorization's emotional and nostalgic influence.
• Interview 4: In this interview, Asok Dasgupta, a National and International award-winning cinematographer, documentary filmmaker, and academician, expresses a clear preference for the black-and-white version. He states that the tonal separation is much better in the black-and-white version, implying a stronger visual impact. Dasgupta believes that the use of color detracts from the viewing experience rather than enhancing the film's storyline and concepts; he specifically mentions that the skin tones of the characters are not accurately rendered on screen, implying a lack of authenticity. He states categorically that the colorization does not correspond to the intended cinematic symbolism, indicating that the film's symbolic elements are not effectively conveyed through colorization.
According to Asok Dasgupta, sentimental and emotional qualities become completely irrelevant after colorization, implying that the emotional impact and nostalgic resonance of the film are diminished or lost through the process. Evaluating the performance of AI in colorizing 'Pather Panchali', Dasgupta notes that the color is not effectively maintained across the screen: occasionally the film is colored while a few portions remain black and white. He does concede that the sight of the green foliage is appealing to the eye. In his analysis, the colorized version loses cinematic symbolism, nostalgia, and emotion and is inconsistently rendered, whereas the black-and-white version has greater tonal separation.
• Interview 5: In the interview with Soumya Shubra Das, a multifaceted individual proficient in filmmaking, acting, and academia, his observations on the colorization of Satyajit Ray's seminal work 'Pather Panchali' offer a discerning viewpoint. Das vehemently rejects the color rendition, deeming it a derisive and unsuitable treatment of a globally renowned masterpiece. As a cinema connoisseur, he is staunchly against the use of color in 'Pather Panchali', emphasizing the apparent irrationality of such a choice and arguing that incorporating color deviates from the film's original artistic intent, which affects its storyline and themes. Das believes that the use of color undermines the overall integrity of the film. He emphasizes the careful and detailed preparation that filmmakers engage in when envisioning their films, and argues that the introduction of color undermines the director's planned monochromatic approach. Das criticizes the efficacy of colorization in accentuating cinematic significance, underlining that cinematic symbolism is not exclusively reliant on color; he contends that the use of color may undermine the original aims of Satyajit Ray, who did not employ color for symbolic purposes in the original monochromatic rendition.
Das critically assesses the significance of nostalgic and emotional elements after colorization. He denounces the artificiality of the colorization, deeming it of worse quality than Technicolor and highlighting flaws in the depiction of skin tones, notably in the train scene. Das argues that the surroundings have bled into the skin tones: Durga's skin tone varies from pitch black to red-black and numerous shades in between. In his opinion, the colorization has a negative impact on the sentimental and emotional elements, disturbing the recollections linked to the original monochrome film. Das vehemently opposes the use of artificial intelligence for colorization, finding it infuriating. He argues that certain works, particularly those embodying the unique vision of their directors, must be conserved, and he underscores that AI colorization may serve as an experiment but can never replace or be deemed adequate in comparison with the original.
Drawing upon the insights shared in these interviews, a comprehensive chart has been devised to highlight preferences across various conditions, encompassing symbolism, emotional aspects, and the acceptance of colorization in both versions. The findings unequivocally indicate a unanimous sentiment among the experts: none of them expresses a liking for, or recommends, the color version.
Based on the insights garnered from these interviews, a second table has been formulated to encapsulate the distinctive viewpoints of the five film experts concerning the defined objectives. Table 1 presents the perspectives provided by these experts.
A group discussion and interviews were organized with 45 students from the Department of Mass Communication at St. Xavier's University, Kolkata. The majority of the students (40 out of 45) expressed dissatisfaction with the colorized version of 'Pather Panchali'. In a detailed analysis of the data, three primary reactions emerged regarding the colorization of the film. The first is technical: many students appreciated Professor Bera's efforts to use AI for film restoration and enhancement, but noted that AI technology, while beneficial, does not yet match the quality of hand-tinted color. An example they preferred was the colorization in 'Mughal-E-Azam', which seemed more natural compared with the sometimes artificial-looking skin tones in 'Pather Panchali'. The second reaction is emotional: the students expressed a deep emotional connection with 'Pather Panchali' and its creators (Satyajit Ray, Bibhutibhushan Bandyopadhyay, and Pandit Ravi Shankar), who are revered figures. Characters like Apu, Durga, Indir Thakrun, and Sarbajaya are ingrained in their hearts, representing familial archetypes and emotional touchstones; the students felt that any alteration to such deeply emotional content was hard to accept.
Lastly, the reaction is psychological: colors significantly influence our psychological responses, playing a subtle but powerful role in shaping the film-viewing experience. The students discussed how filmmakers manipulate emotions through the use of color, a technique that should feel natural and unforced. In 'Pather Panchali', the imposition of color was seen to distract and diminish the viewing experience, shifting focus in a way that disrupts natural engagement with the film.
Apart from that, various surveys and interviews with filmmakers and film critics, conducted by multiple media houses, have consistently opposed the colorization of 'Pather Panchali', reflecting a strong preference for preserving the film in its original black-and-white format. Film critic and author Amitava Nag notes that a survey conducted by the Kolkata-based TV channel 24 Ghanta revealed that an overwhelming 96% of participants believed 'Pather Panchali' does not require colorization to enhance its quality or its appeal to modern audiences. Nag also mentions that the debate over the colorization of black-and-white films began in Hollywood during the 1980s Chatterji (2020). Filmmaker Sandip Ray, son of Satyajit Ray, has labeled the colorization "artificial," though he acknowledges that technological advancements make it harder to produce overly tacky results. Ray expresses discomfort with the departure from the eternal black-and-white frames, citing a lack of consistency and a deviation from the original directorial vision, and he emphasizes consulting the original cinematographer to maintain authenticity and understand the tonal quality Dasgupta (2020). Professor Madhuja Mukherjee, a Film Studies lecturer at Jadavpur University and a filmmaker, has strongly criticized the colorization of 'Pather Panchali'. She argues that it undermines the original work of cinematographer Subrata Mitra by obliterating the film's nuanced gray scales and lighting variations; according to her, the colorization homogenizes skin tones, merges elements inappropriately, and flattens the visual depth, likening the effect to being "washed with chlorine" Chatterji (2020).
FINDINGS AND CONCLUSION
The findings of this study indicate that the colorization of 'Pather Panchali' does not adequately emphasize the cinematic symbolism depicted in the original black-and-white version. Experts expressed concerns about the artificial appearance of skin tones and the lack of depth in certain backdrops and props. The colorization process did not effectively replicate the specific nuances of the original film, leading to a disconnect between the colorized version and the intended symbolism.
Furthermore, the research revealed that the nostalgic and emotional aspects associated with classic black-and-white films do not remain relevant post-colorization. Experts voiced displeasure with the colorized version, emphasizing the significance of preserving the original aesthetic and the emotive impact it has on audiences. They felt that the inclusion of color detracted from the aesthetic vision and intellectual brilliance of the director, cinematographer, and art director.
Analysis of the use of AI to colorize black-and-white films revealed both its strengths and weaknesses. While experts acknowledged the technical sophistication of the colorization process and the capabilities of artificial intelligence, they raised concerns about its inability to replicate the authenticity and aesthetic value of the original black-and-white format. The colorization served audience enjoyment more than it improved the film's aesthetic or symbolic value.
Numerous articles could be written about the role and importance of color in visual storytelling. Focusing on the main issue, however, the decision to colorize 'Pather Panchali' lacks creativity. If Satyajit Ray had chosen to colorize the film himself, he would likely have approached it differently: the original script, production design, and costumes were all crafted with a black-and-white format in mind. This underscores the complexity of colorizing and restoring classic films; it involves much more than merely adding colors. Recent successful colorizations in India, such as 'Mughal-e-Azam' and 'Naya Daur', both epic dramas, contrast with 'Pather Panchali': the latter film's nuanced, lyrical, and realistic nature demands not just technical expertise but also a deeper level of creative engagement Sarkar (2020).
In conclusion, this study highlights the relevance of colorization, symbolism, and AI in the context of Satyajit Ray's 'Pather Panchali'. Expert interviews elucidated the nuanced relationship between the film's colorization and its original aesthetic intent. The findings underscore the importance of preserving the original black-and-white format while recognizing both the potential and the limitations of artificial intelligence in colorization. Further investigation into the effect of colorization on cinematic storytelling and the emotional involvement of viewers is required.
RECOMMENDATION
Based on the analysis and insights derived from the interviews with film experts regarding the colorization of classic black-and-white films, specifically focusing on Satyajit Ray's 'Pather Panchali', the following recommendations are proposed in alignment with the study objectives: • It is crucial to uphold and respect the original artistic vision of filmmakers, especially for classics that have significant cultural, historical, and cinematic value.• Colorization could be considered for specific instances where it genuinely enhances the narrative, symbolism, or viewer experience without detracting from the original aesthetic and emotional impact.• The emotional and nostalgic aspects associated with the classic films must be preserved.
• Before undertaking colorization projects, it is advisable to engage with the film community, including directors, cinematographers, film historians, and the audience, to gauge their perspectives and preferences.
• The colorization process should navigate ethical and legal considerations, particularly regarding the intellectual property rights of the original creators.
By adhering to these recommendations, the film industry can navigate the delicate balance between innovation and preservation, ensuring that the legacy of classic films is honored while also exploring new dimensions of storytelling through colorization.
CONFLICT OF INTERESTS
None.
Figure 1. For example, neural technology was used in the 2003 release of Black Magic, a commercial image-colorization program. This tool allows users to choose from various color schemes and patterns, and segmenting pictures is user-driven (Samanta, 2023).
Table 1 Key Points of the Experts for Preferring Black-and-White Version
| 2024-06-06T15:13:47.333Z | 2024-06-04T00:00:00.000 | {
"year": 2024,
"sha1": "eef9378ee901b50d81865eb273dbc9bd463bdef7",
"oa_license": "CCBY",
"oa_url": "https://www.granthaalayahpublication.org/Arts-Journal/ShodhKosh/article/download/1001/916",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "cc641732621103180fb39ea157950751f0de632f",
"s2fieldsofstudy": [
"Computer Science",
"Art"
],
"extfieldsofstudy": []
} |
31996068 | pes2o/s2orc | v3-fos-license | Update on intraoperative radiotherapy: new challenges and issues
Intraoperative radiotherapy (IORT) for breast cancer has challenged the standard external beam radiotherapy (EBRT) and has been shown to be non-inferior for treating early breast cancer in the past decade. Several technologies have been tested for IORT and various randomised controlled trials are still ongoing. Different methods of application of IORT have also been evaluated, from early breast cancer to tumour bed boost radiotherapy amongst high risk women. The TARGIT-A and ELIOT trials have reported a low incidence of local recurrence and good survival in both arms. Moreover, mortality has been found to be lower amongst women who underwent partial breast radiotherapy compared to those treated with EBRT in a recent meta-analysis. Despite this, IORT has not yet been introduced into current clinical practice, and many clinicians do not mention this treatment option to patients awaiting breast cancer surgery. The scientific community does not unanimously support the effectiveness of IORT and still raises concerns about introducing IORT as a standard treatment option for breast cancer. Current evidence demonstrates that IORT is ready for roll-out; it is time to let well-selected and informed patients be offered this treatment option in current clinical practice.
Background
Intraoperative radiotherapy (IORT) is the administration of radiation therapy at the time of surgery, accurately defining the target volume of the breast. IORT has gained interest as an alternative to external beam radiation treatment (EBRT) in the past two decades [1]. The rationale for IORT is based on the observation that over 90% of local recurrences after breast conserving surgery (BCS) occur at or near the original operation site [2,3]. The rate of local recurrence in the remaining breast tissue in other quadrants is 4% [4,5], which approaches the estimated risk of developing contralateral breast cancer. The milestone study of Holland et al [4] has shown that multiple tumour foci occur in up to 60% of mastectomy specimens away from the index quadrant. Trials comparing BCS and mastectomy have demonstrated equivalent survival, suggesting that these small disease foci are not clinically relevant [6].
IORT can be delivered using different techniques, using either low-voltage X-rays or electrons. Two randomised controlled trials, TARGIT-A and ELIOT, have shown, in selected patients, encouraging results in terms of local recurrence and survival.
ELIOT Trial
ELIOT was a prospective single-centre randomised phase III equivalence trial (ClinicalTrials.gov Identifier: NCT01849133). The aim of this trial was to compare a 21 Gy single-dose IOERT delivered using the ELIOT technique to conventional whole breast EBRT [7]. The pre-specified equivalence margin was 7.5% (90% statistical power at 5% significance), with a non-inferiority margin of 4.5%. A total of 1,305 women were randomised, aged between 48 and 75 years, with clinically invasive T1-T2 ≤ 2.5 cm breast cancers suitable for BCS. Detailed inclusion criteria are listed in Table 1. After five years' follow-up, the ELIOT trial showed a 4.4% local recurrence rate (LRR) amongst patients who underwent BCS and IORT. The Italian retrospective analysis of the ELIOT technique showed that the five-year LRR increased as patients moved from the ASTRO 'suitable' to 'cautionary' to 'unsuitable' groups (1.5%, 4.4% and 8.8%, respectively). Similarly, the five-year LRR for the 'low risk', 'intermediate risk' and 'high risk' groups according to the GEC-ESTRO guidelines was 1.9%, 7.4% and 7.7%, respectively [8-11].
TARGIT-US trial
TARGIT-US is a phase IV registry trial launched in 2012 by Michael Alvarado and colleagues in the USA. This study proposes to investigate the efficacy and toxicity of IORT after BCS, with or without EBRT as indicated by pathologic risk factors, in women with early stage breast cancer (ClinicalTrials.gov Identifier: NCT01570998). The primary endpoint is in-breast local failure. The secondary endpoints are toxicity and morbidity, relapse-free survival and overall survival. The estimated sample size is 750 patients. Patients selected for BCS, who are considered to have a low risk of local recurrence, are eligible for the registry trial once they have given their informed consent. Inclusion criteria are shown in Table 3. Patients receive IORT as a single fraction over 15-40 minutes at the time of lumpectomy. The technique and doses used are the same as in the TARGIT-A trial, but the accrual is only open to IORT at the time of the initial lumpectomy and there is no postpathology randomisation. The trial is still ongoing and recruiting patients; complete accrual is expected in 2017.
TARGIT -Retrospective
In September 2016, Valente et al [12] published the first analysis of a multi-institutional retrospective registry using Intrabeam® in North America. Nineteen institutions from the United States and Canada retrospectively entered data on the use of IORT. Between 2007 and 2013, 935 women underwent lumpectomy and IORT either concurrently with surgery (prepathology stratum), after reopening the wound, or as a planned boost. The registry has shown that the number of women treated with Intrabeam® increased over the years (p = 0.005). The median age was 66.8 years. 90% of the patients selected presented with oestrogen receptor-positive T1 tumours. 83% of tumours were grade 1 or 2; 79% were in the prepathology stratum and 7% in the postpathology stratum, whereas 14% received IORT as a boost. At a median of 23.3 months of follow-up, 2.3% in-breast true recurrences were observed. In a per-stratum analysis, 2.4% of recurrences were found in the prepathology cohort, whilst in the postpathology cohort the recurrence rate was 6.6%. The TARGIT-R study confirmed that IORT performed concurrently at the time of lumpectomy is the preferred approach.
TARGIT-B Trial
In February 2013, the TARGIT-B trial was launched (ClinicalTrials.gov Identifier: NCT01792726) by University College London. TARGIT-B aims to compare an IORT boost with an EBRT boost in early breast cancer. TARGIT-B is a multicentre randomised controlled trial designed to test the hypothesis that a tumour bed boost delivered by IORT is superior to the standard external beam tumour bed boost administered in five fractions over five days. The trial is still recruiting patients. The device used to deliver IORT is Intrabeam® by Carl Zeiss. Eligible patients are those awaiting BCS who are found to possess one or more risk factors for local recurrence at core biopsy. The accrual goal is 1,796 patients. Table 4 reports the inclusion criteria for the TARGIT-B trial. Patients are randomised into two groups, namely, the boost group and the EBRT group. In the boost group, a 20 Gy IORT boost is delivered to the tumour bed after tumour resection over 20-35 minutes, whereas a standard external beam tumour bed boost is administered along with EBRT to the EBRT group. All patients enrolled within this trial receive postoperative EBRT and adjuvant treatments according to the final pathology report. The primary outcome is local tumour control. The secondary outcomes are the site of relapse; five years' relapse-free survival; overall survival; local toxicity and morbidity; and quality of life.
TARGIT-E Trial
TARGIT-E(lderly) is a multicentre single-arm prospective phase II study of IORT in elderly patients with small breast cancer (ClinicalTrials.gov Identifier: NCT01299987) [13]. The TARGIT-E trial is based on the TARGIT-A trial and IORT is administered using Intrabeam® (Carl Zeiss). The aim is to investigate the efficacy of IORT amongst elderly patients with small breast tumours. Table 5 shows the inclusion criteria for the TARGIT-E trial. EBRT is administered only if the final pathology demonstrates additional risk factors for local recurrence. The TARGIT-E trial is based on the rationale that local recurrence among women aged 70 years and older is about 4% and drops to 1% when radiotherapy plus tamoxifen is given [14]. Launched in 2011 by Universitätsmedizin Mannheim, TARGIT-E has recruited 538 patients, although the estimated accrual goal was 265. In November 2017, final data collection for the primary outcome measure is expected to be completed [15]. The primary outcome is the local relapse rate. Secondary outcomes include cancer-specific and overall survival, the rate of contralateral breast cancer, quality of life and cosmetic outcome. The expected local relapse rates are 0.5%, 1.0% and 1.5% after 2.5, 5.0 and 7.5 years, respectively.
Latest ongoing trials
The prospective cohort study TARGIT-C (Consolidation) is a prospective phase IV trial first launched in October 2014 (ClinicalTrials.gov Identifier: NCT02290782) by Universitätsmedizin Mannheim. This prospective, multicentre single-arm phase IV study is based on the protocol of the international TARGIT-A trial, and Intrabeam® is used to deliver IORT. The endpoints are the same as those for the TARGIT-E trial, namely, local relapse rate, cancer-specific and overall survival, rate of contralateral breast cancer, quality of life and cosmetic outcome. The expected local relapse rates are 0.825-1.375% after 3-5 years, respectively. The estimated sample size is 387 patients. Inclusion criteria are shown in Table 6. This trial is currently recruiting participants. The rationale for this trial is based on the observation that radiation of the tumour bed only, in a selected group, can be non-inferior to whole breast EBRT [16-18]. The TARGIT-C trial aims to confirm the efficacy of a single dose of IORT in a well-selected group of patients with small breast cancer and an absence of risk factors, as has been shown in the TARGIT-A trial by Vaidya et al [17,18].
IORT delivered with Xoft® Axxent® eBx™
The purpose of the phase IV Xoft® Axxent® eBx™ IORT trial is to assess the safety and efficacy of the Xoft® Axxent® eBx™ System when used for single-fraction IORT in early stage breast cancer (ClinicalTrials.gov Identifier: NCT01644669). Xoft® is a single-entry balloon catheter originally developed for brachytherapy, which can be inserted into the tumour cavity by the surgeon at the time of surgery or postoperatively. The Xoft® Axxent® eBx™ System has been used to treat early breast cancer with a multifraction accelerated partial breast irradiation (APBI) technique on an outpatient basis as part of two multicentre studies [19]. The disadvantage of brachytherapy balloons is that the radiation treatment is not concluded at the time of the operation; once the device has been placed, radiation is delivered in ten fractions twice a day over five consecutive days. The Xoft® Axxent® eBx™ System balloon, which uses low-energy X-rays (50 kV), is the only balloon device now being tested for single-dose IORT [20]. Results from this phase IV trial, aimed at assessing clinical efficacy and safety, are still awaited. The accrual goal of this trial is 1,200 patients and the first patient was recruited in 2012. The primary outcome is ipsilateral breast tumour recurrence at five years' follow-up. The secondary outcomes are regional breast tumour recurrence; disease-free survival and overall survival as well as cosmetic outcome, at five and ten years. Eligibility criteria for enrolment are shown in Table 7.
Effects on mortality of IORT in early breast cancer
A meta-analysis of randomised trials by Vaidya et al [21] analysed mortality differences in randomised trials of partial-breast irradiation (PBI). Nine randomised trials of PBI versus whole breast external beam radiation in invasive breast cancer were identified, although PBI was delivered with different techniques. For the TARGIT-A trial, data from 1,222 patients were available for the meta-analysis, as only this subgroup of patients had reached five years' follow-up. Five-year outcomes were available for non-breast cancer mortality in five trials and for breast cancer mortality in four trials. The overall survival was 94.6% for PBI versus 91.85% for EBRT. There was no difference in the proportion of patients dying of breast cancer (difference, 0.0% [95% CI −0.7 to +0.7%]). Non-breast cancer mortality with PBI was lower than with whole breast EBRT (difference, −1.1% [95% CI −2.1 to −0.2%]). Total mortality with PBI was also lower than with whole breast EBRT (difference, −1.3% [95% CI −2.5 to 0.0%]). The authors concluded that the use of PBI instead of whole breast EBRT results in lower five-year non-breast cancer and overall mortality. Moreover, the authors stated that patients should be informed about these data when breast conserving therapy is proposed.
Discussion and conclusions
The use of IORT as an alternative to EBRT in selected groups of patients has been a fundamental change in the approach to breast cancer therapy. The North American TARGIT-R (Retrospective) Registry has shown low recurrence and complication rates after lumpectomy and IORT at a median follow-up of 23.3 months [12]. Two randomised controlled trials, TARGIT-A and ELIOT, have shown that IORT is non-inferior to EBRT in terms of LRR when delivered to patients with early breast cancers and specific tumour characteristics. Based on the TARGIT-A findings, the TARGIT-US trial represents a pragmatic registry designed to follow the outcomes of IORT around the USA. The German TARGIT-C trial is still ongoing and aims to consolidate outcomes from TARGIT-A by using the same technique. The German TARGIT-E trial was launched to demonstrate that elderly patients, who are often undertreated as they frequently do not comply with the standard 3-6 weeks of EBRT, should be treated at the time of surgery with IORT when they present with a small breast cancer. The ongoing TARGIT-B trial is evaluating an IORT boost amongst young and high-risk patients, as the tumour bed boost is often missed (20-90%) due to tissue displacement and the frequent lack of cavity clips during oncoplastic BCS [22,23]. Moreover, new technologies are now being tested to deliver radiation therapy entirely at the time of surgery, such as the Xoft® Axxent® eBx™ System balloon, originally launched in the market for brachytherapy to deliver multifraction radiotherapy over five consecutive days. The Xoft® Axxent® eBx™ System balloon is now used for single-fraction IORT and its efficacy is still being evaluated in the USA. With regard to mortality associated with radiation treatment, the recent meta-analysis published by Vaidya and colleagues demonstrated benefits from PBI compared to whole breast EBRT, although there was heterogeneity between the trials for many of the outcomes. It still remains unclear how many years of follow-up are needed to obtain solid information on non-breast cancer-related death, as historically 10-15 years should be awaited before this data can be confirmed.
To conclude, current evidence suggests that it is time for a paradigm shift to inform patients about IORT and offer selected patients the option of IORT during BCS for cancer. | 2018-01-12T16:08:11.321Z | 2018-01-10T00:00:00.000 | {
"year": 2018,
"sha1": "5c3d38dcf23b0da4966e59a815e332e58a7d1e1d",
"oa_license": "CCBY",
"oa_url": "https://ecancer.org/en/journal/article/793-update-on-intraoperative-radiotherapy-new-challenges-and-issues/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c3d38dcf23b0da4966e59a815e332e58a7d1e1d",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256227645 | pes2o/s2orc | v3-fos-license | Suitability and Sustainability Assessment of Existing Onshore Wind Farms in Greece
Site selection for wind farm projects is a vital issue that should be considered in spatial energy planning. This study explores the deployment of onshore wind farms (OWFs) in Greece and assesses their suitability and sustainability using geographic information systems and multicriteria analysis techniques (the analytical hierarchy process—AHP and Technique for Order of Preference by Similarity to Ideal Solution—TOPSIS). Their suitability is assessed in terms of seven exclusion criteria and constraints provided in the Specific Framework for Spatial Planning and Sustainable Development for Renewable Energy Sources (SFSPSD-RES), while their sustainability is assessed in terms of nine environmental, technical-economic, and social assessment criteria in five different scenarios. The obtained results indicated that 81.4% of the existing wind farms are included within suitable areas and the highest percentage of improper siting refers to the installation of wind farms in sites that are within the boundaries of the Natura 2000 protected areas. The existing wind farms located in a part of Peloponnese, at the point bordering the Administrative Region (AR) of Attica, are characterized as more ideal in four out of five of the examined scenarios in the sustainability assessment. The proposed framework of this study is practical and effective in assessing the suitability and sustainability of existing wind farms in a country, and could contribute to spatial energy planning.
Introduction
Wind energy is one of the fastest-growing renewable energy technologies [1,2]. It is a quick and relatively easy-to-install sustainable energy source that does not contribute to acid rain or global warming and does not release CO2, CO, or NOx emissions [3]. The issue of wind farm siting has gained great interest over the last decades. Many countries have fostered national legislative frameworks and regulations that could aid in accelerating the expansion of wind energy and in the proper siting of wind farms (WFs) on a national and regional scale, considering environmental, economic, technical, and social constraints. The key issue in siting and spatial planning regarding renewable energy sources is that there is a gap between theory and practice. Although tools exist to pre-select the optimal siting location for the development of wind energy facilities, in practice they are not used. As a result, wind farm siting planning is often fragmented and locally driven.
There are numerous studies in the international literature that aim to identify either suitable or the most suitable sites for wind farm (WF) deployment using geographic information systems (GIS) and multicriteria decision making (MCDM) methods, while considering various exclusion and evaluation criteria (e.g., [3-15]). It should be noted that the majority of them are implemented on a regional scale (e.g., [5,16-20]) and only a few of them on a national scale [8,11,21]. However, only a handful of studies discuss the suitability of existing wind farms based on the relevant legislative frameworks and policies [15]. Numerous studies have created a framework for site selection without considering or even discussing laws, rules, and policies pertaining to the siting of wind farms or renewable energy sources (RES) in general [22].
The main objective of this study is to develop a reliable framework for assessing existing onshore wind farm (OWF) installations. This study reflects the current situation of onshore wind farms (OWFs) in Greece and evaluates their suitability and sustainability, using geographical information systems and multicriteria analysis techniques (the analytical hierarchy process-AHP and Technique for Order of Preference by Similarity to Ideal Solution-TOPSIS). In order to achieve this, the suitability of the existing WFs is examined in terms of seven (7) exclusion criteria and constraints provided in the national legal framework (Specific Framework for Spatial Planning and Sustainable Development for Renewable Energy Sources (SFSPSD-RES)) [23], and their sustainability is then investigated on the basis of a range of nine (9) environmental, technical-economic, and social assessment criteria. The proposed framework is expected to assist planners in developing and managing a wind energy strategy.
The main contributions and noteworthy aspects of the present study are as follows: (i) numerous economic, technical, environmental, and social criteria are used based on the existing national legal framework and the literature on wind farm siting; (ii) in line with the determined criteria, the compatibility and the sustainability of the existing WFs are examined and assessed using GIS, AHP, and TOPSIS approaches; (iii) to the best of the authors' knowledge, this is the first study that refers to the suitability and sustainability assessment of existing wind farm deployment at the national level; (iv) this study provides an important background for future planning decisions of WFs' development; and (v) the proposed approach can be an effective tool for making strategic decisions on the development and planning of Greece's wind energy potential as well as provide directions for future decisions concerning the land usage of areas occupied with wind farms after their lifecycle.
The rest of the paper is structured as follows: Section 2 describes the two multicriteria methods applied (AHP and TOPSIS) in this study. Section 3 includes a short description of the study area as well as a detailed presentation of the methodological framework for identifying and assessing the existing OWFs at the national scale. Section 4 provides and discusses the main results of the present study, while Section 5 concludes with useful remarks.
Analytical Hierarchy Process (AHP)
The AHP is one of the most frequently used methods in multicriteria decision making (MCDM) and was initially developed by Saaty in 1980 [24]. The AHP method is a pairwise comparison-based multicriteria decision making approach that has been used in many wind farm siting studies (e.g., [4,7-9,12-14,16,25-27]). Weights are derived for decision criteria using pairwise comparisons and a nine-point scale (the Saaty fundamental scale) for measuring preferences according to Saaty [24]. The initial step of the AHP method includes the creation of an n × n pairwise comparison matrix. The nine-point scale used for these pairwise evaluations is presented in Table 1. The pairwise comparison matrix is normalized by: (a) summing the values in each column; (b) dividing each element in the matrix by the sum of its column; and (c) averaging the values in each row. These average values represent the priority vector, or else the relative weights (w), of the assessment criteria. The AHP provides the possibility to specify whether judgments for each criterion are compatible with one another by computing the consistency index (CI) and the consistency ratio (CR) through Equations (1) and (2), respectively, as follows [28]:

CI = (λmax − n)/(n − 1) (1)

where λmax is the maximal eigenvalue, whose value corresponds to the sum of the column sums of the initial matrix (n × n) multiplied by the corresponding priority vector; and the value n is the number of assessment criteria.
CR = CI/RI (2)

where RI depends on the size of the matrix (n × n) and is called the random consistency index (Table 2) [29]. A CR rating of 10% or less (CR ≤ 10%) is generally considered acceptable and reliable. Whenever the CR is greater than 10%, the pairwise comparisons are repeated with revised judgments until a suitable result is obtained for the consistency check.
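As an illustration of the procedure above, the following is a minimal Python sketch (not part of the original study) that derives AHP weights and checks consistency; the 3 × 3 pairwise comparison matrix is a hypothetical example, and the RI values follow Table 2.

```python
# Minimal AHP sketch: priority weights and consistency check (Eqs. (1)-(2)).
import numpy as np

# Saaty's random consistency index, indexed by matrix size n
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Derive the priority vector and CR from an n x n pairwise
    comparison matrix A on Saaty's 1-9 scale."""
    n = A.shape[0]
    col_sums = A.sum(axis=0)
    # Normalize each column by its sum, then average across rows
    w = (A / col_sums).mean(axis=1)
    # lambda_max approximated as the column sums weighted by the priorities
    lambda_max = float(col_sums @ w)
    ci = (lambda_max - n) / (n - 1)          # Equation (1)
    cr = ci / RI[n] if RI[n] > 0 else 0.0    # Equation (2)
    return w, cr

# Hypothetical comparison of three criteria (illustration only)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```

For this hypothetical matrix the CR is well below the 10% threshold, so the judgments would be accepted.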
Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)
The TOPSIS method (Technique for Order Preference by Similarity to Ideal Solution) was developed by Hwang Ching-Lai and Yoon in 1981 [30] and is based on the fact that the selected alternative must be as close as possible to the positive ideal solution and as far away as possible from the negative ideal solution.
The method considers the following six steps, after the definition of the initial assessment matrix, which consists of n alternatives and m criteria. Each alternative's intersection with each criterion is denoted by xij.

Step 1: Normalize the initial assessment matrix. Each element of the initial assessment matrix is normalized using Equation (3), where i = 1, ..., n is the number of alternatives, and j = 1, ..., m is the number of criteria:

rij = xij / √(Σi xij²) (3)

Step 2: Calculate the weighted normalized decision matrix. The weighted normalized decision matrix is created by multiplying the weight of each criterion by the normalized values of the alternatives, using Equation (4), where wj is the weight of the j-th assessment criterion:

vij = wj × rij (4)
Step 3: Determine the positive ideal solution A+ = {v+1, ..., v+m} and the negative ideal solution A− = {v−1, ..., v−m} using Equations (5) and (6) as follows:

v+j = max_i vij if j ∈ J; min_i vij if j ∈ J′ (5)

v−j = min_i vij if j ∈ J; max_i vij if j ∈ J′ (6)

where J is the set of benefit criteria, while J′ is the set of non-benefit (cost) criteria.
Step 4: Calculate the Euclidean distance of the alternatives from the positive and the negative ideal solutions, using Equations (7) and (8) as follows:

S+i = √(Σj (vij − v+j)²) (7)

S−i = √(Σj (vij − v−j)²) (8)

Step 5: Calculate the relative closeness (C+i) of each alternative to the positive ideal and the negative ideal solution using Equation (9), and thus identify the ranking order of the alternatives:

C+i = S−i / (S+i + S−i) (9)
Step 6: Rank the alternatives based on the preference order. The alternative with the highest value of C+i is the most preferable solution.
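The six steps can be condensed into a short script. The sketch below is a minimal Python illustration of the method, assuming a small hypothetical decision matrix with one benefit criterion (wind velocity) and two cost criteria (slope and grid distance); the values and weights are invented for the example and are not the study's data.

```python
# Minimal TOPSIS sketch following Steps 1-6 (Eqs. (3)-(9)).
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives. X: n x m matrix, w: criterion weights,
    benefit: boolean mask (True = benefit criterion)."""
    # Step 1: vector-normalize each column (Eq. (3))
    R = X / np.sqrt((X ** 2).sum(axis=0))
    # Step 2: weight the normalized matrix (Eq. (4))
    V = R * w
    # Step 3: positive/negative ideal solutions (Eqs. (5)-(6))
    A_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))
    A_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # Step 4: Euclidean distances to the ideal solutions (Eqs. (7)-(8))
    S_pos = np.sqrt(((V - A_pos) ** 2).sum(axis=1))
    S_neg = np.sqrt(((V - A_neg) ** 2).sum(axis=1))
    # Step 5: relative closeness (Eq. (9))
    C = S_neg / (S_pos + S_neg)
    # Step 6: higher C means more preferred
    return C, np.argsort(-C)

# Hypothetical data: wind velocity (benefit), slope (cost), grid distance (cost)
X = np.array([[7.2, 12.0,  800.0],
              [6.5,  5.0,  300.0],
              [8.1, 20.0, 1500.0]])
w = np.array([0.5, 0.25, 0.25])
benefit = np.array([True, False, False])
C, order = topsis(X, w, benefit)
print("closeness:", np.round(C, 3), "ranking (best first):", order)
```

This separation of normalization, weighting, and distance computation mirrors the stepwise description above and makes it straightforward to rerun the ranking under the different weighting scenarios.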
Study Area
Greece is geographically a rather privileged country as it constitutes the main linking crossroad between Europe, Asia, and Africa. It consists of 13 geographic regions occupying an estimated surface area of 132,000 km². Located on the southeastern edge of Europe, it is almost entirely surrounded by the Mediterranean, specifically by the Aegean, the Ionian, and the Libyan Seas, and has a coastline of approximately 15,000 km [37]. Crete and Evia are its two biggest islands, followed by its basic island groups of the Cyclades, the Sporades, the Dodecanese, the Ionian Islands, the East Aegean Islands, and the Saronikos Islands. Greece also possesses a significant number of uninhabited rocky islands, most of which have been declared nature reserves as they host endemic birds as well as flora and fauna of exceptional beauty. Others present investment interest, as they meet the required standards and are suitable for the installation of renewable energy sources. All the above, in conjunction with the excellent and rich wind power potential of the country, indicate Greece's ideal conditions and potential for the dynamic deployment of wind farms (both offshore and onshore). The total wind capacity connected to the grid reached 4451 MW in 2021 [38].
The social, demographic, cultural, and environmental profile of Greece stands as follows. According to the recent population census of ELSTAT [39], the population of the country amounts to 10,432,481 inhabitants, with the majority, 36%, residing in the region of Attica. There are 13,548 settlements recorded in the country [40]. There are also 924 traditional settlements, mainly small ones (fewer than 500 inhabitants) but with significant architectural characteristics, which constitute part of the newer cultural heritage of the country and are protected cultural areas by law [41]. In accordance with the archaeological cadaster [42], there are currently 17,000 monuments, 3100 archaeological sites, 420 historical sites, 844 protected zones, and 220 museums, some of which are acknowledged and listed as global heritage sites by UNESCO. Regarding protected areas, Greece belongs to the European Network of Protection Zones and Nature Conservation "Natura 2000". Specifically, 202 Special Protection Areas (SPA) have been recognized nationwide alongside 241 Sites of Community Importance (SCI) [43]. It should be noted that the SPA and SCI areas present overlaps in their acreages. In addition, the country hosts 581 beaches and 15 marinas that have been awarded "Blue Flags" for their high quality of seawater [44].
Methodological Framework for Identifying and Assessing Existing OWFs in Greece
The methodological framework developed and applied in this study is shown in Figure 1. The framework includes three distinct stages.
The first stage aims to spatially analyze the details and spread of the existing wind farms in the country, including the number of wind farms, the wind power, the year of operation, and the number of wind turbines. Data are retrieved from [38] and are elaborated with the use of geographic information systems (GIS) and Microsoft Excel.
The second stage entails the suitability assessment of the existing wind farms. A GIS database is created that generates individual thematic maps indicating a set of seven exclusion criteria. The exclusion criteria are those that have an impact on the study area because they make it impossible for a wind farm to be deployed in a particular location due to forecasted environmental zones, uses that are incompatible with other forms of economic activity, or protection rules. The exclusion criteria in this study are mainly retrieved from the SFSPSD-RES [23] and are related to environmental as well as technological and social constraints. Unsuitable zones are found by superimposing the aforementioned maps, and the percentage of the existing wind farms that fail to satisfy the above restrictions is calculated, as sketched in the example below.
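As an illustration of how this overlay step could be scripted, the example below uses geopandas with hypothetical layer file names; the single 500 m settlement buffer and the Greek Grid projection (EPSG:2100, in meters) are assumptions chosen for the sketch rather than the study's actual workflow, which applies a different buffer per exclusion criterion.

```python
# Minimal overlay sketch: flag wind farms inside exclusion zones.
import geopandas as gpd
import pandas as pd

farms = gpd.read_file("wind_farms.shp")          # hypothetical point layer
natura = gpd.read_file("natura2000.shp")         # protected areas
settlements = gpd.read_file("settlements.shp")   # settlement polygons

# Buffer settlements by a minimum-distance rule (500 m here as an example)
settle_buf = settlements.to_crs(epsg=2100).buffer(500)

# Merge all exclusion geometries into one exclusion zone
exclusion = pd.concat(
    [natura.to_crs(epsg=2100).geometry, settle_buf],
    ignore_index=True).unary_union

# A farm intersecting the exclusion zone counts as improperly sited
farms = farms.to_crs(epsg=2100)
farms["improper"] = farms.intersects(exclusion)
print(f"improper siting: {farms['improper'].mean():.1%}")
```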
In the third stage of the framework, AHP and TOPSIS are applied to assess the sustainability of the existing wind farms and determine their preference order. The multicriteria problem is formulated in three levels as follows: the goal (sustainability assessment of existing wind farms), nine assessment criteria, and the existing wind farms. There are many competing criteria for various aspects, including environmental preservation, economic viability, technical limitations, and even public acceptability of the project. To determine sustainable siting, these parameters must be taken into consideration. The assessment criteria considered in this study are selected from the international literature and are related to technical-economic, environmental, and social factors. The decision problem subsequently lies in selecting the most sustainable sites that best contribute to the success of the objectives, which in our study are as follows: environmental protection, social acceptability, and economic prosperity.
The AHP is employed in the second level and five different scenarios are applied, while the TOPSIS method is performed in the third level. The TOPSIS method is used because the number of alternatives (existing wind farms) is quite large and its computational process is not complex; the results are obtained easily and can even be programmed into a simple Excel spreadsheet. Another significant advantage of the selected method is that the optimal solution is not only closer to the ideal solution but is also more distant from the negative ideal solution. It should be noted that the GIS database is also used to create thematic maps depicting the assessment criteria, and these maps are used to support the TOPSIS method implementation.
Details about the exclusion and assessment criteria, their data sources, as well as the quantification of the assessment criteria's relative weights employed in the AHP for the five different scenarios are provided in Sections 3.3 and 3.4, respectively.
Criteria Used in Wind Farm Siting Suitability and Sustainability Assessment
A variety of exclusion and assessment criteria should be considered when determining whether sites are acceptable for wind farms, which is a crucial step in the spatial planning process. In our study, seven exclusion and nine assessment criteria were included in the suitability and sustainability assessment of existing WFs in Greece. Tables 3 and 4 present the exclusion and assessment criteria, respectively, while their explanations are summarized in the following sub-sections.
Archaeological, Historical and World Heritage Sites
Greece is a country with an intriguing cultural and historical heritage. Consequently, a lot of sites with monuments of archaeological and historical significance are situated there. Wind farms should be located at a minimum distance of 500 m from any archaeological or historical site. In cases of properties listed as World Heritage sites by UNESCO, the minimum distance of WF siting is set at 3000 m in order to preserve the aesthetic value of the cultural environment and avoid visual distortion and reflections. In the international literature, many researchers have classified archaeological and historical places of high importance and cultural monuments (e.g., [16-19,25]).
Protected Areas
Wind farms should not be located inside protected areas, as it is crucial to preserve and protect the natural environment. Many researchers have considered various types of protected areas as exclusion criteria in their analyses (e.g., [16-18,54-56]). The Natura 2000 sites (SPA, SCI, SPA&SCI) considered in this study are excluded from WF siting in order to limit environmental damage and diminish negative impacts, especially for the avifauna.
Settlements
The SFSPSD-RES determines which settlements are excluded from WF siting. Settlements with 2000 or more inhabitants should be kept at a minimum distance of 1000 m away from WF installations. WF siting is excluded within a buffer zone of 500 m around settlements with a population of less than 2000 inhabitants. The distance from settlements is essential to reduce social impacts, such as the visual and acoustic disturbances of wind farms, as well as for safety reasons (e.g., [9,11,16-18,57]).
Traditional Settlements
According to the SFSPSD-RES, there should be a distance of 1500 m between a WF and any traditional settlement. This measure is necessary for the protection and preservation of the cultural environment, as well as for the elimination of visual, acoustic, and aesthetic impacts. Many researchers refer to the minimum distance of WFs from traditional settlements (e.g., [16-18,58]).
Bathing Waters and Water Bodies
Rivers and lakes, generally referred to as surface water bodies, are excluded from WF siting, as these places constitute natural reserves and host numerous kinds of flora and fauna. It is crucially important to preserve biodiversity and protect these areas with a buffer distance from onshore wind farm installations (e.g., [9,11,19]).
Land Cover
Both environmental and economic factors determine a set of constraints based on land restrictions. Using the Corine Land Cover (2018) database and aiming to avoid land conflicts, a number of CLC classes are excluded from WF siting.
Airports and Roads
For safety reasons, wind farm installations should be located at a minimum distance from airports and road networks. In addition, wind turbines may negatively affect operating aircraft by interfering with aviation radar signals. A buffer distance around airports and road networks is very common in similar studies (e.g., [9,11,17-19,56,57]).
Wind Velocity
Wind velocity (m/s) is an assessment criterion intended to identify the most favorable and suitable territories, namely those with the highest average wind speeds. The mean wind speed is a measure that indicates wind resources. High wind velocity can play a significant role in the optimization of the facility. Many researchers have considered wind velocity in their studies (e.g., [11,16-19,25,57,59]).
Distance from High-Electricity Grid
Wind farms should be located at the minimum possible distance from national electricity grids and power stations, in order to reduce the initial costs and electricity losses. In the international literature, the distance from high-voltage electricity grids is a very common assessment criterion (e.g., [5,18,25,54-56,60]). The closer the site of a WF installation is to the electrical grid, the more ideal the location is considered in terms of the efficiency of the facility.
Slope
Slope (%) can be a restrictive factor in WF siting, as the higher the land inclination of a place, the less suitable it is for WF siting. Steep slopes increase costs dramatically and make accessibility for the installation and maintenance of wind turbines difficult. Flat land constitutes a more favorable choice for wind farm installations (e.g., [9,11,16-18,25,57,61,62]).
Distance from Road Network
One of the most important criteria for WF deployment is the distance from the road network. Wind farm installations should be located at the minimum possible distance from road networks. The closer a WF project is to the road network, the lower its construction and maintenance costs. Many studies in the international literature have included this assessment criterion (e.g., [5,9,11,16-18,25,54,55,57,60]).
Installed Capacity
The installed capacity (MW) provides useful information about the contribution of wind farms' and renewable energy sources' output to the total energy balance. The greater the installed capacity of a WF installation, the more "green" energy is produced. In the international literature, a few case studies have considered installed capacity as an assessment criterion (e.g., [55,59]).
Distance from Protected Areas
An adequate distance of WF projects from protected areas guarantees the preservation of the natural environment and ensures the protection of biodiversity, especially avifauna. WF projects should be deployed as far as possible from protected areas to reduce potential negative impacts on the environment. Many researchers use this criterion as an exclusion and/or assessment criterion and recommend various values as a minimum distance (buffer zone), aiming to respect environmental constraints (e.g., [9,11,16-18,25,54,55,60,62,63]).
Year of Operation
The year of wind farm operation is selected as an environmental criterion as it provides useful information about the wind farm installation (e.g., [53,55]). Firstly, the more recent the year of operation, the higher the possibility of the installation including recent advances and technological trends for wind turbines. At the same time, wind farms with a short operational history present an increased capability of contributing to the energy mix for at least the next 20 years (the life cycle of the project) and therefore promote sustainable development. In addition, this assessment criterion can identify projects that are at the end of their lifetime and whose sites should therefore be reassessed for future wind farm installations. To the authors' knowledge, this is the first time that this criterion has been used in a wind farm siting decision-making process. The relative weights of the evaluation criteria under all the performed scenarios are presented in Figure 2, using the computations described in Section 2.1. From Figure 2, it can be seen that the relative weights of the assessment criteria strongly depend on the scenario performed. For example, in the alternative scenario TES, the criteria with technical-economic aspects, i.e., criteria that are related to functionality, energy efficiency, and minimization of the financial costs of the wind installation, such as Wind Velocity (AC1), Distance from High Electricity Grid (AC2), Slope (AC3), and Distance from Road Network (AC4), are the most important assessment criteria for determining the sustainability of the existing wind farms. In addition, a high weight is attributed to Installed Capacity (AC5), as it is an assessment criterion with a partially financial nature, which guarantees the shortest payback period of the project.
Results and Discussion
The outcomes of the current investigation are provided and discussed in the next subsections. First, a spatial analysis and the variability of the existing wind farms of the country are provided (Stage 1 of the proposed methodological framework, Figure 1). Next, the suitability assessment of the existing wind farms is presented (Stage 2 of the proposed methodological framework, Figure 1). Finally, the priority ranking of the existing wind farms in Greece is established using the findings of the AHP and TOPSIS (Stage 3 of the proposed methodological framework, Figure 1).
Spatial Variability Analysis of Existing Wind Farms
The installed wind capacity in Greece amounted to 4451 MW at the end of 2021 [53], while for the same year a total of 374 existing wind farms were recorded, distributed throughout the thirteen (13) administrative regions (ARs) of the country and consisting of between one and forty-one (41) wind turbines each.
According to Figure 3, the majority of wind farms are concentrated in the administrative region (AR) of Central Greece (37.3%). Additionally, a significant number of wind farms are located in the ARs of Crete (11.3%), Peloponnese (10.3%), and Eastern Macedonia and Thrace (10.3%). The lowest numbers of wind farms (fewer than five) are found in the ARs of Thessaly (0.8%) and Epirus (1.3%). Regarding the energy produced, the AR of Central Greece has by far the largest wind potential (41.3%). The ARs of Peloponnese and Eastern Macedonia and Thrace are next, with the percentages of generated wind power being 13.9% and 11.3%, respectively. It is noteworthy that the AR of Western Greece, although it is not first in the national ranking regarding the number of wind farms in Greece, comes fourth (8.3%) in terms of the energy produced per region. On the contrary, the AR of Crete contributes only 4.6% of the total wind power of the country, even though it is second in terms of the number of wind farms. The contributions of the ARs of Western Macedonia (4.5%) and Attica (4.1%) are also very low.
Figure 4 shows the evolution over time of the existing wind farm installations in Greece; the data are retrieved from the Hellenic Wind Energy Association [38]. Although the rate of commissioning of new wind farm facilities fluctuates, the majority of the existing wind facilities started operating in the period 2019-2021 (32.4%).
Regarding the number of wind turbines in the existing wind farms, almost half of them (48.7%) consist of one (1) to a maximum of five (5) wind turbines, which is likely due to economic, spatial, and social reasons. The number of wind farms with twenty-one (21) wind turbines or more is quite small, as only 4% of the existing wind farms consist of 21 to 25 wind turbines and only 1.6% of them exceed twenty-six (26) wind turbines.
Suitability Analysis
Thematic maps related to the exclusion criteria of the SFSPSD-RES (Table 3) are superimposed to determine the areas that are not appropriate for the deployment of wind farms. The corresponding results are illustrated in Figure 5. The exclusion areas for wind farm deployment are marked in gray, the suitable areas in white, and the existing wind farm installations in red.
Examining the results of Figure 5, by superimposing the existing wind farms with the exclusion areas, it can be seen that only 66.4% of the existing wind farms are included within suitable areas (proper siting), while the remaining 33.6% of the existing wind farms are located (either in part or wholly) within the exclusion areas (improper siting). It should be noted that, of this percentage, 36% had started their operation before the national legal framework (SFSPSD-RES) came into force. Therefore, the percentage of improper siting finally decreases to 18.6%. Table 6 lists the percentage distribution of improper siting based on the respective exclusion criteria.
The highest percentage of improper siting (27.19%) is due to the installation of wind farms in places that are within the boundaries of the "Natura 2000" protected areas. Most of these areas (67%) are characterized as Special Protection Areas (SPA). However, it should be noted that a significant part of these projects may be properly sited if: (i) an environmental impact assessment (EIA) study and (ii) a specific ecological assessment (SpEA) study accompany the deployment of the project.
In addition, 7.68% of the existing wind farms are either adjacent to settlements with less than 2000 inhabitants or within a buffer zone of 500 m from them. The aforementioned siting may amplify social reactions and the NIMBY phenomenon on the one hand, but on the other may contribute to familiarizing residents with the image of wind farms in the landscape. Regarding land cover, 1.32% refers to the improper siting of wind farms within sections of permanently irrigated land, but also within areas where quarrying, mining, and extractive activities are developed.
Sustainability Analysis
The 374 existing wind farms are assessed and ranked using the AHP and TOPSIS methods in order to determine the sustainability order for the existing wind farms in Greece. Following the process described in Section 2.2, the existing wind farms are ranked based on the relative closeness (C+i) of each alternative to the positive and negative ideal solutions. It should be noted that the values of the initial assessment matrix have been derived using the GIS thematic maps related to the assessment criteria. The nearest distance (minimum, in meters) from the high-voltage electricity grid, the road network, the protected areas, and the settlements is calculated. The wind velocity of each wind farm installation results from the average values that appear within the wind polygon, while, correspondingly, the slope at the specific location of the installation takes the dominant value. Data on the installed capacity, the year of operation, and the number of wind turbines are retrieved from [53].
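For illustration, the nearest-distance values feeding the assessment matrix could be computed as sketched below; the layer names are hypothetical and the metric projection (EPSG:2100) is an assumption for the example, not the study's actual pipeline.

```python
# Minimal sketch: minimum distance (m) from each wind farm to the
# nearest high-voltage grid line and road; layer names are hypothetical.
import geopandas as gpd

farms = gpd.read_file("wind_farms.shp").to_crs(epsg=2100)   # metric CRS
grid = gpd.read_file("hv_grid_lines.shp").to_crs(epsg=2100)
roads = gpd.read_file("road_network.shp").to_crs(epsg=2100)

for col, layer in [("dist_grid_m", grid), ("dist_road_m", roads)]:
    # Distance from each farm geometry to the closest feature of the layer
    farms[col] = farms.geometry.apply(lambda g: layer.distance(g).min())

print(farms[["dist_grid_m", "dist_road_m"]].describe())
```

The resulting columns can be fed directly into the TOPSIS decision matrix as the distance-based criteria.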
The weights of each assessment criterion in the five different scenarios (Figure 2) have been used for the calculation of the weighted normalized decision matrices.
Table 7 presents the positive (S+) and negative (S−) ideal solutions in the five different scenarios using the computations described in Section 2.2, while Figure 6a-e presents the most sustainable wind farms in Greece for Scenarios 1-5, respectively. Regarding the Equal Weight Scenario (EWS), wind farms located in the ARs of Peloponnese and Attica present the most sustainable wind farm siting (Figure 6a). This is attributed to the fact that these areas present the most favorable conditions, i.e., a relatively high wind velocity at the siting location, relative proximity to the electricity grid and road network, a relatively long distance from protected areas and settlements, a relatively low slope, a relatively recent year of operation, high installed capacity, and few wind turbines.
With regard to the Environmental Scenario (ES), wind farms with ideal siting are those that are far away from protected areas, have recently started their life cycle, are equipped with wind turbines with new technology (higher energy production), and respect the aesthetics of the landscape. The ten wind farms that meet the above assumptions are concentrated in the part of the Peloponnese that falls within the boundaries of the ARs of Peloponnese and Attica (Figure 6b).
Continuing with the Technical/Economic Scenario (TES), the most ideal wind farms are characterized as those located in sites with high wind velocity, at a fairly close distance from the high-voltage electricity grid and the road network, in sites with a low slope, and with a high installed capacity. Figure 6c depicts the ten ideal existing wind farms, which are spread over almost 50% of the country (Central Greece, Eastern Macedonia and Thrace, Peloponnese, Attica, Ionian Islands, and Western Greece).
The Social Scenario (SoS) concerns a more social approach, aiming at the reduction in land conflicts (wind farm-settlement), social reactions, and tensions in local communities. The appearance of the "not in my backyard" (NIMBY) phenomenon is common in cases in which the question of wind farm siting arises [69,70], and specific wind energy projects encounter local opposition. Although people support wind energy, individual wind farms can have undesirable characteristics, such as visual and noise effects. Ideal areas for wind farm siting are considered those that are far enough from settlements (e.g., 8 km, 7.5 km, 6 km, 3 km) and, at the same time, present satisfactory values in the other assessment criteria (usually without positive or negative extremes). The aforementioned wind farms are mainly sited in the ARs of Eastern Macedonia and Thrace, Central Greece, Peloponnese, and Attica (Figure 6d).
With regard to the Subjective Scenario (SuS), the 10 most sustainable wind farms can be found in part of the Peloponnese, within the borders of the AR of Peloponnese and Attica, as well as on the rocky island of Agios Georgios (Figure 6e). These areas are characterized by satisfactory wind velocity (e.g., 6-7 m/s), low slope, considerable distance from protected areas (about 40 km), and high values of installment capacity (11 MW, 16 MW, 73 MW, etc.).
Conclusions
In the present paper, a methodological framework for a spatial analysis as well as a suitability and sustainability assessment of the existing wind farms in Greece has been developed and presented. The main conclusions of this investigation are provided below:
• Exclusion criteria are provided considering the national legal framework (SFSDSP), while the framework of the assessment criteria is created considering the literature review and data availability.
• The country's existing wind farms are scattered throughout its territory; however, the regions of Central Greece, Peloponnese, Eastern Macedonia and Thrace, and Western Greece play a dominant role in the total installment capacity.
• Most existing wind farms include a relatively small number of wind turbines (up to 10).
• A total of 66.4% of the existing wind farms are included within the eligibility areas according to the institutional framework (SFSDSP), and this percentage increases further (81.4%) if the year of operation is considered.
• Although the weights of the assessment criteria and therefore the siting assumptions differ in the five examined scenarios, overlaps are observed in the results of the ideal solutions.
• The existing wind farms located in a part of Peloponnese, at the point bordering the AR of Attica (241, 243, 244, 246, 247, 248, 249, 250, 251), are characterized as more ideal in four out of the five examined scenarios in the sustainability assessment (EWS, ES, SoS, and SuS).
Wind farm siting is a complex issue, for which it is necessary to implement a methodological framework with a multicriteria dimension, which should ensure the most efficient operation of the project, the limitation of land conflicts, the full harmonization of the project with the natural environment, and the elimination of negative impacts on the natural and social environment.
The advantages of this work are that it includes numerous economic, technical, environmental, and social criteria according to the current national legal framework and the literature on wind farm siting, as well as reliable and efficient methods and techniques, such as the GIS, AHP, and TOPSIS approaches. To the best of the authors' knowledge, this is the first study that discusses the countrywide deployment of existing wind farms and their suitability and sustainability. The proposed approach can be a useful tool for making strategic decisions on the development and planning of Greece's wind energy potential and can also provide guidelines for future decisions regarding the land usage of areas occupied by wind farms after their life cycle. The framework adopted in this study will not only serve as an important backdrop for the review of the current planning decisions but will also enhance new proper spatial planning decisions/strategies. Future work should include the engagement of planners and policy makers in the selection of the most appropriate scenario to assess the sustainability of the existing wind farms in Greece. An approach that incorporates multicriteria analysis methods as well as stakeholder analyses will offer a novel way of combining decision-making support and participatory procedures. In addition, the proposed methodology can be further validated by applying the provided methodological framework in different real-world case studies (existing wind farms in various countries worldwide).
Figure 1. Methodological framework for suitability and sustainability assessment of existing wind farms in Greece. The framework includes three distinct stages.
Figure 2. Relevant weights of assessment criteria under the performed scenarios.
Figure 3. Number of wind farms (%) and generated wind power (%) per AR of Greece.
Figure 4. Year of operation of existing wind farms.
Figure 5. Unsuitable areas for wind farm siting and existing wind farm installations in Greece.
Reciprocals: If one criterion i has one of the above values assigned to it when compared with criterion j, then j has the reciprocal value when compared with i (i.e., 4 becomes 1/4, or 0.25).
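A minimal sketch of how this reciprocal rule feeds into AHP criterion weights follows; the upper-triangle input format and the row geometric-mean approximation are our assumptions (weights are also commonly obtained from the principal eigenvector of the comparison matrix):

```python
# Assumed illustration of the Saaty reciprocal rule and AHP weighting.
import numpy as np

def ahp_weights(upper):
    """Build a reciprocal comparison matrix from its upper triangle and
    return criterion weights via the row geometric-mean approximation."""
    n = len(upper) + 1
    A = np.ones((n, n))
    for i, row in enumerate(upper):
        for k, a_ij in enumerate(row):
            j = i + 1 + k
            A[i, j] = a_ij          # judged intensity of criterion i over j
            A[j, i] = 1.0 / a_ij    # reciprocal rule: a_ji = 1/a_ij (4 -> 0.25)
    gm = A.prod(axis=1) ** (1.0 / n)  # row geometric means
    return gm / gm.sum()              # normalized weights, summing to 1

# e.g., three criteria where the first is judged 4x as important as the
# second and 2x as important as the third (hypothetical values):
# ahp_weights([[4, 2], [1/2]])
```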
Table 3. Identification of exclusion criteria.
Table 4. Identification of assessment criteria.
Table 6. Percentage of improper wind farm siting due to exclusion criteria. | 2023-01-25T16:06:39.035Z | 2023-01-22T00:00:00.000 | {
"year": 2023,
"sha1": "1ea476c2da4f4b4fef532c4ceef2e7b815243520",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/15/3/2095/pdf?version=1675647064",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f9bcd4717cc996a74ab14e3a89a8e91de53bfd44",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
150075584 | pes2o/s2orc | v3-fos-license | Engaging patients and informal caregivers to improve safety and facilitate person- and family-centered care during transitions from hospital to home – a qualitative descriptive study
Purpose: The purpose was to describe patients' and informal caregivers' perspectives on how to improve and monitor care during transitions from hospital to home as part of a larger research study to prioritize the components that most influence the development of successful care transition interventions. Methods: We conducted a qualitative descriptive study between July and August 2016, during which time semi-structured telephone interviews (n=8) were completed with patients and informal caregivers across select Canadian provinces. Interviews were audio-recorded, transcribed and thematically analyzed. Results: Main themes included: the need for effective communication between providers and patients and informal caregivers; the need for improving key aspects of the discharge process; and increasing patients' and informal caregivers' involvement in care practices. Participants also provided suggestions on how to best monitor care transitions. Conclusion: This study highlighted the following strategies identified with patients and informal caregivers: focus on effective communication regarding important information; provide appropriate resources; and increase involvement. Future research is needed to incorporate the input from patients and informal caregivers into the design and implementation of care transition interventions.
Person- and family-centered care (PFCC) focuses on the unique individual as a "whole person" and not just their illness or disease. 12 The components of person- and family-centered care include holistic care, collaborative care, and responsive care. 13 This approach to care encompasses the individual's own experience of health, family, culture and community. 14 It focuses on improving the autonomy of the individual to make healthcare decisions and to improve their overall healthcare experience. 12 In the last decade, there have been major strides toward person- and family-centered care at all levels of the health system including a multitude of provincially funded initiatives as well as provincial legislation. 12 Central to person- and family-centered care is the engagement of patients and their informal caregivers. 15 Patient engagement can range from consultation, to involvement, all the way up to partnership and shared leadership. 15 Engaging patients and informal caregivers is crucial to improving the health and experiences of patients, specifically during these "vulnerable exchange points." The purpose of this study was to describe patients' and informal caregivers' perspectives on how to improve and monitor care during transitions from hospital to home as part of a larger research study to prioritize the components that most influence the development of successful care transition interventions. 16
Research design and methodology
A qualitative descriptive study 17 was conducted using semi-structured telephone interviews. We chose to conduct interviews over the telephone because we were seeking input from patients and informal caregivers across the country. Ethics approval including the verbal consent process was obtained from the University of Ottawa Research Ethics Board. We used the Standards for Reporting Qualitative Research (SRQR) checklist when writing the report on our study findings. 18
Research team
The research team has expertise on care transitions (CB) and qualitative research methods (CB, DCY). The primary researcher (CB) is an affiliate investigator at the Ottawa Hospital Research Institute and at Bruyère Research Institute. Two patient partners (KKB and LH) helped with the design and pilot of the interview guide.
Sampling and recruitment
Using convenience sampling, patients and informal caregivers were contacted by email through the Patients for Patient Safety Canada members list, and were asked to participate in a semi-structured telephone interview. Patients for Patient Safety Canada is a national patient-led program of the Canadian Patient Safety Institute with an aim to help improve patient safety. 19 Three reminder recruitment emails were sent at approximately 3-week intervals. Patient and informal caregivers interested in participating were instructed to contact the research assistant (JR).
The inclusion criteria were individuals: 65 years or older, admitted and discharged from the hospital to home within the last 90 days, able to communicate in English, and not cognitively impaired. Family members who were at least 18 years of age or older were also invited.
Data collection tools and methods
Following pilot testing of the interview guide, a trained research assistant (JR) conducted 45- to 60-minute audio-recorded semi-structured telephone interviews with patients and informal caregivers (Appendix A: Semi-structured telephone interview questions). The research assistant (JR) arranged a time that was most convenient for the telephone interview. Prior to the start of the interview, the research assistant read the information sheet and obtained verbal consent. Participants were informed of the researcher's research background and professional affiliation, and were told that the purpose of the interview was to engage in a general discussion and obtain their perspective on how care transitions could be improved.
Data analysis
A thematic analysis 19 was used to extract key themes from all the transcripts. Two researchers (CB and DCY) independently reviewed and coded each transcript. The individual analyses were then collectively reviewed by the team members, looking for similarities between the transcripts and using an iterative process until consensus was obtained on the coding (code book) and the core themes were determined. 20 Themes were derived from the data with ATLAS.ti software 21 used to manage the qualitative data and support the thematic analysis. Researchers (CB and DCY) kept a journal to record any personal thoughts and information related to the data analysis. Participants were not provided final transcripts for comment or asked to provide feedback on the findings.
Results
A total of eight (n=8) study participants were recruited between July and August 2016. Nine individuals (n=9) were interested in participating; however, one individual (n=1) was excluded because they did not meet all of the inclusion criteria. The characteristics of the participants are found in Table 1.
Patients and their informal caregivers described their perceptions of various aspects related to care transitions from hospital to home. The main themes that emerged included: 1) the need for effective communication between providers and patients/informal caregivers, 2) the need for improving key aspects of the discharge process, 3) increasing patient and family involvement and 4) suggestions on how to best monitor care transitions.
Theme 1: need for effective communication between providers and patients/informal caregivers
Under this theme, participants described examples of successful communication between health care providers and patients and their informal caregivers and also described their experiences with lack of effective communication. Participants also made suggestions to improve communication between providers across health care sectors.
Successful communication between health care providers and patients and their informal caregivers
Participants explained that it was easy to communicate with the health care providers and that the health care providers were supportive during the discharge planning process. Participants felt that the discharge planning process was well communicated, with one of them commenting on the inclusivity of the process. Caregiver 3 explained that: ". . .there was a meeting with the physician in charge of that ward, the nurse that was the care nurse on the ward, and somebody from the homecare organization and, the two of us. And that meeting was in a special room. It was totally open and free with any sort of questions being asked and being answered. It was a very good meeting. I was also clear on what was going to happen after that because of that meeting. . ." Several participants also commented on their experiences with supportive health care providers, either prior to discharge or once at home. Caregiver 6 said: "The infection specialist gave me his home phone number even and said, you know, if you get any more trouble, get on the phone. Which I did once." Caregiver 5 also commented, ". . .at least [at] the pharmacy. . . there was a wonderful staff lady who whenever he had any issues or any questions about his medicine or not feeling all right, he would call her and she would help him and she would also want to talk to the family, i.e., his wife, and if his adult children were near around them." Despite these positive experiences, some participants also noted poor communication between health care providers and patients/informal caregivers, specifically from one level of care to another.
Poor communication between health care providers and patients/informal caregivers
Participants expressed their concerns with the lack of information received as they transitioned from one setting to another. For example, Caregiver 4 said: "The patient- and family-centered care in an acute care facility. . ., you know, was excellent. But the connection into the next level of care not so excellent." Caregiver 2 emphasized the importance of arranging a follow-up appointment prior to discharge from hospital. Other suggestions included a scheduled discharge conference call or an electronic communication with the community home care agency or family physician. For example, Caregiver 1 suggested that: "her change in medication, the new prescription, should have been transmitted directly to her community pharmacy and her GP. As well as given to her, but it should have been transmitted. . . I'm going to use the word electronically, and you decide whether that's by fax, by email, or whatever, but should have been transmitted directly to the pharmacy and her GP, both of whom were known to the people doing the discharge."

Theme 2: need for improving key aspects of the discharge process

Participants felt that the coordination of care at discharge was not very flexible and that it was not very patient and family centered. Participants described the need to improve some aspects of the discharge process. This included the lack of notice of discharge date, the paucity of patient/family teaching, gaps in follow-up care, and limited accessibility to community resources.
Lack of notice of discharge date
Some participants did not receive appropriate notice about the discharge date and time. For example, Caregiver 6 described: ". . .I went to visit her in the morning and she was so relieved that I got there because she was being discharged. Like she had been discharged. I mean she was still in her bed, but, you know, she was waiting for me to come and pick her up to take her home."
Paucity of patient/family teaching
Participants described examples of lack of adequate patient and family teaching. Specifically, a participant commented on diabetes management education. Caregiver 6 described that he was not aware of the potential complications after his family member was discharged, which was detrimental to the patient's overall outcome: "But if they'd just explained what the potential complications of urinary tract infections are, and what symptoms to look out for, and how urgent it is to get people back into hospital, you know, I would have reacted much faster simply because I would have known what was a probable cause of her symptoms and how desperately urgent it is to get them treated straight away to improve your chances of survival." Another participant, Caregiver 2 also described her experience with poor teaching, "she didn't know if she [patient] was supposed to be taking medication or not, as an example. There was relatively short notice for her departure. So, you know, they [my family] live an hour from there and I live an hour from there and it was sort of just scrambling to get there and she [patient] just called. I'm done. I can go." Another gap was the lack of specific instructions at discharge. For example, Caregiver 3 described how he only learnt about taking care of his wife's condition after discharge: "Well I found that mostly out of the internet." ". . .I think we know what we need to know. You don't know what you don't know."
Gaps in follow-up care
Participants described gaps in their follow-up care. Specifically, Patient 8 described an administrative error that occurred during the referral process: "I had one physician who said he would, you know, put through a referral. . .and then that never really happened so that kind of left a few loose ends hanging two weeks later when we kind of wondered why we never got a call back." Caregiver 6 described an example of information that was missed during the discharge process: "The issue was, you know, lacking the education to know that if certain symptoms demonstrated themselves, that where to go and how urgent it was." Another participant commented on the lack of discussion around follow-up care and support once discharged. Patient 7 explained: ". . .so I was sent home and there was no one at home because my husband was also in the hospital. And fortunately, my daughter came to the rescue. We were glad she was available and able to do that, but nobody asked if she was."
Accessibility to resources
One participant was denied access to home care resources since there was already some support from a family member. Caregiver 6 explained the shortage of resources available for patients once discharged: "they said they just didn't have the resources to help people, that they were overstretched just in terms of helping people alone at home to cope and so, you know, they couldn't help me." Patient 7 noted the long wait time before her husband, who was also a patient in this instance, could gain access to physiotherapy in the community: "My husband. . . I thought that I had everything organized for him. . .he was supposed to have physio and that wasn't even ordered at that time, at the time of his discharge. And he's slid back considerably."

Suggested recommendations to improve key aspects of the discharge process

Participants suggested that resources be organized and that all discharge information be provided while the patients are still in the hospital. Patient 7 recommended: "I think that the homecare should. . . visit you in the hospital. And it should be all set up before you go home. There shouldn't be any surprises. The physio appointment should be made in the hospital so that you know when you're next going for physio. And all the equipment be ordered and in place or picked up." Another participant, Patient 8, emphasized that all the discharge information should be provided: "I think if you have any questions. I think if you have any symptoms that you are concerned about, that you know, are out of the ordinary. I think also when is that patient's next appointment with a physician or a healthcare professional because I think a lot of times people are discharged and they don't know when to see a GP or they don't know if they need to. Not that they all need to, but I think it's important to have. . .let them know a date. Kind of like what's the next step to kind of their recovery process." Participants also suggested the importance of written information to help supplement the verbal instructions. Caregiver 1 described the lack of written information as one of the biggest problems: ". . .if you're going to change someone's diet, you should leave some documentation with them, and rather than saying. . . and this was sort of the immediate change. . . and rather than saying do you understand or do you accept what I'm telling you, you should say what do you understand, because in my brother-in-law's case, he was a very proud man, and when someone asked him do you understand, the only thing he's going to say is yes. He's never going to say no, I don't understand, because. . . and he understood the language but he didn't understand the meaning in the words." . . . "Documentation is probably the biggest problem that I see as it relates to discharge." Caregiver 2 also explained the need for more written information: "I would say, you know, communication in terms of more information about when she comes home, what to look for, not just a verbal, you know, they provided her with a verbal, you know, look for this, look for that, but there was nobody there with her. So, you know, hearing her say something when she's sick, she might not remember it all. So, you know, I would say some kind of written documentation."

Theme 3: increasing patient and family involvement

In several of these accounts, patient and family involvement was demonstrated through informal caregivers' extensive knowledge about the patient's health condition(s) and, in some instances, through family members acting as advocates for the patient.
Participants also commented on negative aspects of their own and/or family member's care transition, and the need to increase patient and family involvement.
Family as an advocate for the patient

One family member described that his wife was experiencing abnormal symptoms, and questioned a diagnosis based on his baseline knowledge of the patient's condition. This example not only demonstrated his expertise on the patient's baseline condition(s) but is also an example of a family member using their leadership skills and acting as an advocate for the patient. Caregiver 6: ". . .but I did question it and I actually wrote down her symptoms on paper for the doctors, and I did point out that until sort of late August she was as sharp as a razor mentally and it seemed to me from what I knew about dementia that it seemed to be a very rapid onset because normally, although I do understand now because I've researched this since she died, that sometimes Alzheimer's can come on very quickly, but it's normally a very slow process." Caregiver 3 also commented that: "you have to get to know how the system works and you have to get to know what is available."
Family presence
Patient 8 described the importance of family presence and involvement in the discussions: "Like myself, either my husband or a family member is present for any important teaching or important information." Caregiver 2 said: "The second set of ears is a really big key to safe transitions."
Lack of patient and family involvement
Participants also commented on the lack of patient and family involvement. Caregiver 1 described: ". . . she [patient] has problems and they discharged her on her own without asking for a family member to be part of the discharge process." "I think there needs to be more emphasis, for lack of a better word, on their family [patient family] being involved with them [patient] so as to ensure that what they are told is followed, for lack of a better word." Another participant, Caregiver 4 said: "The family weren't even allowed to, you know, assist in the decision. . ." and described that ". . .as those patients get older, you know, the family. . .the involvement of the family should be greater not less. So the communication to the family GP that serves those care facilities, you know, needs to be bumped up a little bit." "And I, you know, where patients want the family to be involved and want the communication back to the family, there's got to be a way to do that. You know, I don't know whether it's email or if the physicians are really, you know, can't make those follow-up phone calls but just to be available to informal caregivers so that everyone's on the same page. My husband always has to chase the physician down and that's not always easy."
Theme 4: suggestions on how to best monitor care transitions
Participants made suggestions on how to monitor care transitions whether they are successful or not. This included identifying a central person to facilitate the transitions, and the development of a detailed discharge protocol with a follow-up survey.
Identifying a central person to facilitate the transitions
Participants highlighted the need for one central person to be their designated contact, to ensure that they have all the required discharge information. Caregiver 6 explained: "there is no gatekeeper to ensure that when somebody is discharged from hospital all the steps that should be done to ensure a smooth transition are made. So several people, doctors, nurses, [home care staff], who can discharge somebody and they do discharge people, without knowing whether all these things have been done properly. And that horrifies me. And I'm sure that's what happened to the lady I mentioned, that, you know, somebody discharged her without informing all these other bodies that were supposed to coordinate her home care. But, you know, I mean that should. . .obviously should never happen. There has to be one person who says either everything's been done to ensure a smooth transition, or I'm not discharging this patient until it is done." Caregiver 4 proposed: ". . . a care coordinator, you know, could be the hub of, you know, when. . .what the experience is going to be like, a central point of, you know, questions, concerns for patient and family while they're there. As well as someone in that follow-up, you know, package that she should have gone home with would have been the contact information for that person once you're back in your own [home]. . ." Another participant, Caregiver 6, suggested monitoring the success of the transition by having a provider follow up with each patient: "the only way you find out if there's something wrong with the transition is when the patient comes back, you know, either to the general practitioner or specialists, you know, as I did, as we did, with [wife] on a number of occasions." or "the only way you could do that systematically I suppose is have some sort of social worker to just check up and see how you're getting on." Caregiver 6 also suggested the need for a central person to meet the patients' specific needs: ". . .different patients have different needs. Some patients need. . .their appointments with whoever they need in follow-up after they've been discharged have been made for them and you know, for transportation to be arranged and, you know, a whole bunch of stuff but in our case, of course, we didn't need that because I could just put her [patient] in the car and take her to whoever and I knew, you know, who her doctors were and so, you know, it varies from patient to patient but for sure I think there needs to be some sort of gatekeeper to make sure patients aren't discharged before they've got adequate arrangements, whatever those arrangements are for them individually."

A detailed discharge protocol with a follow-up survey

Another participant suggested a checklist with a follow-up survey. For example, Patient 7 proposed that a provider: "Phone them and ask them. Or give them an evaluation to fill out" or ". . .with a follow-up. I mean lots of companies give you a follow-up to see if their service is adequate. Or again, they send them a survey to fill out, a brief survey. They could have a checklist for when patients go home that you can mark off all these things that they're done." Patient 8 recommended that patients should have a follow-up from a discharge team: "the nurse comes in the first 24 hours and I think if there was something wrong, we could. . .we could talk to that nurse and then they can relate back to our physician, relate it back to the discharging unit." or ". . .the health care professionals need to have at least maybe a phone call follow-up or a number that the patient can call if they have any questions or concerns." "It's hard if they're not. . . like if they're not in a system, like a home care system. If they're just a patient in, that's been discharged. It's kind of hard. But maybe if there was a way that you might ensure that you have the patient information, like contact numbers before you left the hospital and maybe if there was a follow-up team or a discharge team that made that quick phone call the next day to see. That could be a possibility in a perfect world."
Discussion
The overall goal of the study was to obtain input from patients and informal caregivers on how to improve safety and facilitate person- and family-centered care during transitions from hospital to home. Engaging patients and their informal caregivers is an important strategy for examining care transition practices in order to facilitate the development of innovative solutions for safer care transitions between hospital and home. In this study, patients and informal caregivers described what was important to them during the transition between hospital and home. This included providing appropriate communication between providers and patients/informal caregivers, providing discharge teaching and access to adequate resources, and empowering patients and informal caregivers in their care during the transition from hospital to home. Participants also noted the importance of a central person or a strong support system to help during this crucial phase. Person- and family-centered care requires a true partnership between the individual, their family and health care providers. 12 A therapeutic relationship between these individuals helps achieve continuity of care (i.e., partnerships with the same health care providers) and shared decision-making. 12 In addition, fostering effective communication, collaboration and respectful care that reflects the individual's unique values, beliefs, culture, circumstances and changing health states is important. 12 Furthermore, appropriate resources and support for caregivers 22,23 as well as meaningful engagement of patients and informal caregivers during the transition 24-26 are needed to improve safety and person- and family-centered care during the transition between hospital and home. In this study, patients and informal caregivers expressed that many of these aspects are needed in order for them to feel better prepared for discharge, and better equipped to manage their health condition(s) at home.
Findings are consistent with other research studies 27-32 that have explored patient and family perspectives on care transitions and have suggested several approaches or strategies to improve care transitions. An initial strategy would be to focus on effective communication between providers and patients/informal caregivers. A second approach would be the provision of appropriate teaching and community resources. There is also a need to develop better methods for involving patients and informal caregivers and to create a strong support system that allows patients to safely move from hospital to home. These patients' and informal caregivers' perspectives were incorporated into the first round of a larger Delphi study aimed at identifying the value statements that are perceived by health care decision-makers, patients and informal caregivers to best signify safe person- and family-centered care during transitions from hospital to home. 16
Strengths and limitations
A primary limitation of this study was the small sample size and convenience sampling. We were limited in recruiting patients and informal caregivers from one organization (Patients for Patient Safety Canada 19 ). With this convenience sampling, we may have missed some important perspectives from individuals who are not as actively involved in the healthcare system. However, key themes emerged from the interviews that are worth investigating further. Another limitation is that participants were self-selected; therefore, it is possible that some important factors were not identified in this study. That said, the results are broadly aligned with the findings of other researchers in this area. 25,[28][29][30][31][32]

Conclusion

This study provided insight into patients' and informal caregivers' perceptions of how to improve and monitor care during transitions from hospital to home. Strategies to improve care transitions should focus on more effective communication with patients and informal caregivers regarding important information, providing them with appropriate resources, and increasing their involvement while they transition from hospital to home. Future research is needed to incorporate the input from patients and informal caregivers into the design and implementation of care transition interventions.
Ethics approval
Ethical approval was obtained from the University of Ottawa Research Ethics Board.
Provenance and peer review
This paper was not commissioned and was externally peer reviewed.
Data sharing statement
No additional data are available. | 2019-05-12T13:27:51.442Z | 2019-04-26T00:00:00.000 | {
"year": 2019,
"sha1": "3bcdef2b7d672c8a151eebf04117e451aa3cd1a1",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=49360",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bcdef2b7d672c8a151eebf04117e451aa3cd1a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269963943 | pes2o/s2orc | v3-fos-license | Education Status of the Santal Community in Northern Bangladesh: A Case Study
Bangladesh is a land of cultural diversity, with several small ethnic groups living across the country, each with its own tradition and identity. The purpose of this study is to explore the educational status of the Santal community, one of the largest ethnic groups residing mainly in the northern part of Bangladesh. This study adopted a qualitative approach, using a case study as methodology. Data were collected through interviews, group discussions, and document analysis. The findings reveal that the educational status of the Santal community is characterized by poor enrollment, continuity and literacy rate. Despite receiving primary education, their attainment rate declines in the secondary, higher secondary, and university levels. In this regard, various socioeconomic factors contribute to the low educational attainment rate. The Santal children are unable to continue their studies mainly due to poverty. Furthermore, language barriers, future employment uncertainty, child marriage, gender discrimination, industrialization and cultural impediments are primarily responsible for low educational attainment. They are already fighting to make ends meet, so adding the expense of school to their list of concerns is excessive. Despite this, the majority of the young people, especially those who are currently pursuing education, acknowledge the value of education and aspire to improve their situation by reaching a high social position through educational attainment. It illustrates that the current generation is aware of the importance of education and is eager to continue their studies. As a case study, the results of this research demonstrate how ethnic groups struggle to have access to education in Bangladesh and what steps should be taken to enhance education status, so contributing to the existing literature on education and policy. The findings may also be useful for policymakers concerned in improving the educational status of ethnic groups, notably the Santal community.
Introduction
Bangladesh is a land of cultural diversity with several ethnic groups spread across the land. According to the Ministry of Cultural Affairs, the country has 50 ethnic minority communities (IPDS, 2022). The Santal community is one of the largest and historic ethnic groups in Bangladesh. They mostly reside in Rajshahi, Dinajpur, Bogra and Rangpur districts in northwestern Bangladesh (Pinku, 2020). The most recent census data indicates that there are 129,049 Santal people living in the country (BBS, 2022). Some comparative studies suggest that the majority of the Santal are historically poor, landless and uneducated (Hossain & Uddin, 2021).
The socioeconomic and cultural traditions of the Santals are ancient; they are not like the mainstream Bengali people of the country (Haider, 2022). They usually coexist with nature, specifically forests, jungles and fauna, with which they have symbiotic relationships. Traditionally, the Santals are hunters and gatherers who rely on a communal pool of resources and an agriculture-based economy to make their living (Sarker, Khan & Musarrat, 2017). Historically, they have been mostly engaged in farming activities, with over 95% of the Santals engaged in agriculture (Haider, 2022). In terms of religion, the Santals' traditional belief system is polytheistic, based on bonga (meaning spirits) worship and festivals. While some have converted to Christianity, others have continued to practice ancient faiths with Hindu aspects incorporated (Brandt, 2011).
In terms of educational qualification, the Santals in Bangladesh have limited participation and low literacy rates (Samad, 2006). Pinku (2020) reveals that educational status among the Santals is not improving well, with 10% of the population illiterate and 54% unable to reach the HSC level. Specifically, Santal students are not keeping up with mainstream students academically (Hoque, 2023). According to Cavallaro and Rahman (2009), limited resources have resulted in significant disadvantages among the Santals in terms of education, employment and land ownership compared to the mainstream population and some other ethnic minority groups. Moreover, they feel ignored because Bangla is the primary language of instruction in schools. While there are several programs and the national education policy that assure the use of ethnic community language at the elementary level of schooling, they have not yet been put into practice (Shamsuddoha & Jahan, 2016).
From a socioeconomic perspective, due to a lack of cognizable access to education and suitable skills, there is a lack of income-generating activities among the Santals (Samad, 2006). They look for work in town, but their lack of education and suitable skills has made it more challenging (Shamsuddoha & Jahan, 2016). Despite their capability, they are often ignored due to their lack of education, hence requiring greater cooperation and special attention to ensure their inclusion in the mainstream society (Hoque, 2023). Because of their low educational attainment and career progress, they cannot participate much in policy making and politics (Shamsuddoha & Jahan, 2016).
The Santal generations are currently enthusiastic about education and improving socioeconomic conditions; however, they have been facing challenges such as language barriers, lack of native teachers, lack of guidance and family earning engagement (Sharif, 2014). Although they used to believe in different superstitions, those superstitious beliefs have been reduced by the impact of modernization and education (Shamsuddoha & Jahan, 2016). In addition, the Santal society is changing rapidly as a result of income opportunities, education, market expansion, and social and economic development (Shamsuddoha & Jahan, 2016). However, equity and quality are the key concerns in the realm of education for them.
There is a significant debate regarding equity in education, despite the fact that human rights legislation prioritizes ensuring access to and the quality of education regardless of ethnicity. The objective of equity in education is to ensure equality in learning outcomes, access and retention. In other words, all children must get the opportunity to develop basic cognitive skills in an appropriate learning environment. UNESCO also emphasizes that quality and equity are inextricably linked. Different factors, such as poverty, remote location and gender inequality, remain strongly and inversely related to school attendance and learning performance (UNESCO, 2004).
The importance of quality education is also emphasized by the SDGs and Bangladesh's education policy. The core, game-changing promise of the 2030 Agenda for Sustainable Development and its Sustainable Development Goals (SDGs) is 'quality education and leaving no one behind'. The SDGs emphasize universal access to quality and inclusive education, upholding the idea that quality education is one of the most effective and reliable engines of sustainable development (UNDP, 2023). UNESCO focused on two essential components of a quality education: (a) cognitive development and (b) the importance of education in fostering the values and attitudes of a conscientious society, as well as nurturing artistic and emotional advancement. The Convention on the Rights of the Child delineated the objectives for education in Section 29(1): the child's personality, talents and cognitive and physical capacities should develop to their maximum potential; there should be an increase in the promotion of human rights and basic freedoms, as well as adherence to the ideals outlined in the UN Charter; and the child's parents, cultural identity, language, values and the national values of the country where the child resides should all be respected and nurtured.
In addition to Article 26 of the Universal Declaration of Human Rights, the constitutional provisions pertaining to education in Bangladesh are set out in Article 17. Education is a fundamental right under the Universal Declaration of Human Rights and the Constitution of Bangladesh, both of which firmly establish it as a cornerstone of state policy (Rahman, 2017). It is a fundamental obligation to refrain from all forms of discrimination, such as those based on religion, caste, ethnicity, and birth, as explicitly stated in Article 28 of the Constitution of Bangladesh. Moreover, in regard to children belonging to ethnic minority groups, Articles 15 and 16 unequivocally decree the value of equal educational opportunities for every individual.
Furthermore, the recognition of inclusive primary education as a fundamental right for every child, regardless of ethnic, cultural, or religious background, is affirmed by the 'Education for All (EFA)' initiative. Against this backdrop, this study explored the following research questions: (a) What is the current educational situation of the Santal community? (b) How do they view the importance of education, taking the socioeconomic conditions of the community into consideration? (c) What factors are causing the Santal students' dropouts from educational institutions?
Literature Review
The educational marginalization of ethnic minority people is influenced by factors such as poverty, hunger, insecurity, identity crisis, language endangerment, land encroachment and threats to their livelihood. As a result, there is a significant increase in dropout rates and a decline in the participation of indigenous peoples in secondary and higher education. This situation is also observed within the Santal community of Bangladesh (Soren, 2022). In this regard, because of poverty, economic crisis, policy ignorance, a lack of social consciousness, and discrimination, Santal children are frequently excluded from elementary education, limiting their potential for skilled work (Mujeri, 2010).
There have been some academic studies on the educational and skill development of Santal children and youth (Sharif, 2014), the socio-historical contexts and education system (Shamsuddoha & Jahan, 2016), the impact of low income on literacy rates (Cavallaro & Rahman, 2009), the influence of education on Santal culture (Pinku, 2020), the adaptation of the Bangla language for education (Eftakhar, 2019), the educational lag experienced by Santal children (Hoque, 2023), and the adaptive strategies for receiving education (Soren, 2022). There are different factors affecting Santal children's educational and skill development. More specifically, socio-historical context, education system and economic status influence their educational access and achievements (Eftakhar, 2019; Sharif, 2014). Sharif (2014) focused on the education and skill development of Santal children and youth in Bangladesh, stressing the high number of dropouts from primary and junior secondary school. The community is mainly based on agriculture and experiences difficulties in finding suitable vocational options and income-generating alternatives. Besides, social isolation exacerbates dropouts and a lack of skill development opportunities. The study proposes bridging child education from mother tongue to bilingual or multilingual education in elementary schools, as well as developing TVET programs for Santal adolescents. Besides, the government and non-government organizations should take the required actions and provide financial support for inclusive education. This may help to overcome social exclusion and ensure that Santal children and youth receive an equitable education and skill development.
Language is one of the key elements of Santal children and youth's educational success. Shamsuddoha and Jahan (2016) explored the socio-historical background of the Santal community in Bangladesh, highlighting indigenous people's educational systems and institutions. They advocate for mother-tongue-based bilingual education to promote educational access and learning outcomes. Pinku (2020) argues that the Santal culture is changing as a result of various factors, including education. Over the years, the community has adapted to traditional practices by using three languages and is now moving towards higher education focusing on modern technology. This shift in language and culture has allowed the community to move forward in their education (Eftakhar, 2019).
Conventionally, socioeconomic status influences educational access and achievement. Cavallaro and Rahman (2009) explored that indigenous communities have low economic standing and educational attainment due to the lack of their recognition in the country's Constitution and the state's reluctance to acknowledge the needs of ethnic minorities. The Santals are deprived of land rights, employment opportunities and education as a result of state negligence. Although there is an increasing need for modern education, the education policy of the country prioritizes the significance of the mainstream language in elementary school, but the inclusion of minority languages in the education sector has not yet been implemented (Cavallaro & Rahman, 2009). Hoque (2023) mentioned that the cognitive achievement level of the Santal students has not reached the desirable level. Specifically, the academic achievement of Santal children lags behind that of mainstream students. Thus, it is important to identify and reduce the major determinants influencing the academic progress of Santal children in primary school in order to ensure inclusivity and the development of an equity-based society and future generations.
In addition, due to poverty, ignorance and lack of awareness, ethnic communities including the Santal community risk exclusion from primary education. This exclusion may result in lower levels of graduates and human capital, literacy rates and career prospects. Santal families influenced by this socio-cultural condition are practicing new adaptive strategies like teaching the Bangla language, concealing their cultural identity and emphasizing their religious identity. Although they are taking advantage of educational opportunities provided by Christian churches, non-governmental organizations and government institutions, this strategy risks reducing their cultural distinctiveness in the future (Soren, 2022). Debnath (2010) argued that the state, churches and NGOs use colonial models of education and development to fragment the indigenous community. These models disregard the spiritual dimensions, disrupt local economies and inflict damage upon family, society and environment. These actions are carried out under the guise of normalizing and controlling the indigenous population. The interconnection between the three ideologies also serves to reinforce one another in the creation of a discriminatory education policy. This policy includes the endorsement by the state, which undermines the cultural values and beliefs of indigenous people. Additionally, the education policy sponsored by religious institutions, particularly the church, promotes a civilizing mission. Lastly, the influence of Western education contributes to the degradation of local knowledge and culture (Berry, 1990, as cited in Debnath, 2010).
Although the above-mentioned literature focuses on different aspects of education, it lacks comprehensive narratives from Santal individuals on their educational status and on the socio-cultural, economic, political and religious factors that result in dropouts from educational institutions. Moreover, the existing literature mainly pays attention to the reasons Santal students lag behind in education instead of focusing on factors that drive students to complete their education. Against this backdrop, the present study investigates the current status of education in the Santal community, the importance of education and its impact on the society, and the factors resulting in dropouts from educational institutions.
Conceptual Framework
Considerable debate surrounds the definition and conceptualization of quality education. This study employs the Framework of the EFA Global Monitoring Report (UNESCO, 2004) to comprehend the status and significance of education among the Santal community in northern Bangladesh. The Report established a comprehensive framework for comprehending, monitoring, and enhancing the quality of education. In this regard, there are two methods: the "school effectiveness" method and the "learner-centred view of education". The school effectiveness method focuses on analyzing education systems at the school level, specifically examining how schools are structured and operated. On the other hand, the learner-centered approach is emphasized by the framework's incorporation of the category of "learners' contributions to the school environment." The framework focuses on four key elements pertaining to quality education: learner characteristics, enabling inputs, outcome and context.
Characteristics of Learners
The process of learning and the rate at which individuals acquire knowledge are significantly shaped by their cognitive abilities and prior experiences. In this regard, crucial deciding factors may include socio-economic status, health condition, residential location, cultural and religious traditions, as well as the extent and kind of preceding educational experiences (UNESCO, 2004).
Context
The relationship between education and society is robust, with both exerting significant effect on one another. Education often mirrors society to a significant extent, since it is shaped by the prevailing values and attitudes of the broader societal context. Equally significant is the consideration of whether education occurs within the framework of a prosperous society or one characterized by pervasive poverty. In a more explicit manner, national education policy also serves as a significant contextual factor. The enabling circumstances for educational practices are established by several factors such as purposes and standards, curriculum and teacher policies (UNESCO, 2004).
Enabling Inputs
The effectiveness of teaching and learning is expected to be significantly impacted by providing adequate resources to assist the educational process. It is evident that educational institutions without qualified instructors, adequate textbooks and appropriate learning resources would have challenges in delivering effective instruction. The pedagogical process is intricately intertwined with the support system of inputs and different environmental elements (UNESCO, 2004).
Outcomes
The assessment of educational outcomes should be conducted within the framework of its established goals. Academic achievement is the main way to measure these results, but additional approaches should be considered to assess creative and emotional growth, as well as changes in values, attitudes and behaviour (UNESCO, 2004).
Methodology
This study adopted a qualitative approach, specifically using a case study as methodology, since it provides valuable insights and enhances readers' understanding by shedding light on different aspects of a small group. Besides, qualitative case studies enable researchers to focus on gaining insight, making discoveries, and interpreting data rather than only testing hypotheses (Merriam, 1988). Furthermore, such studies focus on how certain groups address particular issues, adopting a comprehensive perspective on the situation. Yin (2018) argued that case study research entails the examination of a specific instance within a current and authentic context or environment. Stake (1995) contends that a case study is a decision about what to investigate, specifically a particular case within a defined system, limited by both time and location. This involves collecting detailed and comprehensive data from various sources, such as observations, interviews, audiovisual material, documents and reports (Yin, 2018). The case study may take several forms, such as being single or collective, taking place in multiple locations or within a single site, and focusing on a specific instance or a problem (either intrinsic or instrumental) (Stake, 1995; Yin, 2018). Consequently, this procedure develops hypothetical suppositions and gathers evidence from many sources, guaranteeing that the data converges in a triangulating pattern (Yin, 2018). Considering the position of the Santal community as an ethnic minority group and the research trend in the field of education, the case study seems to be suitable to understand the status of education in the Santal community.
Study Village
In order to collect primary data, fieldwork was conducted in a Santal village located in Dinajpur district. The village was chosen because it is one of the largest Santal villages in Bangladesh. The village community consists of a total of 35 households. Ten of these households have converted to Christianity, while others believe in various supernatural forces. The village's total population is 209, which includes both adults and children. Out of 209, 107 individuals are male and 102 individuals are female. The majority of the locals engage in agricultural activities. Besides, some individuals earn their livelihoods via other occupations such as day labour, sharecropping, van driving, and so on. Most of the households own a small amount of land for cultivation. It is also found that about 70 percent of the Santals are engaged in some kind of sharecropping. There is a primary school in the village, and it is currently run by the village community.
Data Collection Methods and Tools
Case study research often involves comprehensive data collection, which encompasses several sources of information including "observations, interviews, documents and audiovisual materials" (Creswell, 2013, p. 94). Interviews and document review were the predominant methods of data collection for this study. The data collection methods and tools that have been used in order to explore the educational status of the Santal community are described below.
Case Study Interviews
Case study interviews were undertaken with a total of 20 members of the Santal community. Yin (2018) suggests that case study interviews may be characterized as guided conversations rather than structured queries. During a case study interview, the series of questions is expected to be flexible rather than strict, while nevertheless following a continuous path of enquiry (Rubin & Rubin, 2011). Specifically, such interviews entail a thorough investigation of a current phenomenon, often known as a "case", within its original context. To expedite the interview process and capture the essential insights, explanations, and interpretations related to the research topic, an interview guideline was used that covered issues such as the attitude of the Santals towards education, experience with the education system, and causes of dropout from formal educational institutions. During the interviews, two specific criteria were ensured, as proposed by Yin (2018): to adhere to the research questions and to accurately document the conversations without any prejudice. The interviewer established a rapport with the respondents during the interviews. Ten of the 20 participants were current students, while the other 10 had recently dropped out of their studies; all participants were purposefully selected.
Key Informant Interviews (KII)
KIIs were also carried out, whereby individuals with extensive knowledge and understanding of the educational situation of the Santal community were interviewed. Yin (2018) asserts that key informants play a vital role in assessing the success of a case study. These people may provide useful insights on a topic and can facilitate the researcher's connection with additional interviewees who may possess corroborating or contradictory information (Yin, 2018). In this study, interviews were conducted with five key informants who were senior and experienced members of the community. They were chosen purposively because of their distinctive position in the community and their knowledge and communication skills in relation to the education situation of the Santal community. These interviews revealed the historical and contemporary social and political factors that affected the educational status of the community.
Focus Group Discussion (FGD)
As part of this study, two FGDs were conducted with children and youth ranging in age from 10 to 25 years in order to collect the participants' reflective appraisals of the situation, obstacles and socio-cultural dynamics in relation to education. The participants were chosen through purposive sampling and engaged in a collaborative exchange of ideas to arrive at a conclusion about a specific topic aligned with the study purpose. As stated by van Eeuwijk and Angehrn (2017), an FGD allows for the involvement of a varied group of individuals who engage in interactive sessions to share information on their attitudes, perspectives, knowledge, experiences and behaviour. Berg (1989) defines the focus group as an interviewing technique particularly suitable for small groups. In this study, each FGD lasted for one hour. The FGDs were conducted using Lederman's (1990) approach, which included several steps: an introduction outlining the goals of the study, guidelines, and boundaries; a set of introductory questions to encourage participation; a series of focused questions designed to elicit all pertinent information regarding the subjects under discussion; and a concluding part for summary and discussion closure. A pre-developed guideline was used in the discussions that covered topics including, but not limited to, geographic location, cultural traits, education status, importance of education, impact of education, and socioeconomic and political factors for dropout in the Santal community.
Documents Review
Books, articles, research reports, census reports and policy papers regarding the Santal community were reviewed and used to supplement the primary data.
Sampling Procedure
Sampling is a critical aspect of qualitative research, since sampling decisions may undermine or invalidate the ultimate findings (Berg, 1989). According to Patton (2002), purposive sampling is considered the most appropriate sampling method for case study research, as it involves selecting cases that provide enough information for the in-depth analysis that is fundamental to the study objectives. This study opted for cases that exhibit diverse viewpoints in relation to the educational status of the Santal community. The study village was selected through convenience sampling in the first step of the sampling procedure. Later, the research participants for interviews and discussions were selected using purposive sampling. The total number of participants was 47.
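To make the sample composition explicit, note that the 20 case study interviewees and 5 key informants described above leave 22 participants for the two FGDs; this last figure is our inference by subtraction, not a number stated in the text. A minimal Python tally, offered purely as a worked check:

# Sample composition as reported above; the FGD total is inferred
# by subtraction (47 - 20 - 5 = 22), not stated directly in the paper.
case_study_interviews = 20   # 10 current students + 10 recent dropouts
key_informants = 5           # senior, experienced community members
total_sample = 47            # reported total number of participants

fgd_participants = total_sample - case_study_interviews - key_informants
print(fgd_participants)       # 22 across the two FGDs
print(fgd_participants / 2)   # 11.0 participants per FGD, on average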
Data Analysis and Presentation
After completing the fieldwork, the data were transcribed and then coded. A mix of alphanumeric characters was used to encode the data. For instance, KII-3 denotes a key informant interview carried out with participant number 3. Table 1 shows the same coding pattern applied to the other participants, with CSI for Case Study Interview and FGD for Focus Group Discussion. Afterwards, the perspectives and knowledge shared by the participants were analyzed using a thematic approach. This approach is often employed in case study research, as documented in the literature (Yin, 2024). Specifically, this research used a thematic analysis approach encompassing six stages: familiarizing oneself with the data, creating initial codes, finding overarching themes, scrutinizing and refining the themes, labelling the themes, and compiling the report. Stake (1995) presented four forms of interpreting data in case study research: category aggregation, direct interpretation, pattern identification, and naturalistic generalizations. This study grouped the collected data into ten categories through coding and condensed them into four themes. Then, generalizations regarding the themes were formulated, and a comparison was made with relevant literature on the Santal community's educational situation.
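To make the coding convention concrete, the sketch below is a hypothetical Python illustration, not the authors' actual analysis tooling: it shows how identifiers such as KII-3 or CSI-5 are formed and how coded excerpts can be condensed from categories into themes. The category and theme labels used here are invented placeholders, not the study's actual ten categories.

from collections import defaultdict

# Method prefixes follow the paper's scheme: CSI (Case Study Interview),
# KII (Key Informant Interview), FGD (Focus Group Discussion).
def participant_code(method: str, number: int) -> str:
    assert method in {"CSI", "KII", "FGD"}
    return f"{method}-{number}"   # e.g. "KII-3"

# Categories are condensed into broader themes (the paper reports ten
# categories condensed into four themes); these labels are illustrative.
category_to_theme = {
    "school access": "Access to education",
    "aspirations": "Importance of education",
    "poverty": "Educational challenges",
    "language barrier": "Educational challenges",
}

# Hypothetical coded excerpts: (participant code, category, excerpt text).
excerpts = [
    (participant_code("KII", 3), "poverty", "..."),
    (participant_code("CSI", 5), "aspirations", "..."),
]

themes = defaultdict(list)
for code, category, text in excerpts:
    themes[category_to_theme[category]].append((code, text))

print(dict(themes))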
To ensure the validity of the data, this study performed triangulation. Data obtained from case study interviews and focus group discussions were compared with information gathered from key informant interviews. The data were captured verbatim through audio recording and then transcribed. The data analysis includes exact quotes to improve the accuracy of the data and provide a thorough comprehension of the real situation of the study population. The study complied with ethical standards in various respects, including obtaining informed consent from participants, maintaining confidentiality and privacy, and ensuring voluntary participation.
Results and Findings
Based on the themes that emerged from the group discussions and interviews, the following sections are presented: access to education of the Santal community, views about the importance of education, and the challenges in education. The study utilized the UNESCO education framework to understand the education status and its impact in the Santal community, considering learners' achievement in terms of educational access, learning contexts, available educational inputs, the teaching-learning environment and process and, finally, education output among the Santals.
Access to Education of the Santal Children
All children of primary school age in the study village are enrolled at the nearby primary school. The village has a primary school that is managed by the village community.
The school has an approximate enrollment of 25 pupils. A non-governmental organization (NGO) once operated the school; however, it recently stopped its financial backing. In this situation, the community has taken on the responsibility of sustaining the school's operations through self-financing, thereby ensuring that all children of primary school age have access to school education. According to KII-1, The school was first built and operated by an NGO with financial support. However, the provision of assistance has stopped in recent times. The dearth of educational opportunities for the students was mostly attributed to the distant location of the government primary school and the children's apparent reluctance to attend government school. As a result, the school was reopened with financial assistance from the community.
The community currently has a total of 15 children of secondary school age. However, only nine of them attend school. The remaining children have completed their elementary schooling but have discontinued their enrollment in the educational institution. KII-2 said, There is a subset of youngsters in the community who do not attend high school due to financial constraints. Besides, there seems to be a lack of desire among them to engage in academic pursuits. As a result, they prefer to take manual jobs and travel here and there rather than attending school. Even if parents persist in motivating them, they are not willing to attend school.
The number of students who meet the criteria for enrollment in higher secondary education after successfully completing their secondary education is 12. However, just two have pursued further education, while the majority have discontinued their studies. It was found that the students who discontinued had taken up day labour, engaging in employment to generate income and provide financial support to their respective families. Similarly, while the expected number of individuals pursuing higher education is five, only three students are currently engaged in higher education. Two of them are female students studying at honours level, while the other is a male student working towards the completion of a degree program.
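Taken together, the figures reported in this subsection imply sharply varying continuation rates across levels. The short calculation below simply restates the numbers given above; the percentages are our arithmetic and are not reported by the authors.

# Continuation implied by the reported figures: (eligible/expected, attending).
levels = {
    "secondary": (15, 9),
    "higher secondary": (12, 2),
    "tertiary": (5, 3),
}
for level, (eligible, attending) in levels.items():
    print(f"{level}: {attending}/{eligible} = {100 * attending / eligible:.0f}%")
# secondary: 9/15 = 60%
# higher secondary: 2/12 = 17%
# tertiary: 3/5 = 60%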
Importance of Education in the Community
The Santal community has historically endured neglect and discrimination, resulting in its marginalization in mainstream society. Moreover, the lack of integration with the wider population has caused a significant disparity in its progress across several domains, notably in the realm of education. However, there have been significant advancements in education in recent years, attributable to both governmental and non-governmental endeavours as well as heightened awareness. These endeavours are a collective undertaking including children, parents, teachers and other members of the community.
Students who are granted access to education see it not only as a fundamental entitlement but also as a significant determinant of their future prospects. According to CSI-5, We identify ourselves as members of the adivashi community. We have experienced a lack of integration with the mainstream community through a long period of neglect. Education is the one means by which our condition may be altered. I am now pursuing a degree in the field of engineering at a private university. Upon completion of my academic studies, I want to establish myself and change the socioeconomic condition of my family.
Similarly, CSI-1, a secondary school girl, expressed her commitment to attending school consistently while facing familial challenges.
I go to school on a regular basis, despite my family problems. My ambition is to become a doctor one day. I will work for the people. I aspire to be a doctor. Doctors are held in great regard. I'm now studying science. In the future, I hope to study medical science.
In the contemporary period, characterized by advancements in technology and economic development, parents in the Santal community feel a heightened recognition of the urgency of education. The community holds the belief that their current socioeconomic conditions may be transformed through education. According to KII-4, My father had no literacy skills. At this juncture, we can read somehow. However, the world has gone through a significant change. Education is the key element in society. It has a pivotal role in transforming one's social standing and economic circumstances, so this motivates us to prioritize the education of our children in order to secure a more promising future.
Senior members also hold the belief that there has been a significant shift in the educational attainment of the community. In the past, there was a limited level of interest in education; now, however, there is a substantial degree of interest in receiving education.
In this regard, KII-5 said, The community has shown a growing interest in pursuing formal education. However, it is essential that this number continues to increase.
Efforts are being made to enhance the enrollment rate among those seeking education within the community. Consequently, despite the initially small number of Santal students a decade ago, there has been a substantial increase in their numbers now.
FGD-1 depicted the historical causes of the lack of education among the Santal community: Historical impediments to education within the Santal community mostly included poverty and limited understanding. The current scenario has undergone a transformation. Every student is provided with a stipend, fostering curiosity and facilitating educational prospects. However, it is important to ascertain the underlying causes of dropout and thereafter implement suitable measures to address this issue.
The discernible influence of education on the Santal community has become more evident in recent times. The rate of literacy is progressively rising as a result of the efforts made by both governmental and non-governmental institutions, as well as the initiatives undertaken by Christian missionaries. The introduction of formal education has resulted in notable transformations within the Santal traditional social structure. Haviland (1990) posits that several elements potentially contribute to the cultural transformation experienced by a community, including innovation, diffusion, migration, modernization and globalization. The Santal community holds the belief that education plays a crucial role in fostering competence in the age of globalization.
An individual can alter his or her economic and social circumstances by means of education. The use of technology in the field of education has seen a notable surge in recent years, thereby fostering innovation. The Santal community has the means to engage with media and mobile technologies.
Individuals are developing an awareness of education, which is facilitating their departure from traditional perceptions. According to a participant of FGD-1, Our culture is plagued by several issues, one of which is the prevalence of superstitious behaviours. As a result, we are largely disregarded in mainstream society. However, the current circumstances have undergone a transformation. We may engage in recreational activities, socialize and take pleasure in the company of our fellow individuals from the Bengali community. These achievements have been made possible as a result of our educational qualifications. Education has the potential to address the constraints and obstacles hindering the progress of our society. Currently, we are making significant progress in the domains of healthcare and education. I anticipate a positive change in our culture in the future due to educational attainment.
According to Haviland (1990), cultural change may give rise to distinct outcomes, namely assimilation, adaptation, and extinction. The increasing inclination of the Santals to pursue education mostly stems from their desire to progress by integrating into the wider community while preserving their cultural identity. In this regard, the findings of the FGDs revealed that education has had a positive impact on Santal culture. First and foremost, the Santals possess a distinct cultural identity that is susceptible to the influences of mainstream culture, globalization and urbanization. However, schooling emphasizes the need to preserve and foster one's own cultural heritage. This also encourages the Santal community to uphold their culture, identity and traditions through formal and informal practices in the community.
Secondly, for a long time, the Santal community has been subjected to neglect, injustice, and suffering. Besides, people belonging to this community have encountered various forms of discrimination and deprivation across several domains, including the social, economic, and political realms (Ali, 1998). In this regard, education has been playing a crucial role in fostering awareness and understanding among community members about the establishment and assertion of their rights.
Thirdly, education has a significant role in mitigating the biases ingrained within traditional beliefs. For instance, although the drinking of haria (local homemade wine) is ingrained in cultural practices, excessive intake of haria may have detrimental effects on physical health and wellbeing. Both FGD groups strongly believe this and accordingly try to avoid it in their everyday lives.
Fourthly, an additional aspect to consider is economic development. Through receiving education, individuals are given the chance to engage in formal institutions to a certain degree, thus augmenting their social and economic standing.
Fifth, it is important to consider the concept of human and constitutional rights. Santals who have received formal education have knowledge of their rights as citizens of the country. The educated members are now engaged in a proactive endeavour to advocate for the establishment of their basic rights, as shown by their vociferous and assertive movement.
Last but not least, a notable change has occurred in family and marital practices. While child marriage was a common practice in the past, its incidence has reduced as a result of increased access to education. As per the statement made by KII-2, alterations in culture possess the potential to produce both positive and negative outcomes. In this perspective, the change in Santal society as a result of education is seen favourably.
Educational Challenges
In the Santal community, there exists a favourable educational attainment rate at the primary and secondary levels. However, at the higher secondary and tertiary levels, the attainment rate is much lower and raises concerns when compared to the mainstream population. Several factors contribute to this dropout, including poverty, language barriers, future employment uncertainty, corruption, industrialization and cultural impediments, which are explained below.
Poverty
Poverty is the primary impediment to the educational attainment of Santal children. The majority of individuals in this community primarily engage in manual employment. The sustenance of their livelihoods is contingent upon their daily earnings. A significant number of households exhibit a preference for engaging in daily labour rather than pursuing formal schooling. CSI-11 mentioned that, I had the intention to further my studies but was unable to do so due to my family's financial crisis. My parents wanted me to engage in income-generating activities so that I could contribute to my family. We do not have land or property. As a result, I was compelled to discontinue my study and engaged in manual labour as a means of employment.
Parents sometimes believe that it is not always feasible for them to bear the expenditures associated with schooling. As a consequence, they prioritize other tasks above studying. For example, KII-3 claims, "schooling expenses, especially at high school and college levels, are beyond our capacity. As a result, our children are unable to attend school. The government's support is only partial and hence, we can't afford all the additional expenses."
Future Employment Uncertainty
The Santal community is grappling with uncertainty about securing employment after completing their educational pursuits. As per their assertion, the Santals lack the money required for bribes and the influential connections necessary to get employment. A participant of FGD-2 described that, We wanted to study, but we dropped out. Where may one get job security? Some of us successfully completed SSC but were unable to get employment. A bribe, power or influence is a prerequisite for getting employment today. We don't have so much money to bribe for a job. So, securing a job is an unrealistic ambition and unattainable goal for us.
Corruption
Corruption in the wider society has been said to be a significant impediment to the educational advancement of the Santal community. The prevalence of corruption within the broader societal context is mirrored among the education seekers of the community. In this regard, a participant of FGD-2 said, It is now common to provide bribes in order to get a job. The lack of money to provide bribes may diminish the likelihood of securing a job. How will we manage so much money! We do not have anyone to influence. There exists uncertainty regarding the future after completing formal education. So, it is better to do some alternative jobs to survive from now onwards. It is best to accomplish this immediately.
Gender Discrimination
Gender discrimination often leads to the dropout of students. As an example, CSI-12 asserts that her ability to engage in studying is restricted by her mother.
My mother does not allow me to study. She prefers that I remain at home and work for the family. What is the use of studying for females, she wonders? For my mother, working at home is better than studying. She wants me to marry as soon as possible. I attempted to persuade her at first but failed. So, I dropped out of school and now work for the family.
Cultural Barriers
Cultural barriers are a significant contributing factor to student dropout. Haria paan (drinking local wine) holds significant cultural value among the Santals. However, this carries a negative perception among the wider community. As an example, CSI-13 asserts: I used to attend school; however, I often encountered the appellation "Santal" [slang], 'drink haria', from my fellow classmates. I endured prolonged laughter, which elicited a sense of melancholy inside me. Later, I decided not to continue my education due to the insult. In addition, the Bengali language as a medium of teaching was a problem for us at the beginning.
Industrialization
Industrialization has a significant impact on Santal households. The opportunity of employment in the garment sector has seen a notable upsurge among the community in recent years. Given the prevailing uncertainties surrounding employment prospects and the labour market, a significant number of Santal children prioritize pursuing opportunities in the Readymade Garments Sector.
A female garment worker commented on her occupation within the RMG sector: There exists a degree of financial and future security. Based on my current income, I am able to sustain an average standard of life. I am now working in the garment industry. Despite my lack of formal education, I can manage to sustain myself financially here (CSI-15).
Language Barrier
Language is a crucial component of teaching and learning at every educational level. Santali youngsters acquire and use the Santali language as their primary means of communication from childhood. However, educational institutions use Bengali as the primary language of teaching. Consequently, children face challenges in comprehending the instructions and educational content in the classroom throughout the primary stages of their academic journey.
This not only hinders their academic progress in comparison to their mainstream peers but also contributes to their decision to discontinue studies. As CSI-16 said, We have learned our mother tongue from infancy. All of us are fluent in the Santali language. However, Bengali is the medium of instruction in all educational institutions in the country. Children often struggle to comprehend the instructions given by teachers in the school, particularly during the primary level of education. The teachers do not know our language. Consequently, our ability to communicate effectively and fully enjoy our educational experience is hindered throughout the early stages of schooling. I suggest the implementation of a policy mandating the presence of at least one teacher from our community in every school, ensuring effective communication with Santal youngsters.
Early Marriage
Interviews with key informants indicate that early marriage was prevalent in the past; however, there has been a recent decline in this practice owing to increased awareness within the community. Nevertheless, several families continue to engage in early marriage, thereby exacerbating the issue of school dropout. As stated by CSI-20: My parents married me off between the ages of fourteen and fifteen. Although I was unprepared, I felt compelled to comply. I was in class nine and aspired to further my academic pursuits. However, my parents received a marriage proposal and believed it prudent for me to enter into marriage, given that the prospective bridegroom belonged to a wealthy family. I tried to convince my parents but was unsuccessful. Eventually, I married, but within two years, I had to file for divorce because my husband was addicted to drugs. This marriage has ruined my life and aspirations!
Discussion
Santal youngsters have a poor education status in terms of literacy rate, enrollment and continuity. Many interconnected socio-economic factors lie behind this poor education status. One remarkable reason is that, living in extreme poverty, they cannot afford to attend school without working, making them reluctant to continue their studies. They prefer financial assistance from schools; otherwise, they choose the option of paid labour. Santal culture is more traditional than Bengali culture; therefore, Santal youngsters face social stigma in school, which leads them to remain unenrolled. Moreover, the unavailability of a school near their village also pushes them to remain unenrolled.
Many of the youngsters, specifically those who are continuing their education, realize the significance of education and are willing to change their economic condition by attaining a good position in society. They also realize that becoming engineers or doctors, or having at least a solid educational qualification, can assure them of a promising future. These kinds of self-realization and aspiration are quite similar among Bengali youngsters. This indicates that many of them understand the significance of education and want to continue their studies. Although there are some historical barriers to educating themselves, the understanding of the significance of education is changing positively.
It has been found that their interest in enrolling in educational institutions and continuing to complete a degree is increasing day by day.
Various socio-economic factors contribute to the low attainment rate of education within the Santal community. Poverty, unpredictability regarding future employment, gender discrimination, corruption, language barriers, child marriage, and cultural obstacles are the primary causes of the Santals' unfavourable educational standing. One major reason for poverty is that the Santals prefer daily earnings to treating education as an investment for the future. Additionally, as the cost of living rises, they are unable to afford the additional financial burden of education expenses. Moreover, the rate of unemployment is very high in Bangladesh. Nobody can be sure of getting a job in this overpopulated country, which also makes them reluctant to receive education from formal institutions. In the uncertain and competitive job market, corruption, discrimination and social stigma are also considerable factors in their poor attainment. In addition, the growth of the garment industry has reduced interest in education, because they can earn a minimum wage from the industry.
Education has a positive impact on the Santal community in many ways. Above all, new technologies like mobile phones and various media have made them understand the significance of education. Cultural diffusion has also made them mentally receptive to education, and awareness of education motivates them to move beyond traditional cultural mindsets. Through internal migration, they learn that education has changed the lives of others in society. Moreover, the Santals are aware of the fact that education can make them skilled manpower able to sustain themselves in the era of modernization and globalization. Thus, education plays a pivotal role in their integration into mainstream society. Vickie Roach, an Aboriginal graduate of Deakin University, said that education has consistently played a crucial role in the economic, social, and cultural advancement of Indigenous communities. In addition, a quality education has a crucial role in shaping the health, literacy, career opportunities, social standing and productivity of Indigenous children (as cited in Das, 2011: 43). Furthermore, a Santal teacher comments that they are now aware of the significance of education and make efforts to enroll their children in school (as cited in Das, 2011: 55).
Ensuring quality education within the Santal community is threatened by different barriers, such as poverty, cultural barriers and language difficulty. All these barriers have pushed the Santals to an unequal margin, being socio-economically deprived in society and the state. Moreover, they cannot avail themselves of the opportunity of receiving education in their mother tongue, as the public institutions provide education mainly in Bengali.
Consistent with Hoque's study (2023), it was found that Santali is the main language that Santal children use to talk to each other, and they start learning it from childhood. There is a language hurdle in the classroom because the language used for teaching is not the same as the child's first language. It is likely that most parents do not know how to help their children get ready for school (Hoque, 2023). Furthermore, inadequate nutrition and poor health during early life might have a significant impact on learning and cognitive development in subsequent years. This research posits that the education system, particularly the schooling provided to Santal children, has been affected by the sociocultural settings and economic circumstances prevalent in the community.
The differences among Santal youngsters undeniably constitute a significant factor affecting their academic pursuits.
The government and NGOs in Bangladesh have taken several initiatives and projects to ensure equal access to education for everyone, including the Santal community (Sen, Roy & Lamin, 2007). The aim of these educational efforts is to improve the general literacy rate by offering compulsory primary education to all children, thereby guaranteeing their right to education as citizens. In this case, the Santal people have a positive attitude towards education. They think education is the only option for improving their socio-economic conditions and the community as a whole. Access to education has already had a positive impact on Santal society. In this regard, Santal children have access to and attend primary school; however, the rate of enrollment significantly declines thereafter. The primary factors driving this dropout are deprivation, apprehension about future prospects, and a pervasive lack of information.
Besides, the educational status of Santal children is influenced by several socioeconomic factors, such as employment uncertainty, poverty, and the quality of education. Sen, Roy and Lamin (2007) outlined the factors contributing to dropout rates, including insufficient security measures and unfavourable environmental conditions that negatively impact the welfare of the Santal community. Additionally, challenges arise from long-distance travel, illness, economic crisis, familial issues, early marriage, academic underperformance, lack of interest in studying and linguistic barriers. Sharif (2014) additionally highlighted the persistent obstacles faced by Santal children, including linguistic differences, social isolation, lack of nurturing and motivating home environments, parents' limited understanding and awareness, poverty, demands of family obligations, early marriage, alcohol addiction and limited access to education.
The Santal people face significant challenges arising from economic instability associated with their property rights. Children experience increasing academic disparities, and a significant number of them discontinue their education as a direct consequence of this factor (Das, 2011). Their economic marginalization is an additional factor contributing to their decline. Instead of attending school, children are required to promptly begin searching for work in order to sustain themselves financially. Consequently, students increasingly lag behind in their academic pursuits. Besides, they face discrimination in educational institutions due to the absence of legal entitlement to receive education in their native language. Oftentimes, the linguistic barrier hinders ethnic children from attaining an education that is equivalent to that of Bengali children (Cavallaro & Rahman, 2009).
It is essential to evaluate the degree to which Santal children face challenges in attending school, considering their minority position in the country. According to Sen, Roy and Lamin (2007), a significant number of parents choose to withdraw their children from educational institutions as a result of the ongoing economic crisis. Furthermore, in rural areas, a considerable proportion of young individuals have restricted educational opportunities due to the substantial geographical distance that separates their homes from schools (Cavallaro & Rahman, 2009). As a result, the children's school attendance becomes irregular. Parents with limited academic qualifications sometimes struggle to adequately monitor their children's educational advancement (Hoque, 2023). As a result, young people fail to keep up with their studies and neglect to complete formal education.
Primarily, schooling serves as a reflection and validation of the unique cultural identity and spiritual beliefs held by the Santal people (Debnath, 2010). Das (2011) asserts that schools play a pivotal part in the development of a community. Nevertheless, it is important to recognize that the Santal community now faces various manifestations of covert discrimination inside the educational system. Although a considerable number of Santal parents choose to admit their children to schools, they nonetheless encounter substantial obstacles in acquiring adequate educational opportunities. The significant prevalence of school dropout among youngsters is a cause for serious concern, primarily because of its possible consequences for the ability of future generations to effectively tackle the difficulties resulting from poverty and landlessness and to attain economic empowerment (Das, 2011). Debnath (2010) argued that the educational system, characterized by its colonial origins, has been ineffective in cultivating a sense of self-respect among the Santal community and in safeguarding their property from encroachment. Consequently, they have been subjected to various forms of oppression such as bonded labour and indebtedness.
Santal youngsters feel disconnected and alienated in schools that enforce dominant cultural norms by imposing language dominance. They often struggle to discover any correlation between what they are taught in school and what they learn from their real-life experiences (Debnath, 2010). Due to 'bicultural ambivalence' (Cummins, as cited in Debnath, 2010), Santal youngsters face uncertainty and poor self-esteem both individually and as a group. Consequently, many of them end up leaving school. The education and linguistic policies implemented have resulted in the dispersal and individualization of the Santal community, causing the loss and fragmentation of their ethnic identity in rural areas (Debnath, 2010). It is therefore crucial, according to Cavallaro and Rahman (2009), that members of this marginal community possess literacy skills in Santali, which is also their mother tongue. Additionally, fluency in both Bangla and English is necessary to take advantage of all the benefits that the formal education system has to offer. For the establishment of effective bilingual and multilingual educational programmes for the Santals, collaboration among linguists, educators, and the government agencies responsible for curriculum development, assessment, and evaluation is essential (Cavallaro & Rahman, 2009). It is of the utmost importance to address the distinct educational needs of the Santal community through the establishment of a resilient and enduring bilingual curriculum that emphasizes the Santal language. The strategy ought to prioritize cultural significance and resilience, with the aim of expanding educational access for this community. Such initiatives are essential for enhancing the quality of life of the Santals and guaranteeing the survival and revitalization of their native tongue (Cavallaro & Rahman, 2009). There is no doubt that there has been a change in Santal society due to education. However, in order to improve the education status of the Santal community, it is necessary to enhance their economic prosperity, improve the teaching-learning process and ensure that the results of education fit the needs of the Santal community.
Conclusion
Education is universally acknowledged as an essential entitlement for every person, regardless of their ethnic background, religious beliefs or gender. Small ethnic communities in Bangladesh have experienced significant socioeconomic development since the country gained independence. Since then, the Santal community has experienced substantial progress in terms of gaining access to formal education and cultural advancement. Without a doubt, this has resulted in a beneficial transformation in the community. Nevertheless, the community is deeply concerned about the dropout rate that occurs after primary school, mostly owing to socioeconomic factors. In addition, it was found that a uniform educational approach has been implemented for both small ethnic groups and the mainstream population, leading to difficulties in language acquisition. More precisely, this approach is not suitable for Santal youngsters, since they communicate and acquire knowledge about the world using the Santali language. Not only does this language serve as a means of communication, but it also embodies social attitudes, histories and traditions. Access to primary education for Santal students is acceptable, but there is a notable concern over their achievement at higher levels of education. In this regard, poverty, education policy and cultural barriers are significant obstacles that need an immediate response at the policy level. Simultaneously, it is advisable to use the Santali language as the primary medium of teaching in regions where the Santal people reside, gradually transitioning to the Bangla language as the medium of instruction (Hoque, 2023). | 2024-05-23T15:22:53.370Z | 2024-05-21T00:00:00.000 | {
"year": 2024,
"sha1": "ad6e1c9fef0047d0ab5504ed093f8c8c87519b83",
"oa_license": "CCBYNC",
"oa_url": "https://www.banglajol.info/index.php/TWJER/article/download/71996/48705",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e3c8f6c1ddba1134c0a9e431abcd568b1337f795",
"s2fieldsofstudy": [
"Sociology",
"Education"
],
"extfieldsofstudy": []
} |
92996363 | pes2o/s2orc | v3-fos-license | Heart transplantation in a patient with Myotonic Dystrophy type 1 and end-stage dilated cardiomyopathy: a short term follow-up.
Myotonic dystrophy type 1 (DM1), or Steinert's disease, is the most common muscular dystrophy in adult life, with an estimated prevalence of 1:8000. Cardiac involvement, including arrhythmias and conduction disorders, contributes significantly to the morbidity and mortality of the disease. Mild ventricular dysfunction has also been reported in association with conduction disorders, but severe ventricular systolic dysfunction is not a frequent feature and usually occurs late in the course of the disease. Heart transplantation is currently considered the ultimate gold-standard surgical approach in the treatment of refractory heart failure in the general population. To date, considering the shortage of donors, which limits the achievement of a greater number of heart transplants, and the reluctance of cardiac surgeons to transplant patients with dystrophic cardiomyopathy, little is known about the number of patients with DM1 who have been transplanted and their outcome. We report the case of a 44-year-old patient with Steinert disease who showed early-onset ventricular dysfunction refractory to optimal medical and cardiac resynchronization therapy, and who underwent successful heart transplantation. To our knowledge, this is the second heart transplantation performed in a patient affected by Steinert disease, after the one reported by Conraads et al in 2002.
Introduction
Steinert's disease, or Myotonic Dystrophy type 1 (DM1), is an autosomal dominant multisystemic disorder characterized by myotonia, muscle and facial weakness, cataracts, and cognitive, endocrine and gastrointestinal involvement. Cardiac involvement affects the conduction system in about 80% of cases and usually follows the onset of myopathy (1). One third of patients with DM1 may have sudden cardiac death, likely due to the onset of malignant ventricular arrhythmias, so early identification and treatment of cardiac impairment is the main key to preventing this tragic event. Advanced degrees of conduction abnormalities and arrhythmias are indicated as significant predictors of mortality in patients with DM1 (2,3). Myocardial contractility is less commonly impaired, and heart failure (HF) may occur late in the course of the disease as the final stage of the cardiomyopathy (4,5). Despite cardiac involvement, DM1 patients are usually asymptomatic, probably due to their limited level of activity and consequently reduced cardiac demand (3). Heart transplantation (HT) is currently considered the ultimate gold-standard surgical approach in the treatment of refractory heart failure (RHF), a situation in which patients present with great functional limitation and a high mortality rate (6). Thus, HT should be taken into account for patients in NYHA class III and IV who need recurrent hospitalizations and present with a poor prognosis despite therapeutic optimization.
To date, because of the shortage of donors and the high operative risk related to muscle impairment and respiratory failure in patients with DM1, heart transplantation is not considered an appropriate option in these patients (7).

We report the second case of a successful heart transplantation in a patient with Myotonic Dystrophy type 1 who showed early ventricular dysfunction despite optimal medical and cardiac resynchronization therapy.

Case report

The diagnosis of DM1, based on family history (father and one brother affected) and the presence of typical clinical features (myotonic phenomenon, mild distal skeletal muscle atrophy, cataract, gastrointestinal disturbances, endocrine deficiency), was subsequently confirmed by molecular testing, which showed a pathological expansion (500 CTG triplets). In 2005, a bicameral pacemaker (PM) was implanted because of evidence of first-degree (PR interval ≥ 255 ms) plus second-degree type 2 atrio-ventricular block (8-12), with concomitant episodes of paroxysmal atrial fibrillation (AF), a frequent finding in this population (13-28). The implant was followed by an improvement of symptoms and quality of life. In 2013, the PM was upgraded to an implantable cardioverter defibrillator (ICD) because of the detection of non-sustained ventricular tachycardia (NSVT) in pacemaker-stored electrograms. According to our protocol, the upgrade is usually performed to prevent the risk of sudden cardiac death, frequently observed in these patients as in other muscular dystrophies (29-32).

Three years later, during a routine cardiological check, signs of congestive heart failure (CHF) were detected. Transthoracic echocardiography showed a dilated cardiomyopathy, with a left ventricular end-diastolic diameter (LVEDD) of 7.4 cm and an ejection fraction, calculated by the Simpson and Teichholz methods, of 25%. Pharmacological treatment was changed to achieve symptom remission. Six months later, the patient was hospitalised for a new episode of HF (fatigue, muscle weakness, dyspnea, orthopnea, edema and palpitations; New York Heart Association (NYHA) class III). On examination, blood pressure (BP) was 107/57 mmHg and heart rate (HR) 70 bpm; crackles at the basal lung fields and pretibial edema were detected. Chest X-ray confirmed cardiac dilation and pulmonary congestion. In the following 12 months, despite the optimization of medical therapy, the patient experienced two further episodes of acute heart failure. The therapy was changed again and included more aggressive loop diuretic therapy, beta-blockers, spironolactone and ACE inhibitors (33). As no relief of the heart failure symptoms was obtained, the patient underwent, after giving informed consent, cardiac resynchronization therapy (34-36) using an epicardial approach because of angiographic evidence of right subclavian vein occlusion (37). At six-month follow-up, the epicardial CRT had not induced symptom relief, improvement of the ejection fraction (Fig. 1) or reduction of the arrhythmic risk, so the patient was referred for heart transplantation, which was performed in June 2018. At the time of pre-transplant evaluation, the patient showed mild muscular impairment and no respiratory involvement.
Follow-up
The intraoperative course did not reveal any complication; the postoperative course was prolonged due to transient severe respiratory failure requiring antibiotic therapy and mechanical ventilation. Invasive ventilation was withdrawn 3 days after surgery, and antibiotic therapy was prolonged for 20 days. As post-operative immunosuppression, the patient received cyclosporine A and everolimus. Subsequently, oral prednisone was added to maintain immunosuppression. At one-month follow-up, the patient showed successful functional rehabilitation with a good performance status. Neither evidence of graft dysfunction nor progression of muscular impairment was detected after 1 and 3 months, respectively. The cardiological post-operative follow-up included evaluation of the patient's clinical status and echocardiography. At 3-month follow-up, no symptoms of heart failure (e.g. breathlessness, ankle swelling and fatigue) or clinical signs (e.g. elevated jugular venous pressure, pulmonary crackles and peripheral oedema) were found, and the patient's exercise tolerance was slightly improved. Transthoracic echocardiography showed normal heart size (left ventricular end-diastolic diameter, LVEDD, of 4.2 cm) and systolic function (EF and FS were 64% and 37%, respectively) (Fig. 2). The observed enlargement of the left atrium is a normal post-transplantation finding.
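For readers less familiar with the echocardiographic indices quoted here, fractional shortening (FS) is conventionally derived from the left ventricular end-diastolic and end-systolic diameters, FS = (LVEDD - LVESD) / LVEDD. The short sketch below back-calculates the end-systolic diameter from the reported values purely for illustration; the LVESD itself is not given in the case report.

# Standard fractional shortening relation: FS = (LVEDD - LVESD) / LVEDD.
lvedd = 4.2                # cm, reported post-transplant
fs = 0.37                  # reported fractional shortening (37%)
lvesd = lvedd * (1 - fs)   # implied end-systolic diameter (illustrative)
print(f"implied LVESD = {lvesd:.2f} cm")   # about 2.65 cm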
Discussion
Cardiac complications, such as conduction system anomalies and arrhythmias, have frequently been described in the literature in patients suffering from Myotonic Dystrophy type 1. Conversely, dilated cardiomyopathy in general, and end-stage cardiomyopathy in particular, is uncommon (8). The clinical recognition of congestive heart failure in muscular diseases presents additional difficulties, as fatigue is often inherent to muscle weakness, while exercise tolerance can be impaired by the muscle disease itself. In the classic clinical picture of myotonic dystrophy, skeletal muscle impairment appears years before the onset of cardiac symptoms. Nevertheless, in some cases, cardiomyopathy may represent the initial and only manifestation of the inherited myopathy (4,5), as happened in our patient, in whom a marked discrepancy between skeletal muscle and cardiac involvement was observed. In fact, while the myopathy was mild and slowly progressive, the cardiomyopathy displayed a rapid and severe course, requiring HT about 15 years after the diagnosis.
The early onset of heart failure in this patient could be related to the electromechanical delay caused by the intra- and inter-ventricular asynchrony induced by chronic right apical pacing, which causes uncoordinated heart contraction and in turn accelerates the progression of heart failure, as previously reported (37).
Heart transplantation is an elective treatment in patients with ischemic disease and refractory end-stage HF; it is generally accepted that this procedure significantly increases survival, exercise capacity and quality of life compared with conventional treatment (6). However, controlled trials are not available.
Inherited myopathies in patients with end-stage cardiomyopathies have always been considered a relative contraindication for HT (39) because of the perioperative risk secondary to respiratory muscle weakness. Furthermore, a possible progression of the underlying myopathy due to immunosuppressive therapy is a potential side effect with unknown consequences for quality of life and prognosis. However, previous papers have shown that clinical outcomes of cardiac transplantation in Duchenne/Becker patients with end-stage dystrophinopathic cardiomyopathy seem to be similar to those of a matched cohort of patients undergoing transplantation for idiopathic dilated cardiomyopathy (40-43). In particular, Cripe et al. (42) reported the case of a 14-year-old patient with intermediate Duchenne Muscular Dystrophy (IDMD), preserved pulmonary function and severe dilated cardiomyopathy who underwent successful cardiac transplantation and was alive four years later. Rees et al. (43) described heart transplantation in 3 patients with DMD with a mean follow-up duration of 40 months. All patients tolerated immunosuppression, had no complications related to post-operative intubation, and were able to be rehabilitated.
In our experience (40) with 4 patients with end-stage dystrophinopathic cardiomyopathy (3 Becker patients and 1 with X-linked dilated cardiomyopathy), the outcomes were without complications both in the post-operative follow-up and in the long-term follow-up.
These experiences suggest that cardiac transplantation can be successfully performed in patients with muscular dystrophy in general, and in patients with Steinert disease who present a severe cardiomyopathy, provided that they have preserved pulmonary function and mild muscle impairment. However, reports on the clinical outcomes of cardiac transplantation in patients with muscular dystrophies, or with extended follow-up periods, are still rare and are advisable. To our knowledge, this is the second case of heart transplantation described in the literature in a patient with Steinert disease, after that reported by Conraads et al. in 2002 (44), with satisfactory short-term results.
This case report reinforces the growing opinion that patients with muscular disorders should not be denied access to cardiac transplantation because of the underlying myopathy, as long as there is careful selection of patients, especially with regard to muscle and respiratory function. | 2019-04-05T03:28:45.110Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "59e7256dab6121782af9d70a4a1703c54cead907",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "59e7256dab6121782af9d70a4a1703c54cead907",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239631906 | pes2o/s2orc | v3-fos-license | Enacting Scenario Card-Lesson Study in Pre-service Teacher Education: A Case Study on Indonesian Pre-service Teachers with Disabilities
Keywords: Scenario card lesson study; Pre-service teachers with special needs; Pre-service teacher education; Material development. Pre-service teachers are agents who undergo extensive teaching training before entering the professional community. The present study was designed to examine the enactment of Scenario Card-Lesson Study (SCLS), a previously developed learning medium. A single pre-service teacher with disabilities was voluntarily involved in the project. Data were gathered through multiple video-recorded observations and checklist documentation. Findings suggest that the participant demonstrated contextual practices of classroom teaching using SCLS. Based on the themed findings, the participant designed the lesson plan effectively, improved classroom teaching skills, and wrote the lesson study report well. In addition, based on our observation sessions, the participant also engaged fully in student-student discussion and teacher-student interactions. One tangible finding from this study is that the participant appeared autonomous in designing teaching and learning plans. Suggestions for policymakers, stakeholders, and future researchers in pre-service teacher education are offered in this paper.
INTRODUCTION
Pre-service education has been of great interest among researchers. In particular, there has been growing attention to pre-service teachers' teaching and learning processes in the classroom context over the last decades (Lim et al., 2018; Owiny et al., 2019; Quinlan, 2020; Zhang et al., 2018). Research to date has largely focused on pre-service teacher reflection (Karlström & Hamza, 2019), pre-service teacher beliefs (Othman & Kiely, 2016), pre-service teacher teaching practicum (Maidou et al., 2020), and pre-service teacher identity construction (Trent, 2012). The studies above have contributed greatly to positive findings in pre-service teacher research in the educational landscape. Despite this, there is a scarcity of research exploring how a single pre-service teacher enacts the teaching and learning process employing a specific learning approach during class. The present study was designed to uncover the implementation of a previously developed learning product, Scenario Card-Lesson Study (SCLS), in an Indonesian pre-service teacher program, enacted by a pre-service teacher with disabilities.
Teacher competence can be developed through extensive teaching practice, by both pre-service and in-service teachers. In the university context, where teacher education preparation programs are carried out, undergraduate students practice teaching in schools through internship programs (Siri et al., 2020). Teaching practice is a compulsory subject taken by all students at the faculty of teacher training and education. During the teaching practice activity, student teachers are exposed to real-life teaching and learning experiences in which they apply their academic and cognitive competencies, such as teaching competence, social and negotiation competence, and educational management competence (Owens et al., 2021).
The teaching practice activity is administered at schools with several characteristics: (1) it is a programmed and supervised activity in which every pre-service teacher is supervised by the school subject teacher, a lecturer, and the school principal; and (2) the teaching practice may take the form of a lesson study (LS). Theoretically, LS is a professionally guided teaching model for pre-service teachers, developed through collaborative and simultaneous learning governed by collaborative principles in which pre-service teachers help each other achieve the community's learning goals (Triyanto, 2016). One way to prepare pre-service teachers' competence is through a particular training-like activity or by implementing a collective learning model (Yusuf et al., 2017). Moreover, LS itself is an approach to learning activities aimed at achieving fundamental teaching competence under the simultaneous guidance of a group of teachers or lecturers, based on collaborative and mutual learning principles to build learning (Brodie, 2021).
In the current research, the school teaching practice activity is based on LS, implemented collaboratively among pre-service teachers with particular needs, school subject teachers, school non-subject teachers, school principals, lecturers, and colleagues, to identify teaching and learning problems, plan teaching and learning activities, teach students, evaluate the teaching and learning activity, and revise the teaching and learning plan. The standard competencies of the teaching practice activity are: 1) to produce pre-service teachers skilled in developing innovative learning media; 2) to produce pre-service teachers skilled in conducting real classroom teaching practice using innovative learning models and media; and 3) to produce pre-service teachers skilled in planning, implementing, and evaluating classroom-based assessment (Omar et al., 2020). In essence, the material and activity assessment takes the form of teaching practice covering orientation and observation activities, composing written teaching and learning materials, the teaching practice itself, and every activity related to classroom teaching and learning, including ethics and school management.
Pre-service teachers are obliged to prepare written teaching and learning materials based on several criteria and prescribed components. For the teaching practice activity, every planned set of written teaching materials should be discussed with the school subject teacher with reference to the subject's curriculum. In practice, every planned document should be consulted on and agreed upon by both the school subject teacher and the lecturer (Afalla & Fabelico, 2020; Sarkadi et al., 2020). The number of written teaching and learning documents prepared by the pre-service teacher depends on the materials and meetings, with at least ten meetings required. The set of obligatory teaching practice documents comprises the syllabus and lesson plans. Furthermore, the pre-service teacher is encouraged to be able to prepare annual and semester programs, as well as to assess the tested materials.
Given the importance of researching the teaching practice of pre-service teachers with disabilities in an Indonesian university context, and the gap persisting in the literature, the present study was designed to reveal the enactment of a previously developed learning product, Scenario Card-Lesson Study (SCLS), carried out by a single pre-service teacher in an Indonesian pre-service teacher program. The study specifically aims to reveal the implementation of the SCLS method in the teaching and learning process carried out by the pre-service teacher.
METHOD
The present study was situated in a private secondary school in Malang, East Java, Indonesia, within a scheduled teaching practice program running from April to October 2019. The study aimed to investigate the teaching practice of a single pre-service teacher with disabilities teaching Bahasa Indonesia using Scenario Card-Lesson Study (SCLS). The study took the form of a case study (Yazan, 2015), a design used to capture a single phenomenon in a specific environment. It involved a single pre-service teacher mandated to teach in the studied classroom. In each meeting, the pre-service teacher's performance was based on an approved lesson plan, and the schedule of the teaching performance was set by the school subject teacher. In practice, every pre-service teacher should conduct teaching practice in at least ten meetings, where every two-hour session counts as one meeting. The teaching practice components cover several activities, as explained in Table 1 below.
Table 1. Observed-teaching checklist:
- Setting up the readiness of the students during the teaching and learning activity.
- Conducting the main teaching and learning activity: mastering the main activity, applying a specific approach or learning strategy, and using relevant learning sources and learning media.
- Concluding the materials at the end of the activity.
The development of SCLS as a learning medium was first carried out using a Research & Development design in a previous project. Because the present study explores how SCLS was implemented in classroom teaching, video-recorded observation was conducted. The researchers employed a self-developed checklist to document the pre-service teacher's performance in enacting SCLS in classroom teaching. Data from the observations were analyzed qualitatively according to the emerging themes: designing lesson plans, teaching skills, and the ability to write lesson study reports (Lester et al., 2020).
FINDINGS AND DISCUSSION
The study's findings are organized into three emerging themes from the observation activity: designing lesson plans, teaching skills, and the ability to write lesson study reports.
Designing Lesson Plan Competency
The study reveals interesting evidence from the observations of the participant's classroom teaching. The first salient theme to emerge is the competency of designing a lesson plan before coming into class. In this regard, the researchers rated several aspects of this competency: subject identification, learning indicators, learning objectives, learning material, learning sources, learning media, learning model, learning scenario, and learning assessment. Table 1 below showcases the observation results regarding the participant's competency in designing a lesson plan. The first finding from the observation shows that the lesson plan design enacted by the participant was well performed. As shown in the information column, all aspects of lesson plan development were successfully implemented. The study concludes that the participant understands the process of creating a good lesson plan for practicing SCLS in the classroom.
The study uncovers that designing lesson plans was well implemented by the participant. Previous studies (see Davis et al., 2019; Drost & Levine, 2015; Quinlan, 2020; Zhang et al., 2018) have argued that designing lesson plans is essential in teaching, as it directs pre-service teachers to map out classroom activities in a given period. The indicators of competence in designing lesson plans revealed in the present study are part of lesson plan construction in many educational contexts: subject identification, learning indicators, learning objectives, learning material, learning sources, learning media, learning model, learning scenario, and learning assessment. Our study also echoes a recent study in the US showing that pre-service teachers who modified prior lesson plans could design lesson plans better than other pre-service teachers at the university level (Lim et al., 2018). It is therefore important for pre-service teacher education to guide student teachers in designing lesson plans effectively.
Participant's Teaching Skill in the Classroom
The second theme portrayed in the present study is the participant's skill in teaching students using the SCLS method. The observation checklist in this study covers three rated aspects of teaching skills: pre-teaching, main-teaching, and post-teaching activities. In the pre-teaching session, we rated three aspects: apperception, motivation, and skills in delivering competencies and planned activities. In the main teaching session, six aspects were rated: mastery of the subject, application of pedagogical learning activities, implementation of the selected scientific approach and evaluation, utilization of learning resources and learning media, student involvement in learning, and proper language use. Lastly, in the post-teaching session, we rated how the participant ended the classroom teaching: reflection, follow-up, and upcoming materials.
Our findings also shed light on the fact that the participant could practice classroom teaching using the SCLS method well. The observations show that in the pre-teaching, whilst-teaching, and post-teaching sessions, the participant displayed well-prepared materials, good voice, and appropriate body movement. Specifically, the study documented that the participant enacted apperception, motivational support, and prepared learning activities for the students in the pre-teaching session. Theoretically, the pre-teaching session serves a pivotal role for the whole teaching and learning process (Becker et al., 2019). It is also a departure point for teachers to control the class and manage the students' learning through the prepared activities. Interestingly, in the whilst-teaching session, the participant actively involved the students in learning activities by focusing on the course content taught in the class. As evidenced in the classroom observation, Table 2 indicates the participant's teaching skills. This theme illustrates the flow of the participant's teaching skills in the classroom: the observation portrays three stages of teaching, all performed successfully by the participant. The main teaching session, in particular, was well performed; it is an important space for activities in the teaching and learning process (Becker et al., 2019). This finding is in line with previous studies contending that teachers' role in teaching should be seen especially in the main teaching session (Annisa, 2014; Jabborova & Mirsadullayev, 2020; Piercy et al., 2012). In the same vein, the participant successfully enacted the teaching and learning process in the post-teaching session. Based on our observation, reflection on the teaching was done effectively. This result echoes Hahl's (2021) study showing that teacher reflection in the post-teaching session can yield meaningful voices for future teaching and learning activities.
Ability to Write a Lesson Study Report
Lastly, the study uncovers how the participant wrote the lesson study report in the post-teaching phase. Because this part is important, the participant attempted to reflect on his teaching and construct meaning from the reflections in the form of a report. Table 4 below showcases the three aspects of the written report. Our study highlights that the participant successfully wrote a lesson study report, as found in the observation session and documented in the checklist. Three aspects were observed in the lesson study report writing: lesson plan use, the implementation of the lesson study, and the reflection at the end of the teaching and learning process. In terms of writing an effective lesson study report, the participant carried out the task well. Lesson study has been of great concern for pre-service teaching competency, and this practice can be carried out in teaching internship programs. Many previous studies have portrayed the positive effects of lesson study on pre-service teachers' professional learning (Coenders & Verhoef, 2019; Ogegbo et al., 2019; Takahashi & McDougal, 2016). The present study's findings confirm the existing literature on lesson study implementation. It was found in the observation that the participant also included the lesson plan, its implementation, and the reflection in the lesson study report.
CONCLUSION
This study has explored the teaching practice of a single pre-service teacher with disabilities employing Scenario Card-Lesson Study in classroom teaching. The study specifically documents the participant's teaching practice through three themed findings: designing lesson plans, teaching skills, and writing lesson study reports. These findings shed light on the importance of understanding pre-service teacher agency and identity in enacting a specific teaching strategy in the classroom. As scholars have previously noted, research on agency and identity has focused on in-service and professional teachers (see Kayi-Aydar, 2019; Tao & Gao, 2017). The study's findings may therefore be a catalyst for investigating the agency and identity of pre-service teachers.
Based on the findings, it is suggested that educational stakeholders implement Scenario Card-Lesson Study in teacher education institutions and provide pre-service teachers with guidance on how to practice the method effectively. It is also recommended that designing lesson plans, teachers' teaching skills, and their ability to report on lesson study be the core focus in classroom teaching. Future research is encouraged to explore this issue using collaborative action research to uncover how pre-service teachers work with school teacher mentors and position themselves in a professional learning community.
"year": 2021,
"sha1": "6672a30efa4d14a222bd1ac34a7dd7967fc500e6",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.journal.staihubbulwathan.id/index.php/alishlah/article/download/1014/432",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c51360f699f0b87a96a49f1c3016c33b401f98ad",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
THE EFFECT OF INFLATION RATES ON STOCK MARKET RETURNS IN SUDAN: THE LINEAR AUTOREGRESSIVE DISTRIBUTED LAG MODEL
INTRODUCTION
The relationship between the inflation rate and stock returns in different countries has been investigated by a large number of theoretical and empirical studies since the early 1930s. Understanding this relationship is vital for researchers and policymakers in both developed and developing economies, but there is still no consensus among researchers on how the relationship works: some studies report a positive relationship between inflation and stock returns, while others conclude the opposite. For instance, Fisher (1930) suggested that nominal stock returns hedge against inflation; therefore, an increase in current and expected inflation rates should lead to an increase in expected nominal dividends. In contrast, Fama (1981) argued that an increase in inflation rates negatively affects corporate profits and stock prices due to reduced real economic activity.
According to the Central Bank of Sudan, the national inflation rate reached 81.6% in March 2020, following decades of very high inflation. It is therefore of evident importance to investigate how the relationship between inflation and stock market returns operates in Sudan, which motivates the present study.
LITERATURE REVIEW
Although researchers as well as policymakers have paid a great deal of attention to the impact of inflation rates on stock returns, it has not been widely investigated with regard to the Khartoum Stock Exchange. The current study intends to bridge this gap in the literature, but first reviews existing studies that have examined the relationship, both in general and in Sudan. Fisher (1930) was the first such study; it was based on the assumption that the monetary and real sectors of the economy are largely independent, and expected the returns on physical or real assets, such as stocks, to move one-for-one with inflation rates, thus hedging against inflation. However, Modigliani and Cohn (1979) introduced the money (or inflation) illusion hypothesis, which states that investors fail to take into account the effect of fluctuations in inflation on the real value of their stock returns. Furthermore, Fama (1981) argued that stock returns are negatively affected by inflation due to the deleterious effect of inflation rates on real economic activity.
More studies have been undertaken over the last two decades, starting with Choudhry's (2001) investigation into the relationship between stock returns and inflation in four high-inflation countries: Argentina, Chile, Mexico and Venezuela; the findings showed a positive relationship between current stock returns and current inflation, confirming that the former acts as a hedge against the latter. Ioannides, Katrakilidis, and Lake (2005) later studied the same relationship in Greece, using the ARDL cointegration technique and the Granger causality test to identify possible long-run and short-run effects and the causal direction between variables; the results revealed a negative long-run causal relationship between inflation and stock returns. Ozbay (2009), however, widened the investigation into the relationship between stock returns and macroeconomic variables (i.e., inflation, exchange, money supply growth, and interest rates, and the real economy) in Turkey. According to the results from monthly data between 1998 and 2008, inflation, exchange, and money supply growth rates, and industrial production proved not to be statistically significant.
More recently, Eldomiaty, Saeed, Hammam, and AboulSoud (2019) examined the effect of both inflation and interest rates on stock prices. Analyzing quarterly data with the linearity, normality, Johansen cointegration, cointegration regression, Granger causality, and vector error correction model tests, the results showed that inflation rates were negatively associated with stock prices. The ambiguity remains, though, as Alqaralleh (2020), taking a nonlinear autoregressive distributed lag (NARDL) approach, identified the generally asymmetric responses of stock returns to inflation rates. Reviewing other literature according to the geographical region in which the studies were undertaken over the last 10 years, the following discussion focuses on Southeast and South Asia, the Arab States, and West and South Africa, before concluding with Sudan.
With regard to Southeast Asia, Geetha, Mohidin, Chandran, and Chong (2011) tested the relationship between stock returns and inflation, divided into expected and unexpected inflation, in Malaysia, China, and the USA. They revealed a long-run relationship between both types of inflation and stock returns; however, no short-run relationship existed in Malaysia and the USA, although it did in China. In Vietnam, Bui (2019) applied the ARDL approach adopted for the current study and found inflation to exert a significantly negative impact on stock returns in both the long and short run. Meanwhile, in South Asia, Chakravarty and Mitra (2013) used the alternative vector autoregression (VAR) approach to analyze monthly data on the wholesale price index and determine the relationship between inflation and stock prices in India, concluding that it tended to be negative. Likewise, Saleem, Zafar, and Rafique (2013) also showed a negative long-run relationship (with the Granger causality test, also used in this study) between inflation and stock returns in Pakistan between 1996 and 2011. Moreover, Mahmood, Fiyaz, and Muhammad (2014) also found a negative relationship between inflation and stock returns in Pakistan, using VAR. In contrast to these other Asian countries, though, Hemamala and Jameel (2016) demonstrated a positive relationship between inflation and stock returns in Sri Lanka.
Moving to the Arab States, Al-Sharkas and Al-Zoubi (2013) applied cointegration methods to study the 2000-2009 monthly stock and goods price indices for Jordan, Saudi Arabia, Morocco, and Kuwait. Their findings not only confirmed a long-run relationship between the two indices but also revealed that stock prices had a long memory with respect to inflation shocks, meaning stock returns act as a reasonably good hedge against inflation in the long term. Specifically in Jordan, Al-Abbadi and Abdul-Khaliq (2017) discovered a short- as well as a long-run relationship over the longer period of 1978-2015 between inflation and stock market trading value. In addition, long- and short-run negative relationships were shown between inflation and stock returns in Iraq by the ARDL approach taken by Battal and Matar (2017).
Finally, three studies investigated the relationship between inflation and stock returns in Nigeria: Omotor (2010) analyzed monthly data and discovered that stock returns could provide effective hedging against inflation; Ibrahim and Agbaje (2013) found short- and long-run relationships after performing an ARDL analysis of data from 1997 to 2010; and Uwubanmwen and Eghosa (2015) showed a weak, negative impact of inflation on stock returns following an analysis of 1995-2010 monthly data. Also in West Africa, Kwofie and Ansah (2018) applied the ARDL model to examine the relationship of stock returns with inflation and exchange rates between 2000 and 2013 in Ghana.
The results revealed a significant long-run relationship between inflation and stock returns; however, the short-run relationship proved not to be significant. With regard to South Africa, Ndlovu, Faisa, Resatoglu, and Türsoy (2018) tested the impact of macroeconomic variables (inflation, money supply growth, interest, and exchange rates) on stock prices on the Johannesburg Stock Exchange, South Africa, and found a positive relationship between inflation and stock prices.
It is evident that not only has considerable attention been paid worldwide to the effect of inflation rates on stock returns by empirical studies, but also that no consensus has been reached on whether that relationship is positive or negative. It is also unfortunate that Sudan is rarely included in these empirical studies. The first known study for Sudan, which also included Saudi Arabia, was conducted by Ahmed and Abdalla (2013). The studies in Sudan employed symmetric and asymmetric generalized autoregressive conditional heteroskedasticity (GARCH) models to investigate the effects of inflation rates on stock returns. A more recent and reliable analysis tool is available, however, which is applied in this study: the linear autoregressive distributed lag (ARDL) model.
METHODOLOGY
This study sourced secondary monthly data for the period September 2003-December 2019 from the Central Bank of Sudan (CBS), the Khartoum Stock Exchange (KSE), and the Central Bureau of Statistics, which were then analyzed with the ARDL model. First, a linear error correction model (ECM) of the following general form was applied:

ΔKSE_t = α_0 + Σ α_1i ΔKSE_{t-i} + Σ α_2i ΔINF_{t-i} + Σ α_3i ΔEX_{t-i} + Σ α_4i ΔM2_{t-i} + Σ α_5i ΔMPM_{t-i} + λ ECM_{t-1} + ε_t    (1)

where KSE represents the Khartoum stock returns, INF the inflation rate, EX the official monthly nominal exchange rate per USD, M2 the nominal money supply growth rate, and MPM the Murabaha profit margin. It should be noted that, by definition, a positive change in the exchange rate corresponds to depreciation and a negative change to appreciation. A multivariate model was applied to account for other variables mentioned by previous studies as affecting stock returns. The intention was to add a proxy for economic activity as well, but the required monthly data were unavailable in Sudan.
The current study was based on certain predictions derived from previous research results. An increase in inflation is expected to have a negative effect on stock returns; since an increase in money supply leads to an increase in inflation, the same effect on stock returns is expected. This prediction is based on Fama (1981): rises in inflation rates negatively affect corporate profits and stock prices due to a reduction in real economic activity.
As Sudanese firms are import-oriented and the exchange rate shows a depreciation of the Sudanese pound, Khartoum stock returns are expected to decline. This prediction is based on Bahmani and Saha (2015): stock returns can respond to changes in the exchange rate either positively or negatively, according to whether a country's private sector is export- or import-based.
It is expected that a negative correlation exists between the Murabaha profit margin and stock returns.
This prediction is based on Amin et al. (2014): the Murabaha profit margin is negatively correlated with stock returns in Sudan.
Following the bounds testing approach of Pesaran et al. (2001), Equation 1 can be rewritten in unrestricted form as:

ΔKSE_t = β_0 + Σ β_1i ΔKSE_{t-i} + Σ β_2i ΔINF_{t-i} + Σ β_3i ΔEX_{t-i} + Σ β_4i ΔM2_{t-i} + Σ β_5i ΔMPM_{t-i} + θ_1 KSE_{t-1} + θ_2 INF_{t-1} + θ_3 EX_{t-1} + θ_4 M2_{t-1} + θ_5 MPM_{t-1} + ε_t    (2)

where the lagged level terms replace the error correction term of Equation 1. This linear ARDL model is then applied to identify short- and long-run relationships: the joint significance of θ_1, ..., θ_5 is tested against the bounds critical values to establish cointegration.
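As an illustration only, the two-step procedure (estimate the unrestricted ECM, then test the joint significance of the lagged level terms against the Pesaran bounds) could be sketched in Python as follows; the file name and column names are hypothetical, and a fixed lag of one on each difference stands in for the AIC-based lag selection used in the paper:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly dataset with columns: KSE, INF, EX, M2, MPM
df = pd.read_csv("sudan_monthly.csv", parse_dates=["date"], index_col="date")

levels = ["KSE", "INF", "EX", "M2", "MPM"]
diffs = df[levels].diff().add_prefix("d_")        # first differences
data = pd.concat([df[levels], diffs], axis=1)

# Unrestricted ECM (Eq. 2): differences plus one-period lagged levels.
y = data["d_KSE"]
X = pd.concat(
    [data["d_KSE"].shift(1)]
    + [data[f"d_{v}"] for v in levels[1:]]
    + [data[v].shift(1) for v in levels],
    axis=1,
)
X.columns = (["d_KSE_l1"] + [f"d_{v}" for v in levels[1:]]
             + [f"{v}_l1" for v in levels])
X = sm.add_constant(X)

res = sm.OLS(y, X, missing="drop").fit()

# Bounds F-test: joint significance of all lagged level terms, to be compared
# with the Pesaran et al. (2001) lower/upper critical bounds.
hypothesis = ", ".join(f"{v}_l1 = 0" for v in levels)
print(res.f_test(hypothesis))
```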
EMPIRICAL RESULTS AND DATA ANALYSIS
The first task of the analysis was to conduct unit root tests for each variable. The augmented Dickey-Fuller (ADF) test was used to determine stationarity at level and first difference, taking into account that cointegration requires I(0) or I(1) variables. The ADF statistics in Table 1 indicate that all the variables satisfied the required condition.
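For illustration, the level and first-difference ADF checks can be scripted as below (a sketch reusing the hypothetical data frame from the previous snippet):

```python
from statsmodels.tsa.stattools import adfuller

# Check each series at level and at first difference; the bounds test requires
# every variable to be I(0) or I(1), i.e. stationary at level or after
# differencing once.
for col in ["KSE", "INF", "EX", "M2", "MPM"]:
    for label, series in [("level", df[col]), ("1st diff", df[col].diff())]:
        stat, pval, *_ = adfuller(series.dropna(), autolag="AIC")
        print(f"{col:4s} {label:8s} ADF = {stat:7.3f}, p = {pval:.3f}")
```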
A maximum of 10 lags was then imposed on each first-difference variable, and Akaike's information criterion (AIC) was used to select the optimum specification.
Second, short- and long-run estimates, in addition to diagnostic statistics, were calculated using the linear ARDL model. The short-run estimates reported in Table 2 indicate that only changes in LINF significantly affect stock returns at the 1% significance level, while the other variables, except for LM2, exert significant effects once lags are imposed. With regard to the long-run estimates, cointegration had to be established first using the ARDL bounds test; the F-statistic of 5.097, being higher than the upper-bound critical value of 3.52 at all significance levels, did suggest cointegration. Table 3 clearly shows that LINF still significantly affects stock returns in the long run, along with LEX, but none of the other variables exert a significant effect.
Several conclusions can be drawn from the diagnostic statistics reported in Table 4. The significant negative coefficient for ECM(t-1) confirms the existence of cointegration in the long run and implies that the estimate adjusts towards its long-run equilibrium by 12% within one month. Meanwhile, the Lagrange multiplier (LM) statistic, not significant at the 5% level, indicates the absence of serial correlation problems. Moreover, Ramsey's regression equation specification error test (RESET), which, with a t-value of 0.300, is also not significant, shows the model's assumptions to be correct. Furthermore, the 56% coefficient of determination (Adj. R²) demonstrates the goodness of fit of the model. Finally, the cumulative sum (CUSUM) test found the estimates to be stable. The upper-bound critical values for the t-statistic at the 5% and 10% significance levels are -3.99 and -3.66, respectively, when there are four exogenous variables (Pesaran et al., 2001); these values are used to determine the significance of ECM(-1).
** Significance at the 5% level. Figure 1 plots the break points (i.e., significant changes). As the CUSUM statistics remain within the 5% significance bounds, the estimated coefficients are regarded as stable.
"year": 2020,
"sha1": "1fa8e53733930938d44adbfcb0b1744f04c6aba4",
"oa_license": null,
"oa_url": "http://www.aessweb.com/pdf-files/AEFR-2020-10(7)-808-815.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9aea7de4ae570beb8d0961912b7877895428e3c2",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
Bethe-Heitler signature in proton synchrotron models for gamma-ray bursts
We study the effect of Bethe-Heitler (BeHe) pair production on a proton synchrotron model for the prompt emission in gamma-ray bursts (GRBs). The possible parameter space of the model is constrained by consideration of the synchrotron radiation from the secondary BeHe pairs. We find two regimes of interest. 1) At high bulk Lorentz factor, large radius and low luminosity, proton synchrotron emission dominates and produces a spectrum in agreement with observations. For part of this parameter space, a subdominant (in the MeV band) power-law is created by the synchrotron emission of the BeHe pairs. This power-law extends up to a few tens or hundreds of MeV. Such a signature is a natural expectation in a proton synchrotron model, and it is seen in some GRBs, including GRB 190114C, recently observed by the MAGIC observatory. 2) At low bulk Lorentz factor, small radius and high luminosity, BeHe cooling dominates. The spectrum achieves the shape of a single power-law with spectral index $\alpha = -3/2$ extending across the entire GBM/Swift energy window, incompatible with observations. Our theoretical results can be used to further constrain spectral analyses of GRBs in the framework of proton synchrotron models.
INTRODUCTION
The emission mechanism at the origin of the observed signal during the prompt phase of GRBs remains unknown. Among the prime contenders are photospheric emission, released when the plasma becomes optically thin (Goodman 1986;Paczynski 1986;Mészáros & Rees 2000;Drenkhahn & Spruit 2002), and synchrotron emission produced by relativistic particles accelerated by shocks or magnetic reconnection once the flow is optically thin (Rees & Meszaros 1994;Sari et al. 1996;Daigne & Mochkovitch 1998;Zhang & Yan 2011). In addition, protons may also contribute, either directly by synchrotron emission, or indirectly by emission from the secondaries produced in photo-hadronic and photo-pair processes (Asano et al. 2009;Crumley & Kumar 2013;Florou et al. 2021).
When comparing models to spectral data, the most crucial difference between the aforementioned models is the prediction for the low-energy spectral slope α, usually associated with the low-energy slope of the Band model (Band et al. 1993). For photospheric emission models, the slope is expected to be around α = 0.4 (Beloborodov 2010; Pe'er & Ryde 2011; Bégué et al. 2013; Parsotan & Lazzati 2018), unless the ejecta becomes transparent during the acceleration phase (Goodman 1986; Paczynski 1986; Bégué & Vereshchagin 2014; Ryde et al. 2017), in which case a steeper slope up to α = 1 can be achieved. Slopes shallower than α = 0.4 can be obtained when considering geometrical effects such as emission from a structured jet (Lundman et al. 2013), or subphotospheric dissipation (Pe'er & Waxman 2005; Giannios 2006; Vurm & Beloborodov 2016). Observationally, the footprint of photospheric emission is seen in many GRB spectra, see e.g. Ryde & Pe'er (2009) and Acuner et al. (2020). Moreover, analysis of GRB 090902B strongly supports a model where the emission is produced at the photosphere of a highly relativistic outflow (Ryde et al. 2010; Pe'er et al. 2012). In the past years, photospheric models have been directly fitted to data, achieving good agreement (Ahlgren et al. 2015; Vianello et al. 2018; Samuelsson et al. 2021).
Synchrotron models predict the low-energy slope to be α = -2/3 in the slow cooling regime and α = -3/2 in the fast cooling regime. Slightly steeper slopes can be obtained when considering the effects of inverse Compton cooling in the Klein-Nishina regime (Bošnjak et al. 2009; Nakar et al. 2009; Daigne et al. 2011). Yet, most GRB spectra fitted with the Band function (Band et al. 1993) are found incompatible with a synchrotron model: this is known as the "synchrotron line-of-death" (Preece et al. 1998). Recently, it was found that fitting a synchrotron model directly to GRB spectra alleviates this problem (Burgess et al. 2020; Acuner et al. 2020). The main reason for the disagreement is that the Band function is a poor approximation of the synchrotron emission around the peak, giving poor constraints when the fitted results are compared to the expectations from synchrotron models in a limited energy window. In addition, two independent analyses using Swift X-ray (Oganesyan et al. 2018) and optical data (Oganesyan et al. 2019) showed that the spectra of several GRBs require an additional break around an observed energy of 1 keV, leading to the straightforward identification of the injection and cooling frequencies of a synchrotron model. The spectral slopes below and between the breaks were also found compatible with the expectations from synchrotron models with power-law distributed charged particles.
The closeness of the two identified breaks requires, within the framework of synchrotron models, that the emitting particles be in the marginally fast cooling regime, with their cooling Lorentz factor γ c nearly equal to their injection Lorentz factor γ m . This requirement is difficult to account for if the radiating particles are electrons (Beniamini et al. 2018). Possible solutions include the jet in jet model (Narayan & Kumar 2009;Beniamini et al. 2018) or emission in a time dependent magnetic field (Uhm & Zhang 2014).
Alternatively, it was proposed by Ghisellini et al. (2020) that protons could be the particles radiating synchrotron and producing the main prompt MeV-peak. The observed requirement of marginally fast cooling is then naturally fulfilled for emission radius in the order of 10 13 − 10 14 cm and bulk Lorentz factor of a few hundreds (Ghisellini et al. 2020), as expected for optically thin emission models of GRBs (Rees & Meszaros 1994;Daigne & Mochkovitch 1998). On the other hand, proton synchrotron models do not explain the observed spectral peak energy clustering (von Kienlin et al. 2020) and require a large magnetic luminosity L B 10 55 erg s −1 (Florou et al. 2021).
Synchrotron emission from protons and from the secondaries produced by photo-pion (pγ → p + π⁰, pγ → n + π⁺, and other channels producing two or more pions (Mücke et al. 2000; Lipari et al. 2007; Hümmer et al. 2010)) and BeHe (pγ → pe⁺e⁻) interactions was thoroughly studied in connection with ultra-high energy cosmic ray acceleration (Böttcher & Dermer 1998; Totani 1998; Razzaque et al. 2010), high-energy PeV neutrino production (Petropoulou 2014), and the high energy photon component observed by LAT and more recently by HESS and MAGIC (Gupta & Zhang 2007; Asano et al. 2009; Crumley & Kumar 2013; Sahu & Fortín 2020). However, all those studies have in common a leptonic origin for the main MeV peak component, when it is not simply set to a fiducial Band model. Florou et al. (2021) numerically studied a proton synchrotron model as the source of the main MeV peak, similar to the one proposed by Ghisellini et al. (2020). They concluded that emission from the secondaries produced by either the BeHe process or photo-pion interactions would be too bright to satisfy the optical constraints. This is especially important since this result is inconsistent with the claim that optical observations support synchrotron emission models (Oganesyan et al. 2019). However, the bursts used by Oganesyan et al. (2019) and afterwards by Florou et al. (2021) are of very long duration (T_90 > 70-80 s), which is required to have simultaneous optical observations. Thus, this small subset of bursts is not necessarily representative of the full GRB population, nor of the emission mechanism producing the early episodes of a GRB. Indeed, it was suggested that the emission mechanism might change throughout the burst episodes (e.g. Zhang et al. (2018); Li (2019)). It is therefore interesting to find predictions of proton synchrotron models that do not rely on optical data, to be able to test the model on shorter duration bursts.
In this paper, we assume that the prompt emission is due to proton synchrotron and derive constraints on the model. We present analytical estimates of the effect of BeHe pair production and the pairs subsequent radiation. We identify two emission regimes: 1) proton synchrotron dominated emission regime with little contribution from other processes, 2) BeHe pair dominated emission regime, leading to a spectrum incompatible with observations. We further describe the transition between these two extreme regimes in which a subdominant power-law from the BeHe pair synchrotron radiation appears across the MeV band, as observed in some GRBs (e.g. Vianello et al. (2018); Chand et al. (2020)).
The paper is organized as follows. In Section 2, we identify the parameter space for each of the three regimes mentioned above by comparing the timescales of synchrotron emission to that of BeHe pair production. Section 3 details the modification to the spectrum due to synchrotron radiation from the pairs produced by the BeHe process. Discussion with an emphasis on GRB 190114C is given in Section 4.
CONSTRAINING PROTON SYNCHROTRON EMISSION MODELS BY BETHE-HEITLER COOLING
In this section, we compare the timescale of proton synchrotron emission to that of BeHe pair production. If the protons cool too quickly from BeHe pair creation, the secondary emission from the pairs can greatly affect the observed spectrum. The valid parameter space for proton synchrotron models can thus be constrained.
Consider an emission region expanding relativistically with Lorentz factor Γ, emitting radiation at a distance r from a central engine, and threaded by a magnetic field of comoving strength B. The comoving dynamical time is given by t dyn = r/(Γc), where c is the speed of light. In a marginally fast cooling scenario, relativistic particles, here protons, are assumed to be steadily injected into a power-law with index −p above some injection Lorentz factor γ p,m . We present our results for p = 2.5 and p = 3.5. On the one hand, the value of p ∼ 2.5 is expected in many dissipation and acceleration scenarios (e.g. Bednarz & Ostrowski (1998); Kirk et al. (2000)), albeit softer values can also be obtained from simulations (Sironi et al. 2013;Crumley et al. 2019;Comisso et al. 2020). On the other hand, synchrotron fits to the GRB spectra require an average value of p = 3.5 (Burgess et al. 2020). Marginally fast cooling implies that γ p,m ∼ γ p,c , where γ p,c is the characteristics proton cooling Lorentz factor. We write γ p,m = ξγ p,c . In this paper, we assume ξ 1, i.e., the protons are fast cooling albeit marginally. This implies that protons efficiently radiate most of their energy, while satisfying the observed spectral constraints.
The comoving cooling time for protons with Lorentz factor γ_p emitting synchrotron radiation is

t_synch = 6π m_p c (m_p/m_e)² / (σ_T γ_p B²),    (1)

and the frequency of the synchrotron spectral peak is ν_peak = (4/3) Γ q γ_p,m² B/(π c m_p), where we use ν_peak as the frequency without redshift correction, i.e. in the frame of the burst. The observed frequency is ν_obs = ν_peak/(1 + z).
In these equations, m_p and m_e are the proton and electron masses, q is the elementary charge and σ_T is the Thomson cross section.
Setting the dynamical timescale and the cooling timescale equal, t_dyn ∼ t_synch, gives the magnetic field and the proton cooling Lorentz factor:

B = [6π m_p³ c² Γ/(σ_T m_e² r)]^{2/3} [4 Γ q ξ²/(3π m_p c ν_peak)]^{1/3} ≈ 3.3 × 10⁶ G Γ_2 ξ^{2/3} ν_MeV^{-1/3} r_14^{-2/3},    (2)

γ_p,c = [3π m_p c ν_peak/(4 Γ q ξ² B)]^{1/2} ≈ 1.4 × 10⁴ ν_MeV^{2/3} r_14^{1/3} Γ_2^{-1} ξ^{-4/3},    (3)

where we have replaced γ_p,m by ξγ_p,c and used the notation Q_x = Q/10^x. Here, ν_MeV = hν_peak/(1 MeV) is the peak energy of the proton synchrotron spectrum, normalised to 1 MeV in agreement with observations, and h is Planck's constant. The comoving photon peak energy is hν_m = hν_peak/(2Γ) = 5.0 keV ν_MeV Γ_2^{-1}. Most of the accelerated protons have Lorentz factor γ_p,c. Therefore, for the bulk of accelerated protons interacting with photons at the peak, one gets

γ_p,c hν_m/(m_e c²) ≈ 1.3 × 10² r_14^{1/3} ν_MeV^{5/3} Γ_2^{-2} ξ^{-4/3},    (4)

which satisfies the threshold requirement for the BeHe process (γ_p,c hν_m > 2m_e c²) unless r_14^{1/3} ν_MeV^{5/3} < 0.015 Γ_2² ξ^{4/3}. We note that the lowest energy protons cannot satisfy the energy threshold for photo-pion interaction, γ_p,c hν_m < 135 MeV, and it is therefore expected that neutrino production in this model is small. We comment further on the relevant cooling times in the discussion section. The model can be further constrained by the tight limits from the IceCube (Aartsen et al. 2017) and Antares (Albert et al. 2017) experiments, as shown by Florou et al. (2021) and Pitik et al. (2021).
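For concreteness, the two conditions behind Equations (2) and (3), t_dyn = t_synch and the peak-frequency relation, can be solved numerically in CGS units. The sketch below uses the equations as reconstructed here and reproduces the fiducial values quoted above:

```python
import numpy as np

# CGS constants
m_p, m_e = 1.6726e-24, 9.1094e-28      # proton and electron mass [g]
c, q = 2.9979e10, 4.8032e-10           # speed of light [cm/s], elementary charge [esu]
sigma_T, h = 6.6524e-25, 6.6261e-27    # Thomson cross section [cm^2], Planck constant [erg s]

def field_and_cooling_lorentz(Gamma=100.0, r=1e14, E_peak_MeV=1.0, xi=1.0):
    """Solve t_dyn = t_synch and the peak-frequency relation for (B, gamma_p_c).

    Conditions, as reconstructed in Eqs. (2)-(3):
      gamma_c * B**2 = 6*pi*m_p**3*c**2*Gamma / (sigma_T*m_e**2*r)
      gamma_c**2 * B = 3*pi*m_p*c*nu_peak / (4*Gamma*q*xi**2)
    """
    nu_peak = E_peak_MeV * 1.6022e-6 / h                  # observed peak frequency [Hz]
    A = 6.0 * np.pi * m_p**3 * c**2 * Gamma / (sigma_T * m_e**2 * r)
    C = 3.0 * np.pi * m_p * c * nu_peak / (4.0 * Gamma * q * xi**2)
    return (A**2 / C) ** (1.0 / 3.0), (C**2 / A) ** (1.0 / 3.0)

B, g_c = field_and_cooling_lorentz()
x_m = 1.6022e-6 / (2.0 * 100.0) / (m_e * c**2)            # comoving peak energy in m_e c^2 units
print(f"B ~ {B:.1e} G, gamma_p,c ~ {g_c:.1e}, gamma_p,c*x_m ~ {g_c * x_m:.0f}")
# -> B ~ 3e6 G, gamma_p,c ~ 1e4 and gamma_p,c*x_m ~ 1e2, well above the BeHe
#    threshold (= 2) for the fiducial parameters.
```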
Having verified that all accelerated protons are energetic enough to satisfy the threshold of BeHe pair creation, we now estimate the cooling of protons by the BeHe process. This timescale is a function of the comoving photon spectrum near the peak of the photon distribution, which itself depends on the comoving proton density. Let L_obs be the observed isotropic photon luminosity of the burst. Assuming the main emission mechanism is proton synchrotron, the number of radiating protons N_p is

N_p = L_obs / P_synch^obs(γ_p,m),  with  P_synch^obs(γ_p,m) = (4/3) Γ² σ_T c (m_e/m_p)² γ_p,m² u_B,    (5)

where P_synch^obs(γ_p,m) is the observed synchrotron power emitted by a single proton with Lorentz factor γ_p,m and u_B = B²/(8π) is the comoving magnetic energy density.
For the comoving volume, we use V = 4πr²(r/Γ) (e.g. Pe'er (2015)), and therefore the comoving density of radiating protons is given by

n_p = N_p/V = L_obs Γ / (4π r³ P_synch^obs(γ_p,m)).    (6)

To normalise the photon spectrum, it is assumed that the whole power radiated by protons with Lorentz factor γ_p,m is emitted at ν_m. Thus, the peak spectral energy density is approximately

u_{ν_m} ≈ n_p P_synch(γ_p,m) (r/(Γc)) / ν_m,    (7)

where P_synch = Γ^{-2} P_synch^obs. Since protons with Lorentz factor γ_p,m mostly interact via the BeHe process with photons close to the peak, only the shape of the photon spectrum around the peak affects the cooling rate by the BeHe process. The photon distribution close to the peak is well approximated by the synchrotron radiation of the protons even when t_BeHe ∼ t_synch, where t_BeHe is the BeHe cooling timescale. This can be understood because the photons produced by the BeHe pairs emerge at different energies (see Section 3), where the cross-section is smaller. Furthermore, when t_BeHe ∼ t_synch, the proton distribution function is not strongly changed below γ_p,m. Therefore, in the marginally fast cooling scenario considered in this paper (ξ ≥ 1), the comoving photon spectrum around the peak is obtained as (Sari et al. 1998)

u_ν = u_{ν_m} (ν/ν_m)^{-1/2}  for ν_c ≤ ν ≤ ν_m,  and  u_ν = u_{ν_m} (ν/ν_m)^{-p/2}  for ν > ν_m,    (8)

where p is the index of the proton spectrum.
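Continuing the numerical sketch above (reusing its constants and the values of B and g_c just computed), Equations (5)-(7) evaluate as follows for the fiducial parameters:

```python
# Continuing the previous sketch: proton number, density and peak spectral
# energy density (Eqs. 5-7 as written here) for L_obs = 1e53 erg/s.
Gamma, r, L_obs, xi = 100.0, 1e14, 1e53, 1.0
u_B = B**2 / (8.0 * np.pi)                          # comoving magnetic energy density
gamma_m = xi * g_c                                  # injection Lorentz factor
P_obs = (4.0 / 3.0) * Gamma**2 * sigma_T * c * (m_e / m_p) ** 2 * gamma_m**2 * u_B

N_p = L_obs / P_obs                                 # Eq. (5): number of radiating protons
V = 4.0 * np.pi * r**2 * (r / Gamma)                # comoving volume
n_p = N_p / V                                       # Eq. (6): comoving proton density

nu_m = 1.6022e-6 / (2.0 * Gamma) / h                # comoving peak frequency (1 MeV observed)
u_nu_m = n_p * (P_obs / Gamma**2) * (r / (Gamma * c)) / nu_m   # Eq. (7)
print(f"N_p ~ {N_p:.1e}, n_p ~ {n_p:.1e} cm^-3, u_nu_m ~ {u_nu_m:.1e} erg cm^-3 Hz^-1")
# -> N_p ~ 2e49, n_p ~ 1e8 cm^-3, u_nu_m ~ 2e-9 erg cm^-3 Hz^-1
```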
In Appendix A, we obtain the cooling rate of protons by BeHe pair production (pγ → pe⁺e⁻) following the prescription of Chodorowski et al. (1992). The cooling is a function of the photon energy in the proton rest frame; one therefore has to integrate the photon distribution over energy and angle. This is done when generating the figures, which therefore show exact results in the case of an isotropic photon distribution. Here, we present approximate analytical estimates to demonstrate how the cooling varies with the parameters. Using Equations (A5) and (A6), and assuming ξ = 1 for simplicity, the BeHe timescale takes the form of a broken power-law in γ_p (Equation (9)), where x_m = hν_m/(m_e c²) and κ_0 = 40 is found to provide an adequate approximation for the cooling rate, see Appendix A. The parameter κ_0 is introduced to simplify the expression for the BeHe cooling; it is roughly the photon energy (in the proton rest frame) corresponding to the maximum cooling rate by the BeHe process. To obtain the numerical value in the bottom expression, p was set to 2.5. The expression and numerical value for a different value of p can be obtained by using Equations (A5) and (A6). The first branch of Equation (9) is for protons that mostly interact with photons below ν_m, while the second branch describes protons interacting with photons of frequency higher than ν_m. An estimate of the ratio between the BeHe cooling time and the synchrotron cooling time at the injection Lorentz factor γ_p,m follows (Equation (10)), where we used the fact that t_synch = t_dyn at γ_p,m; the ratio decreases with increasing luminosity and increases with the emission radius and bulk Lorentz factor, and for fiducial parameters it is of order unity, indicating similar timescales. As noted above, the similarity of the timescales implies that the peak of the proton synchrotron spectrum is not substantially modified by the BeHe process. Figure 1 shows the ratio of cooling times at γ_p,m assuming ξ = 1 for different parameter choices. It is obtained by direct integration of Equation (A1). In computing this figure, we have assumed that the proton distribution function is only modified by synchrotron losses. This assumption breaks down when BeHe cooling strongly dominates, in the lowest domain of each panel in Figure 1. From top to bottom, the emitted luminosity is 10^54, 10^53, and 10^52 erg s^-1, and from left to right the observed spectral peak energy is 100 keV, 300 keV, and 1 MeV, respectively. Figure 1 is made with p = 2.5. A softer value of p increases the timescale ratio for the high-energy branch, i.e., it only affects the left-most part of the panels in Figure 1. For p = 3.5 as compared to 2.5, the timescale ratio increases by a factor ∼ 2 for an order of magnitude decrease in radius.
From this figure, as well as from Equation (10), one can identify two extreme regimes. For low luminosities L obs , high Lorentz factor Γ and large radius r, the protons are largely unaffected by BeHe pair creation (yellow region in Figure 1). In this scenario, the observed spectrum is due to the synchrotron emission from the marginally fast cooling protons as described in Ghisellini et al. (2020), without any modification by BeHe. This shows that explaining GRB prompt spectra with proton synchrotron requires high bulk Lorentz factor Γ 300, in agreement with the analysis of Florou et al. (2021) who used optical constraints. On the other end, when Γ and r are small and/or L is high, the BeHe process dominates the cooling (dark region in Figure 1). In this regime, the synchrotron photons from the very fast cooling pairs quickly outnumber the proton synchrotron photons, leading to even more rapid BeHe pair creation. Therefore, most of the available proton energy is extracted by the BeHe pairs. The cooling Lorentz factor of the pairs is γ ±,c ∼ 1, corresponding to a cooling break in the observed spectrum at ∼ 10 eV, whereas the νF ν -peak energy associated to the synchrotron from the created BeHe pairs is at ∼ 100 MeV (see Equation (14)). Thus, the observed spectrum consists of a single power-law with F ν ∝ ν −1/2 between these two energies, clearly incompatible with observed GRB spectra. In between the two extreme regimes when the cooling timescales are comparable, signatures from both processes can be seen in the spectrum, and we explore this scenario in Section 3.
SPECTRAL SIGNATURE OF BETHE-HEITLER PAIRS
In this section, we obtain predictions for the comoving pair distribution and their emission spectrum in the case where the cooling via synchrotron and BeHe are comparable. In this situation, the proton distribution at γ_p,m is only marginally affected by BeHe cooling. This implies that the photon spectrum at the peak energy around 1 MeV (which is the optimal photon energy for BeHe pair creation; see Equation (4) and Appendix A) is not strongly modified by synchrotron radiation from the secondaries. If the secondary emission from the pairs did substantially contribute to the BeHe cooling of the protons, the BeHe pair creation would become exponential in time and we would instead be in the regime where BeHe dominates. Here, we use the proton synchrotron photons as targets to compute the rate at which pairs are created, namely in the case t_BeHe ≳ t_synch. A proton with Lorentz factor γ_p produces electrons and positrons with typical Lorentz factor γ_± = κ_e (m_p/m_e) γ_p, where κ_e is the inelasticity. The dependence of the inelasticity on γ_p x, where x is the target photon energy in units of the electron rest mass, can be found in Mastichiadis et al. (2005). For photons at the peak energy, γ_p x is given by Equation (4) and is of the order of a few to a few hundred. Looking at Figure 1 of Mastichiadis et al. (2005) for those values of γ_p x, the inelasticity is found to vary between 10^-3 and 10^-4, giving an average pair Lorentz factor between γ_± ∼ 2γ_p and γ_± ∼ γ_p/5.

Figure 1. Comparison of cooling rates between the BeHe and synchrotron processes at γ_p,m. From top to bottom the emitted luminosity is 10^54, 10^53 and 10^52 erg s^-1, while from left to right the observed spectral peak frequency is 100 keV, 300 keV and 1 MeV. The purple, red and black thick lines correspond to t_BeHe = t_synch, t_BeHe = 0.1 t_synch, and t_BeHe = 10 t_synch, respectively. The thin black lines show the variability time expected from the Lorentz factor and radius, t_var ∼ r/(Γ²c), for selected variability times 10^-2 s, 10^-1 s, 1 s and 10 s. The figure is made with p = 2.5. A value of p = 3.5 slightly increases the valid parameter space for proton synchrotron models by increasing the timescale ratio t_BeHe/t_synch at small radii.

Considering that most of the protons have Lorentz factor γ_p,m ∼ γ_p,c, we consider that all pairs are created with Lorentz factor γ_±,m ≡ κ_e (m_p/m_e) γ_p,m. In other words, we neglect the contribution of higher energy protons to the creation of pairs with higher energies. This effect only changes the very high energy photon spectrum, which is likely to be absorbed by pair creation. In addition, since the pairs are fast cooling, which follows from the protons being marginally fast cooling, they reach a Lorentz factor much smaller than their initial Lorentz factor within one dynamical timescale, γ_±,c ≪ γ_±,m, and therefore the exact details of their injection are erased by their cooling. The pair production rate by BeHe is given by Chodorowski et al. (1992), but cannot be analytically integrated for a general photon spectrum. Analytical estimates in some specific cases were provided by Petropoulou & Mastichiadis (2015). We provide the integral expression used in our numerical computation in Appendix B. In order to get analytical estimates of the number of pairs, we set the ratio between the total synchrotron power, P_synch^tot = N_p P_synch(γ_p,m), and the BeHe power equal to the ratio of their timescales:

P_synch^tot / P_BeHe^tot = t_BeHe / t_synch,    (11)

where P_BeHe^tot is the power emitted by all BeHe pairs.
Using P_BeHe^tot = γ_±,m m_e c² ṅ_± V, which is valid since the pairs are fast cooling, one obtains

ṅ_± = (N_p P_synch(γ_p,m) / (γ_±,m m_e c² V)) (t_synch / t_BeHe).    (12)

We note the very strong dependence on the parameters, specifically the radius and the peak energy. This means that in principle both scenarios with high and low pair yield are possible.
Assuming that all pairs are produced at Lorentz factor γ_±,m and that pair annihilation is negligible (see Section 4), the continuity equation for the pairs can be solved to obtain the pair distribution. This computation is done in Appendix C, and gives

n_±(γ) = (ṅ_± m_e c² / P_e(γ)) H(γ_±,m − γ),    (13)

namely a single power-law with index −2 extending from γ_±,m down to γ ∼ 1, since the synchrotron power scales as γ². Here, P_e is the synchrotron power emitted by an electron and H is the Heaviside function. We note that at low energies the pair distribution should substantially deviate from this power-law because of strong synchrotron self-absorption heating and pair annihilation. Our analysis also neglects the electrons originally present in the flow (see the discussion in Section 4). We now estimate the emerging spectrum. The emitted synchrotron spectrum from the pair distribution in Equation (13) is a fast cooling power-law with F_ν ∝ ν^{-1/2}. It extends from an observed peak frequency of

hν_±,m = κ_e² (m_p/m_e)³ hν_peak ≈ 60 MeV (κ_e/10^{-4})² ν_MeV    (14)

down to sub-keV energies. The synchrotron frequency of the pairs linearly depends on the observed peak frequency. It also indirectly depends on the other model parameters via the value of the inelasticity. Larger values of κ_e result from smaller values of γ_p,m ν_m, i.e., larger Lorentz factor and peak frequency, and/or smaller radius and luminosity, see Equation (4). The shape of the spectrum above ν_±,m depends on the shape of the proton distribution function and on the pair injection details. Since we are only giving analytical estimates, a detailed analysis at those energies is out of the scope of this paper. We note however that emission at GeV and eventually TeV energies is constrained by the LAT instrument on-board Fermi (e.g. Guetta et al. (2011)). The normalization of the spectrum is either obtained from the pair distribution function in Equation (13), or by considering the ratio of the synchrotron power to the BeHe power in Equation (11). Indeed, the value and parameter dependence of the ratio between the proton synchrotron peak and the BeHe peak, ν_m F_{ν_m}/ν_{±,m} F_{ν_{±,m}}, are well described by the ratio of the timescales given in Equation (10). This is true as long as t_synch is not much larger than t_BeHe, so that the target photons for the BeHe process are those produced by proton synchrotron. Figure 2 shows an example spectrum when the two timescales are comparable. A subdominant power-law extends from ∼ 100 MeV all across the observation window. This extra component is solely due to synchrotron radiation from the BeHe pairs and is simultaneous with the main MeV emission. It is potentially detectable at low energy (a few tens of keV) and in the LLE data. In making this figure, we assumed r_14 = 1, Γ_2 = 1, ν_MeV = 1, L_52 = 10, ξ = 1 and p = 2.5 (left) or p = 3.5 (right). Both cases correspond to t_BeHe/t_synch ∼ 1.9 (the approximate expression in Equation (10) gives t_BeHe/t_synch ∼ 0.85). The figure was made with a BeHe inelasticity κ_e ∼ 10^{-4} to determine ν_{±,m}. This value of κ_e is appropriate when γ_p,m hν_m/(m_e c²) ∼ 100, as obtained for this choice of parameters. The total number of pairs was calculated using the integral formulation of Chodorowski et al. (1992) (see Appendix B). In this example, the overall synchrotron spectrum is strongly modified at low and high energies by an extra power-law produced by the synchrotron emission of the BeHe pairs. This is a clear spectral signature of the proton synchrotron emission model, which can help to differentiate proton synchrotron models from electron synchrotron models. Several properties of this power-law are well determined and weakly sensitive to the parameters of the model. First, its slope is set to α = −1.5, since it is produced by electrons and positrons in the fast cooling regime. Second, it extends from low (sub-keV) energy to high energy with a peak at a few tens or hundreds of MeV. Therefore, this component crosses the entire GBM energy window. Third, the maximum energy of this component is linearly correlated with the peak frequency of the MeV component, as shown by Equation (14). Finally, the intensity of this component does not have a
In this example, the overall synchrotron spectrum is strongly modified at low and high energies by an extra power-law produced by the synchrotron emission of the BeHe pairs. This is a clear spectral signature of the proton synchrotron emission model, which can help to differentiate proton synchrotron models from electron synchrotron models. Several properties of this power-law are well determined and weakly sensitive to the parameter of the model. First, its slope is set to be α = −1.5 since it is produced by electrons and positrons in the fast cooling regime. Second, it extends from low (sub-keV) energy to high energy with a peak at few tens or hundreds of MeV. Therefore, this component crosses the entire GBM energy window. Third, the maximum energy of this component is linearly correlated with the peak frequency of the MeV component, as shown by Equation (14). Finally, the intensity of this component does not have a It is clear that the emission from the secondary BeHe pairs greatly affect the overall shape of the spectrum at low ( 30 keV) and high ( 10 MeV) energies . The frequency of the second peak is independent on the power-law index. Above the photon peak at ∼ 100 MeV, the photon spectrum is not specified by our analysis as it depends on the details of the pair injection spectrum, and as such on the exact shape of the proton distribution function above γp,m. This is represented by a dashed line.
strong dependence on the uncertain value of the proton index p, but strongly depends on the luminosity, Lorentz factor and emission radius. The combination of all those characteristics provides a clear smoking-gun of proton synchrotron model.
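Numerically, the location of the secondary peak in Equation (14) follows from a one-line estimate; in this sketch, the value of the inelasticity κ_e is the one read off Mastichiadis et al. (2005) for γ_p x ∼ 100:

```python
# Secondary (BeHe pair) synchrotron peak, Eq. (14): pairs inherit
# gamma_{±,m} = kappa_e*(m_p/m_e)*gamma_{p,m}, and the synchrotron frequency
# scales as gamma^2 * B / m, so nu_{±,m} = kappa_e^2 * (m_p/m_e)^3 * nu_peak.
kappa_e = 1e-4          # inelasticity for gamma_p * x ~ 100 (Mastichiadis et al. 2005)
mass_ratio = 1836.15    # m_p / m_e
E_peak_MeV = 1.0        # observed proton synchrotron peak
print(f"pair synchrotron peak ~ {kappa_e**2 * mass_ratio**3 * E_peak_MeV:.0f} MeV")  # ~62 MeV
```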
DISCUSSION AND CONCLUSION
Proton synchrotron models are an attractive solution to explain marginally fast cooling spectra from GRBs (Ghisellini et al. 2020). We have performed a detailed investigation of the effect of BeHe cooling on the protons and of the subsequent radiation of the BeHe pairs. For high bulk Lorentz factor Γ, large radius r and low luminosity L_obs, proton synchrotron emission dominates and no BeHe pair signature is expected. Conversely, synchrotron emission from the BeHe pairs dominates when the luminosity is high and Γ and r are relatively low. This constitutes an additional test for the model: if no pair signature is observed in high luminosity bursts, large radii and high Lorentz factors are necessary. The magnetic luminosity is given by

L_B = 4π r² Γ² c u_B = (1/2) r² Γ² c B² ≈ 1.8 × 10^54 erg s^-1 r_14^{8/3} t_v^{-2} ξ^{4/3} ν_MeV^{-2/3},    (15)

where we have replaced the Lorentz factor Γ by the variability time t_v = r/(cΓ²) normalised to 1 s. The large dependence on the radius and variability time implies a very large magnetic luminosity if no BeHe pair signature is observed. For instance, for a variability timescale of the order of a few seconds as observed in the burst sample of Burgess et al. (2020), an observed luminosity of L_obs = 10^54 erg s^-1 requires r ∼ 10^15 cm to suppress the BeHe pair creation (see Figure 1). Using Equation (15), this implies a magnetic luminosity of a few 10^56 erg s^-1. Such a high magnetic luminosity suggests that the jet is magnetically dominated, leading to an acceleration rate with Γ ∝ r^{1/3} (Drenkhahn & Spruit 2002; Bégué et al. 2017), and possibly magnetic reconnection as the energy dissipation mechanism, see e.g. Lyutikov & Blackman (2001); Giannios & Uzdensky (2019).
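The steep radius dependence of Equation (15) is easy to see numerically; the sketch below uses the coefficient as reconstructed here:

```python
# Magnetic luminosity, Eq. (15): L_B ≈ 1.8e54 erg/s * r14^(8/3) * t_var^-2
# * xi^(4/3) * nu_MeV^(-2/3), with the coefficient as reconstructed here.
def magnetic_luminosity(r14, t_var, xi=1.0, nu_MeV=1.0):
    return 1.8e54 * r14 ** (8.0 / 3.0) * t_var ** -2.0 * xi ** (4.0 / 3.0) * nu_MeV ** (-2.0 / 3.0)

for r14 in (1.0, 10.0):
    print(f"r = {r14:>4} x 1e14 cm, t_var = 2 s -> L_B ~ {magnetic_luminosity(r14, 2.0):.1e} erg/s")
# r ~ 1e15 cm with a variability time of a few seconds gives L_B of a few
# 1e56 erg/s, matching the estimate quoted in the text.
```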
In between the two regimes outlined above, we expect a parameter space where both signatures can be observed simultaneously in the MeV band (see Figure 2). This component might have already been observed in several GRBs. Indeed, it is reminiscent of the population of bursts whose spectra seem to have two components: a main emission peak together with a subdominant component well approximated by a power-law observed from a few keV to several tens of MeV (Ackermann et al. 2010; Guiriec et al. 2015). This subdominant power-law can be interpreted as the radiation from BeHe pairs, identified in our work as the spectral signature of proton synchrotron models.
Most notably, the spectra of GRB 190114C detected by MAGIC (MAGIC Collaboration et al. 2019) are composed of two components in the energy range 1 keV-1 GeV, with low-energy slopes seemingly compatible with slow (α = −2/3) and fast (α = −3/2) cooling synchrotron radiation in some time bins (Chand et al. (2020); however, see Ajello et al. (2020)). In addition, a spectral cut-off at energies between 50 and 100 MeV was reported by Chand et al. (2020). We speculate that this burst might possess the clear signature of proton synchrotron with BeHe cooling discussed in this paper: 1) one main peak produced by proton synchrotron in the slow cooling regime, 2) a second component compatible with fast cooling synchrotron radiation from BeHe pairs, and 3) a cutoff between 50-100 MeV corresponding to the injection limit of the BeHe pairs. Furthermore, analysis of panels c) and d) in Figure 2 of Chand et al. (2020) shows that the ratio between the low and the high peak frequency is about 100, in rough agreement with Equation (14).
In our analysis, there are additional emission processes, not dealt with here, that might modify the spectrum. We briefly discuss some of them. First, we neglected the modification of the high-energy peak by γγ absorption. Indeed, Equation (14) shows that the synchrotron peak of the pairs is marginally below the energy threshold for pair creation. Therefore, only the spectrum at energies larger than the peak can be affected by this process. Pair recombination could produce an observable characteristic around the observed frequency $\nu_{\rm obs,\gamma\gamma} \sim 2\Gamma m_e c^2/h = 2.4\times10^{22}\,{\rm Hz}\;\Gamma_2$. Yet, the time for pair recombination, approximated by $t_{e^\pm\to\gamma\gamma} = 2/(\sigma_T n_\pm v)$, is such that $t_{e^\pm\to\gamma\gamma} \gg t_{\rm dyn}$, with $t_{e^\pm\to\gamma\gamma}/t_{\rm dyn} = 2.0\times10^{2}\,\Gamma_2\,\nu_{\rm MeV}\,L_{52}^{-2}$, where we have approximated the density of pairs as $n_\pm = \dot n_\pm t_{\rm dyn}$ using the upper branch of Equation (12) and used $v = c$ for the average electron velocity. Thus, pairs do not substantially recombine for our fiducial parameters. However, we note the strong dependence on the parameters; such a signature could therefore be present in some regions of parameter space. Furthermore, we did not treat the radiation from the initial population of electrons. This was discussed in Ghisellini et al. (2020), who argued that, under the assumption that the same number of protons and electrons are accelerated, the electrons would not be seen if they achieve the same injection spectrum as the protons, since their luminosity would be a factor $m_e/m_p$ lower. If, instead of a similar injection spectrum, both electrons and protons carry the same energy, the electrons would radiate their energy in the TeV band, which should trigger a leptonic cascade. In addition, thermalization of the background electrons via synchrotron self-absorption could also change the low-energy spectrum. We expect this process to change the spectrum mostly in the optical band; therefore, our conclusions would be largely unaffected, as we have focused on the keV to GeV band.
We assumed that photo-pion interaction is inefficient in our model since the threshold for this process is not reached for the fiducial parameters, see Equation (4). However, it was argued by Florou et al. (2021) that cooling by photo-pion dominates cooling by the BeHe process. We show in Appendix D the additional conditions that the parameters should satisfy for BeHe cooling to dominate. We find that this holds true for a large set of parameters. Yet, if a substantial amount of energy is transferred from protons to charged pions by photo-hadronic interaction, the spectrum above the peak would be modified.
In a future work, we will compute a table model with SOPRANO, a code designed to simulate lepto-hadronic processes in optically thin environments (Gasparyan et al. 2022). There, we will perform direct fits of a proton synchrotron model with BeHe pair production to GRB 190114C, in order to understand whether the additional power-law and its cutoff are in agreement with the model presented here. The sample of bursts in Burgess et al. (2020) will also be studied, in order to constrain the parameters and further test the model.
To conclude, proton synchrotron models have a clear smoking-gun signature: the synchrotron radiation produced by the BeHe pairs. The emerging spectrum is composed of a main synchrotron peak produced by protons, and a power-law with index α = −1.5 extending from sub-keV energies to a few tens or hundreds of MeV. The peak frequency of this component is linearly linked to the frequency of the MeV peak. Identification of this extra power-law in spectra will help to constrain the emission mechanism of GRB jets and their parameters.

APPENDIX A

The cooling rate for a proton of Lorentz factor $\gamma_p$ is given in a simple form by Chodorowski et al. (1992), in terms of the fine structure constant $\alpha$, the classical electron radius $r_0$, the quantity $\kappa = 2\gamma_p x$ (the maximum energy of a photon with energy $x = h\nu/m_e c^2$ in the proton rest frame), and $\phi(\kappa)/\kappa^2$, the energy loss rate of a single-energy proton with Lorentz factor $\gamma_p$ in an isotropic photon background, which is approximately given by Equations (3.14) and (3.18) of Chodorowski et al. (1992). The photon distribution $n_x$ is the photon number per energy in units of the electron rest mass. In Equation (A1), the photon distribution function is evaluated at the value $\kappa/(2\gamma_p)$. It is given by (compare with Equation (8)) a broken power-law with breaks at $x_c = h\nu_c/(m_e c^2)$ and $x_m = h\nu_m/(m_e c^2)$, the cooling and injection frequencies in units of the electron rest mass energy, where $\xi = \nu_c/\nu_m$ and $u_{\nu_m}$ is the peak spectral energy density as given by Equation (7). In order to obtain analytical results that still factorize the photon spectrum, we approximate $\phi(\kappa)/\kappa^2$ by a δ-function normalized to the integral between $2 < \kappa < 500$ (those bounds are arbitrary, but we numerically checked that they encompass most of the cooling contribution),
$$\frac{\phi(\kappa)}{\kappa^2} \approx A_\pm\,\delta(\kappa - \kappa_0),$$
where we find $A_\pm = 420$ and $\kappa_0 = 40$, roughly corresponding to the position of the maximum of $\phi(\kappa)/\kappa^2$. Using this approximation in Equation (A1), and then the expression for the photon spectrum given in Equation (8) in the marginally fast ($\xi > 1$) cooling regime, yields the cooling rate; the cooling time by the BeHe process is finally obtained as $t_{\rm BeHe} = \gamma_p/|\dot\gamma_p|$.

APPENDIX B

In this appendix, we seek an approximate expression for the pair production under the assumption of marginally fast cooling, $\xi \equiv 1$. The pair production rate $\partial N_\pm/\partial t$ is given by Chodorowski et al. (1992), where $\psi(\kappa)$ is given by their Equation (2.3). Multiplying this equation by an electron energy $\gamma_\pm m_e c^2$ and integrating over electron energies corresponds to the cooling rate of a proton, whose expression is given by Equation (A1). Replacing with the expression of the synchrotron spectrum in the marginally fast cooling scenario with $\gamma_m = \gamma_c$ gives the pair production rate, where $\kappa_* \geq 2$ is such that $\kappa_*/(2\gamma_p)$ is the energy of a photon at the spectral peak in the rest frame of the proton. The rate is then rewritten using the expression for the photon spectrum given by Equation (8).

APPENDIX C

We assume that electron-positron pairs are produced with a single Lorentz factor $\gamma_{\pm,m}$, and we proceed by solving the kinetic equation describing the evolution of the pair distribution function,
$$\frac{\partial n_\pm}{\partial t} + \frac{1}{m_e c^2}\frac{\partial}{\partial\gamma_e}\big(P_e(\gamma)\,n_\pm\big) = \dot n_\pm\,\delta(\gamma - \gamma_{\pm,m}),$$
where the pair distribution function $n_\pm(\gamma_e, t)$ is a function of both time and electron energy, and $\dot n_\pm$ is the pair production rate by the BeHe process, given by Equation (13). Let $\epsilon$ be a small positive variable and let us integrate the above equation between $\gamma_{\pm,m} - \epsilon$ and $\gamma_{\pm,m} + \epsilon$. Keeping only the zeroth-order term in $\epsilon$, we obtain
$$\frac{1}{m_e c^2}\big[P_e n_\pm\big]_{\gamma_{\pm,m}-\epsilon}^{\gamma_{\pm,m}+\epsilon} = \dot n_\pm, \tag{C12}$$
describing a jump in the solution at $\gamma_{\pm,m}$.
Since particles are cooling, $n_\pm$ is null for $\gamma > \gamma_{\pm,m}$. We now solve the equation for $\gamma < \gamma_{\pm,m}$. Further assuming that electrons are fast cooling, for $\gamma$ far from $\gamma_{\pm,c} \ll \gamma_{\pm,m}$ the equation can be further simplified to
$$\frac{\partial}{\partial\gamma}\big(P_e n_\pm\big) = 0. \tag{C13}$$
Therefore, the solution is of the form
$$n_\pm(\gamma) = \frac{Q}{P_e(\gamma)\,P_e(\gamma_{\pm,m})}, \tag{C14}$$
where we introduced $\gamma_{\pm,m}$ for convenience. We use Equation (C12) to obtain
$$Q = m_e c^2\,P_e(\gamma_{\pm,m})\,\dot n_\pm. \tag{C15}$$ | 2021-12-15T02:15:52.539Z | 2021-12-14T00:00:00.000 | {
"year": 2021,
"sha1": "ce38fecee9c8781ce4c255e6bf7a0817a3ee2f27",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3847/1538-4357/ac85b7",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "ce38fecee9c8781ce4c255e6bf7a0817a3ee2f27",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
218720104 | pes2o/s2orc | v3-fos-license | Convolutional Neural Network for Behavioral Modeling and Predistortion of Wideband Power Amplifiers
Abstract —Power amplifier (PA) models, such as neural network (NN) models and multilayer NN models, suffer from high complexity. In this paper, we first propose a novel behavioral model for wideband PAs using a real-valued time-delay convolutional neural network (RVTDCNN). The input data of the model are sorted and arranged as a graph composed of the in-phase and quadrature (I/Q) components and envelope-dependent terms of the current and past signals. We design a pre-designed filter using the convolutional layer to extract the basis functions required for PA forward or reverse modeling. Then, the generated rich basis functions are modeled using a simple fully connected layer. Because of the weight-sharing characteristic of the convolutional structure, a strong memory effect does not lead to a significant increase in the complexity of the model. Meanwhile, the extraction effect of the pre-designed filter also reduces the training complexity of the model. The experimental results show that the performance of the RVTDCNN model is almost the same as that of the NN models and the multilayer NN models. Meanwhile, compared with the models mentioned above, the coefficient number and computational complexity of the RVTDCNN model are significantly reduced. This advantage is noticeable when the memory effects of the PA are increased by using wider signal bandwidths.
Nonlinearity and memory effects can lead to spectral regrowth and degradation of adjacent channel power ratio (ACPR) performance, thereby degrading the quality of communication [5]-[9]. Behavioral modeling provides an effective method for nonlinear analysis and modeling of PAs. Behavioral modeling constructs a mathematical nonlinear function by capturing the input and output responses of the system when it is driven with highly time-varying signals, in order to trigger and observe both the static nonlinear behavior of the system and its dynamics, which are often designated as memory effects [10], [11]. With the incoming 5G standard calling for a sharp increase in data transmission rate up to multiple Gbps, the signal bandwidth needs to be increased significantly, up to several hundred MHz. Accordingly, ultra-broadband PA behavioral modeling and digital predistortion (DPD) have become a current research hotspot.
Traditional behavioral models with memory effects, including the Volterra model and several compact Volterra models, have been widely used in the modeling of wideband PAs [12], [13]. However, the high correlation between polynomial bases in these models makes it difficult to improve the modeling performance [14]. Recently, the outstanding achievements of artificial neural networks (ANNs) in the field of communication have attracted the attention of researchers in the field of wireless PA modeling. Due to the excellent performance of ANNs in approximating nonlinear functions, many works published in the open literature have studied their application in the PA modeling and predistortion area [15]-[19]. When a PA exhibits complicated nonlinear characteristics and memory effects, it is difficult to achieve good modeling performance with low-complexity ANN models. This motivates the present work, which addresses the problem of how to derive broadband, low-complexity NN-based models that can provide accurate modeling performance for the forward and inverse (predistorter) models.
To address the above issues, and inspired by the emergence of artificial intelligence (AI) in the broadband communications area, advanced NN-based models are investigated [20]-[24]. Deep learning [25]-[28] in the AI field has shown excellent performance in discovering complex nonlinear relationships using labeled data. In particular, convolutional neural networks (CNNs) [29], [30] and recurrent neural networks (RNNs) [31], [32] have been proven effective in many fields, including wireless communication [33]-[38]. However, according to the results of our research, work on the use of deep learning to solve the problems of behavioral modeling and linearization of PAs [14], [39]-[42] is limited. One important reason is that regression algorithms based on RNN learning are often utilized for natural speech processing and time-series processing tasks. If they are used for modeling and linearization of PAs, although they have fewer parameters than feedforward neural networks due to their weight-sharing characteristics, their complex training algorithm makes the method complicated [43]. In addition, a CNN is usually used as a classifier, where the output layer makes a discrete decision rather than outputting a continuous signal. With the present work, however, we demonstrate for the first time that a CNN can be adapted and used in the fields of behavioral modeling and DPD synthesis of PAs. The NN model's complexity reduction mainly results from the weight-sharing characteristic of CNN structures [29], [30]. The possibility of increasing the input dimension without changing the network structure has also attracted our attention. We first apply a CNN to PA modeling and propose a real-valued time-delay convolutional neural network (RVTDCNN) behavioral model for wideband wireless PA modeling. Because a CNN cannot be used directly to build the PA model, since the input signals are not graphs, the input data are sorted and arranged as a graph composed of the in-phase and quadrature (I/Q) components and envelope-dependent terms of the current and past signals. This model then constructs a pre-designed filter using the convolutional layer to extract the basis functions required for PA forward or reverse modeling. Finally, the extracted basis functions are input into a simple fully connected layer to build the PA model. The model complexity of the RVTDCNN is significantly reduced due to the weight-sharing characteristic of the convolutional structure. Meanwhile, the extraction effect of the pre-designed filter also reduces the training complexity of the model. In order to evaluate the performance of the RVTDCNN model, we compared it with other existing models (including the NN and multilayer NN models) by experiment and simulation. The results show that the RVTDCNN achieves almost the same modeling performance as the existing state-of-the-art models while substantially reducing the model complexity in terms of the number of model coefficients.
The contributions of this paper are as follows. 1) As the signal bandwidth increases, the PA exhibits complicated nonlinear characteristics and memory effects, and it is difficult to achieve good modeling performance with low-complexity traditional behavioral models [14]. To address the problem of how to derive a broadband low-complexity model, the first CNN-based architecture for extracting the PA behavioral model is proposed to improve nonlinear modeling performance. 2) It is found that the existing NN-based models [14], [19] still have a considerable number of model coefficients. To alleviate this issue, the input dataset is constructed as a graph, and the convolutional layer is studied and designed as a pre-designed filter to extract the basis functions required for PA modeling. 3) If an RNN or CNN is used in the modeling and linearization of PAs, the parameter training has high computational complexity [43]. To reduce this complexity, a training methodology for PA modeling is proposed to accelerate the training of the PA model. The remainder of this paper is organized as follows. In Section II, the existing neural network models for PA modeling, including shallow neural network (NN) models and deep neural network (DNN) models, are briefly reviewed. Section III proposes the structure of the RVTDCNN model and describes it in detail. Section IV discusses the training process of the RVTDCNN model and analyzes its complexity. Section V describes the platform for experimental validation. Section VI reports the measurement and validation results and compares the proposed model with other models. Finally, Section VII gives the conclusions.
II. NEURAL NETWORK MODELS FOR PA MODELING

A. Shallow Neural Networks for PA Modeling
Shallow NNs with few hidden layers are used to express the output characteristics of the PA due to their relatively simple network structure and training process, as shown in Fig. 1(a) and (b) [15], [19]. A commonly used shallow NN structure includes an input layer, a hidden structure with one or two layers, and an output layer. The model in Fig. 1(a) injects the in-phase and quadrature (I/Q) components of the input signal and embeds their corresponding time-delayed values into the spatial structure of the input layer of the network to reflect the corresponding memory effects; an example is the real-valued time-delay NN (RVTDNN) model in [15]. However, hidden and related information, such as envelope-dependent terms, requires further network computation capability, which leads to a complex network structure and additional hidden layers. To this end, the structure in Fig. 1(b) is proposed to simplify the network structure by injecting the I/Q components together with important envelope-dependent terms. The corresponding models include the augmented radial basis function NN (ARBFNN) in [18] and the augmented real-valued time-delay NN (ARVTDNN) in [19]. However, as the signal bandwidth increases, the memory depth to be considered also increases, and the input dimension of the model grows significantly, resulting in a complex network structure. Overall, to provide sufficient network capacity, the shallow NN structure makes the computation relatively complex. A minimal sketch of how such an input vector can be assembled is given below.
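To make the input construction concrete, the following sketch builds an ARVTDNN-style input vector as described above. It is illustrative only: the function name and the choice of envelope powers up to the third order are assumptions based on the feature set discussed in this paper, not code from the original work.

```python
import numpy as np

def shallow_nn_input(x: np.ndarray, n: int, memory_depth: int) -> np.ndarray:
    """Input vector for shallow PA models: delayed I/Q terms (RVTDNN-style)
    plus envelope-dependent terms |x|, |x|^2, |x|^3 (ARVTDNN-style)."""
    taps = x[n - memory_depth : n + 1][::-1]   # x(n), x(n-1), ..., x(n-M)
    env = np.abs(taps)
    return np.concatenate([taps.real, taps.imag, env, env**2, env**3])

# A memory depth of M = 4 yields an input of length 5 * (M + 1) = 25,
# illustrating how the input dimension grows with the memory depth.
x = (np.random.randn(64) + 1j * np.random.randn(64)) / np.sqrt(2)
print(shallow_nn_input(x, n=10, memory_depth=4).shape)  # (25,)
```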
B. Deep Neural Networks for PA Modeling
An NN with multiple hidden layers has been proposed to improve the performance of PA modeling. Instead of a single simple hidden layer, the DNN architecture includes three or more hidden layers to mimic and approximate the nonlinearity and memory effects of the PA, as shown in Fig. 1(c). The corresponding models include the DNN model in [14]. As the number of hidden layers increases, the fitting and generalization capabilities of the NN model increase [14], so it is fair to assume that the modeling accuracy will increase with the number of hidden layers. Different from shallow neural networks, a DNN can build more complex models with relatively low complexity. From the experiment conducted in [14], such networks can achieve the same accuracy with relatively low complexity. However, when the PA exhibits complicated nonlinear characteristics and deep memory effects, it is difficult to achieve low-complexity modeling performance with a DNN. In addition, their implementation demands excessive signal processing resources as the signal bandwidth gets wider.
To further reduce the complexity of DNNs, CNNs and RNNs are alternative methods. However, regression algorithms based on RNN learning were originally designed for natural speech processing; if they are used for modeling and linearization of PAs, they tend to have high complexity. In contrast, the weight-sharing structure of the CNN has a remarkable effect in reducing the complexity of the model.
III. REAL-VALUED TIME-DELAY CONVOLUTIONAL NEURAL NETWORK
The proposed RVTDCNN model structure is shown in Fig. 2. The RVTDCNN structure includes four layers, namely one input layer, one pre-designed filter layer, one fully connected (FC) layer, and one output layer. The pre-designed filter layer is constructed using a convolutional layer and is used to effectively capture the important features and characteristics of the input data. Due to the weight sharing and data dimensionality reduction of the convolution kernels in the pre-designed filter structure, the input information can be extracted at a small network scale. The dimensions of each convolution kernel can be designed to yield low computational complexity while maintaining good prediction performance. After the pre-designed filter layer, a fully connected layer is used to integrate the valid features. The final output layer consists of two neurons with a linear activation function, corresponding to the I/Q components of the samples.
To construct the input graph of the convolutional network, the input data form a two-dimensional graph, including the I/Q components and the envelope-dependent terms of the current and past signals. The input matrix is expressed as

$$X_n = \begin{bmatrix} I_n & I_{n-1} & \cdots & I_{n-M} \\ Q_n & Q_{n-1} & \cdots & Q_{n-M} \\ |x_n| & |x_{n-1}| & \cdots & |x_{n-M}| \\ |x_n|^2 & |x_{n-1}|^2 & \cdots & |x_{n-M}|^2 \\ |x_n|^3 & |x_{n-1}|^3 & \cdots & |x_{n-M}|^3 \end{bmatrix} \tag{1}$$

The reason why the input data are arranged from a one-dimensional vector into a two-dimensional graph is to put them in a format suitable for convolutional processing. The input data items corresponding to adjacent delayed signals are arranged adjacently, which ensures that the two-dimensional convolution kernel extracts the cross-terms of the differently delayed signals. As shown in Fig. 3, the input graph $X_n$ is transformed into a volume of feature maps by the pre-designed filter layer. This is accomplished by convolving the input data with multiple local convolution kernels and adding bias parameters to generate the corresponding local features, as shown in Fig. 4. The convolution operation is expressed as

$$F_l = f_c\left(c_l * X_n + b_{c,l}\right), \quad l = 1, \ldots, L,$$

where $F_l$ represents the convolution output of the $l$-th convolution kernel with the input volume data $X_n$ arranged as a 2-D graph, as illustrated in Fig. 3; $L$ represents the number of convolution kernels; $c_l$ represents the coefficients of the $l$-th convolution kernel; $*$ denotes the convolution operation; $f_c$ is the activation function of the convolution kernels; and $b_{c,l}$ represents the bias of the $l$-th convolution kernel. Through the pre-designed filter, the rich basis function features required for PA modeling are extracted, which is proved in the appendix. A sketch of this graph construction is given below.
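The following sketch shows one way to assemble the input graph of Eq. (1). It is a minimal illustration assuming the five rows are I, Q, |x|, |x|², |x|³ and the columns hold the current sample plus its M delayed copies; the function name is hypothetical.

```python
import numpy as np

def input_graph(x: np.ndarray, n: int, memory_depth: int) -> np.ndarray:
    """Arrange the features of Eq. (1) as a 2-D graph of shape 5 x (M + 1).

    Adjacent delays occupy adjacent columns, so a small 2-D convolution
    kernel can form cross-terms between differently delayed signals."""
    taps = x[n - memory_depth : n + 1][::-1]   # x(n), x(n-1), ..., x(n-M)
    env = np.abs(taps)
    return np.stack([taps.real, taps.imag, env, env**2, env**3])

x = (np.random.randn(256) + 1j * np.random.randn(256)) / np.sqrt(2)
print(input_graph(x, n=50, memory_depth=5).shape)  # (5, 6) for M = 5
```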
Then, the basis function features extracted by the pre-designed filter are arranged into a feature vector to be injected into the FC layer. The feature vector $m$ has length $L \times B \times C$, where $B \times C$ is the dimension of each feature map. The output of the FC layer is obtained by applying its activation function to the weighted sum of $m$ plus the biases. Finally, the output layer weights and sums the output characteristics of the FC layer to acquire the network output. To ensure continuous values for the output data, we adjust the activation function $f_o$ of the output layer by setting it to the linear function $y = x$. A minimal end-to-end sketch of this architecture follows.
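As a concrete illustration, the sketch below implements the four-layer structure in PyTorch. The hyper-parameters (L = 3 kernels of size 3×3×1, T = 6 FC neurons, memory depth M = 5) are taken from the values selected later in this paper; the Tanh activations at both hidden stages and the class name are assumptions for this sketch.

```python
import torch
import torch.nn as nn

class RVTDCNN(nn.Module):
    """Input layer -> pre-designed filter (conv) -> FC layer -> linear output."""

    def __init__(self, memory_depth: int = 5, kernels: int = 3, fc_neurons: int = 6):
        super().__init__()
        self.filter = nn.Conv2d(1, kernels, kernel_size=3)   # 3x3x1 kernels
        b = 5 - 3 + 1                                        # feature-map rows B
        c = (memory_depth + 1) - 3 + 1                       # feature-map cols C
        self.fc = nn.Linear(kernels * b * c, fc_neurons)
        self.out = nn.Linear(fc_neurons, 2)                  # predicted I/Q
        self.act = nn.Tanh()

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (N, 1, 5, M+1)
        f = self.act(self.filter(x))                         # pre-designed filtering
        m = f.flatten(start_dim=1)                           # feature vector m
        return self.out(self.act(self.fc(m)))                # linear output layer

model = RVTDCNN()
print(sum(p.numel() for p in model.parameters()))            # 266 coefficients
```

With these settings the parameter count reproduces the 266 coefficients quoted for the M = 5 case in Section VI.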
$\hat I_n$ and $\hat Q_n$ represent the neuron outputs in the output layer, which correspond to the prediction of the I/Q components of the output sample; the associated weights and biases are those of the output layer. The label data contain the I/Q components of the output samples, and the output data vector is represented as in Eq. (7). The network is first trained with the Adam optimization algorithm [44]. The trained convolutional layer is then used as a pre-designed filter to extract the features of the input data. Due to the extraction effect of the pre-designed filter on the basis functions, a simple fully connected layer suffices to fit the behavioral characteristics of the PA. Therefore, during the modeling, the parameters of the pre-designed filter are fixed, and only the weights and biases of the fully connected layer and the output layer need to be adjusted, using the Levenberg-Marquardt (LM) algorithm [45]. The goal of network training is to minimize the error between the label (measured output) data and the RVTDCNN model output determined in the forward path by updating the parameters at each epoch k until convergence of the network. In the forward path, we define the mean square error (MSE) as the cost function,

$$\text{MSE} = \frac{1}{N}\sum_{n=1}^{N}\left[\left(I_n - \hat I_n\right)^2 + \left(Q_n - \hat Q_n\right)^2\right].$$

In this paper, 7,000 sets of modeling data are used for the modeling of the RVTDCNN. Each set of modeling data contains input data and label data (measured output). The input data is a two-dimensional graph with dimension 5 × (M + 1), where M is the memory depth, as shown in Eq. (1). The label data is a vector with dimension 2 × 1, composed of the I/Q components of the PA output, as shown in Eq. (7). We divide the modeling data into a training set and a test set in a ratio of 3:2. Therefore, the training set contains 4,200 sets of modeling data, and the test set contains 2,800 sets of modeling data. The training set is used to train the model, and the unseen test set is used to test the final model to verify its generalization ability. The modeling performance is described by the normalized mean square error (NMSE).
In the Adam optimization algorithm, the initialization parameters β₁ and β₂ are used to control the exponential decay rates of the moving averages of the gradient and the squared gradient, and are required to be close to 1; the 1st and 2nd moment vectors are often initialized to zero. The constant ε is used to prevent division by a vanishing 2nd moment and is set here to the default value of 10⁻⁸. We analyzed the cost function values and the corresponding NMSE performance at different learning rates, as shown in Table I. It can be found from Table I that when the learning rate is 1 × 10⁻³, the NMSE performance is almost optimal, and the corresponding MSE is 1.24 × 10⁻⁷. This learning rate also gives the fastest training speed at the best performance. Therefore, the learning rate is set to 1 × 10⁻³, and the threshold for the cost function is set to 1.2 × 10⁻⁷. The training process of the RVTDCNN model is shown in Algorithm 1; a simplified sketch is given below. To get the desired modeling performance, we need to decide the specific parameters of the RVTDCNN. The 100 MHz OFDM input signal is taken as an example for description. The peak-to-average power ratio (PAPR) of the OFDM signal is 10.4 dB. The test PA is a Doherty PA; its small-signal gain is 28 dB, and its saturation power is 44 dBm. The choice of input data affects modeling performance and model complexity, and an inappropriate input dimension will increase the number of model coefficients. According to [19], the combination of the components I, Q, |x(n)|, |x(n)|², and |x(n)|³ is the best choice of input signal to the NN, yielding low model complexity and good performance. Based on the determined input data, the appropriate size of the convolution kernel becomes a factor affecting the modeling performance. The modeling performance and the ACPR performance in DPD under different sizes and numbers of convolution kernels were verified, and the results are shown in Table II. To decouple the effects of the FC layer and the pre-designed filter settings, the number of neurons in the FC layer was set to 20, which provides sufficient network modeling capacity at the different convolution kernel sizes. In this case, the modeling performance of the RVTDCNN model is only affected by the number and size of the convolution kernels. It was found that the number of convolution kernels significantly affects the modeling performance. If the number of kernels is 2 or fewer, the NMSE performance improves as the number of kernels increases, regardless of the kernel size. If the number of kernels exceeds 3, the NMSE performance does not improve significantly with further kernels. This can be explained by the fact that too few convolution kernels cannot fully extract the features that reside in the input data. Meanwhile, when the number of kernels is kept constant, the kernel size significantly affects the number of RVTDCNN model coefficients. When the kernel size is 3×3×1, the number of model coefficients is relatively small, and the NMSE performance is also quite good. The ACPR performance shows the same trend. Therefore, considering the modeling performance and model complexity, the convolution layer contains 3 convolution kernels of size 3×3×1. The results in Table II correspond to the PA used in this paper; for different PAs, the optimal size and number of convolution kernels can be obtained through the scheme in this paper.
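The following is a minimal sketch of the training procedure of Algorithm 1, assuming the model and tensors from the previous sketch. Since the Levenberg-Marquardt algorithm is not available in PyTorch, Adam is reused here for the second (FC/output) stage as a stand-in; this substitution, the function name, and the epoch cap are assumptions.

```python
import torch

def train_rvtdcnn(model, X, Y, lr=1e-3, mse_threshold=1.2e-7, max_epochs=5000):
    """X: (N, 1, 5, M+1) input graphs; Y: (N, 2) measured I/Q labels."""
    n_train = int(len(X) * 3 / 5)                     # 3:2 train/test split
    opt = torch.optim.Adam(model.parameters(), lr=lr) # default betas, eps=1e-8
    loss_fn = torch.nn.MSELoss()
    for epoch in range(max_epochs):
        opt.zero_grad()
        loss = loss_fn(model(X[:n_train]), Y[:n_train])
        loss.backward()
        opt.step()
        if loss.item() < mse_threshold:               # cost-function threshold
            break
    # Freeze the trained convolutional layer: it becomes the fixed
    # pre-designed filter; only FC/output parameters would then be refit
    # (by LM in the paper, by a stand-in optimizer here).
    for p in model.filter.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        return loss_fn(model(X[n_train:]), Y[n_train:]).item()  # test MSE
```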
The neuron number in the FC layer is also an important factor affecting modeling performance and model complexity.
To obtain the minimum number of neurons in the FC layer that achieves the required performance, based on the determined input data and pre-designed filter structure, the modeling performance for different numbers of FC-layer neurons was verified, and the results are shown in Fig. 5. It can be found that when the number of neurons is less than 6, the NMSE performance of the model drops dramatically, meaning that too few neurons cannot provide the required network modeling capacity. When the number of neurons is greater than 6, the NMSE performance of the model is not significantly improved. Considering the model complexity and modeling performance, the number of neurons in the FC layer was set to 6. The activation functions commonly used in CNNs are the sigmoid function, the rectified linear unit (ReLU), the exponential linear unit (ELU), the leaky ReLU, and the hyperbolic tangent sigmoid (Tanh) function, defined in Eq. (10). To find the best modeling performance, the functions mentioned above were used to train the RVTDCNN model, and the results are shown in Table III; they can be summarized as follows: the Tanh function solves the problem better than the others. The candidate functions are restated in code below.
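Since the definitions of Eq. (10) did not survive extraction, the candidate activations are restated below in code form; the default slope parameters are the conventional choices and are assumptions here.

```python
import numpy as np

# The five candidate activation functions compared in Table III.
def sigmoid(x):            return 1.0 / (1.0 + np.exp(-x))
def relu(x):               return np.maximum(0.0, x)
def elu(x, a=1.0):         return np.where(x > 0, x, a * (np.exp(x) - 1.0))
def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)
def tanh(x):               return np.tanh(x)   # best performer in Table III
```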
B. Complexity Analysis of RVTDCNN
The complexity analysis aims to evaluate the capability of different models and to assess whether the training procedure of the RVTDCNN is simpler than that of other typical models. Model complexity here refers to both the number of coefficients and the number of floating-point operations (FLOPs). The comparison of complexity, including the total number of coefficients of the network structure and the FLOPs, is shown in Table IV. Based on the theory and the experimental data, the RVTDCNN proposed in this paper outperforms traditional models thanks to its convolutional computation. The following is the specific calculation of the complexity of the RVTDCNN.
Based on the RVTDCNN, it can be stated that the convolutional structure decreases the model size, which makes the extraction of the model's features more efficient. Compared to a standard feedforward network, the number of coefficients the convolutional structure needs to generate the same feature points is much smaller due to weight sharing, which reduces the coefficient complexity of the RVTDCNN model. The total number of coefficients is equal to the sum of the number of weights and the number of biases between layers. Thus, the number of coefficients of the pre-designed filter layer is

$$N_{\rm filter} = (r \times s \times z + 1) \times L,$$

where the kernel size is $r \times s \times z$ and the number of kernels is $L$. The number of coefficients of the FC layer is

$$N_{\rm FC} = (B \times C \times L + 1) \times T,$$

where $B \times C \times L$ denotes the size of the output tensor of the pre-designed filter and $T$ is the number of neurons of the FC layer. The number of coefficients of the output layer is

$$N_{\rm out} = (T + 1) \times T_{\rm out},$$

where $T_{\rm out}$ represents the number of neurons of the output layer. In summary, the number of coefficients of the RVTDCNN is

$$N_{\rm RVTDCNN} = N_{\rm filter} + N_{\rm FC} + N_{\rm out}.$$
For a typical generalized memory polynomial (GMP) model, the number of complex coefficients equals the number of basis-function terms considered, and the number of real coefficients is twice that. The ARVTDNN in [19], the RVTDNN in [15] and the DNN model in [14] are all fully connected networks. The number of coefficients of a fully connected network with layer widths $n_0, n_1, \ldots, n_{H+1}$ (including the input and output layers) is

$$N_{\rm FCN} = \sum_{i=0}^{H} (n_i + 1) \times n_{i+1}.$$

For the LSTM model in [41], the number of model coefficients of the LSTM layer is

$$N_{\rm LSTM} = 4\left[(N_{\rm in} + I) \times I + I\right], \tag{17}$$

where $N_{\rm in}$ is the number of inputs of the LSTM layer at each moment and $I$ is the number of neurons in the LSTM layer. These counts can be compared programmatically, as in the sketch below.
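A small sketch, under the reconstructed formulas above, that tallies the coefficient counts; the helper names are hypothetical, and the RVTDCNN figure reproduces the 266 coefficients quoted in Section VI for M = 5.

```python
def rvtdcnn_coeffs(M=5, r=3, s=3, z=1, L=3, T=6, T_out=2):
    """Pre-designed filter + FC layer + output layer coefficient count."""
    B, C = 5 - r + 1, (M + 1) - s + 1      # 'valid' feature-map size B x C
    return (r * s * z + 1) * L + (B * C * L + 1) * T + (T + 1) * T_out

def fcn_coeffs(widths):
    """Fully connected nets (RVTDNN / ARVTDNN / DNN): sum over layer pairs."""
    return sum((a + 1) * b for a, b in zip(widths, widths[1:]))

def lstm_layer_coeffs(n_in, I):
    """LSTM layer: four gates, each with input, recurrent and bias weights."""
    return 4 * ((n_in + I) * I + I)

print(rvtdcnn_coeffs())             # 266
print(fcn_coeffs([30, 15, 15, 2]))  # example shallow NN, two hidden layers
```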
Besides the total number of coefficients, the number of FLOPs is also introduced to assess the network complexity. For the convolutional process, taking the complexity of the activation function into account, a formula for the number of FLOPs can be derived; corresponding expressions hold for the fully connected networks (RVTDNN in [15], DNN in [14]) and, again considering the complexity of the activation functions, for the LSTM layer of the LSTM model in [41]. According to these formulas, the complexity expressions of the RVTDCNN model and the other models are listed in Table VI.

V. EXPERIMENTAL SETUP

The experimental setup in Fig. 6 is used to evaluate the linearization performance of the proposed model. The test signal is a 100 MHz OFDM signal with a PAPR of 10.4 dB, generated in MATLAB on a personal computer (PC). The OFDM signal is composed of multiple OFDM symbols, generated by 16-QAM symbols modulated onto 64 subcarriers and then filtered by a raised-cosine filter with a roll-off factor of 0.1. The test signal was first downloaded into the arbitrary waveform generator (AWG) 81180A. Then, the AWG transmits the generated baseband signal through a cable to the performance signal generator (PSG) E8267D, which implements digital-to-analog conversion (DAC) and frequency up-conversion. The modulation frequency in the PSG is 2.14 GHz. The RF signal generated by the PSG is then fed into the PA.
The PA output signal is fed into a coupler, whose through port is connected to a high-power load. In the feedback loop, the output of the coupler is captured by the oscilloscope (MSO) 9404A. The Keysight 89600 Vector Signal Analyzer (VSA) software running on the MSO then analyzes the captured RF signal, including frequency down-conversion and analog-to-digital conversion (ADC). The sampling rate is set to 625 MHz. The output baseband signal captured by the VSA is downloaded to the PC, and the acquired input and output signals are processed in Python on the PC to construct the behavioral model of the PA.
VI. MEASUREMENT AND VALIDATION RESULTS

A. Modeling Performance
The RVTDCNN, as shown in Fig. 2, is used to illustrate the performance of this behavioral model. The proposed modeling method and the other modeling methods are evaluated herein using the NMSE performance of Eq. (9) and the ACPR performance in DPD. The optimal network structures of the different methods for 100 MHz are shown in Table V. Fig. 7 shows the convergence curve of the RVTDCNN model training process. As shown in Algorithm 1, the threshold value of the cost function is set to 1.2 × 10⁻⁷; when the cost function value of the network falls below the threshold, the network has converged. It was found that model convergence requires only 83 iterations (LM algorithm), so the RVTDCNN model has low training complexity. At the same time, the model converges synchronously on the training set and the test set, and the MSE is almost the same, even though the test data were never used in training. Therefore, the model does not have overfitting problems. DPD is one of the most effective ways to alleviate the nonlinearity and memory effects of a PA [2]. The DPD model is the inverse model of the PA. Based on the indirect learning structure [19], the DPD can be implemented by placing the RVTDCNN model on the main path, as shown in Fig. 9; the trained DPD model is then used to update the DPD on the main path for the linearization of the PA (a sketch follows below). The input data of the DPD model is a two-dimensional matrix as in Eq. (1), including the I/Q components and envelope-dependent terms of the current and past samples of the PA output. The label data of the DPD model are the I/Q components of the input signal, as in Eq. (7). Taking the 100 MHz signal as an example, when implementing DPD, the parameters and training methodology of the RVTDCNN model are the same as those for PA modeling. Fig. 10 shows the output spectrum after linearization of the PA using the RVTDCNN model with the 100 MHz OFDM source signal. The same dimension was used to derive the inverse model, and it was found that the RVTDCNN inverse model (DPD model) significantly reduces the PA distortion when cascaded with the nonlinear PA. Using the RVTDCNN to linearize the PA, the ACPR is improved from −31 dBc to −46 dBc. It can be deduced that as the signal bandwidth increases, the memory length required for modeling increases accordingly. The same network structure was used to model PAs with different signal bandwidths, resulting in good modeling performance. If the signal bandwidth is further increased, good modeling performance can be achieved by increasing the number of convolution kernels and neurons in the fully connected layer. It can be found that for the traditional ANN models, such as the ARVTDNN, a strong memory effect leads to a rapid increase in model complexity: as the memory depth increases from 2 to 5, the number of model coefficients of the ARVTDNN increases to 563. For the proposed model, as the memory depth increases from 2 to 5, the number of model coefficients is 266, which can be considered reasonable, being about half the number of model coefficients of the ANN model. The coefficient number of the LSTM model does not increase with the signal bandwidth, but it is still about twice that of the RVTDCNN model. In other words, as the signal bandwidth increases, the proposed model has more advantages in model complexity and fewer model coefficients. The ACPR performance of the RVTDCNN also confirms this conclusion. Fig. 13 shows the output spectrum after linearization of the PA with the RVTDCNN model for the 200 MHz OFDM source signal. The results show that under a wide signal bandwidth, the RVTDCNN model still has a significant linearization effect on the power amplifier.
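The following sketch outlines the indirect-learning step referenced above: a post-inverse RVTDCNN is trained with graphs built from the gain-normalized PA output as input and the PA-input I/Q as labels, and the trained network is then copied to the main path as the predistorter. Function and variable names are hypothetical, and Adam again stands in for the LM optimizer.

```python
import torch

def train_postinverse(model, out_graphs, in_iq, epochs=500, lr=1e-3):
    """out_graphs: (N, 1, 5, M+1) graphs from the normalized PA output;
    in_iq: (N, 2) I/Q of the PA input, used as labels (indirect learning)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(out_graphs), in_iq)
        loss.backward()
        opt.step()
    return model  # copied in front of the PA as the predistorter
```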
VII. CONCLUSION
In this paper, the RVTDCNN is proposed for modeling the nonlinearity and memory effects of wideband PAs. The RVTDCNN extracts the effective features of the two-dimensional input graph data with a convolutional structure. A Doherty PA, driven with OFDM signals from 40 MHz to 200 MHz, was tested to verify the effectiveness of the RVTDCNN model. For the PA with the 100 MHz signal, under different cases, the NMSE reaches about −36 dB, with an ACPR around −46 dBc with DPD. The results show that the RVTDCNN still models well in the presence of I/Q imbalance and DC offset, which verifies that the proposed model has strong adaptability. Compared with the existing shallow NN and DNN in terms of the number of model coefficients and FLOPs, the proposed RVTDCNN is verified to reduce the number of model coefficients by more than 50% under different signal bandwidths.
Extraction Effect of Pre-Designed Filter on Basis Function
Because the pre-designed filter can completely capture the basis function features required for modeling, the number of neurons required in the fully connected layer is | 2020-01-09T09:10:08.853Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "aaa30d5c08406e697d66caccd1de99ed6050685b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2005.09848",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a75e8e88e9c38b980f397cb420bfddd7a8ed1101",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Engineering"
]
} |
258665138 | pes2o/s2orc | v3-fos-license | Characterization of Physicochemical, Biological, and Chemical Changes Associated with Coconut Milk Fermentation and Correlation Revealed by 1H NMR-Based Metabolomics
Fermentation of milk enhances its nutritional and biological activity through the improvement of the bioavailability of nutrients and the production of bioactive compounds. Coconut milk was fermented with Lactiplantibacillus plantarum ngue16. The aim of this study was to evaluate the effect of fermentation and cold storage for 28 days on the physicochemical characteristics, shelf life, and antioxidant and antibacterial activities of coconut milk, as well as its proximate and chemical composition. The pH of fermented milk decreased from 4.26 to 3.92 by the 28th day of cold storage. The viable cell count of lactic acid bacteria (LAB) in fermented coconut milk increased significantly during fermentation and the cold storage period (1 to 14 days), reaching 6.4 × 10⁸ CFU/mL, and then decreased significantly after 14 days to 1.6 × 10⁸ CFU/mL at 28 days. Yeasts and molds in fermented coconut milk were only detected on the 21st and 28th days of cold storage, ranging from 1.7 × 10² to 1.2 × 10⁴ CFU/mL, respectively. However, the growth of coliforms and E. coli was observed from the 14th until the 28th day of cold storage. The fermented coconut milk demonstrated strong antibacterial activity against Staphylococcus aureus, Bacillus subtilis, Escherichia coli, Cronobacter sakazakii, Bacillus cereus, and Salmonella typhimurium compared to fresh coconut milk. Fermented coconut milk had the greatest 1,1-diphenyl-2-picrylhydrazyl (DPPH) and ferric reducing antioxidant power (FRAP) values, with 67.1% and 61.961 mmol/g at day 14 of cold storage, respectively. Forty metabolites were detected in fermented and pasteurized coconut milk by proton nuclear magnetic resonance (1H NMR) metabolomics. The principal component analysis (PCA) showed a clear difference between the fermented and pasteurized coconut milk as well as between the studied cold storage days. The metabolites responsible for this variation were ethanol, valine, GABA, arginine, lactic acid, acetoin, alanine, phenylalanine, acetic acid, methionine, acetone, pyruvate, succinic acid, malic acid, tryptophan, uridine, uracil, and cytosine, which were higher in fermented coconut milk. However, sugars and other identified compounds were higher in fresh coconut milk. The findings of this study show that fermentation of coconut milk with L. plantarum ngue16 has high potential to extend its shelf life and improve its biological activities as well as other beneficial nutrients.
Introduction
Fermentation is an ancient technology for extending food's shelf life and improving its nutritional and organoleptic properties. During fermentation, several biochemical changes may occur, resulting in a changed ratio of nutritive and antinutritive components, which influences the product's bioactivity and digestibility [1]. This bioprocess has recently been used in the food, chemical, and pharmaceutical sectors to produce and extract bioactive chemicals. Furthermore, bioactive compounds produced during the fermentation process (vitamins, antioxidant compounds, peptides, and phenolic compounds) boost the antioxidant and antibacterial activity of foodstuffs [1]. Milk is a nutrient-rich beverage that humans consume to satisfy their nutritional needs. It is sourced from cows and goats, and from plant sources such as soy, rice, almonds, and coconut [2]. Nowadays, the nutritional profile of plant-based drinks, including plant-based milk, has become relevant to consumers for various reasons. These include allergies to cow's milk, lactose intolerance, and concerns about the presence of growth hormones or antibiotic residues in cow's milk [2]. The fermentation of nondairy milk products is mostly performed using lactic acid bacteria (LAB) to generate probiotic fermented milks and has become increasingly popular. The manufacturing of high-quality fermented milk containing probiotic bacteria remains a formidable challenge due to technological issues, legislative aspects, the ability of probiotics to multiply in food, product safety, uncertain health benefits, and consumer demands.
LAB are used to produce fermented milk products such as yoghurt and cheese; fermented vegetables, including sauerkraut, cucumber pickles, and olives; as well as fermented fish and meat [3]. LAB play a crucial role in food fermentation and in preservation against pathogenic microorganisms. This is owing to the capacity of LAB to produce organic acids, such as lactic acid and acetic acid, as well as to enhance the texture and taste of many milk and fermented food items and preserve their nutritional content. Fermented milk products benefit human health via a variety of mechanisms. For instance, Lb. helveticus produces peptides from the casein milk protein that have antihypertensive, immunomodulatory, and anticancer properties [4]. Several studies have investigated the development and viability of probiotics in fermented plant-based milk products throughout the fermentation and storage processes. Due to its high nutritional content, which promotes the development of probiotic bacteria, coconut milk is suggested as a substitute for dairy milk in probiotic formulations [4]. Coconut milk intake is seldom connected to allergic responses; it aids digestion, nourishes the skin and hair, and is associated with anticarcinogenic, antimicrobial, antioxidative, and antiviral effects [5]. Coconut oil is high in lauric acid and in phenolic compounds released by the fermentation of coconut milk [6]. Coconut milk may also be made into other dairy-style products, such as yoghurt, which provides substantial advantages to consumers [7]. Fermented coconut milk may be stored for a long period because of the presence of natural preservatives accumulated during fermentation, including organic acids, peptides, and phenolic compounds. Additionally, fermented coconut milk provides various health advantages over fresh coconut milk and involves many biological and chemical changes.
The study of all the metabolites produced by an organism when it is exposed to various conditions is known as metabolomics [8]. Many cutting-edge technologies have been employed in metabolomics for the high-throughput study of targeted and untargeted metabolites. The simultaneous identification of several classes of secondary metabolites as well as numerous primary metabolites makes nuclear magnetic resonance (NMR), integrated with multivariate data processing, an appropriate method for these investigations. Additionally, this technique requires straightforward sample preparation and is quick and repeatable. The NMR technique has been used for metabolic profiling and characterization of various fermented samples. There is, however, no information available regarding the bioactivity and metabolic differentiation of fermented and nonfermented coconut milk at varied cold storage times using this advanced technology. The purpose of this study is to address the knowledge gap on the chemical and biochemical properties of fermented coconut milk. To this aim, we investigated the changes in pH, acidity, and the most significant microbial groups during cold storage of fermented and pasteurized coconut milk. We characterized the chemical composition of both types of milk as well as their antioxidant and antibacterial activities. Finally, we used an NMR-based metabolomics approach to explore the composition of the methanol extract and possible correlations with the biological activity of fermented coconut milk.
Preparation of Fermented Coconut Milk
Fermentation of coconut milk was carried out following a previous method with some modifications [9]. The fresh coconut milk was filtered twice through cheesecloth and subjected to homogenization at 40/4 MPa, followed by pasteurization at 90 °C for 3 min and chilling at 4 °C before fermentation. Lactiplantibacillus plantarum ngue16 was activated (1%) in MRS broth incubated at 37 °C for 24 h and centrifuged at 5600 rpm at room temperature; the cells were washed twice with 0.85% (w/v) saline solution and then suspended in the same solution. The fermentation of coconut milk was performed by inoculating 2% (v/v) of the activated probiotic culture, at an initial inoculum density of approximately 10⁶ CFU mL⁻¹, into the coconut milk. The control was coconut milk prepared under the same conditions without the addition of the starter culture. The samples were incubated at 33 °C for 15 h, and after fermentation, the samples were stored at 4 °C for one month for further analysis.
pH and Total Titratable Acidity (TTA) during Storage
The pH of fermented milk samples was measured with a pH meter (Mettler-Toledo®, Schwerzenbach, Switzerland). The titratable acidity was determined using the AOAC method [10]. A total of 25 mL of the fermented coconut milk or pasteurized coconut milk was measured into conical flasks, 2 mL of 1% phenolphthalein indicator was added to each sample, and the samples were titrated with 0.1 M NaOH to the first permanent pink color. The acidity was reported as the percentage of lactic acid by weight, as illustrated below.
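As a worked illustration of the AOAC titration calculation, the sketch below converts a titration volume into % lactic acid; the function name and the example titre are hypothetical, while 0.090 g/mmol is the milliequivalent weight of lactic acid (MW 90.08, one acidic proton).

```python
def percent_lactic_acid(v_naoh_ml: float, sample_ml: float = 25.0,
                        naoh_molarity: float = 0.1) -> float:
    """Titratable acidity as % lactic acid (w/v) from an NaOH titration."""
    grams_lactic = v_naoh_ml * naoh_molarity * 0.090   # mmol NaOH x g/mmol
    return grams_lactic / sample_ml * 100.0

# e.g. 19.4 mL of 0.1 M NaOH for a 25 mL sample -> ~0.70 % lactic acid,
# the minimum acidity recommended for yogurt cited later in this paper.
print(round(percent_lactic_acid(19.4), 2))
```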
Microbial Analysis
The enumeration of LAB was performed to determine the viable cell count in the fermented coconut milk after prolonged cold storage, following a previous method [11]. Fermented coconut milk and pasteurized coconut milk (25 mL) were subjected to serial dilution with 225 mL of maximum recovery diluent (MRD). Serial dilution was continued until a 10⁻⁶ dilution was obtained. The sample (0.1 mL) was inoculated on MRS agar, and the plates were incubated anaerobically at 37 °C for 48 h using an anaerobic jar and AnaeroGen sachets (Oxoid). MRS plates showing 30-300 colonies were used to determine the viable cells, expressed as log CFU/mL, using the formula N (CFU/mL) = ΣC / (v (n₁ + 0.1 n₂) d), where:
ΣC: sum of colonies on the counted plates; v: volume applied to each plate (0.1 mL); n₁: number of plates counted at the first dilution; n₂: number of plates counted at the second dilution; d: dilution from which the first count was obtained. For the mold and yeast count, the above procedure was repeated by the spread plate method using dichloran rose-bengal chloramphenicol (DRBC) agar, with incubation at 25 °C for 72 h; coliforms and E. coli were enumerated using coliform agar [11]. The fermented coconut milk and pasteurized milk were counted in triplicate every week for four weeks. The weighted-mean calculation is illustrated below.
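A minimal sketch of the weighted mean colony count defined above; the plate counts and dilutions in the example are invented for illustration.

```python
def cfu_per_ml(counts_d1, counts_d2, d1: float, v: float = 0.1) -> float:
    """N = sum(C) / (v * (n1 + 0.1 * n2) * d), with d the first dilution."""
    total = sum(counts_d1) + sum(counts_d2)
    n1, n2 = len(counts_d1), len(counts_d2)
    return total / (v * (n1 + 0.1 * n2) * d1)

# Two plates at the 10^-6 dilution and two at 10^-7:
print(f"{cfu_per_ml([250, 270], [25, 28], d1=1e-6):.2e} CFU/mL")  # ~2.60e+09
```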
Proximate Analysis of Fermented and Fresh Coconut Milk
The chemical composition of fermented coconut milk and pasteurized coconut milk was determined. Oven drying was utilized to determine the moisture content. The protein content was determined using the Kjeldahl method with a 6.25 conversion factor. After acid hydrolysis, the fat content was determined by solvent extraction. The ash content was measured using a muffle furnace. Carbohydrates were computed by difference.
Extraction of Bioactive Compounds from Fermented and Nonfermented Coconut Milks
A volume of 20 mL of fermented or pasteurized coconut milk was combined with methanol (100 mL), and the mixture was shaken at room temperature for 3 h using an orbital shaker at 90 rpm. The mixture was filtered through Whatman paper no. 4, and the liquid extract was evaporated under vacuum at 50 °C using a rotary evaporator (Rotavapor R-114, Buchi, Switzerland) before being freeze dried to remove any residual moisture. The dried extracts were kept in an amber bottle at 4 °C before use. All extractions were performed in 5 replicates. The concentrated extract was volumetrically adjusted with methanol to 10 mL in a volumetric flask for further bioassay analyses.
Determination of Antibacterial Activity by Microtiter Plate Assay
The assay was performed using three gram-positive bacterial strains (S. aureus ATCC® 25923™, B. subtilis ATCC® 6633™, and B. cereus ATCC® 33019™) and three gram-negative bacteria (E. coli O157:H7 IMR E91, C. sakazakii ATCC® 25944™, and S. typhimurium ATCC® 14028™). Both fermented and nonfermented milk extracts during cold storage were tested against the selected pathogenic bacteria in a microtiter plate assay. A 100 µL aliquot of Mueller-Hinton broth (MHB) containing 10⁶ CFU/mL was placed in each well of a 96-well plate, followed by 100 µL of extract (10 mg/mL) in MHB with 10% dimethyl sulfoxide (DMSO). After measuring absorbance at 630 nm using an ELISA plate reader (Epoch, BioTek, Winooski, VT, USA), the plates were incubated at 37 °C for 24 h before being measured again. The positive control was MHB with 10% DMSO and pathogenic bacteria, while the negative control was fermented or pasteurized coconut milk extract with MHB and 10% DMSO without bacteria [12]. The results were then interpreted using the following formula: percentage of inhibition = (24 h control − 24 h sample)/0 h control × 100; a short calculation sketch is given below.
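A one-function sketch of the inhibition formula above, with invented OD₆₃₀ readings purely for illustration.

```python
def percent_inhibition(od0_control: float, od24_control: float,
                       od24_sample: float) -> float:
    """Inhibition (%) = (24 h control - 24 h sample) / 0 h control * 100."""
    return (od24_control - od24_sample) / od0_control * 100.0

# Illustrative OD630 readings: the control grows from 0.80 to 0.95, while
# the extract-treated well reaches only 0.52 after 24 h.
print(round(percent_inhibition(0.80, 0.95, 0.52), 1))  # 53.8 % inhibition
```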
Antibacterial Activity by Well Diffusion Method
The antibacterial activity of the fermented and nonfermented extracts against three gram-positive bacteria (S. aureus ATCC® 25923™, B. subtilis ATCC® 6633™, and B. cereus ATCC® 33019™) and three gram-negative bacteria (E. coli O157:H7 IMR E91, C. sakazakii ATCC® 25944™, and S. typhimurium ATCC® 14028™) was determined using the well diffusion assay, following a previous method [3]. Wells of 6 mm diameter were punched in the MHA plate and filled with 100 µL of the fermented or nonfermented extract, which was allowed to be absorbed into the agar. The pathogenic bacteria in Mueller-Hinton broth (MHB) containing 10⁶ CFU/mL were spread on the surface of the agar using a cotton swab, and methanol was used as the control. The plates were incubated at 37 °C for 24 h. The antibacterial activity of the extracts against the tested bacteria was established by measuring the inhibition zone diameters (mm) around the wells. All experiments were performed in triplicate.
Antioxidant Activity of Fermented Coconut Milk

DPPH Radical Scavenging Activity Assay

The radical scavenging activity was determined by a previously described method [12] with some modifications. Briefly, 100 µL of DPPH (5.9 mg in 100 mL of ethanol) was added to 50 µL of the extract (5 mg/mL) in a 96-well microtiter plate. The mixture was then shaken and incubated in a dark chamber at room temperature for 30 min. Absorbance was measured at 517 nm. The control was methanol and DPPH solution. The scavenging activity was determined according to the following equation: scavenging activity (%) = (A_control − A_sample)/A_control × 100.

Ferric Reducing Antioxidant Power (FRAP) Assay

The determination of FRAP was carried out according to a previous method [13] with modification. The FRAP reagent was prepared using 300 mmol/L acetate buffer (pH 3.6), 10 mmol/L TPTZ (2,4,6-tripyridyl-s-triazine) in 40 mmol/L HCl, and 20 mmol/L FeCl₃·6H₂O solution in the ratio of 10:1:1 to give the working reagent. A total of 20 µL of the extract (5 mg/mL) was mixed with 200 µL of FRAP solution in a 96-well microtiter plate and incubated for 45 min at room temperature in a dark room. The absorbance was measured at a wavelength of 595 nm using a spectrophotometer (SPECTROstar NANO, BMG LabTech, Ortenberg, Germany). Known concentrations of Trolox were used to prepare a standard curve with linear regression, against which the extracts were compared. Results were expressed as mmol of Trolox equivalents per g of dry sample (mmol TE/g DW). Both calculations are sketched below.
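The following sketch shows both conversions: the DPPH scavenging percentage and reading FRAP values off a Trolox standard curve by linear regression. All numeric inputs are invented; only the formulas mirror the text above.

```python
import numpy as np

def dpph_scavenging(a_control: float, a_sample: float) -> float:
    """Scavenging activity (%) = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

def frap_trolox_equivalents(sample_abs, std_conc, std_abs):
    """Map 595 nm absorbances to Trolox equivalents via the standard curve."""
    slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear regression
    return (np.asarray(sample_abs) - intercept) / slope

print(round(dpph_scavenging(0.92, 0.30), 1))   # 67.4 % scavenging
std_c = np.array([0.0, 0.2, 0.4, 0.8])         # Trolox, mmol
std_a = np.array([0.05, 0.21, 0.37, 0.69])     # absorbance at 595 nm
print(frap_trolox_equivalents([0.45], std_c, std_a).round(2))  # [0.5]
```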
NMR Measurement and Data Preprocessing
The metabolite variation during fermentation and cold storage was investigated according to a previous method [8]. In Eppendorf tubes, 5 mg of each freeze-dried extract was mixed with 650 µL of DMSO-d6 containing 0.1% trimethylsilyl propionic acid (TSP). The mixture was vortexed for 1 min and ultrasonicated for 15 min at room temperature. After that, the mixture was centrifuged for 10 min at 13,000 rpm. A supernatant volume of 550 µL was transferred to an NMR tube and submitted for NMR analysis. NMR spectra were collected using a 500 MHz Varian INOVA NMR spectrometer (Varian Inc., Palo Alto, CA, USA) set to 499.887 MHz at room temperature (25 °C) with an internal lock of DMSO-d6. To minimize water (H₂O) signals, the presaturation (PRESAT) pulse sequence was used on all samples, and the acquisition duration for each spectrum was 3.54 min with 64 scans. The J-resolved spectrum was recorded in 50 min 18 s (i.e., 8 scans per 128 increments for the spin-spin coupling constant axis with a spectral width of 66 Hz, and 8 K points for the chemical shift axis with a spectral width of 5000 Hz), and the relaxation delay was 1.5 s. Chenomx software (version 8.2, Alberta, Canada) was used for automatic phasing and baseline adjustment of all sample spectra with a reliable setting. The 1H NMR spectra were also binned with the Chenomx software. All spectra were automatically binned to ASCII files with the same parameters (0.04 ppm spectral bins), covering the region between 0.5 and 10.0 ppm. The chemical shift range of 4.50-5.00 ppm, which is related to the water signal, was omitted, and a total of 222 chemical shift variables was generated for each 1H NMR spectrum. An equivalent binning step is sketched below.
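For readers without Chenomx, the binning step can be approximated as below; the bin edges, water window, and synthetic spectrum are stand-ins, and the resulting variable count only roughly reproduces the 222 reported.

```python
import numpy as np

def bin_spectrum(ppm, intensity, width=0.04, lo=0.5, hi=10.0,
                 water=(4.50, 5.00)):
    """Integrate a 1H NMR spectrum into fixed-width chemical-shift bins and
    drop bins whose centers fall in the water region."""
    edges = np.arange(lo, hi + width / 2, width)
    sums, _ = np.histogram(ppm, bins=edges, weights=intensity)
    centers = (edges[:-1] + edges[1:]) / 2
    keep = ~((centers >= water[0]) & (centers <= water[1]))
    return sums[keep]

ppm = np.linspace(0.5, 10.0, 65536)                      # synthetic axis
print(bin_spectrum(ppm, np.random.rand(ppm.size)).size)  # ~222 variables
```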
Statistical Analyses
All trials were carried out in five replicates (n = 5), and data are reported as mean ± standard deviation. Minitab (version 17) was used to perform one-way analysis of variance (ANOVA) to establish statistical significance. Tukey's test was used to determine significant differences between means, and values with p < 0.05 were considered significant. After binning the NMR spectra with Chenomx, SIMCA-P software (v. 14.0, Umetrics, Umeå, Sweden) was used to perform multivariate data analysis (MVDA) using principal component analysis (PCA) and partial least squares (PLS) regression with Pareto scaling. The NMR chemical shifts were the variables, and the sample names were the observations in the generated data matrix. The heat map and the Pearson correlation analysis among all metabolites, as well as the variable importance in projection (VIP) scores showing the significant metabolites, were produced using MetaboAnalyst 5.0, which is freely available online (http://www.metaboanalyst.ca, accessed on 15 July 2022). An equivalent PCA step is sketched below.
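As an open-source stand-in for the SIMCA-P workflow, the sketch below applies Pareto scaling and PCA to a binned-spectra matrix; the random data and component count are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

def pareto_scale(X: np.ndarray) -> np.ndarray:
    """Mean-center each variable and divide by the square root of its
    standard deviation (Pareto scaling)."""
    return (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))

X = np.random.rand(10, 222)            # 10 samples x 222 binned variables
scores = PCA(n_components=2).fit_transform(pareto_scale(X))
print(scores.shape)                    # (10, 2) score coordinates for PC1/PC2
```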
pH and Total Titratable Acidity (TTA) during Cold Storage
To evaluate the fermentation properties of fermented coconut milk compared to pasteurized coconut milk, the pH and total titratable acidity (TTA) were evaluated throughout cold storage (Table 1). A gradual decrease in pH values and an increase in titratable acidity values were observed in the fermented coconut milk during cold storage. The pH of fermented milk decreased significantly (p < 0.05) from 4.26 to 3.92 by the 28th day of cold storage. In agreement with this finding and previous research, the pH of commercially fermented milk is between 3.9 and 4.2 [14]. In addition, the pH of pasteurized milk slightly decreased from 6.04 to 5.09 by the 28th day of cold storage. The longer the fermented coconut milk was preserved, the lower its pH value became, due to the activity of the LAB utilized in its preparation.
Similarly, the average pH of coconut milk-based yoghurt was about 4.54, owing to the action of the probiotic culture [15]. In addition, a study found a significant decrease in the pH of coconut milk upon fermentation [16]. The pH value of fermented milk decreased during cold storage from 4.69 to 4.04 over 28 days. Furthermore, Lb. casei has the ability to produce organic acids in yogurt, causing a decrease in the pH from 4.27 to 4.03 after 60 days of cold storage [16]. Additionally, according to a number of studies, the pH value of fermented milk and pasteurized milk is highly correlated with the cold storage duration. Fauziah et al. [17] found that prolonged cold storage of pasteurized milk in a refrigerator led to a decrease in pH; at 9 days of cold storage, the pH was at its lowest due to an increase in lactic acid production by acid-forming bacteria. To summarize, the pH of fermented milk varies during cold storage depending on the initial pH level, storage temperature and duration, and probiotic culture activity.
The acidity of fermented and pasteurized milk was determined using the titratable acidity value. The total acidity of fermented milk ranged from 0.7 to 1.1%, whereas that of pasteurized milk ranged from 0.15 to 0.41%, with a significant difference (p < 0.05). The International Dairy Federation has recommended a minimum acidity of 0.70% in yogurt [18]. A decrease in pH is accompanied by an increase in titratable acidity during cold storage, due to the ability of LAB to utilize carbohydrates to produce lactic acid [19]. A rise in titratable acidity (0.7 to 0.9%) with a fall in pH was observed throughout the 28-day cold storage of fermented milk. However, it has been reported that the taste of fermented milk is acceptable when the titratable acidity is maintained at 70-110 °T [20]. Our findings are comparable with those reporting a substantial difference in the titratable acidity of fermented soymilk before and after cold storage, 36.02 °T and 77.50 °T, respectively [21]. In contrast, during the cold storage of soy probiotic yoghurt at 4 °C, no change was observed in the pH or titratable acidity of the fermented products, due to the lower activity of the probiotic culture during refrigerated storage [22]. Table 2 shows how the acidity of pasteurized milk changes during the cold storage period at 4 °C. Fauziah et al. [17] reported that the acidity of pasteurized milk increased significantly from the first day of storage, from 0.15 to 0.38%. The acidity of pasteurized milk after 9 days of cold storage ranged from 0.12 to 0.15%. The LAB still alive after pasteurization cause an increase in acid production. Therefore, the storage period has an effect on the bacterial population in milk, resulting in increased lactic acid formation. According to the Korean Food Standards Codex, the acidity value of milk should be less than 0.18%.
Values are expressed as mean ± standard deviation (n = 5). Different superscript letters represent significant differences within the row (p < 0.05).
Microbial Analysis during Cold Storage
The microbiological quality is essential for determining the quality of the food and protecting customers from any health risks caused by microorganisms [2]. Thus, it is essential for the dairy sector to preserve live bacteria in its final products. The results of the microbial analysis are shown in Table 1. Figure S1 depicts the use of selective media (MRS agar), on which LAB appear as white colonies, for enumeration of LAB. The viable cell counts of LAB in fermented coconut milk significantly increased (p ≤ 0.05) during the fermentation and storage period (1 to 14 days), reaching 6.4 × 10⁸ CFU/mL, and then decreased significantly after 14 days, reaching 1.6 × 10⁸ CFU/mL at 28 days. Similarly, Han et al. [16] found that S. salivarius ATCC 13419 had a high survival growth rate, reaching 13.66 log CFU/mL during fermentation of coconut milk. However, no changes were found in the quantity of LAB in coconut yoghurt over 14 days of storage at 4 °C [7]. The increase in viable cells of probiotic bacteria during fermentation and cold storage indicates that coconut milk is rich in nutrients such as sugar, fat, protein, and minerals, which support the growth of the probiotics. On the other hand, the decrease in probiotic population in fermented milk that was observed during cold storage is possibly related to an increase in the acidity of the fermented milk and to the presence of oxygen [23]. A decrease in the number of L. bulgaricus was detected, from 6.11 to 2.27 log CFU/mL and from 9.85 to 3.59 log CFU/mL in whole-fat (WFY) and reduced-fat coconut yoghurt (RFY), respectively, over 28 days of cold storage. After 14 days of storage, the total viable count of LAB and the pH declined dramatically (Table 1). The rise in acidity and the low refrigeration temperature generated a harsh environment that inhibited the development of LAB. The large reduction of the probiotic culture in Lycium barbarum yoghurt toward the end of cold storage might be attributable to an increase in the generation of hydrogen peroxide and lactic acid and a decrease in lactose level, which is a primary source of energy for LAB [24]. This could result in a decrease in the growth rate of the probiotic culture. This is consistent with an earlier finding that LAB counts in Bambara yoghurt decrease after refrigerated storage [25]. Furthermore, several studies have shown that the pH value of fermented milk and pasteurized milk is strongly linked to the duration of cold storage. Changes in the pH of pasteurized milk during cold storage are possibly due to the buildup of metabolites such as organic acids generated by microorganisms that were naturally present in the milk before pasteurization [26].
The total viable count of LAB in fermented coconut milk obtained in this study was within the acceptable standard, with a minimum number of more than 10⁶ CFU/mL. A minimum number of probiotic bacteria of 10⁷ CFU/mL in fermented food has been strongly recommended to provide health benefits upon consumption. However, LAB were absent in pasteurized coconut milk at 1 day and then increased significantly (p ≤ 0.05) after 7 days, reaching 6.2 × 10⁴ CFU/mL at 28 days of cold storage. The longer the duration of cold storage, the greater the number of bacteria resistant to pasteurization temperatures and able to live at cooling temperatures [26]. This study indicated that the pasteurization temperature reduces the amount of naturally occurring LAB in coconut milk. The number of LAB organisms that survive pasteurization increases with the length of time the milk is stored, which may have an impact on the milk's quality [27]. Pasteurization may lower the amount of LAB in milk by 2.2 log CFU/mL after one day of cold storage, with the level increasing to 7.92 log CFU/mL after 21 days of cold storage [27].
Molds and yeasts are responsible for the spoilage of fermented milk during cold storage, since they can survive in acidic yoghurt and at low temperatures. Table 1 shows the microbial analysis of yeasts and molds for fermented and nonfermented coconut milk. Figure S2 shows the media (DRBC agar), on which molds appear as white colonies and yeasts as pink colonies. No yeasts or molds were detected in pasteurized milk in this study. High-pressure processing (HPP) and pasteurization fully inactivated yeasts and molds in cow and goat milk [26]. In addition, molds and yeasts can cause spoilage and quality deterioration of coconut milk at low pH; this contrasts with our pH result for pasteurized milk during cold storage. Yeasts and molds in fermented coconut milk were not detected on the 0th, 7th, or 14th day of cold storage but were detected and significantly increased (p ≤ 0.05) on the 21st and 28th days of storage, with levels that ranged from 1.7 × 10² to 1.2 × 10⁴ CFU/mL, respectively. This finding is consistent with previous findings [28], which reported a rise in yeasts and molds in plant-based yoghurt after storage. Similarly, the total yeasts and molds in soy yogurt increased during cold storage, reaching 5.9 log CFU/mL [29]. An increase in the viable count of yeasts and molds in fermented milk and yogurt during cold storage is related to an increase in acidity or a decrease in available oxygen during the fermentation process, which provides suitable conditions for yeast and mold growth [4].
High yeast and mold counts causing spoilage could be attributed to poor cleaning practices, air incorporation, unhygienic practices, and ingredient contamination, as well as post-contamination during processing, storage, and transportation [30]. According to our results, an acceptable level of yeasts and molds in fermented coconut milk was maintained up to the 21st day of cold storage. The acceptable level of molds and yeasts in beverages and yogurt should be between 10² and 10³ CFU/mL; exceeding this range poses potential health hazards and indicates imminent spoilage, due to the ability of yeasts and molds to produce toxic compounds including mycotoxins, e.g., aflatoxin. Moreover, changes in flavor, texture, taste, and color occur when yeast and mold growth exceeds the acceptable level [29].
Coliform bacteria are gram-negative bacteria, such as Escherichia coli, which are often used to test the quality of milk. This category of bacteria may produce acid and gas through lactose fermentation with or without oxygen [31]. To enumerate coliforms and E. coli, we used selective media (coliform agar), on which coliforms appear as pink colonies and E. coli as blue colonies (Figure S3). As shown in Table 1, no growth of coliforms or E. coli was seen in coconut milk on the first and seventh days. However, coliform and E. coli counts increased significantly (p ≤ 0.05) from the 14th until the 28th day of cold storage, recorded as 2.7 × 10¹, 3.7 × 10², and 4.7 × 10² CFU/mL for coliforms and 1.4 × 10¹, 2.6 × 10², and 3.4 × 10² CFU/mL for E. coli on the 14th, 21st, and 28th days, respectively. Similarly, a rise in coliform count to 3.72 (±0.17) log CFU/mL by the 6th day of refrigerated storage has been observed [32]. Certain coliform and E. coli bacteria, as well as other heat-resistant gram-negative bacteria that enter milk after pasteurization, may reach spoilage levels in refrigerated storage as soon as the seventh day after pasteurization [32]. The presence of coliforms indicates fecal contamination and a poor level of hygiene after processing [32]. In addition, a high level of these bacteria is an indicator of ineffective hand washing and of contamination of raw foods, equipment, and the place where food is prepared [31]. Sanitation and cleaning programs can reduce this type of pathogenic bacteria and extend the shelf life of milk up to the 17th day [33]. According to the GSO 1016 standard (2015), pasteurized milk shall be free of E. coli colonies [34]. However, according to the Turkish Standards Institute (TSI 330), milk products should contain no more than 10 coliform colonies [35].
The coliform and E. coli tests gave negative results, and no growth was observed in fermented coconut milk through the 28th day of cold storage (Table 1). This is consistent with the investigation in [28], which found no Salmonella, coliforms, E. coli, or fecal enterococci in plain yoghurt. Similarly, no growth of coliforms or E. coli was observed in soy yogurt during cold storage [21]. In addition, mung bean-enriched stirred yoghurt was coliform-free after 28 days of cold storage [36]. The absence of coliforms and E. coli in fermented milk and yogurt is due to the ability of probiotic bacteria to produce organic acids, hydrogen peroxide, and secondary metabolites such as bacteriocins that can inhibit the growth of pathogenic bacteria. In addition, the refrigeration temperature and acidic environment create undesirable and harsh conditions for coliform growth [37]. Based on our results, we conclude that fermentation extended the shelf life of coconut milk to 21 days under refrigeration, compared to a 7-day shelf life for fresh coconut milk under the same conditions.
Proximate Analysis of Fermented Coconut Milk and Fresh Coconut Milk
The predominant constituent of nonfermented and fermented coconut milk was fat, followed by protein and carbohydrates. The effect of fermentation on the proximate composition of coconut milk is presented in Table 2. The proximate composition of coconut milk was recorded as total fat (19.27%), carbohydrate (4.13%), energy (197.3 kcal), protein (1.86%), moisture (74.05%), total ash (0.68%), and total solids (25.95%). The corresponding values for fermented coconut milk were 19.37%, 2.51%, 193.3 kcal, 2.31%, 75.14%, 0.66%, and 24.85%, respectively. After fermentation, the ash, total solids, carbohydrate, and energy content decreased, while the protein and moisture content increased; no change in fat content was found. A reported proximate composition of coconut milk is ash (0.52%), moisture (65.00%), fat (15.02%), and crude protein (7.17%) [38]. The proximate composition of Malaysian coconut milk includes fat (15.44%), protein (3.40%), moisture (73.57%), and ash (0.71%). In contrast to our results, the fat content of soymilk decreases after natural fermentation due to the action of lipolytic enzymes, which hydrolyze fat components [39]. In addition, during fermentation bacteria produce enzymes, which are themselves proteins, that break down macromolecules; this causes an increase in measured protein content during fermentation. Moreover, L. plantarum may produce proteinaceous enzymes during fermentation, boosting the protein concentration [40]. Furthermore, the increased protein content of coconut milk after fermentation might be related to the production of various substances during fermentation, such as peptides and amino acids, which are measured as protein [41].
The carbohydrate content of coconut milk decreased from 4.13 to 2.51% during fermentation. It has been reported that bacteria use glucose as a source of energy during fermentation, which explains the decline in carbohydrates after fermentation [40,41]. The moisture percentage was somewhat higher than previously reported [7]; that study found that the moisture content of coconut-flavored yoghurt was about 71.31%. The moisture content in our study rose somewhat after fermentation, which agrees with prior evidence that a rise in moisture content during fermentation is attributable to the generation of a small amount of water [42]. Conversely, a decrease in moisture content during fermentation has been attributed to increased formation of dry matter [39].
The protein content of fermented coconut milk increased from 1.86% to 2.3% after fermentation. Our findings are consistent with a previous study, which demonstrated that the protein content of yoghurt rose as the fermentation duration increased [43]. Furthermore, another study revealed that the fermentation duration had a substantial effect on the protein content of horse gram flour during fermentation [44]. The rise in protein concentration during fermentation may be ascribed to the creation of protein-based enzymes by bacteria, as well as the breakdown of certain molecules. Furthermore, L. plantarum may create proteinaceous enzymes during fermentation, hence boosting the protein content [40]. Additionally, the increase in protein content of coconut milk after fermentation might be related to the production of peptides and amino acids during fermentation [40].
Antibacterial Activity of Fermented and Nonfermented Coconut Milk
The antibacterial activity of the fermented and pasteurized extracts was studied over 21 and 7 days of cold storage, respectively, against three gram-positive bacterial strains (S. aureus ATCC® 25923™, B. subtilis ATCC® 6633™, and B. cereus ATCC® 33019™) and three gram-negative bacterial strains (E. coli O157:H7 IMR E9, C. sakazakii ATCC® 25944™, and S. typhimurium ATCC® 14028™) using the well diffusion technique and a microtiter plate assay, as shown in Tables 3 and 4. In the microtiter plate assay, the absorbance was measured at 630 nm using an ELISA plate reader at 0 and 24 h. In the well diffusion method, the antibacterial activity was assessed by measuring the inhibition zone around the well (Figure 1). The microtiter plate assay results show that the highest antibacterial activities recorded in fermented coconut milk were: F14, 95.3% and 94.73% against E. coli and B. cereus, respectively; F1, 90.6% against B. cereus; F7, 94.04% against B. cereus; and F21, 88.63% against B. subtilis. In comparison to fermented coconut milk, pasteurized coconut milk had a modest antibacterial impact on the target microorganisms. The highest antibacterial activities of nonfermented milk were M1, 45.6% against C. sakazakii, and M7, 44.93% against C. sakazakii. Similarly, the highest antibacterial activities recorded in fermented coconut milk by well diffusion were F14 with an inhibition zone of 23 mm against S. typhimurium, F7 against E. coli (21.33 mm), F21 against S. typhimurium (18.66 mm), and F1 against C. sakazakii (15.33 mm). Nonfermented milk had the lowest bactericidal activity: M1 against C. sakazakii and B. subtilis (10.66 mm), and M7 against B. subtilis (9.66 mm) [41]. The antimicrobial properties of milk fermentation may be owing to LAB, which secrete organic acids and compounds such as diacetyl, acetaldehyde, and ethanol, as well as bacteriocins [45]. In addition, kefir and probiotic bacteria consume polysaccharides, peptides, and proteins to generate organic acids and bioactive compounds that prevent the development of pathogenic microorganisms. Undissociated organic acids may penetrate the cell wall and dissociate inside the cell, decreasing the intracellular pH and destroying the cytoplasm of pathogens. Similarly to our study, Lakshmi et al. [46] reported that fermented coconut milk exhibited antimicrobial and antifungal activities against E. coli, S. typhi, Saccharomyces cerevisiae, and Aspergillus niger, due to the production of organic acids, peptides (bacteriocins), carbon dioxide, and ethanol during fermentation. The fermentation process can increase and improve the antibacterial activity of fermented coconut milk in comparison to pasteurized coconut milk. Fermentation of coconut milk with S. salivarius showed strong antibacterial activity against Streptococcus pyogenes in comparison to pasteurized coconut milk [16]. Furthermore, antibacterial peptides produced in milks fermented with specific L. plantarum strains showed activity against S. aureus, E. coli, S. Typhimurium, S. choleraesuis ssp. choleraesuis serovar Choleraesuis, and L. innocua [47].
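The paper does not spell out the formula used to turn the OD630 readings into inhibition percentages; a common convention, shown here as an assumption, compares the growth (change in optical density between 0 and 24 h) of the treated culture with that of an untreated control:

```python
def growth_inhibition(od0_sample, od24_sample, od0_control, od24_control):
    """Percent growth inhibition from OD630 readings at 0 h and 24 h."""
    delta_sample = od24_sample - od0_sample
    delta_control = od24_control - od0_control
    return (1.0 - delta_sample / delta_control) * 100.0

# Illustrative values only:
# growth_inhibition(0.05, 0.11, 0.05, 0.95) -> ~93.3 % inhibition
```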
Our results indicate that pasteurized coconut milk also exhibited antibacterial activity against the selected target bacteria. The lauric acid in coconut milk has been shown to have antibacterial properties against pathogenic microorganisms [16,47]. In addition, during fermentation, probiotic bacteria are able to convert lauric acid to monolaurin, which has a far higher antibacterial activity than lauric acid [16]. As shown in Tables 3 and 4, the antibacterial activity of fermented milk increased during cold storage until the 14th day and then decreased toward the pathogenic bacteria by the 21st day, at the end of the cold storage period. The reduction or rise in antibacterial activity during cold storage might be attributed to antibacterial compounds produced during fermentation and cold storage interacting with one another to enhance or reduce the antimicrobial activity [47]. It has been reported that the antibacterial activity of fermented milk increases during cold storage due to the accumulation of antibacterial bioactive peptides resulting from an increased degree of proteolysis [48].
Antioxidant Activity of Fermented and Pasteurized Coconut Milk
The antioxidant activity of fermented coconut milk was considerably greater than that of pasteurized coconut milk in this research, as assessed by the FRAP and DPPH tests (Table 5). The fermented coconut milk (FCM) had the greatest DPPH and FRAP values of 67.1% and 61.961 mmol/g at day 14, whereas the lowest DPPH and FRAP values of 58% and 55.113 mmol/g were observed at day 21. Both coconut milk and coconut oil have been shown to have high levels of phenolic compounds in prior research [49]; coconut milk is rich in antioxidant compounds such as amino acids and vitamins E and C [8]. Fermentation is a process used to increase the nutritional content and antioxidant activity of food via the synthesis of bioactive molecules such as peptides and phenolic compounds. Moreover, pH is a significant factor that might affect antioxidant activity by altering the structure and concentration of bioactive molecules [50]. Lactobacillus strains display proteolytic activity and release bioactive peptides with antioxidant activity from milk protein during milk fermentation [51]. It has also been reported that fermented milk produced with L. plantarum exhibited antioxidant activity with a DPPH inhibition rate of 14.7% to 48.9% [52]. Valero-Cases and Frutos [53] determined the ability of LAB in pomegranate juices to alter and biotransform phenolic compounds into two new phenolic derivatives. Not only does the fermentation process utilizing LAB boost the bioactive molecules, but these LAB also have their own antioxidative capabilities, creating enzymatic and nonenzymatic antioxidants to defend themselves from oxidative damage [54].

Table 5. Antioxidant activity of the fermented and pasteurized coconut milk as determined using DPPH (%) and FRAP (mmol TE/g) assays. Values are expressed as mean ± standard deviation (n = 5). Different superscript letters represent significant differences within the column (p < 0.05).
The antioxidant activity of fermented coconut milk could be influenced by a long cold storage period. The antioxidant activity of Labneh increases with cold storage for up to 20 days, due to the proliferation of probiotic bacteria and their secretion of proteolytic enzymes [55]. In this study, the antioxidant activity of fermented coconut milk rose during cold storage; however, after 21 days there was a reduction, which is in line with earlier research. The bioactive components may undergo various transformations, degradation, oxidation, and hydrolysis during fermentation and cold storage. A reduction in antioxidant activity may be caused by the oxidation of phenolic compounds as a consequence of dissolved oxygen, as observed in cornelian cherry juice fermented with L. casei T4 [56]. Similarly, Kurnia et al. [57] reported that the antioxidant activity of goat milks fermented with L. fermentum PE2 decreases during cold storage due to damage to the structure of the bioactive compounds. Therefore, fermentation of coconut milk with L. plantarum ngue16 can improve and enhance its antioxidant activity in comparison to nonfermented milk.
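For reference, the standard way of expressing DPPH radical-scavenging activity from absorbance readings is sketched below; the absorbance values in the comment are illustrative, not taken from the study:

```python
def dpph_inhibition(a_control, a_sample):
    """DPPH scavenging activity (%): relative drop in absorbance of the
    DPPH radical solution caused by the extract."""
    return (a_control - a_sample) / a_control * 100.0

# e.g. dpph_inhibition(0.80, 0.26) -> 67.5 %, of the order of the
# day-14 value reported for the fermented coconut milk
```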
Bioactive Metabolites of Fermented Coconut Milk
The metabolites identified in fermented and pasteurized coconut milk were varied, with the detected components spanning amino acids, fatty acids, carbohydrates, and other small molecules (Table 6 and Figure S4).
The variation in metabolite contents of fermented coconut milk and pasteurized coconut milk samples extracted during cold storage was assessed using multivariate data analysis (MVDA). Principal component analysis (PCA) was applied to understand the clustering features of the samples and the metabolites that provided the variability. The PCA score plot showed the clustering of the samples, and the loading plot indicated the variables that contributed to the sample differences (Figure 2A). As shown in the PCA score plot, the first principal component (PC1) accounted for 61.1% of the variation in the data, whereas PC2 was able to explain 20.8% of the variation (Figure 2A). The score plot (Figure 2B) revealed two clear clusters, corresponding to fermented coconut milk and pasteurized coconut milk. The metabolites responsible for this difference were sucrose, glucose, and fructose, as well as ethanol, valine, gamma-aminobutyric acid (GABA), arginine, lactic acid, acetoin, alanine, phenylalanine, acetic acid, methionine, acetone, pyruvate, succinic acid, malic acid, tryptophan, uridine, uracil, and cytosine (Figure 2B). This result indicates that the fermentation and cold storage time could contribute to the metabolites' variation. The major metabolites observed were ethanol, lactic acid, GABA, acetic acid, pyruvate, and uridine, which increased due to the fermentation with LAB. In addition, acetate, acetoin, and different amino acids were found at different concentrations in the fermented coconut milk. The concentrations of the three main sugars in coconut milk, glucose, sucrose, and fructose, declined after fermentation. Lactic acid bacteria utilize mono- and disaccharides during fermentation to produce mainly lactic acid, acetic acid, and several other acids [58], as well as increased amounts of acetone and ethanol as volatile compounds [59]. In addition, the fermentation process has been reported to enhance the production of essential amino acids. L. lactis is used in food production as a starter culture owing to its proteolytic activity and the ability of some strains to generate peptides and amino acids during fermentation, such as isoleucine, leucine, valine, histidine, and methionine [60]. Moreover, Das et al. [61] reported that L. plantarum NRRL B-4496 isolated from a fermented beverage has the ability to utilize monosodium L-glutamate to produce bioactive GABA through the glutamate decarboxylase enzyme (GAD). To observe the variation among the fermented coconut milk samples during cold storage at 1, 7, 14, and 21 days, further models were generated. The processed 1H NMR data of fermented coconut milk during cold storage at 1, 7, 14, and 21 days were subjected to PCA, and the results are shown in Figure 3. As shown in the PCA score plot, the first principal component (PC1) accounted for 68.4% of the variation in the data, whereas PC2 was able to explain 18.8% of the variation (Figure 3A). The fermented coconut milk extracts were separated into two clusters over the cold storage period. The fermented coconut milk samples at 1 day were well separated from the samples at 14 and 21 days. The samples at 14 and 21 days were similar, with only a slight difference among their metabolites. The fermented coconut milk at 7 days of storage formed an intermediate group containing most of the metabolites.
The loading plot (Figure 3B) shows the metabolites responsible for this variation, including alanine, ethanol, O-phosphoethanolamine, valine, GABA, succinic acid, acetic acid, tyrosine, and uridine at 14 and 21 days of cold storage. However, 1,3-dihydroxyacetone, 3-hydroxyphenylacetate, O-phosphoethanolamine, oleanolic acid, threonine, glucose, choline, lactic acid, glutamate, isoleucine, fructose, and phenylalanine were higher in fermented milk on the first day of storage (Figure 4). The result indicates that the fermentation process as well as the cold storage could affect the metabolites' variation during cold storage of fermented coconut milk. Fermentation and cold storage cause an increase in titratable acidity due to the availability of sugar that is utilized by LAB to produce organic compounds, which can contribute to the metabolic activity of LAB. Additionally, fermentation contributed to an increase in metabolite formation [25,60]. Baba et al. [62] reported that the higher proteolytic activity of Lycium barbarum yogurt on day 7 of cold storage was due to the release of amino acids through the activity of proteinase and peptidase enzymes. Few studies have mentioned that coconut milk, especially when fermented, provides health benefits with different biological activities; however, there has been no information on the metabolite profile of fermented coconut milk. In this study, metabolomics was performed to correlate the antibacterial and antioxidant activities of the fermented coconut milk and pasteurized coconut milk with their metabolite profiles. The PLS model demonstrated a significant correlation between the metabolites of the fermented coconut milk and its antioxidant (DPPH and FRAP) and antibacterial activities (S. aureus, B. subtilis, E. coli, C. sakazakii, B. cereus and S. typhimurium) (Figure 5). It can be observed that the fermented coconut milk was correlated more strongly with the antibacterial and antioxidant activities. The compounds of the fermented coconut milk contributing to the antioxidant activity and the antibacterial activity against the pathogenic bacteria were ethanol, GABA, uridine, valine, lactic acid, alanine, arginine, acetic acid, methionine, acetone, pyruvate, succinic acid, malic acid, aspartate, threonine, and phenylalanine. Some of these metabolites have previously been investigated as antibacterial agents against pathogenic bacteria. As shown by the correlation coefficients in Figure 6, acetic acid and alanine have been reported to show strong antimicrobial activity against B. cereus, B. subtilis, B. megaterium, and B. pumilus [8]. These identified compounds were suggested to be responsible for the biological activities of fermented foods, which is in agreement with our study [63]. The Pearson correlation supports the PLS data. Bioactive metabolites with known antimicrobial activity against E. coli, S. typhimurium, Aspergillus flavus, and Penicillium spp., such as lactic acid, GABA and beta-alanine, were identified in fermented cantaloupe juice [63], and ethanol and organic acids were active against S. arizonae and S. Typhimurium [64]. Moreover, acetic, malic, lactic, fumaric, benzoic, sorbic, and succinic acids, as well as sulfite, had the ability to inhibit the growth of the acid-resistant vegetative pathogens E. coli O157:H7 and S. aureus [63,64]. The antioxidant activity of the metabolites arginine and GABA has been reported [65,66]. In addition, uridine identified in black garlic exhibited strong antioxidant activity [67].
The identified α-linolenic acid, γ-oryzanol, α-tocopherol, GABA, α-aminobutyric acid, glutamic acid, leucine, hydroxy-L-proline, 3-hydroxybutyric acid, 2,3-butanediol, fumaric acid, fatty acids, vanillic acid, phenylalanine, and valine were associated with the antioxidant activity of germinated rice [68]. The fermentation process is useful as a natural preservation method for food, conferring antibacterial and antioxidant activities via increases in free total phenolics and organic acids, exopolysaccharides, bacteriocins, and bioactive peptides [69]. The outcome of this study showed that the fermentation process improved the antibacterial and antioxidant activities of the metabolites of the fermented coconut milk. Using the PLS-derived biplots, the variable importance in projection (VIP) values were used to identify the major factors that contribute to the biological activity. The VIP value is calculated by summing the squares of the PLS weights while taking the Y variance in each dimension into account. The VIP values show how much each variable contributed to separating clusters in the PLS biplot (Figure 7). Variables with VIP values higher than 0.5 are important and significant in the correlation and projection of the PLS model, and thus they are associated with chemical markers and/or bioactive compounds of the fermented coconut milk [68]. In this study, the Q2 and R2 values were both greater than 0.8, indicating that all models met the validation and prediction performance standards. The 100 permutation tests and regression validation demonstrated that the PLS model was valid. To determine and validate the relationship between the variables, correlation coefficients (R) were determined. The experimental values of the bioactivities were plotted against the predicted values as regression plots (Figure S5) for sample validation. These data also suggested that the PLS models were useful for predicting and validating the parameters.
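A sketch of how VIP scores can be computed from a fitted PLS model in Python is given below; scikit-learn stands in for SIMCA-P/MetaboAnalyst, and the data matrices are placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection for a fitted PLSRegression model."""
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = w.shape[0]
    ss = np.sum(q ** 2, axis=0) * np.sum(t ** 2, axis=0)  # SS of Y per component
    wnorm = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (wnorm @ ss) / ss.sum())

X = np.random.rand(20, 222)   # binned NMR variables (placeholder)
Y = np.random.rand(20, 2)     # e.g. DPPH and FRAP responses (placeholder)
pls = PLSRegression(n_components=2).fit(X, Y)
markers = np.where(vip_scores(pls) > 0.5)[0]  # VIP > 0.5 cut-off, as in the text
```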
Conclusions
This study reports on the potential of developing a probiotic milk with potent biological activity using coconut milk and fermentation with L. plantarum ngue16. Fermentation for 15 h at 33 °C significantly extended the shelf life and improved the antibacterial and antioxidant activities of coconut milk compared to pasteurized coconut milk. The fermented coconut milk demonstrated a low microbial load and a stable shelf life for 21 days at 4 °C. The biological activity of the fermented coconut milk was due to the presence of several bioactive compounds, including ethanol, GABA, uridine, valine, lactic acid, alanine, arginine, acetic acid, methionine, acetone, pyruvate, succinic acid, malic acid, aspartate, threonine, and phenylalanine, which were identified using 1H NMR. The results of the study demonstrated the high potential of L. plantarum ngue16 for developing fermented coconut milk with enhanced antibacterial and antioxidant activities and an extended shelf life. Further study is recommended to optimize the fermentation conditions so as to enhance the bioactive compounds in the fermented coconut milk.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods12101971/s1, Figure S1: LAB from fermented and pasteurized coconut milk on MRS agar; Figure S2: yeasts and molds from fermented coconut milk on DRBC agar; Figure S3: coliforms and E. coli from coconut milk on coliform agar; Figure S4: representative 1H NMR spectra of fresh and fermented coconut milk; Figure S5: regression plots of the experimental versus predicted values of the bioactivities.
Data Availability Statement:
The data used to support the findings of this study are included within the article. Any other data are available upon request.
"year": 2023,
"sha1": "46b72b93a0e4b6c03600d05d4f89f4d8c4b20d9a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/foods12101971",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c7bc414d42cb5af8a1cc9f7ee321538bff535ea",
"s2fieldsofstudy": [
"Chemistry",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Power flow in radial distribution systems in the presence of harmonics
This paper presents the results of power flow calculations in the presence of harmonics in radial distribution systems obtained using the decoupled harmonic power flow (DHPF) algorithm. In this algorithm, the interaction among the harmonic frequencies is assumed to be negligible and hence the calculations are separately performed for every harmonic order of interest. A detailed methodology for calculating high-order current and voltage harmonics, harmonic losses and the total harmonic distortion of voltage of electrical distribution networks in the frequency domain is presented. The standard backward/forward sweep method is used for solving the power flow problem at the fundamental frequency. Furthermore, some practical and approximated models of network components for harmonic analysis are given. The performance of the DHPF approach is studied and evaluated on two standard test systems with nonlinear loads, the distorted IEEE 18-bus and IEEE 33-bus. Nonlinear loads are treated as harmonic current sources that inject harmonic currents into the system. The DHPF algorithm is verified by comparing its results with those generated by software tools for the analysis of transmission, distribution and industrial power systems (e.g. ETAP and PCFLO). Simulation results show the accuracy and efficiency of the applied procedure for solving the harmonic power flow problem.

Keywords: decoupled approach; harmonic power flow; distribution system; nonlinear load
INTRODUCTION
In recent years, due to the widespread usage of nonlinear loads, the distortion of the current and voltage waveforms has increased. Loads with a nonlinear current-voltage characteristic inject a wide range of harmonics into the network, resulting in a deterioration of power quality. These harmonics can cause various problems, such as the occurrence of series and parallel resonances, a reduction of efficiency in power generation, transmission and utilization, component ageing and capacity decrease, interference with control devices and communication systems, etc. [1][2][3]. In general, the power flow calculation is carried out only for the fundamental frequency. However, in power systems with nonlinear loads, the power flow calculation needs to be carried out for each harmonic frequency of interest. By including nonlinear loads in the calculation, the power flow calculation becomes more complex and demanding.
In scientific literature, different approaches have been proposed and implemented to solve the power flow problem in the presence of higher harmonics, named the harmonic power flow (HPF) [4][5][6][7][8][9][10][11][12]. The criteria to classify the harmonic power flow algorithms could be [2]: modeling technique for simulation of power system and nonlinear loads, system condition (single phase, three phase, balanced, unbalanced) and solution approaches.
Modeling techniques which are used for analyzing the harmonic problem include time domain [5], frequency domain [4], [6][7][8][9][10][11][12] and hybrid time-frequency domain [13]. Time domain approaches are based on transient analysis and have great flexibility and high accuracy. However, their use is limited because they usually require long computing time, especially for large power systems with many nonlinear loads. Frequency domain approaches calculate the frequency response of power systems and reduce the computation time of the scan process. The accuracy of the solution depends on the number of harmonics included in the calculation process. Hybrid approaches use a combination of frequency domain (to limit the computing time) and time domain (to increase the accuracy) approaches to simulate the power system and nonlinear loads, respectively. Based on their solution approaches, the harmonic power flow calculations can also be classified as coupled and decoupled methods [2].
Most nonlinear loads and power system components impose couplings between harmonics and call for accurate coupled solution approaches [2]. The coupled approach proposed in reference [6] solves the calculation for all the harmonic orders simultaneously and has good accuracy. The main disadvantages of this method are high computational costs and the requirement for an exact formulation of the nonlinear loads. The decoupled approach [11,12] assumes that the coupling between harmonic orders can be reasonably neglected and, as a result, the calculation can be carried out separately for every harmonic order. Therefore, this approach requires less computational cost [12]. This paper presents the results of HPF calculations in radial distribution systems with nonlinear loads obtained using the decoupled harmonic power flow (DHPF) approach. The aim was to develop a fast and efficient HPF algorithm that can easily be applied to other power system problems, such as the problem of optimal placement and sizing of capacitor banks and/or distributed generators in radial distribution systems with nonlinear loads and the optimal filter design problem. The algorithm can be used to estimate the objective function value of a considered optimization problem. The procedure is tested on two standard test systems with nonlinear loads, the distorted IEEE 18-bus and the distorted IEEE 33-bus. To verify the accuracy of the DHPF approach, the simulation results are compared with those generated by software tools for the analysis of transmission, distribution and industrial power systems (e.g. ETAP [14], which uses the decoupled approach, and PCFLO [15], which uses the coupled approach). Calculations showed the efficiency of the applied procedure for solving this complex problem.
II. MODELS FOR DISTRIBUTION NETWORK COMPONENTS
To analyze industrial distribution systems, it is necessary to have a detailed representation of component models such as distribution cables, transformers, shunt capacitors, reactors and loads. Instead of using very accurate models, some practical and approximated models from [2], [9]-[12], [16] and [17] are used in this paper. At harmonic frequencies, a power system is modeled as a combination of passive elements and current sources that inject currents into the power system. Fig. 1 shows an m-bus radial distribution system where a general bus i contains a load and a shunt capacitor.
A. Distribution lines and cables
The distribution lines and cables might be represented by lumped parameter elements using a π-connection. If skin and proximity effects are ignored at higher frequencies, the longitudinal impedance and shunt admittance of the lines are given by [9]

z_{i,i+1}^h = R_{i,i+1} + j·h·2πf·L_{i,i+1} (1)

y_{i,i+1}^h = j·h·2πf·C_{i,i+1} (2)

where R_{i,i+1}, L_{i,i+1} and C_{i,i+1} represent the resistance, inductance and capacitance of the line segment between buses i and i+1, respectively; f is the fundamental frequency of the system (f = 50 Hz) and h is the harmonic order. The skin effect can be included in (1) by modifying the resistive part of the line impedance as follows [9,17]

R_{i,i+1}^h = R_{i,i+1}·(1 + 0.646h²/(192 + 0.518h²)) (3)

R_{i,i+1}^h = R_{i,i+1}·(0.187 + 0.532·√h) (4)

Equation (3) is for overhead lines and (4) for power cables.
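A minimal Python sketch of (1), (3) and (4); the function name and interface are illustrative, not from the paper:

```python
import numpy as np

F = 50.0  # fundamental frequency (Hz)

def series_impedance(r, l, h, kind="overhead"):
    """Series impedance of a line section at harmonic order h, with the
    resistive part corrected for skin effect per (3)/(4)."""
    if kind == "overhead":
        r_h = r * (1.0 + 0.646 * h**2 / (192.0 + 0.518 * h**2))
    else:  # underground power cable
        r_h = r * (0.187 + 0.532 * np.sqrt(h))
    return r_h + 1j * h * 2.0 * np.pi * F * l
```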
B. Linear and nonlinear loads
Linear passive loads that do not produce harmonics have a significant effect on the system frequency response, primarily near resonant frequencies [17]. Linear loads are basically those loads that can be described as passive in terms of harmonics. At the fundamental frequency, linear loads are modeled as PQ buses, while shunt admittances are used to model them at harmonic frequencies. Different types of linear load models at harmonic frequencies are recommended in [17]. The choice of the load model depends on the nature of the load and on the information available. The generalized model, composed of a resistance in parallel with a reactance, is suggested for a linear load. The admittance of the linear load connected at bus i is defined by [2], [9]-[12]

y_{li}^h = P_{li}/|V_i^1|² − j·Q_{li}/(h·|V_i^1|²) (5)

where P_{li} and Q_{li} are the active and reactive powers of the linear load at bus i, respectively, and V_i^1 is the fundamental voltage at bus i.
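In code, (5) is a one-liner (sketch; the name is ours):

```python
def load_admittance(p_l, q_l, v1_mag, h):
    """Harmonic admittance of a linear load modeled as a resistance in
    parallel with an inductive reactance, per (5)."""
    return p_l / v1_mag**2 - 1j * q_l / (h * v1_mag**2)
```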
Nonlinear loads are treated as decoupled harmonic current sources that inject harmonic currents into the system. The type of nonlinear load is considered to be a three-phase, six-pulse converter. Harmonics generated by converters of any pulse number can be expressed as [17]

h = k·q ± 1 (6)

where k is any integer (1, 2, 3, etc.) and q is the pulse number of the converter (6 in the case of a six-pulse converter). According to (6), the characteristic harmonics of a three-phase, six-pulse converter are all odd harmonics except triplens (5th, 7th, 11th, etc.). The fundamental and the hth harmonic currents of the nonlinear load installed at bus i with fundamental real power P_{nli} and fundamental reactive power Q_{nli} are

I_{nli}^1 = ((P_{nli} + j·Q_{nli})/V_i^1)* (7)

I_{nli}^h = I_{nli}^1/h (8)

where I_{nli}^1 is the rms value of the fundamental current, I_{nli}^h the harmonic current of order h, and * denotes the complex conjugate. According to [17], if there is a single source of harmonics in the system, then the phase angles of the harmonic currents can be ignored.
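A sketch of the injection model of (6)-(8) in Python, with an interface of our choosing and phase angles dropped as discussed above:

```python
import numpy as np

def six_pulse_injections(p_nl, q_nl, v1, h_max=13):
    """Fundamental current and harmonic injection magnitudes of a
    six-pulse converter: characteristic orders h = 6k +/- 1, each with
    magnitude |I1|/h."""
    i1 = np.conj((p_nl + 1j * q_nl) / v1)         # Eq. (7)
    orders = [h for h in range(5, h_max + 1) if h % 6 in (1, 5)]
    return i1, {h: abs(i1) / h for h in orders}   # Eq. (8)
```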
C. Shunt capacitors
Shunt capacitors are represented as shunt connected elements:

y_{Ci}^h = h·y_{Ci}^1 (9)

where y_{Ci}^1 is the fundamental frequency admittance of the shunt capacitor C at bus i.
III. DECOUPLED APPROACH FOR HARMONIC POWER FLOW
The method that is used to analyze and obtain the parameters of the distribution network at the fundamental harmonic is known as the backward-forward sweep [18,19]. The power flow analysis at the fundamental frequency is the base for harmonic calculations. For the estimation of harmonic components, the DHPF method is used. In the decoupled method, the interaction among the harmonic frequencies is assumed to be negligible and hence the admittance matrix is formulated individually for all the higher order harmonic components. After modifying the admittance matrix and the associated harmonic currents, the HPF problem can be solved using the equation [11]

Y_BUS^h · V^h = I^h (10)

where Y_BUS^h is the bus admittance matrix at the hth harmonic, V^h the vector of hth harmonic bus voltages and I^h the vector of hth harmonic current injections. At any bus i, the rms value of the voltage and the total harmonic distortion of voltage (THDV) are given by [11]

V_i = √(Σ_{h=1}^{hmax} |V_i^h|²) (11)

THD_{Vi} = √(Σ_{h=2}^{hmax} |V_i^h|²)/|V_i^1| × 100% (12)

where hmax is the maximum harmonic order under consideration.
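Assuming the harmonic admittance matrices and injection vectors have been assembled (e.g. as numpy arrays), the per-order solve of (10) and the post-processing of (11)-(12) reduce to a few lines; this is a sketch with names of our choosing:

```python
import numpy as np

def solve_harmonic(y_bus_h, i_h):
    """Solve Y^h V^h = I^h for the bus voltage vector at harmonic order h."""
    return np.linalg.solve(y_bus_h, i_h)

def rms_and_thd(v_by_order):
    """RMS voltage (11) and THD_V in percent (12) per bus, from a dict
    mapping harmonic order h -> complex bus-voltage vector."""
    v1 = np.abs(v_by_order[1])
    harm = sum(np.abs(v) ** 2 for h, v in v_by_order.items() if h > 1)
    return np.sqrt(v1 ** 2 + harm), np.sqrt(harm) / v1 * 100.0
```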
Subsequently, superposition is applied to convert the solved values of each V^h into the time domain for each network bus i as follows [17]

v_i(t) = Σ_{h=1}^{hmax} √2·|V_i^h|·sin(h·ω·t + δ_i^h) (13)

where ω = 2πf is the angular frequency and δ_i^h is the phase angle of the hth harmonic voltage.
At the hth harmonic frequency, the active power losses in the line section between buses i and i+1 are [11]

P_{loss(i,i+1)}^h = R_{i,i+1}·|I_{i,i+1}^h|² (14)

The total active power losses of the system over all harmonics are therefore given by [11]

P_loss = Σ_{h=1}^{hmax} Σ_{i=1}^{m-1} P_{loss(i,i+1)}^h (15)

where m is the total number of buses. Similarly, the reactive power losses can be calculated, but they have rarely been used in the literature. The flow chart of the DHPF algorithm is shown in Fig. 2.

A. Case 1

Fig. 3 shows a single-line diagram of the 12.5 kV IEEE 18-bus distorted radial distribution system. The substation voltage magnitude is set to 1.05 p.u. and it is assumed that the substation voltage does not contain any harmonic components. The parameters of the system are taken from [20] and given in Table A.I in the Appendix A. The base voltage for this system is 12.5 kV and the base power is 10 MVA. The system contains a six-pulse converter at bus 5 with active and reactive powers of 0.3 p.u. (3 MW) and 0.226 p.u. (2.26 MVAr), respectively. Current harmonic injections of the converter are calculated as fractions of the fundamental component using (8). Phase angles of harmonic currents are neglected since there is a single source of harmonics in the system [17]. At the fundamental frequency, the constant power model is used to model the loads, while capacitor banks are modeled as constant admittances (impedances). The results of the DHPF method and those generated by the ETAP and PCFLO software packages, including rms voltage and THD of voltage, are shown in Figs. 4-6. Fig. 7 and Table B.I in the Appendix B show the harmonic voltage distortion versus frequency at all test system buses obtained using the DHPF. The deviations of the results, including the maximum and mean deviations, generated by DHPF from those generated by PCFLO and ETAP are indicated in Table I. The main reason for the comparison is to demonstrate the accuracy of the proposed DHPF algorithm. From Table I and Figs. 4 and 5, it can be seen that the results of the DHPF algorithm and the PCFLO and ETAP packages are very similar. There are some differences at some buses, mostly due to the harmonic coupling neglected by DHPF. In addition, the accuracy of the solution depends on the method of modeling the elements of the system and on the impacts of the skin effect and phase angles of harmonic currents.
Figs. 8 and 9 illustrate the active power losses in each line at the fundamental frequency and at the higher frequencies obtained using the DHPF method. The total active power losses of the system, which consist of the fundamental frequency component and the harmonic losses caused by the presence of the converter, are 279.450 kW. The average CPU time of the DHPF algorithm for the IEEE 18-bus test system was 0.20 sec.
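For completeness, the loss summation of (14)-(15) is sketched below, assuming the branch currents per harmonic order have been recovered from the solved voltages (interface ours):

```python
import numpy as np

def total_losses(r_branches, i_by_order):
    """Total active power losses: R * |I^h|^2 summed over all line
    sections and all harmonic orders, fundamental included."""
    return sum((r_branches * np.abs(i_h) ** 2).sum()
               for i_h in i_by_order.values())
```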
Figure 8. The active power losses at the various lines at the fundamental frequency in the IEEE 18-bus test system.

Figure 9. The active power losses at the various lines for the 5th, 7th, 11th and 13th harmonic orders in the IEEE 18-bus test system.

As could be expected, the losses at the 5th and 7th orders are much higher than any others. Figs. 10 and 11 show the waveforms of voltage and current at bus 5, respectively. Obviously, the waveform of the voltage is not a pure sinusoidal waveform with only a 50 Hz frequency component.
B. Case 2
To check the validity of the DHPF algorithm, the IEEE 33-bus radial distribution system [21] is considered. The base voltage and power of this system are 12.66 kV and 10 MVA, respectively. A single-line diagram of the system is shown in Fig. 12, and the details of the loads and lines are listed in Table A.II in the Appendix A. Table B.II in the Appendix B shows the spread of harmonic distortion among the buses due to the distributed nonlinear loads. The deviations of the results generated by DHPF from those generated by PCFLO and ETAP are indicated in Table II. The results from Table II and Figs. 13 and 14 indicate the high compatibility of the results obtained by the DHPF procedure with those generated by the PCFLO and ETAP software packages.
The fundamental frequency losses (Fig. 15) and harmonic losses (Fig. 16) of the system are 526.877 kW and 9.011 kW, respectively. Therefore, the total active power losses in the considered network are 535.888 kW. The average CPU time of the DHPF algorithm for the IEEE 33-bus test system was 0.25 sec.
V. CONCLUSION
In this paper, the application of the decoupled approach for harmonic power flow in radial distribution systems with nonlinear loads is presented. The main conclusions that can be drawn from the presented results are:
• The DHPF algorithm for harmonic analysis of distribution systems allows quick, easy and accurate calculation of the voltage and current harmonics, harmonic losses and the total harmonic distortion.
• Mean absolute deviations of the results obtained by the DHPF algorithm from those generated by the ETAP and PCFLO software packages are less than 5%. This means that the accuracy of the DHPF algorithm is high.
• The DHPF algorithm requires less CPU time and memory storage than ETAP and PCFLO.
• The solution can always be obtained directly, and it is computationally efficient.
• The decoupled approach can be applied to harmonic analysis of large distribution systems with multiple nonlinear loads.
"year": 2019,
"sha1": "fbd2006f0db87d341936b4ea166a3d306b3314aa",
"oa_license": null,
"oa_url": "https://doisrpska.nub.rs/index.php/IJEEC/article/download/5630/5453",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1e669e59e5b4885468ecdb2e63a12432fa4d359a",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Long-COVID syndrome: physical–mental interplay in the spotlight
Patients suffering from Long-COVID syndrome experience a variety of different symptoms on a physical, but also on a psychological and social level. Pre-existing psychiatric conditions such as depression and anxiety have been identified as separate risk factors for developing Long-COVID syndrome. This suggests a complex interplay of different physical and mental factors rather than a simple cause–effect relationship of a specific biological pathogenic process. The biopsychosocial model provides a foundation for understanding these interactions and integrating them into a broader perspective of the patient suffering from the disease instead of the individual symptoms, pointing towards the need for treatment options on a psychological as well as social level besides biological targets. This leads to our conclusion that the biopsychosocial model should be the underlying philosophy of understanding, diagnosing and treating patients suffering from Long-COVID syndrome, moving away from the strictly biomedical understanding expected by many patients, treaters and the media, while also reducing the stigma still associated with the suggestion of a physical–mental interplay.
Introduction
According to the NICE guidelines, Long-COVID syndrome is defined as "signs and symptoms that develop during or following an infection consistent with COVID-19 and which continue for more than four weeks and are not explained by an alternative diagnosis" (Sivan and Taylor 2020). The prevalence varies greatly between different reports, depending on the patient population, selection process, time after infection and the method of recording symptoms. The most common symptoms include fatigue (Joli et al. 2022), dyspnea and reduced cognitive and physical performance (Lopez-Leon et al. 2021). For many patients, these symptoms recede spontaneously, while others report persisting symptoms over months or even years (Anaya et al. 2021; Lund et al. 2021). The clinical picture is usually complex, lacking specific laboratory values and leading to guidelines recommending an interdisciplinary approach, taking into account the whole person and continuity of treatment (Koczulla et al. 2022). While there are continuing advances in research, the pathogenesis remains unclear; it is multifactorial and probably not the same for all patients (Koczulla et al. 2022). Apart from age, risk factors for developing Long-COVID syndrome are pre-existing conditions, especially hypertension, obesity, psychiatric conditions and immunosuppression (Crook et al. 2021). Interestingly, a French study of 26,823 adults during the COVID-19 pandemic found an association of self-reported COVID-19 infection with persistent physical symptoms, whereas laboratory-confirmed COVID-19 infection was associated only with anosmia (Matta et al. 2022).
Many different factors stress the importance of including a perspective focusing on the physical-mental interplay in the diagnostic and treatment process. First, Long-COVID syndrome leads to mental stress and symptoms such as depression, anxiety (Silva Andrade et al. 2021) and in some cases post-traumatic stress disorder, especially if severe dyspnea was experienced during the infection (Harenwall et al. 2022). Second, pre-existing psychiatric illnesses are a separate risk factor for developing Long-COVID syndrome (Yong 2021), emphasizing a close connection between pathogenic mechanisms. Third, the COVID-19 pandemic, with a fear of infection, social distancing, a rise in unemployment and growing uncertainty, sparked a clear rise in both stress and psychiatric diseases worldwide (Chen et al. 2021), leading some authors to speculate about a future diagnostic category for specific mental disorders resulting from the COVID-19 pandemic (Heitzman 2020). Fourth, the resemblance of Long-COVID syndrome to other somatoform disorders such as irritable bowel syndrome is obvious, with both disorders characterized by somatic pathogenetic alterations in close interaction with psychological factors (Koczulla et al. 2022; Chey et al. 2015). Lastly, symptoms of fatigue as well as decreased cognitive and physical performance are key diagnostic criteria for depression.
Especially with the rising complexity of a disease, as in the case of Long-COVID syndrome, the biopsychosocial model can play a central role in helping to understand the illness and can serve as a guideline for the development of treatment plans.
The biopsychosocial model
One of the pillars of a modern comprehensive understanding of diseases is the biopsychosocial model introduced by Engel (1978). Engel proposed a counterpoint to the strictly biomedical model of illness, which is mostly concerned with directly measurable, structural pathologies and is the primary model physicians are trained in and patients come to expect in an increasingly technical world. This, however, neglects to include the vast amount of knowledge about human behavior and other psychological and social influences on the development, course and subjective perception of symptoms. Engel accuses this biomedical model of dualism, by strictly separating the mind and body, the mental from the physical, and of reductionism, by trying to understand an extremely complex entity such as life itself through analysis of its component parts, explaining them in the language of physics and (bio-)chemistry. Engel did not deny the obvious and amazing advances in biomedical research and treatments, which have progressed even further since the first introduction of his model. However, this implies a promise of a complete understanding of all diseases and of the availability of treatment options, a promise which until today has not been fulfilled - even if this has become the expectation of many patients seeking care. This has unfortunately shifted the perspective from the comprehensive and individual clinical assessment to various laboratory and diagnostic procedures, which stand in their own right in some cases and are overemphasized in others. In many cases, this leads to a discrepancy between the subjective symptoms and limitations the patients experience in day-to-day life and a lack of measurable biomedical markers, which is extremely frustrating for both physician and patient, especially if this biomedical model dominates both the perception of illnesses and the communication between physician and patient, as well as in society in general. Today, this is massively supported by public reporting, which frequently presents visually impressive, sometimes pseudo-scientific diagnostic and treatment methods as highly promising standards of medical care and dismisses the physical-mental interplay, increasing the already existing stigma. This model inevitably leads to the exclusion of the main character - the patient - from their own disease. In some diseases, if the correlations between biomedical alterations and resulting symptoms are relatively linear, appropriate treatment options are available and the existing psychosocial coping strategies outside the medical field are effective enough, this model seems to be sufficient. However, with rising complexity and especially chronification of illnesses, this approach falls short both in explaining symptoms and in treating patients.
This crisis inspired Engel to introduce his biopsychosocial model, which aims to be more inclusive and to provide a framework for conceptualizing all levels of health and disease, "from subatomic particles through molecules, cells, tissues, organs, organ systems, the person, the family, the community, the culture, and ultimately the biosphere" (Engel 1978), where every system is relatively autonomous but interconnected with every other system through feedback arrangements. Instead of linear causality, it expects reciprocal causal effects, which can carry disturbances from one system to another. In this model, overall health and illness depend on the relative intactness and functioning of each component, on communication, and on intra- and intersystemic harmony. In addition, every change becomes part of the history of its system, underlining the model's dynamic quality. Health is not a single state to be achieved, but a quality of overall harmony, which can vary before and after a disturbance. This change of perspective has huge implications for both the perception of illness and communication with the patient. Instead of being reduced to a few specific symptoms or measurable parameters and treated with a purely biomedical intervention, the patient is understood on a more complete basis, and the biomedical intervention becomes one part of a comprehensive treatment plan on many different levels.
While this model finds its most prominent application in somatoform disorders (Kreipe 2006; Henningsen 2018) and psychiatric diseases (Papadimitriou 2017), it is by no means limited to those. From patients with psychiatric disorders being more likely to develop, and to die from, cardiovascular disorders (Hare et al. 2014) and cancer (Pinquart and Duberstein 2010), to psychosocial factors being an important pathogenic factor in the development and outcome of Crohn's disease (Ringel and Drossman 2001) and rheumatological illnesses, increasing fatigue in patients suffering from lupus erythematosus (Aberer 2010), there is a wide range of evidence supporting this model. In addition, it has been shown that biographical trauma and stress substantially influence pain perception and processing (Tesarz et al. 2018).
Applications of the biopsychosocial model to the understanding of Long-COVID syndrome
As the possible biological factors associated with Long-COVID syndrome are discussed in the other articles of this special issue, we will not elaborate on them further, but concentrate on psychological and social contributors as well as on interactions between the different systems.
Apart from the strain on the medical system, with doctors and nurses working at the limits of their capacities to treat patients, the coronavirus pandemic had huge effects on society and individuals on many different levels, leading to an increase in mental health problems in the general population, but especially in patients infected with SARS-CoV-2 (Hossain et al. 2020). Although most of those affected seemed to recover quickly (Manchia et al. 2022), patients suffering from Long-COVID syndrome showed an increase in mental health problems such as depression and anxiety (Silva Andrade et al. 2021). This could be argued to be simply a result of the limitations and restrictions imposed by the disease. However, pre-existing psychiatric conditions were identified as separate risk factors for a prolonged recovery after COVID-19 (Crook et al. 2021) and for developing Long-COVID syndrome (Yong 2021), suggesting a much more complex interaction. Patients with mental health problems have usually experienced some form of biographical trauma or stress, resulting in inner conflicts and a reduced ability to cope with internal and external stressors. Depending on the nature of these conflicts and the personality structure, these are at times projected into the body and experienced as somatic symptoms such as fatigue, gastrointestinal problems, pain or cardiac reactions (Mentzos 2017). These symptoms, while very real in the restrictions they impose on patients' lives and associated with actual somatic findings (such as diarrhea, tachycardia or heightened muscle tension as a correlate of pain), are not the result of structural damage to the respective organs, but of dysregulation due to imbalances on a psycho-neurological scale (e.g., the "gut-brain axis"), which have reciprocal effects in both directions and are linked to a "diminished capacity to consciously experience and differentiate affects and express them in an adequate or healthy way" (Waller and Scheidt 2006). Moreover, physical and mental health problems influence each other significantly. It has been shown, for example, that depression and pain perception influence each other bidirectionally, each intensifying the other, which holds true even for acute pain due to injury or trauma (Michaelides and Zis 2019).
So what could that mean for our understanding of Long-COVID syndrome? It is well established that, apart from physical symptoms, Long-COVID syndrome is associated with psychological and social aspects, both as separate risk factors and as symptoms of the disease. This is not only a finding consistently established in the various studies mentioned above, but is also easily understandable from a biopsychosocial perspective, as many examples show. One of the main symptoms in patients suffering from Long-COVID syndrome is physical and mental fatigue. This often leads to the development of depressive symptoms, including avolition and a general feeling of hopelessness and heaviness, which in turn increase fatigue. In addition, patients are often afraid of the sometimes very intense "crashes" of post-exertional malaise, leading to a reduction of physical and mental exercise, which further decreases the respective abilities and condition. Reduced social participation can amplify this dynamic, as social isolation reduces demands and worsens depressive symptoms. With a reduced capacity to cope with the difficulties of the disease, patients with pre-existing clinical (or subclinical) psychiatric diagnoses are much more vulnerable to these processes. This could lead to symptoms persisting much longer than any single biological cause could induce. It is important to understand that these mechanisms are neither a side effect nor the fault of the patient, but an integral part of the pathogenic process.
In addition, attempts to establish either a single biological cause or treatment option have so far failed, pointing towards the insufficiency of this approach. This leads to our suggestion not to focus merely on Long-COVID syndrome but rather on the patients suffering from Long-COVID syndrome, shifting our perspective to include all biopsychosocial aspects of these patients instead of a list of symptoms to be eliminated (Fig. 1). We, therefore, need to broaden our understanding of how these aspects interact with each other.
Consequences for treatment plans
Even before the symptoms of Long-COVID syndrome became widely known and a focus of further research, it was argued that the biopsychosocial model should be the underpinning philosophy of rehabilitation for patients recovering from COVID-19 (Wainwright and Low 2020). The authors recognized patients' need not only for physical treatment, but for psychological and social support as well, while the focus on different aspects may (and should) vary during the course of treatment and between individuals. There is evidence that post-traumatic stress disorder in patients with Long-COVID syndrome was directly associated with increased fatigue and breathlessness, and that improvements in fatigue after rehabilitation were associated with improvements in post-traumatic stress disorder (Harenwall et al. 2022), which led the authors to emphasize the need to apply the biopsychosocial approach in Long-COVID rehabilitation and to prioritize the integration of psychotherapy as a treatment for post-traumatic stress disorder. This is mirrored in official consensus statements and guidelines for Long-COVID rehabilitation and treatment, such as those from Stanford Hall (Barker-Davies et al. 2020) and the German association of medical societies (Koczulla et al. 2022), which explicitly include psychological teams in the rehabilitation and treatment process.
This biopsychosocial model has already been successfully implemented in a digital Long-COVID rehabilitation program run by an interdisciplinary team of health professionals led by a clinical psychologist and including a physiotherapist, occupational therapist, dietitian, speech and language therapist, assistant psychologist and personal support navigator; the program greatly improved symptoms (Harenwall et al. 2021).
Not surprisingly, there is evidence suggesting a benefit of antidepressant medication in treating depressive symptoms in patients suffering from Long-COVID syndrome (Fenton and Lee 2023; Mazza et al. 2022). Other mental illnesses, if present, should be treated according to the respective national and international guidelines. However, there is currently no general evidence supporting psychiatric pharmacotherapy for all Long-COVID cases; therefore the indication, as with all treatment components, should be assessed on an individual basis.
Based on this, we suggest the further development of comprehensive treatment plans, including all aspects of the patient. Apart from treating known diseases on a biological level (e.g., myocarditis) if present, biological treatment should focus on physiotherapy and other rehabilitation methods. This should be augmented with a consistent screening for psychiatric diagnoses as well as basic psychological interventions, such as education on the biopsychosocial model and its implications, motivating the patient to reduce avoidance behavior and limiting anxiety. Neurocognitive training and individual psychotherapy as well as psychiatric pharmacotherapy should be applied where necessary. On a social scale, occupational therapy could be a tool to specifically train necessary abilities to enable participation in both social and work life (Fig. 2).
Conclusion
Especially since current public and medical understanding as well as scientific practice often encourage the belief in unidirectional cause-effect relationships, with considerable stigmatization of psychiatric and psychosomatic disorders as well as of physical-mental interaction in general, it should be our goal to guide that understanding towards a broader perspective. Individual as well as collective psychological and social risk factors and symptoms should be identified alongside the somatic diagnostic process and included in a comprehensive physical-mental treatment plan on different levels. Therefore, the biopsychosocial model should be the underlying concept in both understanding and treating patients with Long-COVID syndrome. Since somatization is associated with a reduced ability to perceive and communicate affects, healthcare professionals should be open to recognizing, validating and discussing these affects with patients, even and especially if this is difficult or met with resistance. Access to psychological treatment should not be limited to clinically diagnosed psychiatric illnesses, but should also include patients with subclinical depressive, anxious or post-traumatic symptoms, to assess possible correlations on an individual scale. Somatic treatment plans should be evaluated with equal care for their potential benefits and risks and applied where indicated. Lastly, we strongly advise against treatment strategies that try multiple biological treatments one after the other without including a biopsychosocial perspective, especially if these treatments (as is currently the case) lack sufficient scientific evidence.
(Fig. 1: Contributing biological, psychological and social factors and resulting symptoms in patients with Long-COVID syndrome.)
Author contributions CT researched the information, designed the figures and drafted the manuscript. AS collaborated on designing the structure of the manuscript and corrected the manuscript and figures. Both the authors finalized the manuscript.
Funding Open Access funding enabled and organized by Projekt DEAL. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
Data availability Enquiries about data availability should be directed to the authors.
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
(Fig. 2: Treatment plans including the biopsychosocial model.) | 2023-03-10T06:16:57.512Z | 2023-03-09T00:00:00.000 | {
"year": 2023,
"sha1": "3a8678cc7458caab549174b63a837f60eb0bf22d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10787-023-01174-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "12a6cd354ef27b5a661d956ec34d2945134e8d60",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255952996 | pes2o/s2orc | v3-fos-license | Newcastle disease virus-vectored West Nile fever vaccine is immunogenic in mammals and poultry
West Nile virus (WNV) is an emerging zoonotic pathogen which is harmful to human and animal health. Effective vaccination in susceptible hosts should protect against WNV infection and significantly reduce viral transmission between animals and from animals to humans. A versatile vaccine suitable for different species that can be delivered via flexible routes remains an essential unmet medical need. In this study, we developed a recombinant avirulent Newcastle disease virus (NDV) LaSota strain expressing WNV premembrane/envelope (PrM/E) proteins (designated rLa-WNV-PrM/E) and evaluated its immunogenicity in mice, horses, chickens, ducks and geese. Mouse immunization experiments disclosed that rLa-WNV-PrM/E induces significant levels of WNV-neutralizing antibodies and E protein-specific CD4+ and CD8+ T-cell responses. Moreover, recombinant rLa-WNV-PrM/E elicited significant levels of WNV-specific IgG in horses upon delivery via intramuscular immunization, and in chickens, ducks and geese via intramuscular, oral or intranasal immunization. Our results collectively support the utility of rLa-WNV-PrM/E as a promising WNV veterinary vaccine candidate for mammals and poultry.
Background
West Nile virus (WNV) is the causative agent of West Nile fever (WNF), a major emerging zoonotic disease shown to have a significant negative impact on both human and animal health since the first recorded case in Uganda in 1937. WNV is a member of the genus Flavivirus belonging to the family Flaviviridae. The virus is one of the most widespread arthropod-transmitted pathogens, and is extensively distributed worldwide throughout Africa, Europe, Asia and North America. WNV has a broad host spectrum comprising several species of birds (including poultry), mammals, amphibians and reptiles. Culex mosquitoes play an important role as the primary global WNV transmission vector, and are responsible for the incidental infection of humans and horses, which are considered dead-end hosts of WNV [1][2][3][4].
Vaccination in sensitive host animals, especially those abundant in number and closely associated with humans, such as horses, poultry and other bird species, should protect against WNV infection and significantly reduce transmission between animals and from animals to humans. Currently, several injection-delivered vaccines [5][6][7][8] are licensed for horses, but not other sensitive host animals. A versatile vaccine suitable for different species that can be delivered via flexible administration routes therefore remains an unmet medical requirement.
Newcastle disease virus (NDV) has been actively developed and evaluated as a vaccine vector for the control of human and animal diseases [9][10][11][12][13][14][15][16]. NDV vector vaccines can be effectively delivered via intramuscular or intratracheal inoculation in mammals and intramuscular, intranasal or oral (through water or feed) inoculation in poultry [11,12,[17][18][19][20][21]. In the current study, we generated a recombinant nonvirulent NDV LaSota virus strain expressing WNV pre-membrane (PrM) and envelope protein (E), two surface glycoproteins that form a heterodimer on the viral surface [22] and are responsible for eliciting the majority of protective immune responses [23]. The immunogenicity of the recombinant NDV in mammals and poultry, delivered via different immunization routes, was further evaluated.
Construction of recombinant NDV LaSota virus
The chemically synthesized, mammalian codon-optimized WNV PrM/E gene (strain NY99, GenBank No. DQ211652.1) was cloned into the Pme I site between the P and M genes of the full-length genomic cDNA of NDV LaSota [11]. The resultant plasmid was co-transfected with eukaryotic plasmids expressing NDV nucleoprotein (NP), phosphoprotein (P) and large polymerase protein (L), following an established protocol [11]. The rescued recombinant virus was designated rLa-WNV-PrM/E. Expression of WNV PrM and E proteins was confirmed via indirect immunofluorescence and western blot assays. A mouse anti-WNV E monoclonal antibody (developed in our laboratory), a mouse anti-PrM monoclonal antibody [24] and chicken anti-NDV serum [11] were used as primary antibodies. Fluorescein isothiocyanate (FITC)-conjugated goat anti-mouse antibody (Sigma, St. Louis, MO) and tetramethylrhodamine (TRITC)-conjugated rabbit anti-chicken antibody (Sigma, St. Louis, MO) were used as secondary antibodies for the immunofluorescence assay. Chicken anti-NDV serum and mouse anti-WNV serum (developed in our laboratory) were used as primary antibodies, and horseradish peroxidase (HRP)-conjugated goat anti-chicken IgG and goat anti-mouse IgG (SouthernBiotech, Birmingham, AL) were used as secondary antibodies for the western blot assay.
To determine the pathogenicity of rLa-WNV-PrM/E in poultry, the mean death time, intracerebral pathogenicity index and intravenous pathogenicity index were determined in specific pathogen-free (SPF) chickens or embryonated eggs according to the OIE Manual [25]. To assess pathogenicity in mice, ten 6-week-old female C57BL/6 mice (Vital River, Beijing, China) were inoculated intramuscularly with 0.1 ml of diluted allantoic fluid containing 1 × 10^8 EID50 (50% embryo infectious dose) of rLa-WNV-PrM/E and intranasally with 0.03 ml of diluted allantoic fluid containing 3 × 10^7 EID50 of rLa-WNV-PrM/E. Mice were examined daily for 3 weeks for signs of illness, weight loss or death.
Animal immunization studies
For mouse immunization, ten 6-week-old female C57BL/6 mice (Vital River, Beijing, China) were intramuscularly vaccinated twice, at a 3-week interval, with 0.1 ml of diluted allantoic fluid containing 1 × 10^8 EID50 of rLa-WNV-PrM/E. Splenocytes for the assay of E protein-specific CD4+ and CD8+ T-cell responses were harvested 10 days after the first or second dose. Serum samples for the serological assay were prepared 2 weeks after each dose.
For horse immunization, five adult horses were intramuscularly inoculated with 2 ml of diluted allantoic fluid containing 2 × 10^9 EID50 of rLa-WNV-PrM/E, and five were administered 2 ml of phosphate-buffered saline (PBS) as the control group. Three weeks after the first dose, a booster of the same vaccine was delivered at the same dosage and via the same route. Serum samples were collected for serological assay 2 weeks after each immunization.
For poultry immunization, three groups (ten per group) of 4-week-old SPF chickens were assessed: intramuscular inoculation with 0.1 ml of diluted allantoic fluid containing 1 × 10^8 EID50 of rLa-WNV-PrM/E (Group One); oral inoculation with 10 ml of diluted allantoic fluid containing 1 × 10^10 EID50 of rLa-WNV-PrM/E mixed with 500 g of chicken feed and 300 ml of water, whereby feeding was stopped 5 h before inoculation (Group Two); and intramuscular and oral inoculation with PBS (Group Three). Three groups (ten per group) of 4-week-old SPF ducks were immunized following the same procedure. For immunization of geese, four groups (15 per group) of 4-week-old birds were examined: intramuscular inoculation with 0.5 ml of diluted allantoic fluid containing 5 × 10^8 EID50 of rLa-WNV-PrM/E (Group One); intranasal inoculation with 0.5 ml of the same fluid via eye drops and nostril instillation (Group Two); oral inoculation with 0.5 ml of the same fluid via buccal cavity instillation (Group Three); and intramuscular inoculation with 0.5 ml of PBS (Group Four). Three weeks after the first dose, chickens, ducks and geese were boosted with the vaccine using the same doses and routes. Serum samples were collected for serological assay 2 weeks after each immunization. All poultry were housed in the Experimental Animal Center of Harbin Veterinary Research Institute.
Analysis of WNV-specific IgG, neutralizing and NDV HI antibodies
Enzyme-linked immunosorbent assay (ELISA) for determining antigen-specific IgG in mouse serum was performed as described previously [26]. Briefly, purified mammalian-cell-produced WNV virus-like particles (4 μg/ml, containing PrM and E proteins, unpublished) were used as the coating antigen. Antibodies were detected using an HRP-labeled goat anti-mouse IgG (SouthernBiotech, Birmingham, AL) secondary antibody. A standard curve was generated by coating with serially diluted mouse IgG (SouthernBiotech, Birmingham, AL) at known concentrations. A linear equation was obtained from the standard IgG concentrations and their O.D. values; the concentration of WNV-specific IgG in each sample was then calculated from its O.D. value using this equation and expressed as the amount of IgG per ml of serum (ng/ml). The same coating antigen was also used for ELISA detection of WNV-specific IgG in horse and poultry sera. HRP-labeled goat anti-horse IgG (SouthernBiotech, Birmingham, AL) and goat anti-chicken IgG (SouthernBiotech, Birmingham, AL) were used as secondary antibodies for horse and chicken serum detection, and mouse anti-duck IgG (AbD Serotec, Oxford, UK) followed by HRP-labeled goat anti-mouse IgG (SouthernBiotech, Birmingham, AL) for duck and goose serum detection. Due to the lack of purified IgG standards for these animals, results were expressed as O.D. values relative to negative controls.
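As a rough illustration of this back-calculation, the short sketch below fits a straight line to a standard curve and inverts it to recover a concentration; all concentrations, O.D. values and the dilution factor are invented for illustration and are not data from this study.

```python
import numpy as np

# Hypothetical standard curve: known mouse IgG concentrations (ng/ml)
# and their measured O.D. values (illustrative numbers only).
std_conc = np.array([15.6, 31.25, 62.5, 125.0, 250.0, 500.0])
std_od = np.array([0.11, 0.19, 0.36, 0.70, 1.32, 2.55])

# Fit the linear equation O.D. = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_od, 1)

def igg_ng_per_ml(od, dilution=1.0):
    """Invert the standard curve and correct for the serum dilution factor."""
    return (od - intercept) / slope * dilution

# IgG concentration implied by an O.D. of 0.85 in a 1:100-diluted sample.
print(igg_ng_per_ml(0.85, dilution=100))
```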
Mouse serum neutralizing antibody levels were determined using the WNV plaque reduction neutralization test (PRNT) in the Biosafety Level 3 facility of the Beijing Institute of Microbiology and Epidemiology. Briefly, 320 μl of 10-fold serially diluted mouse serum (heat-inactivated at 56°C for 30 min before use) was mixed with 320 μl of medium containing 150 plaque-forming units (PFU) of WNV (strain NY99) and incubated at 37°C for 1 h. Next, the mixture was added to BHK-21 cells in the wells of a six-well plate and incubated at 37°C for 1 h. Following removal of the mixture, cells were washed three times with PBS, overlaid with 2 ml of DMEM-agarose, and incubation continued at 37°C. After 72 h, cells were fixed with 4% paraformaldehyde and subsequently stained with 1.5% crystal violet to visualize plaques. Neutralization titers were expressed as the reciprocal of the highest dilution of serum showing at least 50% reduction in the number of plaques compared with the negative control. Neutralizing antibodies in chicken, duck, goose and horse sera were not assessed due to unavailability of the BSL-3 facility during the experimental period. NDV hemagglutination inhibition (HI) antibodies of immunized animals were determined following a previously described protocol [11].
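The titer rule described above is simple enough to state directly in code; the following sketch, with hypothetical plaque counts, returns the reciprocal of the highest serum dilution achieving at least 50% plaque reduction relative to the negative control.

```python
def prnt50_titer(dilutions, plaque_counts, control_count):
    """Reciprocal of the highest dilution giving >= 50% plaque reduction.

    `dilutions` are reciprocals (10, 100, 1000, ...), so the 'highest
    dilution' corresponds to the largest reciprocal that still passes.
    """
    titer = 0
    for d, n in zip(dilutions, plaque_counts):
        if n <= 0.5 * control_count:
            titer = max(titer, d)
    return titer

# Hypothetical counts for 10-fold serial dilutions; control plate has 150 plaques.
print(prnt50_titer([10, 100, 1000], [20, 60, 140], control_count=150))  # -> 100
```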
Flow cytometric analysis of the mouse CD4+ and CD8+ T-cell response
The WNV E protein-specific CD4+ and CD8+ T-cell response in C57BL/6 mice was determined via flow cytometry using established protocols [27]. rLa-WNV-PrM/E-immunized mice were sacrificed on day 10 after the first and second immunizations. Mouse splenocytes were prepared as documented by Ye et al. [28]. Briefly, spleens were removed from euthanized mice, cut into small sections, and homogenized by gentle rubbing. After low-speed centrifugation, the supernatant was removed, and cells were gently re-suspended in red blood cell lysis buffer (Sigma) and incubated on ice for 1 min. Splenocytes (1 × 10^6) were stimulated with 20 μg/ml of WNV E-specific CD4 peptide (PVGRLVTVNPFVSVA, H-2b [29]) or CD8 peptide (LGMSNRDFL, H-2Db [30]) for 6 h in the presence of 10 ng/ml Brefeldin A (eBioscience, San Diego, CA) to assess the CD4+ and CD8+ T-cell responses, respectively. Cells were washed twice with PBS containing 3% fetal calf serum and subsequently stained with Peridinin-Chlorophyll-Protein-Complex (PerCP)-conjugated rat anti-mouse CD4 (or CD8) and phycoerythrin (PE)-conjugated rat anti-mouse CD3 antibodies (BD Pharmingen, San Diego, CA). Next, cells were fixed and permeabilized with Fix&Perm Buffer (eBioscience, San Diego, CA) and stained for intracellular interferon-gamma (IFN-γ) with an allophycocyanin (APC)-conjugated rat anti-mouse IFN-γ antibody (BD Pharmingen). The levels of CD4+ and CD8+ T-cell responses were determined using flow cytometry on a BD FACSAria Station (BD Immunocytometry Systems, San Jose, CA). Data were analyzed with FlowJo software (Treestar Inc, Ashland, OR).
Statistical analysis
Data on virus titers, antibody titers and mouse T-cell responses were analyzed using a two-tailed Student's t test in Excel (Microsoft, Redmond, WA). The following convention was used to describe p-value significance: not significant, p > 0.05; significant, p ≤ 0.05; highly significant, p ≤ 0.01.
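For readers who prefer a scriptable alternative to Excel, a minimal two-sample comparison using the same significance convention might look like the sketch below; the group values are invented for illustration.

```python
from scipy import stats

# Hypothetical titer readings for a vaccinated and a control group.
vaccinated = [2.1, 2.4, 1.9, 2.3, 2.0]
control = [0.3, 0.5, 0.2, 0.4, 0.3]

# ttest_ind is two-tailed by default, matching the convention above.
t_stat, p_value = stats.ttest_ind(vaccinated, control)
if p_value <= 0.01:
    label = "highly significant"
elif p_value <= 0.05:
    label = "significant"
else:
    label = "not significant"
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({label})")
```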
Generation of rLa-WNV-PrM/E virus and in vitro characterization
Recombinant NDV expressing WNV PrM/E proteins was generated by inserting the PrM/E gene between the P and M genes in NDV genome cDNA (Fig. 1a). The presence of PrM/E was confirmed via RT-PCR. PrM/E protein expression was confirmed via western blot and indirect immunofluorescence staining of rLa-WNV-PrM/E-infected BHK-21 cells. Western blot detected the presence of both E (~45 kDa) and PrM (~25 kDa) proteins (Fig. 1b), which were further confirmed via indirect immunofluorescence with specific monoclonal antibodies against each protein.
The growth titer of rLa-WNV-PrM/E in embryonated chicken eggs was comparable to that of the parental rLaSota. The genetic stability of the WNV PrM/E gene within rLa-WNV-PrM/E was assessed by serial passage (at least 10 passages) of the virus in embryonated SPF chicken eggs; the presence and expression of PrM/E was confirmed by RT-PCR and indirect immunofluorescence assay (data not shown), demonstrating that the PrM/E gene is stably maintained and expressed. The mean death time (>120 h), intracerebral pathogenicity index (=0) and intravenous pathogenicity index (=0) demonstrated the lentogenic nature of rLa-WNV-PrM/E in poultry (data not shown). Ten mice receiving intramuscular inoculation at a dose of 1 × 10^8 EID50 together with intranasal inoculation of 3 × 10^7 EID50 of rLa-WNV-PrM/E all survived, with no abnormalities during the observation period, and no significant differences in weight gain were observed after inoculation. These results indicate that rLa-WNV-PrM/E is safe for mice (data not shown).
The recombinant virus induces significant WNV-specific humoral and T-cell responses in mice
WNV-specific IgG (Fig. 2a) was detected using ELISA. Notably, the IgG antibody level was significantly boosted after the second dose (p < 0.01). Serum neutralizing antibodies were analyzed with a WNV plaque reduction assay. As shown in Fig. 2b, WNV-neutralizing antibodies were detected after the first dose, and significantly boosted after the second dose (p < 0.01). NDV neutralizing antibodies were also detected after the first dose, and significantly boosted after the second dose (p < 0.01) (Fig. 2c).
rLa-WNV-PrM/E administered via different immunization routes induces significant WNV IgG antibody production in horses and poultry
Given that rLa-WNV-PrM/E induces good humoral responses in mice, we next immunized horses with the vaccine. Horses received two doses via the intramuscular route with a 3-week interval. WNV-specific IgG was detected after the first dose and significantly boosted after the second dose (p < 0.01) (Fig. 4a). HI antibodies against NDV were also detected in horses after the first dose and significantly boosted after the second dose (p < 0.01) (Fig. 4b). To determine whether rLa-WNV-PrM/E induces an immune response in poultry, SPF chickens were intramuscularly or orally inoculated with the recombinant virus twice with a 3-week interval. In intramuscularly immunized chickens, WNV-specific IgG was detected after the first dose and significantly boosted after the second dose (p < 0.05) (Fig. 5a I). In orally immunized chickens, IgG was also detected after the first dose, but only slightly boosted after the second dose (Fig. 5a II). NDV HI antibodies were detected after the first dose and significantly boosted after the second dose (p < 0.05) in intramuscularly immunized chickens (Fig. 5b I). In orally immunized chickens, NDV HI antibodies were also detected after the first dose, but only slightly boosted after the second dose, with no statistical significance (Fig. 5b II). The same immunization procedure was performed for ducks. WNV IgG in intramuscularly (Fig. 6a I) and orally (Fig. 6a II) immunized duck sera was induced after the first dose and boosted significantly after the second dose (p < 0.05). NDV HI antibodies were detected after the first dose and significantly boosted after the second dose in intramuscularly (Fig. 6b I) and orally immunized ducks (p < 0.01) (Fig. 6b II). Groups of outbred geese were intramuscularly, intranasally or orally inoculated with rLa-WNV-PrM/E. Intramuscularly immunized geese produced detectable WNV IgG after the first dose, which was significantly boosted after the second dose (p < 0.05) (Fig. 7a I). Intranasally (Fig. 7a II) and orally (Fig. 7a III) immunized geese also produced detectable IgG after the first dose, which was only slightly boosted after the second dose (p > 0.05). NDV HI antibodies were detected after the first dose and significantly boosted after the second dose in intramuscularly (Fig. 7b I), intranasally (Fig. 7b II) and orally (Fig. 7b III) immunized geese (p < 0.01).
Discussion
WNV is an important zoonotic pathogen with a wide geographic distribution and with the emergence of increasingly neuroinvasive strains. Here, a recombinant NDV LaSota virus expressing WNV PrM and E proteins, rLa-WNV-PrM/E, was constructed as a candidate veterinary vaccine for WNV prevention and control. rLa-WNV-PrM/E elicited significant levels of neutralizing antibodies and WNV-specific T-cell responses in mice, as well as WNV-specific IgG in horses, chickens, ducks and geese, supporting the immunogenicity of the newly generated recombinant virus in mammals and poultry.
Mice are sensitive to WNV infection and are thus commonly used as the model animal for WNV vaccine evaluation and related studies. In our experiments, mice intramuscularly inoculated with rLa-WNV-PrM/E produced significant WNV-neutralizing antibodies and specific IgG. Neutralizing antibodies play a crucial role in WNV control and clearance [31]. We used the 50% plaque reduction assay to determine the levels of neutralizing antibody against WNV NY99 in mouse sera. This method is recommended by WHO for testing the potency of Japanese encephalitis vaccine. The cut-off value for serum seroprotection is 1 log10 (a ten-fold dilution of serum that reduces plaque formation by 50% is considered sufficient for protection against viral challenge) [32]. In an earlier study, actively or passively immunized mice with WNV-neutralizing antibody titers higher than 1 log10 were protected against lethal WNV challenge [33]. In our experiments, the neutralizing antibody titer of rLa-WNV-PrM/E-immunized mice reached up to 1.3 log10 after the first dose. After administration of the second dose, the neutralizing antibody titer was significantly boosted (~2.2 log10), implying that rLa-WNV-PrM/E confers robust protection against lethal WNV infection in mice. In mouse sera, high levels of anti-WNV E IgG were elicited after the first dose and significantly boosted after the second dose. The role of T-cell responses in WNV control has also been characterized. An earlier study showed that while neutralizing antibodies play a central role in terminating WNV viremia, CD8+ T-cells are essential for preventing sustained WNV infection in peripheral and CNS compartments [35]. Live vector vaccines have the significant advantage of effectively eliciting cellular responses [36][37][38]. In our experiments, rLa-WNV-PrM/E induced high levels of WNV-specific CD4+ and CD8+ T-cell responses. Although a challenge study was not conducted at this time due to the unavailability of the BSL-3 laboratory, given the neutralizing antibody and T-cell response results, we presume that rLa-WNV-PrM/E could confer protection in mice. Intramuscularly immunized horses produced WNV-specific IgG following the first dose, which was significantly boosted after the second dose.
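For readers less used to log-scale titers, the conversion back to ordinary dilution factors is simple exponentiation, as the short sketch below shows.

```python
# Convert log10 neutralization titers to approximate serum dilution factors.
for log_titer in (1.0, 1.3, 2.2):
    print(f"{log_titer} log10 -> about 1:{10 ** log_titer:.0f} dilution")
# 1.0 log10 -> 1:10 (the seroprotection cut-off), 1.3 -> ~1:20, 2.2 -> ~1:158
```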
Considering the close association between IgG and neutralizing antibody levels, we propose that horses acquire protective immunity after vaccination. Further neutralization assays and challenge studies are required to confirm the efficacy of the vaccine in horses. Vaccination of sensitive hosts not only protects the animal itself but also prevents transmission of WNV from animals to humans. Several veterinary WNV vaccines are currently available, including an inactivated whole-virus vaccine [5,39,40], DNA vaccines [41][42][43], a recombinant canarypox-vectored vaccine [6,44] and a recombinant Yellow Fever 17D vaccine [45,46]. Notably, the canarypox-vectored WNV vaccine has been shown to effectively elicit WNV-specific neutralizing antibodies and confer protection in horses, geese, cats and dogs against lethal WNV challenge [6,44,47,48]. These vaccines require delivery via intramuscular inoculation. The current study demonstrated that rLa-WNV-PrM/E is immunogenic not only in mice, horses and poultry when administered intramuscularly, but also in poultry upon delivery via intranasal or oral inoculation. Based on the collective findings, we propose that rLa-WNV-PrM/E is a promising veterinary candidate vaccine for multiple mammalian and avian species that can be delivered via flexible inoculation routes.
Domestic poultry, such as chickens, ducks and geese, are susceptible to WNV and develop clinical signs and viremia (usually sufficient to infect mosquitoes), thus contributing to WNV transmission. Chickens are widely used as sentinel animals for early warning of WNV prevalence [2,[49][50][51][52]. Several studies have additionally provided evidence of the susceptibility of domestic or captive ducks to WNV [3,53,54]. Ducks develop high-titer viremia and are capable of shedding virus orally [55]. Geese are also susceptible to WNV, especially young geese, and develop a variety of neurological signs, often resulting in a significant number of deaths [4,56,57]. Geese represent another experimental animal model for WNV vaccine evaluation [58][59][60]. Notably, human WNV infection has recently been reported, and the virus has been isolated from mosquitoes, in the Xinjiang Uygur Autonomous Region in Northwest China [61,62]. China has an estimated 12 billion or more poultry, including four billion ducks and geese. Many domestic birds, especially ducks and geese, are raised in backyards and open ranges under poor biosecurity conditions and live in high-density groups close to large human populations. Since these birds may serve as important amplifying hosts of WNV, control of WNV circulation in birds is important for public health. For economic reasons, poultry farmers are generally unwilling to pay additional vaccine and labor costs for vaccination solely against WNV. As NDV is one of the most lethal and economically important pathogens for poultry (at least chickens and geese), farmers use live vaccines, such as the LaSota strain, to protect against infection. In our study, most domestic poultry showed NDV HI antibody titers higher than 4 log2 after vaccination, irrespective of the delivery route (Figs. 5b, 6b and 7b). An HI antibody titer higher than 3 log2 is usually sufficient to protect poultry from lethal NDV challenge. In this scenario, farmers would not need to pay additional vaccine and labor costs to protect against WNV infection, as rLa-WNV-PrM/E could replace the NDV live vaccine in routine vaccination. Moreover, oral and intranasal immunization routes are more convenient than intramuscular immunization for poultry, as well as for waterfowl and migratory and resident wild birds.
Conclusions
In summary, rLa-WNV-PrM/E vaccination in susceptible animals is important for protection against WNV infection. Our findings demonstrate that rLa-WNV-PrM/E delivered via multiple immunization routes is immunogenic in both mammals and poultry. | 2023-01-18T15:01:12.375Z | 2016-06-24T00:00:00.000 | {
"year": 2016,
"sha1": "ff81fd43f64a02f05efdcb9e93390c8dae9d8785",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12985-016-0568-5",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "ff81fd43f64a02f05efdcb9e93390c8dae9d8785",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
211091019 | pes2o/s2orc | v3-fos-license | Examining the Effects of Unemployment on Economic Growth in Sri Lanka (1991-2017)
Economic growth of a country can be considered a major requirement for reducing unemployment. This paper investigates the relationship between growth and unemployment in Sri Lanka for the period 1991-2017 through the implementation of Okun's law, using the ARDL bounds test approach. The empirical results confirm the existence of Okun's law, i.e. a negative relationship between unemployment and economic growth, in Sri Lanka. Thus, it can be concluded that the lack of economic growth explains the unemployment problem in Sri Lanka.
Introduction
Low unemployment and GDP growth are among the main challenges facing the economies of most developed and developing countries. A high rate of unemployment signifies a deficiency in the labour market and has a negative influence on individuals, families and the entire economy. On the other hand, there is a widely accepted view in economics that growth in an economy's GDP increases employment and reduces unemployment. The negative correlation between changes in the unemployment rate and changes in output growth is viewed as one of the most consistent relationships in macroeconomics. This theoretical proposition relating output and unemployment is called "Okun's Law".
In examining the unemployment and economic growth nexus around the world, some studies have found an inverse relationship between economic growth and unemployment (Soylu, 2018; Abu, 2016; Abdul-khaliq, 2014; Hussain et al., 2010), while some show a positive relationship (Kreishan, 2017; Sahin et al., 2013). However, very few attempts to test this relationship have been made in developing countries.
Turning to the Sri Lankan economy, the GDP growth rate has recently been declining erratically alongside a fairly low unemployment rate. The unemployment rate in Sri Lanka moved from 4.9 percent in 2010 to 4.2 percent in 2017. Despite a slight increase from 2010 to 2012, the real GDP growth rate has been continuously decreasing, from 5 percent in 2014 to 3.4 percent in 2017 (see Table 1). Therefore, the objective of this study is to investigate the relationship between unemployment and economic growth in Sri Lanka. An investigation of this relationship will permit analysts to design appropriate policies for the Sri Lankan economy.
OKUN'S LAW
The well-known economist Arthur Okun was the first to investigate the statistical relationship between a country's unemployment rate and its growth rate, in the 1960s. Okun's law predicts a negative relationship between the rate of change in the unemployment rate and the rate of change in output. In his original paper, published in 1962, Okun found that every 1 percentage point reduction in unemployment was associated with additional output growth of 3 percentage points in the US economy. The negative correlation between changes in the unemployment rate and changes in output growth is viewed as one of the most consistent relationships in macroeconomics (Adachi, 2007). Okun reasoned that an increased workforce must produce more goods and services. Further, he found that the unemployment rate declined in years when the real growth rate was high, whereas it increased in years when the real growth rate remained low or even negative. Another version of Okun's law describes the relationship between unemployment and GDP, whereby a 1% increase in unemployment causes a 2% fall in GDP. Okun's law is not a very exact predictor, but the empirical evidence still supports its usefulness (Misini & Badivuku-pantina, 2017).
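The two rules of thumb quoted above amount to simple arithmetic; the sketch below restates them, with the coefficients taken from the text rather than estimated from data.

```python
# Version 1 (Okun 1962, US data): each 1-point fall in unemployment is
# associated with roughly 3 points of additional output growth.
delta_unemployment = -1.0                      # percentage points
extra_growth = -3.0 * delta_unemployment
print(extra_growth)                            # 3.0 percentage points

# Gap version: a 1% rise in unemployment implies about a 2% fall in GDP.
unemployment_rise = 1.0                        # percentage points
gdp_shortfall = 2.0 * unemployment_rise
print(gdp_shortfall)                           # 2.0 percent of GDP
```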
Literature Review
A number of studies have empirically investigated the relationship between output (GDP) and unemployment (Soylu, 2018; Abu, 2016; Abdul-khaliq, 2014; Sahin et al., 2013; Hussain et al., 2010). These studies have examined the validity of the output-unemployment relationship and reported mixed results. Hussain et al. (2010) examined the relationship between growth and unemployment using time series data for Pakistan from 1972 to 2006; the Johansen cointegration test and a Vector Error Correction Model (VECM) were employed. The results indicated a short- and long-run causal relationship between growth and unemployment. Kitov (2011) investigated the relationship between employment and real GDP per capita in a study of the US, France, UK, Australia, Canada and Spain, and found that high unemployment rates are associated with low growth rates.
Furthermore, other studies found no such relationship between unemployment and output. Sahin et al. (2013) investigated the nature of the output-employment relationship using Turkish quarterly data for the period 1988-2008, employing unit root tests, cointegration analysis and error correction models. Their findings reveal no long-run relationship between aggregate output and total employment. Mosikari (2013) investigated the effect of unemployment on gross domestic product in South Africa based on annual time series data for the period 1980-2011 and found no causality between the unemployment rate and GDP growth. Harris and Silverstone (2001) examined the relationship between unemployment and output in seven OECD countries, and their empirical evidence indicated no long-run relationship between the two variables. Seth et al. (2018) also concluded that there is no long-run relationship between the unemployment rate and economic growth in Nigeria over the period 1986 to 2015, using the ARDL bounds testing model. Alhdiy et al. (2015) indicated that there was no cointegrating relationship between unemployment and GDP in Egypt between 2006Q1 and 2013Q2, implying no long-term relationship between the variables.
Particular studies have directly tested Okun's law and reported mixed results, as they used different datasets, techniques, time periods and countries. Certain studies have determined that output growth has a negative effect on the unemployment rate. Kukaj (2018) investigated the relationship between unemployment and GDP growth in 7 Western Balkan countries; specifically, the study related GDP growth as the dependent variable to unemployment, foreign direct investment and remittances as independent variables. The results show a trade-off between unemployment and economic growth in the Western Balkan countries; the model suggests that a one percentage point increase in unemployment reduces GDP growth by 0.5 percentage points. Soylu (2018) investigated the relation between economic growth and unemployment in Eastern European countries for the period 1992-2014, using panel unit root tests, pooled panel OLS and panel Johansen cointegration tests. The results showed that unemployment is significantly affected by economic growth: a 1% rise in GDP lowers the unemployment rate by 0.08%. Abu (2016) employed the autoregressive distributed lag (ARDL) bounds testing technique to examine whether Okun's law holds in Nigeria during 1970-2014. The results demonstrate that, in the long run, unemployment has a negative and significant effect on economic growth in Nigeria. Ruxandra (2015) also examined the relationship between economic growth and unemployment for the post-2007 period and determined that Okun's law is valid for the Romanian economy. Abdul-khaliq (2014) explored the relationship between unemployment and GDP growth in 9 Arab countries from 1994 to 2010, finding that economic growth has a negative and significant effect on the unemployment rate: a 1% increase in economic growth decreases the unemployment rate by 0.16%.
However, some studies refute the existence of Okun's law. Kreishan (2017) investigated the relationship between unemployment and economic growth in Jordan through the implementation of Okun's law, using annual data covering the period 1970-2008; the ADF test, cointegration tests and simple regression were used to test the relationship and to obtain estimates of Okun's coefficient. The empirical results reveal that Okun's law cannot be confirmed for Jordan. Fatai and Bankole (2013) studied Okun's law for the Nigerian economy over the period 1980-2008 and showed that it is not valid there. Kreishan (2011) examined the relationship between economic growth and unemployment for the Jordanian economy, estimating Okun's coefficient for the period 1970-2008; the analysis indicated that Okun's law is not valid for the Jordanian economy.
Data Collection
The study is based entirely on secondary time series data covering the period 1991 to 2017. The data were obtained from the Central Bank of Sri Lanka and the Department of Census and Statistics of Sri Lanka. The two economic variables included in this study are the GDP (gross domestic product) growth rate and the unemployment rate; GDP growth is the dependent variable and the unemployment rate the independent variable. All variables were transformed into natural logarithms, as logarithms are a much more useful way to measure economic data; the resulting variables are denoted LnGDP and LnUNEM.
Model description:
According to the original form of Okun's law, there exists a negative relation between the growth rate of real GDP and the change in the unemployment rate. Drawing on the reviewed literature, a model was developed based on that of Mohseni & Jouzaryan (2016). The model used to estimate the effect of unemployment on the Sri Lankan economy from 1991 to 2017 is specified as Equation (1):

GDP_t = α + β₁UNEM_t + ε_t ………… (01)

where GDP denotes gross domestic product, UNEM denotes the unemployment rate and ε_t is the disturbance term; the parameters α and β₁ are to be estimated. EViews version 6.0 was used to estimate the model.
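Since the study's estimations were run in EViews, a minimal open-source equivalent of equation (1) is sketched below using ordinary least squares; the series are synthetic stand-ins for the 1991-2017 data, and the variable names and numbers are ours, not the authors'.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 27 annual observations (1991-2017);
# the actual Central Bank data are not reproduced here.
unem = rng.uniform(4.0, 15.0, 27)                  # unemployment rate, %
gdp = 12.0 - 0.8 * unem + rng.normal(0, 1.0, 27)   # GDP growth rate, %

X = sm.add_constant(unem)        # columns: constant (alpha) and UNEM (beta1)
fit = sm.OLS(gdp, X).fit()
print(fit.params)                # [alpha, beta1]; Okun's law implies beta1 < 0
```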
Equation (1) is re-written in the Autoregressive Distributed Lag (ARDL) bounds test form suggested by Pesaran et al. (2001), following Wong (2018). This choice reflects evidence that the ARDL approach accommodates variables of different orders of integration, with the exception of variables whose order of integration is higher than one. Additionally, the ARDL approach captures the short-run dynamics together with the long-run equilibrium without losing long-run information, hence Equation (2):
ΔLnGDP_t = α + Σ_{m=1}^{p} φ_m ΔLnGDP_{t−m} + Σ_{n=0}^{p} β_n ΔLnUNEM_{t−n} + ψ₁LnGDP_{t−1} + ψ₂LnUNEM_{t−1} + ε_t ………… (02)

where Δ is the first-difference operator, φ_m and β_n are the short-run dynamic coefficients, α is the drift, ψ₁ and ψ₂ are the long-run multipliers, ε_t is a white-noise error, and p is the lag length, chosen optimally using the Schwarz information criterion (SIC) and the Akaike information criterion (AIC); the corresponding error correction model (ECM) is used to capture the short-run dynamics.
In the first step, the null hypothesis of no cointegration, H₀: ψ₁ = ψ₂ = 0, is tested against the alternative hypothesis of cointegration, H₁: ψ₁ ≠ 0 or ψ₂ ≠ 0. The calculated F-statistic is compared with the critical bound values provided by Pesaran et al. (2001) to determine whether a long-run relationship exists. There are two sets of critical values: the lower bound, which assumes all variables are I(0), and the upper bound, which assumes they are I(1). If the calculated F-statistic is greater than the upper critical bound value, we reject the null hypothesis of no cointegration in favour of the alternative hypothesis of cointegration. If the calculated F-statistic is smaller than the lower critical bound value, we accept the null hypothesis of no cointegration. If the calculated F-statistic lies between the lower and upper critical bound values, the test is inconclusive.
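Assuming a reasonably recent statsmodels (0.13 or later, where the UECM class and its bounds_test method exist), the bounds procedure can be sketched as follows; the series are synthetic stand-ins, and case 3 (unrestricted constant, no trend) is our assumption about the specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import UECM

rng = np.random.default_rng(2)
n = 27
# Synthetic I(1) unemployment series and a series cointegrated with it.
ln_unem = np.cumsum(rng.normal(0, 0.05, n)) + 2.0
ln_gdp = 3.0 - 1.2 * ln_unem + rng.normal(0, 0.05, n)

df = pd.DataFrame({"LnGDP": ln_gdp, "LnUNEM": ln_unem})
# Unrestricted error-correction form of equation (2) with one lag each.
uecm = UECM(df["LnGDP"], lags=1, exog=df[["LnUNEM"]], order=1, trend="c")
res = uecm.fit()
# Cointegration is concluded if the F-statistic exceeds the upper I(1) bound.
print(res.bounds_test(case=3))
```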
Estimation Methods
Since time series variables were used in the model, it is necessary to examine and confirm the stationarity of the variables in order to avoid spurious regression in model estimation (Gujarati, 1995). We therefore first test the nature of the time series to determine whether they are stationary or non-stationary. When non-stationary time series data are used for analysis, the results may be spurious because estimates obtained from such data have a non-constant mean and variance. In this regard, the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests were used to test for unit roots. The tests were first applied to both variables at level; where a variable proved to be non-stationary, the test was re-applied to its first difference.
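A minimal unit-root check along these lines, using statsmodels' adfuller on synthetic stand-in series, might look like this; the verdicts mirror the pattern the paper reports, but the numbers are not the study's data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
growth = rng.normal(5.3, 2.5, 27)             # stationary stand-in for GDP growth
unem = 8 + np.cumsum(rng.normal(0, 0.5, 27))  # random-walk stand-in for unemployment

def adf_report(series, name):
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    verdict = "stationary" if pvalue <= 0.05 else "unit root"
    print(f"{name}: ADF = {stat:.2f}, p = {pvalue:.3f} -> {verdict}")

adf_report(growth, "GDP growth (level)")
adf_report(unem, "UNEM (level)")
adf_report(np.diff(unem), "UNEM (first difference)")
```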
In the following, the Auto-Regressive Distributed Lag (ARDL) bounds test was used to detect cointegration between the two variables. The ARDL bounds testing approach to cointegration proposed by Pesaran et al. (2001) does not require a unique order of integration. The long-run relationship is investigated using the critical values of the bounds test, whereas the short- and long-run elasticities are estimated from the ARDL model. Therefore, the ARDL model was used to assess the long-term relationship and short-term dynamics between unemployment and economic growth in Sri Lanka. CUSUM and CUSUM of squares tests were examined to assess the stability of the model.

Results and Discussion

Table 2 summarizes the statistical data on the economic growth rate and unemployment rate in Sri Lanka for the period 1991 to 2017. The Sri Lankan economy grew by 5.3% on average between 1991 and 2017. The maximum economic growth rate over this period was 9.1%, recorded in 2012, and the minimum was -1.5%, recorded in 2001. Over the same period, the average unemployment rate was 8%; the maximum unemployment rate was 14.7%, recorded in 1991, while the minimum stood at 4% in 2012.

The unit root results (Table 3) show that the growth rate of GDP is stationary at level for both the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests, whereas the unemployment variable has a unit root at I(0) and becomes stationary at first difference under both tests. Since the orders of integration are a mix of zero and one, the Johansen cointegration method could not be used; instead, the ARDL bounds test can be applied to examine the short- and long-run dynamics of the two variables. Table 4 shows the critical values of the bounds test. The calculated F-statistic (8.517619) is greater than the critical values at the 1%, 5% and 10% levels for the upper bound I(1), so it is concluded that cointegration exists. This implies a long-run relationship between the economic growth rate and the unemployment rate in Sri Lanka. According to the long-term model coefficients (Table 5), unemployment has a negative and significant effect on gross domestic product: a 1% increase in unemployment reduces gross domestic product by up to 1.17% in the long run.
Cointegration between a set of economic variables is a precondition for using an error correction model, which links short-term fluctuations to the relevant long-term values. The error correction term [ECT(-1)], which measures the speed of adjustment from short-run disequilibrium (actual) to the long-run equilibrium (expected), has the correct negative sign and is statistically significant at 5% (see Table 6). The estimated coefficient implies that about 91% of any short-run disequilibrium is corrected within a year, provided the right policy measures are put in place. For the stability tests, if the test statistics remain between the two boundary lines, the null hypothesis of parameter stability cannot be rejected. The results of these tests are given in Figure 1 and Figure 2; the model parameters were stable within the 5% critical bounds.
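The reported coefficient implies a very fast correction; the arithmetic below, using the -0.91 estimate, shows that over 90% of a shock dissipates within a year and half of it in roughly a third of a year.

```python
import math

ect = -0.91  # error correction coefficient reported in Table 6

remaining_after_one_year = 1 + ect             # 0.09 of the gap survives a year
half_life = math.log(0.5) / math.log(1 + ect)  # years needed to close half the gap
print(remaining_after_one_year, round(half_life, 2))  # 0.09, ~0.29 years
```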
Conclusion and recommendations
Economic growth of a country can be considered a major means of reducing unemployment. This paper has investigated the relationship between growth and unemployment in Sri Lanka for the period 1991-2017 through the implementation of Okun's law. Time series techniques were used to test the relation between unemployment and economic growth and to obtain estimates of Okun's coefficient. This study used the Augmented Dickey-Fuller (ADF) test for unit roots and the ARDL bounds testing approach.
The findings show that there is a long-run relationship between the unemployment rate and economic growth in Sri Lanka. The estimated results confirm the existence of a stable, negative long-run effect, whereas no relationship is observed in the short run. A one percent increase in unemployment is associated with a reduction in the growth level of 1.17 percent in the long run. The ECM indicates a high speed of adjustment of short-run fluctuations, as 91 percent of short-run disequilibrium is corrected within a year.
Continued economic growth is the fundamental requirement for reducing unemployment. Macroeconomic stability, investment-oriented policies and political stability are the major sources of an attractive growth rate. Sri Lanka needs to establish proper long-term industrial policies, especially labour-intensive ones, to reduce unemployment. High unemployment is concentrated in rural areas; it is therefore necessary to establish industrial zones in rural areas, improve human capital and develop infrastructure facilities to reduce unemployment. | 2019-10-31T09:13:38.962Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "85b8fe8d84786fd034834d5a44960309963be4ef",
"oa_license": "CCBY",
"oa_url": "https://www.iiste.org/Journals/index.php/JEDS/article/download/50022/51669",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ae4a9024f4dc763665768aab19994dd77a23fb25",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |